Concerns rise as AI-generated NSFW app ads proliferate on Meta

  • 3 min read

Meta, the parent company of Facebook, Instagram, and Messenger, is facing intense scrutiny over hosting AI-generated Not Safe For Work (NSFW) app ads, including tools promoting ‘AI girlfriends’.

Despite Meta’s strict policies against adult content, these chatbot-driven apps offer explicit adult interactions and content, sparking debate about the ethical boundaries of AI in social media.

A recent investigation by Wired delved into Meta’s online ad library, revealing a substantial presence of ads for AI-generated NSFW companion apps. These ads feature chatbots engaging users with explicit images, text, and simulated relationships.

The controversy underscores a significant policy dilemma for Meta. While the company’s guidelines prohibit adult content and overly suggestive material, the proliferation of these AI-driven NSFW apps suggests a gap in enforcement or a need for more robust content moderation strategies.

A central aspect of the debate is the unequal treatment of human adult workers and AI-generated content. Similar content posted by humans would face immediate removal, whereas AI chatbots seemingly operate with impunity, suggesting that platforms are struggling to detect and remove AI-generated ads.

“When we identify violating ads we work quickly to remove them, as we’re doing here,” Meta spokesperson Ryan Daniels told Wired. “We continue to improve our systems, including how we detect ads and behaviour that go against our policies.”

However, researchers found that several thousand ads are still active on the platforms, raising questions about the effectiveness of enforcement measures.

The investigation examines specific AI companion apps such as Hush and Rosytalk, which offer chat interactions, let users customise their AI avatars’ appearance, and promise emotional support. Despite their advertised purpose of combating loneliness, these applications blur fantasy and reality, prompting discussion about societal standards and the potential impact on vulnerable users.

Experts such as Carolina Are from the Centre for Digital Citizens at Northumbria University highlight the complexities of AI-generated content. She contrasts the personalised, emotionally laborious nature of human interactions with the potentially exploitative and generic character of AI companion apps.

In October 2023, there were reports that AI-generated images of children were proliferating on the internet.

Concerns about children’s safety on Facebook are long-standing; the platform removed 8.7 million child exploitation posts in 2018.

On April 16, Meta’s oversight board took up two celebrity deepfake cases, underscoring how dangerous AI-generated content can be on the internet. With tools such as Sora, these cases are likely to rise in the future.

In the News: Garry’s Mod takes down Nintendo content from Steam Workshop

Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: kumarhemant@pm.me
