
Meta’s Oversight Board takes up two celebrity deepfake cases


Photo by mundissima / Shutterstock.com

The Oversight Board, responsible for reviewing content decisions made by Meta, has announced two new cases for consideration involving explicit AI-generated images posted on Instagram and Facebook.

The first case, labelled 2024-007-IG-UA, involved an AI-generated image of a nude woman resembling a public figure from India, posted on Instagram. This case highlights the challenges posed by deepfake content, particularly in regions like India, where deepfakes are increasingly problematic due to lax laws.

According to the Oversight Board, a user reported the content to Meta as pornography, but the initial report was automatically closed without review. Subsequent appeals to Meta were also automatically closed, leading the user to appeal to the Oversight Board. The Board’s intervention led Meta to remove the content, citing a violation of the Bullying and Harassment Community Standard.

“As we cannot hear every appeal, the Board prioritises cases that have the potential to affect lots of users around the world, are of critical importance to public discourse or raise important questions about Meta’s policies,” said the board.

The second case, 2024-008-FB-UA, featured an AI-generated image of a nude woman resembling an American public figure. In this case, the image had already been posted by another user, and the matter was escalated to subject matter experts at Meta, who removed the content under the Bullying and Harassment policy’s rule against ‘derogatory sexualised photoshop or drawings’.

The morphed image was also added to the Media Matching Service Bank, a system that finds and removes images that have already been flagged as violating Meta’s policies. Because the image had already been removed, the ticket was closed when the affected user complained. The user then escalated the report to the Oversight Board.
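Meta has not published the internals of the Media Matching Service Bank, but systems of this kind are generally understood to work by fingerprinting known-violating images and comparing new uploads against those fingerprints. The sketch below is a minimal illustration of that idea using a simple 64-bit average hash in Python; the function names, hash size and matching threshold are assumptions for illustration, not Meta’s implementation.

```python
# Illustrative sketch of hash-based media matching (not Meta's actual code).
# A flagged image is reduced to a compact perceptual hash; new uploads are
# hashed the same way and treated as matches if their hash is close enough
# to one already in the bank.
from PIL import Image  # pip install Pillow

HASH_SIZE = 8  # 8x8 average hash -> 64-bit fingerprint


def average_hash(path: str) -> int:
    """Compute a simple 64-bit average hash of an image."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hashes of images already judged to violate policy (the "bank").
bank: set[int] = set()


def add_to_bank(path: str) -> None:
    """Register a flagged image's fingerprint in the bank."""
    bank.add(average_hash(path))


def matches_bank(path: str, max_distance: int = 5) -> bool:
    """True if the image is identical or near-identical to a banked image."""
    h = average_hash(path)
    return any(hamming_distance(h, banked) <= max_distance for banked in bank)
```

A real system would use far more robust fingerprints and operate at the scale of billions of uploads, but the workflow described in the case is the same: once an image is banked, later copies are matched and removed automatically.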

Neither case fell under Meta’s pornography policies; both were handled as bullying and harassment. The Board selected these two cases because they involve AI-generated imagery, a problem that is likely to escalate as generative AI apps and tools proliferate among the general public.


Another major problem platforms face is detection. With billions of images posted every month, platforms struggle to identify and remove the malicious ones. “I can tentatively already say that the main problem is probably detection,” said Julie Owono, a member of the Oversight Board, in an interview with Wired. “Detection is not as perfect or at least is not as efficient as we would wish.”

Both cases underscore an important disparity: the case involving the Indian victim received notably less attention than its American counterpart. The Indian user had to escalate the matter to the Board after Meta’s automated systems repeatedly closed the reports, whereas the American case was resolved swiftly; even so, the affected party was unaware of this resolution and subsequently reported it to the Board.

“It’s critical that this matter is addressed, and the board looks forward to exploring whether Meta’s policies and enforcement practices are effective at addressing this problem,” said Helle Thorning-Schmidt, co-chair of the Oversight Board.

The Oversight Board’s recommendations are not binding on Meta, but the company must respond to them within 60 days. The Board is also accepting public comments on the cases until Tuesday, April 30.

In March 2024, it was reported that Google had received more than 13,000 copyright complaints against several thousand URLs distributing deepfake adult content. Furthermore, a report by the Internet Watch Foundation (IWF) highlighted the proliferation of AI-generated abuse imagery of young children.

In 2018, reports indicated that Facebook had emerged as a major grooming platform for child abusers. Subsequent investigations in 2022 revealed that the problem remained unresolved. Although Facebook removes millions of posts to showcase its fight against child exploitation, such content keeps resurfacing.

Beyond this, social media platforms have been linked to mental health problems and internet obsession among teenagers, which is why countries like the United Kingdom are further tightening online safety laws.


Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: kumarhemant@pm.me
