
DMCA flags lead to quicker removal of non-consensual adult images on X: Research

  • by Kumar Hemant
  • 4 min read

Photo: Ascannio / Shutterstock.com

X has long maintained that non-consensual nudity, including Non-Consensual Intimate Imagery (NCII), is prohibited on its platform. However, new research from the University of Michigan suggests that X’s response to NCII takedown requests may be slower and less effective than expected — unless victims flag the content under the Digital Millennium Copyright Act (DMCA).

The study, which has not yet been peer-reviewed, highlights the challenges that victims of NCII face when attempting to have harmful content removed. Researchers found a stark difference between how X handles NCII flagged through its standard reporting process and content reported under the DMCA mechanism.

The researchers used AI-generated adult images to simulate NCII and tested two approaches to reporting the content on X. In one approach, the images were flagged as violating X’s non-consensual nudity policy; in the other, they were reported through the DMCA takedown mechanism, which is typically used to protect copyrighted material.

Out of 50 images posted on the platform, the researchers reported 25 under each system. The results revealed a significant discrepancy: while all 25 images reported via DMCA were removed within a day, X’s NCII reporting tool failed to trigger any content removal within the same period.

Researchers found that the platform’s internal mechanism appeared largely unresponsive, even though NCII is widely recognised as a serious violation of user privacy.

This disparity in enforcement raises concerns, especially since the DMCA takedown process is not accessible to all NCII victims. The law primarily serves those who hold the copyright to the material, but many NCII victims either did not create the content themselves or cannot prove they own it. As a result, they are left with no option but to rely on X’s seemingly inefficient NCII reporting mechanism.

The study emphasised the urgency of removing NCII within 48 hours of posting, when the harm is most severe. Yet X’s internal process offers little relief, leaving victims facing prolonged exposure.

Researchers argue that platforms like X are not sufficiently incentivised to act swiftly, leaving victims vulnerable to further harm.

AI-generated adult images have proliferated across social media platforms. | Photo: Vicki Hamilton | Pixabay

The situation becomes even more complex with the advent of AI-generated NCII, which poses a growing threat. As more people are put at risk of image-based abuse, the limitations of the DMCA in addressing the problem become clearer.

The University of Michigan researchers suggest that the growing problem of AI-generated NCII is only exacerbating the situation, further highlighting the inadequacies of current platform policies.

The study calls for legislative reform, arguing that a dedicated federal NCII law should be introduced. Such a law would impose mandatory takedown obligations on platforms like X and ensure more timely and consistent responses.

Researchers believe that copyright laws, while helpful, are not designed to address the complex issue of sexual privacy and that a shift towards enforceable legal regulations is necessary to protect victims effectively.

In defence of its policies, X reported removing more than 150,000 posts that violated its non-consensual nudity policy in the first half of 2024, according to Ars Technica. The company also suspended more than 50,000 accounts linked to such violations.

However, the data shows that the majority of these actions were carried out by human moderators, with only a small percentage handled automatically. This raises questions about whether cuts to X’s safety team following Elon Musk’s acquisition have affected its ability to respond quickly to NCII reports.

As the platform continues to rebuild its trust and safety team, X has launched a new ‘centre of excellence’ to improve its moderation practices. However, without broader legal reforms, researchers warn that victims of NCII may continue to struggle to get harmful content removed from the platform promptly.

Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: kumarhemant@pm.me
