
AI-generated images of children are proliferating across the internet: IWF


A crisis is growing in the realm of artificial intelligence: the proliferation of AI-generated images depicting children, some as young as two years old, being subjected to heinous sexual abuse.

The Internet Watch Foundation (IWF), a UK-based organisation responsible for detecting and removing child sexual abuse imagery from the internet, has published a report detailing its findings.

The report highlights the alarming reality that AI has advanced to the point where much AI-generated child sexual abuse imagery is now visually indistinguishable from genuine photographs, and realistic enough to be treated as real under UK law.

The most convincing of these images are so realistic that even trained analysts struggle to differentiate them from authentic photographs. Experts are warning that advancements in text-to-image technology will only compound the challenge that organisations like the IWF and law enforcement agencies face.

What is perhaps most chilling is that these AI-generated images are not figments of imagination. They are built using the faces and bodies of real children who have suffered sexual abuse. Perpetrators are also using AI to create disturbing imagery of celebrities, de-aged and depicted as children in sexual abuse scenarios.

Moreover, the IWF’s report reveals that AI technology is being employed to ‘nudify’ children from clothed images that were uploaded online for legitimate purposes. The researchers have also found evidence to suggest that this reprehensible content is being commercialised, further emphasising the need for a united front against this threat.

Text-to-image AI tools such as Midjourney and Stable Diffusion can be used to create such images.

The IWF's study concentrated on a single dark web forum devoted to child sexual abuse imagery, yielding a horrifying snapshot:

  • 11,108 AI images were scrutinised, with 2,978 confirmed breaching UK law by depicting child sexual abuse.
  • Of these, 2,562 were deemed so realistic that they merited the same legal treatment as genuine abuse images.
  • A disturbing 564 of these were classified as Category A, the most severe category, depicting rape, sexual torture, and bestiality.
  • 1,372 of these images featured primary school-aged children, while 143 depicted children aged three to six, and two portrayed infants under two years old.

This crisis in the abuse of AI extends beyond the dark web. In June, the IWF sounded the alarm about AI-generated child sexual abuse imagery found on the open web, underscoring the urgency of addressing this issue across all online spaces.


Kumar Hemant


Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations.