
OpenAI introduces tool to identify DALL-E 3 images amid concerns


Photo: Koshiro K/Shutterstock.com

OpenAI has introduced a new tool to identify images created with DALL-E 3, its text-to-image model. The tool is highly accurate at identifying DALL-E 3-generated images, but subtle modifications to those images can pose challenges.

The proliferation of synthetic media, including manipulated images produced through generative AI, has sparked widespread debate about the authenticity of visual content. This surge in fake media has particularly fueled discussions about its potential impact on election campaigns, with major elections taking place around the world in 2024.

Policymakers have been raising concerns about the increasing presence of AI-generated visuals on online platforms. The accessibility of tools like DALL-E 3 has accelerated the creation of such content, prompting responses from various AI startups and tech industry giants, reports The Wall Street Journal.

David Robinson, who oversees policy planning at OpenAI, emphasised the significant role of election-related concerns in driving these developments. “It’s the primary concern that policymakers express to us,” Robinson stated.

Coinciding with the launch of its new tool, OpenAI announced its participation in the Coalition for Content Provenance and Authenticity (C2PA), an industry consortium led by Microsoft and Adobe that establishes standards for verifying the authenticity of online images.

Additionally, OpenAI, in collaboration with Microsoft, is launching a $2 million fund focused on promoting AI education initiatives.

OpenAI’s latest detection tool boasts an impressive accuracy rate of around 98% in identifying DALL-E 3-generated content, provided the images have not been altered. Even when images are cropped or captured as screenshots, the tool maintains high accuracy.

The growing number of fake AI-generated images can have an impact on elections. | Photo: Tada Images / Shutterstock.com

However, challenges arise when images undergo modifications, such as changes in hue, which decrease the tool’s performance. Sandhini Agarwal, a policy researcher at OpenAI, highlighted ongoing efforts to address these challenges by collaborating with external researchers.
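To make that weakness concrete, below is a minimal sketch of the kind of hue modification the article describes, using Pillow and NumPy. OpenAI’s classifier is not public, so this only illustrates the perturbation itself, not the detection; the filename dalle3_sample.png and the offset value are hypothetical.

```python
# A minimal sketch of a hue shift: the content stays visually intact,
# but simple colour changes like this reportedly degrade the
# classifier's performance. The input file is hypothetical.
import numpy as np
from PIL import Image

def shift_hue(image: Image.Image, offset: int = 32) -> Image.Image:
    """Rotate the hue channel by `offset` (0-255 scale), keeping saturation and value."""
    hsv = np.array(image.convert("HSV"))
    hsv[..., 0] = (hsv[..., 0].astype(np.uint16) + offset) % 256
    return Image.fromarray(hsv.astype(np.uint8), mode="HSV").convert("RGB")

original = Image.open("dalle3_sample.png").convert("RGB")  # hypothetical file
modified = shift_hue(original, offset=32)
modified.save("dalle3_sample_hue_shifted.png")
```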

Unlike some detection methods that rely on watermarks, OpenAI’s classification tool does not depend on signatures embedded in AI-generated images, which can be stripped out.

While the tool is effective at discerning DALL-E 3 images, it struggles when evaluating images from competing AI products, Agarwal noted.

OpenAI acknowledges that the tool may incorrectly flag non-AI-generated images as DALL-E 3 creations in rare instances, about 0.5% of the time.
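For a sense of what the roughly 98% detection rate and roughly 0.5% false-positive rate imply together, here is a back-of-the-envelope precision calculation in Python. The 10% base rate of DALL-E 3 images in the pool is an assumption chosen purely for illustration.

```python
# Bayes' rule applied to the figures reported in this article:
# ~98% true-positive rate on unmodified DALL-E 3 images and
# ~0.5% false-positive rate on non-AI images. The base rate below
# is an assumption for illustration, not a reported number.

tpr = 0.98        # P(flagged | DALL-E 3 image), per OpenAI
fpr = 0.005       # P(flagged | non-AI image), per OpenAI
base_rate = 0.10  # assumed share of DALL-E 3 images in the pool

flagged = tpr * base_rate + fpr * (1 - base_rate)
precision = tpr * base_rate / flagged  # P(DALL-E 3 | flagged)

print(f"P(image is flagged)         = {flagged:.3f}")    # ~0.103
print(f"P(DALL-E 3 | flagged image) = {precision:.3f}")  # ~0.956
```

Even with a strong classifier, the share of flagged images that are genuinely AI-generated depends heavily on how common such images are in the pool being scanned.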

Researchers also note that determining whether an image was generated by AI is less complex than making the same assessment for text. OpenAI is working towards this end by refining its text-detection tools.

Beyond images and text, AI-generated content has spread into advertising as well. In April 2024, it was reported that AI-generated NSFW ads were being shown on popular social media platforms such as Facebook, Instagram, and Messenger.

Furthermore, AI-generated images of children are proliferating across the internet, according to an October 2023 report by the Internet Watch Foundation. In June, reports emerged that paedophiles are shifting to AI-generated CSAM.

The introduction of OpenAI’s image detection tool marks a significant step forward in addressing the proliferation of manipulated visuals.

In the News: Google Threat Intelligence integrates Gemini, Mandiant and VirusTotal

Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: kumarhemant@pm.me
