
OpenAI disrupts five misinformation campaigns that misused its AI models


Photo: Koshiro K/Shutterstock.com

Over the past three months, OpenAI has successfully disrupted five covert influence operations orchestrated by actors from Russia, China, Iran, and Israel and designed to manipulate public opinion and influence political outcomes.

Two operations dubbed ‘Bad Grammar’ and ‘Doppelganger’ originated in Russia. Through Telegram, the first operation primarily targeted Ukraine, Moldova, the Baltic States, and the United States. The actors behind this campaign used OpenAI’s models to debug code for running Telegram bots and generate short political comments in Russian and English. These comments were then disseminated across various Telegram channels, attempting to sway public opinion with AI-generated content.

‘Doppelganger’, on the other hand, utilised AI to produce comments in several languages, including English, French, German, Italian, and Polish. These comments were posted on platforms such as X and 9GAG. Additionally, Doppelganger operatives used AI to translate and edit articles, generate headlines, and convert news articles into Facebook posts, all aimed at spreading disinformation across a broad spectrum of online platforms.

The Chinese network ‘Spamouflage’ exploited OpenAI’s models for various tasks, including researching public social media activity and generating texts in Chinese, English, Japanese, and Korean. These texts were posted across platforms like X, Medium, and Blogspot.

Moreover, Spamouflage actors used AI to debug code for managing databases and websites, including a previously unreported domain, revealscum[.]com, indicating a sophisticated approach to covert operations.

Iran’s ‘International Union of Virtual Media’ (IUVM) leveraged AI to generate and translate long-form articles, headlines, and website tags. These were published on websites linked to Iranian threat actors. IUVM’s use of AI facilitated extensive disinformation campaigns targeting various geopolitical issues.

Although cybercrooks attempted to sway public opinion with AI-generated posts, OpenAI found that these posts failed to garner much attention.

A commercial operation in Israel, nicknamed ‘Zero Zeno’, saw actors using AI to generate articles and comments that were posted across multiple platforms, including Instagram, Facebook, X, and affiliated websites. Despite their efforts, the actors behind Zero Zeno failed to achieve significant engagement, highlighting the limitations of AI-generated content when faced with robust defensive measures.

“The content posted by these various operations focused on a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments,” said OpenAI.

OpenAI found that these operations did not rely solely on AI-generated content. Rather, they combined AI content with traditional formats, such as manually written texts and memes, to create a more diverse content mix. Some networks also created the appearance of engagement by generating replies to their own posts. This tactic aimed to mimic genuine interaction, although none of the operations attracted authentic engagement.

“So far, these operations do not appear to have benefited from meaningfully increased audience engagement or reach as a result of our services,” explained the company.

Last month, reports showed that AI-generated NSFW ads are increasingly being distributed across Meta’s platforms.

In the News: Google blames ‘misinterpreted queries’ for AI overview mistakes

Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: kumarhemant@pm.me
