
AI-generated misinformation under 1% in major elections: Meta

  • by Kumar Hemant
  • 2 min read

Meta has provided insights into its efforts to mitigate the risks of artificial intelligence-enabled disinformation campaigns in the 2024 elections. Contrary to fears, the company reports that the impact of such technologies on the electoral process remained limited and manageable, accounting for less than one percent of all fact-checked misinformation.

This year, elections were held in India, the United States, the United Kingdom, Bangladesh, Indonesia, Pakistan, France, South Africa, Mexico, and Brazil.

Despite isolated incidents, Meta claims that its existing policies and enforcement mechanisms proved effective in curbing the spread of misleading AI-generated content.

“During the election period in the major elections listed above, ratings on AI content related to elections, politics and social topics represented less than 1% of all fact-checked misinformation,” Meta reports.

Meta's AI image generator rejected 590,000 requests to create images of prominent United States figures, including President-elect Trump, Vice President-elect Vance, Vice President Harris, President Biden, and Governor Walz.

Meta also scrutinised the use of generative AI by Coordinated Inauthentic Behaviour (CIB) networks — organised groups seeking to manipulate public opinion.

“We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI,” said the company. “This has not impeded our ability to disrupt these influence operations because we focus on behaviour when we investigate and take down these campaigns, not on the content they post – whether created with AI or not.”

In May 2024, OpenAI disrupted five disinformation campaigns operating in Russia, China, Iran, and Israel, designed to manipulate public opinion.

In July, Google announced it would require advertisers to disclose any election ads employing digitally altered content. In April 2024, Microsoft warned India, South Korea, and the United States that threat actors from China or North Korea might target their elections.


Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: kumarhemant@pm.me
