Google mandates AI disclosures in election ads

  • 3 min read

Google announced it would require advertisers to disclose any election ads that employ digitally altered content. This mandate forms part of an updated political content policy to enhance transparency and accountability in the political advertising space.

Under the revised policy, announced on Monday, advertisers must indicate if their campaign materials include “altered or synthetic content” by selecting a checkbox within their campaign settings. This step ensures that any digitally manipulated imagery or video used in election ads is easily identified.

Google’s policy updates address the increasing sophistication of generative AI, which can produce highly realistic text, images, and videos within seconds. The rapid advancement of these technologies has heightened concerns about their potential for misuse, particularly the creation of highly convincing deepfakes and manipulated content that can misrepresent individuals or events.

Meta Platforms, the parent company of Facebook and Instagram, has already implemented a policy requiring advertisers to disclose the use of AI or other digital tools in creating or altering political ads.

“Verified election advertisers in regions where verification is required must prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events. This disclosure must be clear and conspicuous and must be placed in a location where it is likely to be noticed by users. This policy applies to image, video, and audio content,” said the company.

Reuters reports that the new disclosure requirements will apply across various formats. Google will automatically generate in-ad disclosures for content displayed in feeds and Shorts on mobile devices and for in-stream content on computers and televisions.

For other ad formats, advertisers are responsible for ensuring that a “prominent disclosure” is visible to users. Furthermore, Google will tailor the language of these disclosures to fit the context of each advertisement.

However, ads in which the AI alteration is inconsequential or minor are exempt from disclosure.

“Ads that contain synthetic content altered or generated in such a way that is inconsequential to the claims made in the ad are exempt from these disclosure requirements. This includes editing techniques such as image resizing, cropping, colour or brightening corrections, defect correction (for example, “red eye” removal), or background edits that do not create realistic depictions of actual events,” explains Google.

The policy shift comes after several high-profile incidents involving AI-generated content. In May, OpenAI disrupted five misinformation campaigns orchestrated by threat actors to target elections in several countries. The scale of these campaigns shows how significantly new technologies can affect a country’s geopolitics and internal stability.

Furthermore, Europol reported last year that terrorists are harnessing AI and the metaverse for recruitment and planning.

Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: kumarhemant@pm.me

>