Tech giants Google and Meta joined forces on an advertising campaign aimed at teenagers on YouTube in Canada and the United States, circumventing their own policies designed to safeguard minors online.
The campaign aimed to promote Instagram to 13- to 17-year-old users via advertising, the Financial Times reports. Notably, both companies speak publicly about protecting teenage wellbeing and prohibit personalised advertising to users under 18.
The campaign ran from February to April in Canada and extended to the US in May. The ads targeted a group of users labelled 'unknown', meaning users whose age Google's systems could not determine. This labelling allowed the companies to bypass rules prohibiting personalised ads to minors, as well as policies against proxy targeting.
It was later revealed that the group was predominantly composed of individuals under 18.
While the manoeuvre was undeniably inventive, its ethical implications have been questioned. Documents reveal that Google took deliberate steps to mask the campaign's true intent. The collaboration was already underway when Meta CEO Mark Zuckerberg testified before Congress, addressing concerns about child exploitation on his platforms.
Despite this, the campaign continued, driven by Meta’s need to reclaim the attention of younger users from rising competitors like TikTok.
The campaign was orchestrated by Spark Foundry, a subsidiary of the advertising conglomerate Publicis, under the code name 'Meta IG Connects.' The initiative was part of Meta's strategy to attract more Gen Z users to Instagram, which has been losing its foothold among teenagers.
Spark Foundry solicited Google’s involvement, emphasising the need to target 13- to 17-year-olds and to collect data directly from these viewers.
The timing of this revelation could hardly be more pointed: the United States Senate recently passed the Kids Online Safety Act, which aims to protect minors from harmful online content.
Senator Marsha Blackburn, a supporter of the bill, voiced her shock and anger at the news, stressing the need for strong regulation of the tech industry to prevent the potential exploitation of children for financial gain. The revelation has reignited the debate over Big Tech companies’ ethical practices and their impact on young people.
Upon learning about the Financial Times investigation, Google initiated an internal review and subsequently cancelled the project. A Google spokesperson stated, “We prohibit ads being personalised to people under 18, period.”
The company acknowledged using the 'unknown' group but maintained that no registered users under 18 were directly targeted.
Meta, on the other hand, defended its actions, claiming adherence to both its own advertising policies and those of its peers. However, the company did not comment on whether its staff knew about the underage skew of the 'unknown' group.
“We’ve been open about marketing our apps to young people as a place for them to connect with friends, find community, and discover their interests,” a Meta spokesperson told FT.
In October last year, it was found that AI-generated images of children were proliferating on the internet.
In June 2024, it was reported that Instagram was showing explicit content to children within minutes of their logging in. Reports also surfaced in July that Meta was running ads for cocaine and opioids on its platforms.
In April, AI-generated NSFW ads were shown to people on Facebook, Instagram, and Messenger.