
Instagram’s algorithm recommends explicit content to kids


Instagram is under scrutiny for exposing teenage users to adult content within minutes of logging in. Despite Meta Platforms’ January promise to provide a safer, age-appropriate experience for teens by restricting sensitive content, tests showed that the platform still pushes such content to minors.

Investigations by The Wall Street Journal and Laura Edelson from Northeastern University have revealed that accounts created for 13-year-olds were quickly served increasingly explicit material, suggesting that Instagram’s content recommendation system fails to protect young users effectively.

Researchers found that this issue contrasts starkly with rival platforms like Snapchat and TikTok, which do not exhibit the same level of risk for underage users.

“All three platforms also say that there are differences in what content will be recommended to teens,” Edelson explained. “But even the adult experience on TikTok appears to have much less explicit content than the teen experience on Reels.”

The methodology involved creating new Instagram accounts listed as 13 years old and observing the content recommended through the platform’s Reels feature. From the outset, these accounts were shown moderately racy videos. When the accounts engaged with these videos, the recommendations quickly escalated to more explicit content, including promotions from adult content creators.

Within minutes of initial engagement, some accounts were bombarded with explicit material, and within 20 minutes, their feeds were dominated by such content.

Internal Meta documents reviewed by the Journal reveal that the company has long been aware of these issues. A 2022 internal analysis showed that young users on Instagram were exposed to more adult content, violence, and bullying than adults: teens saw three times as many posts containing nudity and 4.1 times as much bullying content as users over 30.

Meta’s automated systems to prevent inappropriate content from reaching minors have proven insufficient. An internal suggestion proposed building a separate recommendation system for teens, but this has not been pursued.

Meta-owned apps including Facebook and Instagram have been criticised for failing to protect children and teenagers.

Senior executives, including Instagram head Adam Mosseri, have expressed concerns about the disproportionate display of adult content to minors. Despite Meta’s policies stating that sexually suggestive content should not be recommended to users who do not follow the accounts posting it, the platform continues to fail to protect young users effectively.

Unlike Instagram, TikTok and Snapchat have shown better performance in shielding minors from explicit content. TikTok’s algorithms, for instance, are designed to err on the side of caution, often resulting in stricter content limitations for teens. Even when teen accounts on TikTok searched for or interacted with adult content, they rarely received such recommendations.

Edelson’s tests highlighted that Instagram’s adult and teen accounts received videos from adult content creators at similar rates, indicating minimal differentiation in content filtering for minors. In some instances, Instagram recommended videos labelled as ‘disturbing’ to teen accounts, contradicting Meta’s own policies.

Instagram has introduced several tools and resources designed to support teens and their parents, yet their effectiveness is still being questioned. Senior attorneys at Meta have highlighted the legal and regulatory risks tied to child safety concerns on Instagram and Facebook, advocating for increased investment in child safety measures.

In April, Instagram began blurring nude images in minors’ private messages. That same month, Meta’s Oversight Board took up two cases involving celebrity deepfakes.

A few months prior, it was reported that ads for AI-generated NSFW apps were proliferating across Meta-owned apps. In 2022, Facebook was reported to have a child predation problem.

In November 2023, big tech companies, including Discord, Google, Twitch, Quora, and Roblox, launched Lantern, a cross-platform signal-sharing program to enforce child safety policies.


Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: kumarhemant@pm.me
