Meta, Instagram’s parent company, plans to implement an artificial intelligence (AI) system in 2025 to identify teenagers who misrepresent their age on the platform.
Social media platforms like Instagram, Facebook, Snapchat, and TikTok have long been under scrutiny for failing to protect children and teenagers from sextortion, declining mental health, and other harms such as compulsive scrolling.
Meta’s new ‘adult classifier’ tool, designed to estimate users’ ages, categorises them into two groups: over or under 18. This system analyses user behaviour patterns, such as followed networks, content interactions, and even birthday messages, to detect signs of age misrepresentation, reports Bloomberg.
The classifier will enable the platform to automatically move suspected under-18 users into more restrictive privacy settings, effectively placing them in “teen accounts.”
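As a rough illustration of how a behavioural age classifier of this kind could work, the sketch below trains a simple binary model on fabricated signals loosely inspired by those Bloomberg mentions (follow patterns, content interactions, birthday messages). The feature names, data, and model choice are assumptions made for illustration and say nothing about Meta’s actual implementation.

```python
# Minimal, hypothetical sketch of a behavioural "adult classifier".
# Features, data, and model choice are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [share of followed accounts that are teen-focused,
#            weekly interactions with teen-oriented content,
#            "happy 16th"-style birthday messages received this year]
X_train = np.array([
    [0.70, 25, 2],   # behaviour consistent with an under-18 user
    [0.65, 18, 1],
    [0.10,  3, 0],   # behaviour consistent with an adult user
    [0.05,  1, 0],
])
y_train = np.array([0, 0, 1, 1])  # 0 = under 18, 1 = adult

model = LogisticRegression().fit(X_train, y_train)

# Scoring a new account: a low "adult" probability would be the trigger
# for moving the account into the more restrictive teen settings.
new_account = np.array([[0.60, 20, 1]])
p_adult = model.predict_proba(new_account)[0, 1]
print(f"Estimated probability the account belongs to an adult: {p_adult:.2f}")
```

A production system would, of course, draw on far richer signals and more careful thresholds than this toy example.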
These accounts, launched in September, provide a safer environment by limiting message requests and access to certain content categories. Teens already registered with self-reported ages will be moved to these more secure accounts, but starting next year, the AI classifier will catch those who attempt to bypass the age restrictions by lying about their age on their profile.
Notably, users aged 16 or 17 can still adjust privacy settings, but younger users will require parental consent to alter restrictions.
To further deter underage access, Meta plans to cross-reference email addresses and unique device IDs when users create new accounts with questionable ages. Additionally, teens attempting to raise their account age must upload formal identification, such as a driver’s license, or complete a video verification through Yoti, a third-party service specialising in age estimation based on facial features.
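A hedged sketch of what that cross-referencing step at sign-up could look like is shown below; the data structures, threshold, and function name are hypothetical and not drawn from Meta’s actual system.

```python
# Hypothetical sketch: flagging a new sign-up whose claimed age conflicts
# with an email address or device ID previously tied to an under-18 account.
# All identifiers and logic here are illustrative assumptions.
known_teen_emails = {"teen_user@example.com"}
known_teen_devices = {"device-1234"}

def needs_age_verification(email: str, device_id: str, claimed_age: int) -> bool:
    """Return True if the sign-up should be asked for ID or video verification."""
    reused_identifier = email in known_teen_emails or device_id in known_teen_devices
    return claimed_age >= 18 and reused_identifier

# A new account claiming to be 19 but created on a device previously used
# by a teen account would be routed to the extra verification step.
print(needs_age_verification("new_user@example.com", "device-1234", 19))  # True
```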
Meta will also allow accounts inaccurately categorised as teenagers to appeal for reclassification in the future.