Meta-owned social media platform Instagram is rolling out a safety feature that blurs nude images in direct messages sent to teenagers.
In an age when digital dangers like sextortion loom large, such measures are welcome, as they safeguard users, particularly teens, from falling prey to malicious activity.
Meta has been engaging with experts specialising in combating such crimes to gain insights into scammers’ modus operandi and develop effective countermeasures.
The new feature within Instagram’s Direct Messages (DMs) is driven by on-device machine learning “to help protect against sextortion and intimate image abuse”. It detects and blurs images containing nudity, shielding users from unsolicited content and deterring scammers who exploit nude imagery to coerce people into sharing sensitive photos.
This feature will be automatically activated for users under 18 globally, with an opt-in option for adults. Upon activation, senders of potentially sensitive photos will receive reminders to exercise caution, along with the ability to retract such images.
Recipients, in turn, will see blurred images accompanied by warnings that encourage thoughtful interactions and provide access to safety resources and guidance. Instagram will also offer recipients safety tips, such as warnings that senders may screenshot images and advice to review a profile carefully before interacting with the person.
The nudity protection feature runs entirely on-device, so Meta cannot access these images unless someone reports them.
Anyone who tries to forward a nude image will also receive a message from Instagram encouraging them to reconsider.
Furthermore, Instagram is developing technology to identify accounts potentially engaged in sextortion scams by analysing a range of indicators. When these signals suggest such activity, the platform will hide those accounts from teenagers.
In 2022, reports described Facebook as a haven for child predators, confirming earlier findings to the same effect. Facebook also removed several million posts targeting children from the platform.
Social media platforms affect teens’ mental health and can foster internet addiction. Countries like the United Kingdom have tightened online regulation through the Online Safety Bill to protect children and young adults from adult content.
Companies like Google are trying to control the sudden surge of deepfake content. A report by the Internet Watch Foundation highlighted that AI-generated images of children are proliferating on the internet.
Facial recognition search engine Pimeyes banned searches of minors on its platform to counter this menace.
As social media gains more ground, it falls to stakeholders such as parents, teachers, and institutions to fight back by educating children about its potentially harmful consequences.