Microsoft’s Bing search engine has joined Facebook, Instagram, Threads, TikTok, Snapchat, Reddit, Pornhub, and OnlyFans on the growing list of major platforms that use a digital fingerprint, or hash, of an image to protect users from the viral spread of revenge porn. Microsoft announced the move as a partnership with StopNCII (Stop Non-Consensual Intimate Image Abuse).
The hash is generated on the victim’s device, so the intimate image itself never leaves their hands; StopNCII’s partner platforms then use the hash to identify and remove matching copies. The tool is available on StopNCII’s website.
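The matching relies on perceptual hashing, where visually similar images produce similar fingerprints. The sketch below illustrates the general idea using a simple average hash; it is not StopNCII’s actual algorithm (the service is reportedly built on Meta’s more robust open-source PDQ hasher), and the file names and match threshold are purely illustrative.

```python
# Minimal perceptual-hash sketch (average hash, or "aHash").
# Illustrative only -- StopNCII reportedly uses Meta's PDQ algorithm,
# which is far more robust to crops, filters, and re-encoding.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to an 8x8 grayscale grid and build a 64-bit
    fingerprint: each bit is 1 if that pixel is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance means near-identical images."""
    return bin(a ^ b).count("1")


# Hypothetical usage: the hash is computed on the victim's device, and only
# the fingerprint -- never the image -- is shared with partner platforms.
victim_hash = average_hash("private_photo.jpg")
candidate_hash = average_hash("uploaded_photo.jpg")
if hamming_distance(victim_hash, candidate_hash) <= 5:  # illustrative threshold
    print("Likely match -- flag for review and removal")
```

Because platforms compare hashes rather than raw images, a match can be detected without the victim ever having to upload or re-share the intimate content.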
Microsoft highlighted the scale of the problem, revealing that through a pilot project with StopNCII’s database, Bing took action on 268,000 explicit images by the end of August. The company admitted that its previous strategy, which relied on direct user reporting, could not handle the sheer volume of cases.
Victims and experts expressed concerns that traditional reporting was slow and often ineffective in curbing the spread of images.
“We have heard concerns from victims, experts, and other stakeholders that user reporting alone may not scale effectively for impact or adequately address the risk that imagery can be accessed via search,” Microsoft stated.
However, the company said it will continue to remove content reported by users and other entities.
“We will continue to remove content reported directly to us on a global basis, as well as where violative content is flagged to us by NGOs and other parties,” Microsoft said. “In search, we also continue to take a range of measures to demote low quality content and to elevate authoritative sources while considering how we can further evolve our approach in response to expert and external feedback.”
Meanwhile, Microsoft’s decision also shines a light on another industry giant: Google. Despite offering its own tools for reporting explicit images, Google has faced criticism for not partnering with StopNCII, TechCrunch reports. In March 2024, reports emerged that Google had received over 13,000 copyright complaints under the Digital Millennium Copyright Act (DMCA).
The problem of AI-generated deepfake nudes is not limited to adults. While StopNCII’s tools are only available to those aged 18 and over, so-called ‘undressing’ sites have started targeting high school students across the United States, adding to the complexity of the issue.
Malicious actors are also increasingly exploiting platforms such as YouTube, deploying celebrity deepfakes to promote Medicare scams.
In June, reports emerged that over 170 personal photos of Brazilian children had been used to train AI models. Concerns about AI-generated ads proliferating on Meta-owned platforms have also surfaced.