Photo: Trismegist san / Shutterstock.com
Hundreds of Americans have been arrested based on facial recognition software matches, often without knowing the technology was used to identify them. Police departments in 15 states have employed facial recognition in more than 1,000 investigations over the last four years. In most of those cases, however, defendants were never told that the technology played a role in their identification, limiting their ability to challenge the evidence.
This raises concerns about due process, transparency, and accuracy — particularly as studies have shown facial recognition algorithms are prone to errors, especially when identifying people of colour.
A detailed investigation by The Washington Post found that many police departments obscure the technology's role in identifying suspects, drawing criticism over the lack of transparency.
Officers frequently cited vague investigative methods or relied on ambiguous language, such as “through investigative means,” rather than explicitly mentioning facial recognition software in public records.
According to legal experts, this tactic denies defendants a crucial opportunity to question the technology’s accuracy and reliability in court.
For instance, in Evansville, Indiana, police claimed that they identified a man involved in an assault based on his distinctive tattoos, long hair, and previous jail-booking photos. In another case from Pflugerville, Texas, authorities noted in reports that they identified a suspect in a $12,500 theft from “investigative databases.”
However, internal police records indicate that both suspects were identified using facial recognition software — information that was withheld from the defendants and their attorneys.
Many police departments declined to answer questions about their use of the technology or failed to provide records, with only 30 departments responding to The Post’s requests for data. Others justified their secrecy, stating that facial recognition software provided investigative leads and was not the sole basis for an arrest.
In Florida, the Coral Springs Police Department reportedly instructs its officers to avoid disclosing facial recognition in their written reports. Ryan Gallagher, an official with the department, has said that investigative techniques like facial recognition are exempt from public disclosure laws. However, critics argue that this practice undermines defendants' rights and allows police to shield their use of AI tools from judicial scrutiny.
Facial recognition's error-proneness, particularly when identifying people of colour, raises the risk of wrongful arrests. As The Post's investigation details, at least six of the seven Americans wrongfully arrested due to misidentification were black.
The opaque use of facial recognition software has also drawn criticism from lawmakers. U.S. Senator Cory Booker has called for legislation mandating disclosure when AI technology leads to criminal charges, citing concerns over due process and fair trials.
However, no federal regulations currently govern how facial recognition should be used or disclosed in law enforcement.
Civil rights groups are increasingly pushing for greater transparency regarding the use of facial recognition software in criminal investigations. Under the ‘Brady’ rule, a landmark Supreme Court ruling from 1963, prosecutors are required to disclose evidence that could prove a defendant’s innocence or challenge the credibility of a witness.
Yet, the legal standing of facial recognition as evidence remains a grey area, with no federal laws regulating its use.
Some courts have begun to address the issue: an appeals court in New Jersey ruled that defendants have the right to know when facial recognition technology was used in their case.
The increasing use of facial recognition technology in police investigations has raised serious concerns about balancing the fight against crime with the protection of civil liberties.
In India, too, the state of Telangana has rapidly deployed facial recognition technology, while Chennai, the capital of Tamil Nadu, has raised eyebrows as it upgrades its surveillance network. Similarly, Israel expanded its use of facial recognition technology in Gaza in March 2024.