In Detroit, where facial recognition software is used in police investigations, the software fails “96% of the time,” according to Detroit Police Chief James Craig.
Craig said as much during a public meeting on Monday, Vice reported. “If we were just to use the technology by itself, to identify someone, I would say 96% of the time it would misidentify,” Craig said.
Police across the United States use facial recognition software, though several major cities have outright banned its use.
Facial recognition software used by police to identify potential suspects is wildly inaccurate, according to Detroit Police Chief James Craig.
“If we were just to use the technology by itself, to identify someone, I would say 96% of the time it would misidentify,” Craig said in a public meeting on Monday, Vice reported. “If we would use the software only, we would not solve the case 95-97% of the time. That’s if we relied totally on the software, which would be against our current policy.”
Last week, the New York Times reported what may have been the first known case of a man being wrongfully arrested — in Detroit — after being misidentified by facial recognition software. Robert Julian-Borchak Williams was detained by Detroit police for 30 hours. Williams pointed out that the suspect on CCTV footage did not look like him, and a detective responded: “I guess the computer got it wrong.”
The city of Detroit uses facial recognition software developed by a company named DataWorks Plus, which said that facial recognition tech isn’t intended as the sole way of identifying potential suspects.
The system doesn’t “bring back a single candidate,” DataWorks Plus general manager Todd Pastorini told Vice. “It’s hundreds. They are weighted just like a fingerprint system based on the probe.”
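Pastorini's description — hundreds of weighted candidates rather than one answer — matches how gallery search generally works in face recognition systems: every enrolled face is scored against the probe image and the results are returned as a ranked list. The sketch below is purely illustrative (the function names, the toy three-number "embeddings," and cosine similarity as the scoring method are all assumptions for demonstration; it is not DataWorks Plus code):

```python
# Illustrative sketch of a ranked gallery search: score every enrolled
# face against the probe and return a weighted candidate list, not a
# single "match". All names and vectors here are made up.
import math

def cosine_similarity(a, b):
    # Higher value = more similar face embeddings (1.0 is identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rank_candidates(probe, gallery, top_k=3):
    """Score every gallery face against the probe embedding and return
    the top_k candidates, highest similarity first."""
    scored = [(name, cosine_similarity(probe, vec)) for name, vec in gallery.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy gallery of enrolled faces (hypothetical 3-dimensional embeddings).
gallery = {
    "candidate_a": [0.90, 0.10, 0.30],
    "candidate_b": [0.20, 0.80, 0.50],
    "candidate_c": [0.85, 0.15, 0.35],
}
probe = [0.88, 0.12, 0.32]

# Even the top-ranked result is only a weighted lead for an investigator
# to verify, not an identification.
for name, score in rank_candidates(probe, gallery):
    print(f"{name}: {score:.4f}")
```

Note that nothing in the output distinguishes a correct lead from an incorrect one — which is why a policy of relying on the software alone, as Craig described, fails so often.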
Across the US, police departments are using facial recognition software developed by a variety of different companies.
Major tech players — including Amazon, IBM, and Microsoft — all have their own versions of facial recognition software for sale, though all three are re-evaluating its use amid nationwide anti-police brutality protests.
An under-the-radar tech startup named Clearview AI has taken a different approach: Its client list spans more than 2,200 law enforcement departments, government agencies, and companies across 27 countries.
Critics of facial recognition technology have argued for years that the technology is inaccurate and potentially dangerous.
“Facial recognition is a horrifying, inaccurate tool that fuels racial profiling + mass surveillance,” Rep. Alexandria Ocasio-Cortez said on June 10. “It regularly falsely ID’s Black + Brown people as criminal. It shouldn’t be anywhere near law enforcement.”
A federal study published in late 2019 found “empirical evidence” of racial bias in facial recognition software.
Across the board, the study found that facial recognition software produces “false positives” — inaccurate matches — far more often when the person is Asian or Black than when the person is white. “The team saw higher rates of false positives for Asian and African American faces relative to images of …”
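A "false positive" here means the software claims two photos show the same person when they do not. The toy sketch below (hypothetical numbers, not the study's data) shows how such a rate is computed per demographic group, which is the comparison the federal study made:

```python
# Illustrative sketch: computing a per-group false positive rate.
# A false positive is a claimed match between two different people.
# All trial data below is invented for demonstration.

def false_positive_rate(trials):
    """trials: list of (software_claimed_match, actually_same_person) booleans.
    Returns the fraction of non-matching pairs the software wrongly matched."""
    non_match_claims = [claimed for claimed, same in trials if not same]
    if not non_match_claims:
        return 0.0
    return sum(non_match_claims) / len(non_match_claims)

# Hypothetical comparison trials for two groups: each pair of photos
# shows two *different* people, so every True claim is a false positive.
group_trials = {
    "group_x": [(True, False), (False, False), (False, False), (False, False)],
    "group_y": [(True, False), (True, False), (False, False), (False, False)],
}

for group, trials in group_trials.items():
    print(f"{group}: false positive rate = {false_positive_rate(trials):.2f}")
```

A higher rate for one group than another — as the study found for Asian and Black faces — means that group is disproportionately exposed to wrongful identification, as in the Williams case above.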
Source: Business Insider – Tech