
Police facial recognition tech could misidentify people at protests, experts say

Even if you’re sitting at home on your couch, there’s a chance you could be arrested for attending a protest you never went to.

How? If the police force in your area is using any kind of facial recognition software to identify protesters, it’s possible you could be misidentified as one.

Most facial recognition software was trained primarily on white male faces, experts told Digital Trends, which means the probability of misidentification is much higher for anyone who is not a white man.

“Anytime you do facial recognition, it’s a best guess. It’s a probability score,” said David Harding, chief technology officer for ImageWare Systems, a cybersecurity firm that works with law enforcement on facial recognition. “Anytime you’re in an area where they [law enforcement or the government] are using facial recognition, you have to worry about being falsely matched to someone. Or what’s even worse, someone being falsely matched to you.”
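Harding’s point about probability scores can be made concrete with a minimal sketch. The snippet below is illustrative only: the 128-dimensional embeddings, the cosine-similarity measure, and the 0.6 threshold are hypothetical stand-ins, not any vendor’s actual system. What it shows is that a “match” is just a similarity number crossing a tunable cutoff, so an innocent face that happens to land near a suspect’s can be flagged.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two face embeddings; 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Hypothetical 128-dimensional face embeddings, as many systems produce.
suspect = rng.normal(size=128)
# A bystander whose embedding happens to land near the suspect's.
bystander = suspect + rng.normal(scale=0.4, size=128)

MATCH_THRESHOLD = 0.6  # tunable; a looser threshold means more false matches

score = cosine_similarity(suspect, bystander)
print(f"similarity score: {score:.2f}")
if score >= MATCH_THRESHOLD:
    print("flagged as a match -- a best guess, not proof of identity")

Moving that threshold only trades false matches for missed matches; no setting turns the score into a certainty.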

Deployed against protesters


Facial recognition has been gaining in popularity among law enforcement.

In Minnesota, where demonstrators have flooded the streets for days protesting the killing of George Floyd by police, officers are still using the controversial facial recognition software Clearview AI, according to digital security firm Surfshark.

Clearview AI came under fire earlier this year for scraping people’s photos from social media in violation of those companies’ terms of service. Several companies, including Twitter, sent Clearview cease-and-desist letters.

Surfshark told Digital Trends that in addition to Minnesota, several other states, including New York, are still using Clearview AI technology, and in Washington County, Minnesota, police are using Amazon’s Rekognition software.

But in a statement to Digital Trends, the Minneapolis Police Department denied that it possessed any facial recognition technology.

A peer-reviewed study from the Massachusetts Institute of Technology found that Rekognition performed far worse on female and dark-skinned faces than comparable services. The software misclassified women as men 19% of the time, The New York Times reported. The error rate climbed when skin color was taken into account: 31% of dark-skinned women were labeled as men.

“False results can incriminate the wrong people as FRT [facial recognition technology] is proven to be problematic while spotting criminals in a crowd,” Gabrielle Hermier, Surfshark’s media officer, said.
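The crowd problem Hermier describes is partly simple arithmetic. As a hypothetical illustration (the rates below are made up for the example, not drawn from the MIT study), even a small per-face false-match rate makes false matches all but certain once thousands of faces are scanned:

# Illustrative only: chance a crowd scan flags at least one innocent person,
# given a hypothetical per-face false-match rate p and n faces scanned.
p = 0.001  # 0.1% false-match rate per comparison (made-up figure)
for n in (100, 1_000, 10_000):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:>6} faces scanned -> {at_least_one:.1%} chance of a false match")

At 10,000 faces, a false match is a near certainty even at that optimistic error rate.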

A shaky system at best

A facial recognition system prone to false positives could cause innocent people to be arrested, according to Mutale Nkonde, a fellow at the Berkman Klein Center for Internet & Society at Harvard University and a non-resident fellow at the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society.

“Police will use the mug shots of people who have been committed for other crimes to train facial recognition, arguing that if you’ve committed one crime, then you’ve committed another,” Nkonde said. “First off, that’s unconstitutional. Second, that means that if you’ve been arrested for looting in the past, but haven’t looted recently, the police could now come arrest you for looting in May or June because your picture is in the system and it may have turned up a false positive.”

Harding said people who are not white and not male are “very much at risk” of misidentification. But he emphasized there’s a big difference between facial recognition used as the sole tool in mass surveillance and law enforcement using it in a controlled environment, alongside a mug shot, fingerprints, and other evidence, to find a specific suspect.

“Even if it were 100% accurate, this isn’t compatible with a democratic society,” said Saira Hussain, a staff attorney with the Electronic Frontier Foundation. “It’s always a possibility that someone will be misidentified.”

Dangerous precedents

Critics say the case of Eric Loomis looms large over facial recognition use.

Loomis was sentenced to six years in prison after a 2013 arrest, due in large part to a risk assessment produced by a private company’s software. The company that wrote the algorithm kept it proprietary, even as it was being used to help determine a defendant’s prison sentence, The New York Times reported at the time.

It’s a chilling precedent. Protesters and demonstrators have long feared police surveillance, in some cases for good reason. The Christian Science Monitor found evidence that in 2014, Chicago police tracked people through their cell phones during the Black Lives Matter protests that sprang up after the shooting of Michael Brown in Ferguson, Missouri.

It happened again in 2015, Hussain said, following the protests surrounding the death of Freddie Gray in Baltimore, also at the hands of police.

“Law enforcement used people’s social media posts as a tool to identify who was in a certain vicinity of the protest, identify the protesters, and then arrest them for unrelated charges,” Hussain said.

“People who are currently protesting are taking a stand on racial injustice. If they risk becoming subjects of state surveillance, we are leaning towards China, where [facial recognition technology] is a tool for authoritarian control,” Hermier wrote.
