Facial-recognition backlash brews after fury over police conduct

Silkie Carlo, left, demonstrates in front of a mobile police facial recognition facility outside a shopping centre in London. File picture: AP Photo/Kelvin Chan

Published Jun 29, 2020

When the American Civil Liberties Union ran a test of Amazon.com Inc.'s facial recognition technology, the software falsely matched 28 members of Congress, many of them minorities, with mugshots of arrested people from police files.

Now, some of those lawmakers are drafting new legislation to curtail the use of facial recognition by police departments and government agencies. They're looking to harness the public outrage over police misconduct and racial inequities, which have also put tech companies on the defensive over their sales of these products.

Civil-rights advocates have long complained that facial recognition tools promote bias by misidentifying people of color. But it's taken the widespread anger and sorrow over the death in Minneapolis police custody of George Floyd, an African American man, to galvanize the debate.

"What I've learned is when there's public sentiment around an issue, you can move mountains," said Rep. Jimmy Gomez, a California Democrat who was among those falsely identified in the ACLU test and who is drafting a measure that addresses the accuracy concerns. "People are acknowledging that there is implicit bias in policing, and that leads to interactions that - for people of color - are harmful and can lead to death."

Lawmakers have introduced at least a dozen bills in the last year - including several since Floyd's death - and some are drafting new measures to curtail use of facial recognition by government agencies. But passing any of them won't be easy, with Congress hitting a deadlock this week over a broader police overhaul.

Big Tech is also rushing to catch up with the national reckoning over race. While Amazon contests the ACLU study, it recently announced it would stop, at least temporarily, selling its facial recognition technology to police forces. Microsoft Corp. has said the same. International Business Machines Corp. said it's permanently ending sales of the software. Alphabet Inc.'s Google stopped selling it in 2018.

Yet facial recognition software is already in wide use. The tech giants face formidable competition from smaller providers, such as Ayonix Corp. and Clearview AI Inc., which got a head start selling face-recognition products. So far, they haven't joined the moratorium and could end up controlling a market now estimated at more than $3 billion that is expected to more than double by 2024.

Widespread adoption of facial recognition tools could make the legislative debate rancorous if lawmakers try to limit their use - not just slow or stop future sales.

At least one fourth of all state and local police departments have access to a facial recognition system, according to a 2016 study by Georgetown University's Center on Privacy & Technology. Schools have installed it to secure their buildings. Homeowners have installed cameras around their properties that are often connected to police departments, which use facial recognition software to identify potential intruders. Retailers use it to alert security about possible shoplifters.

Federal and local law enforcement use facial recognition to track suspects, find missing persons and keep tabs on protesters. The Federal Bureau of Investigation allows state and local police forces to search a database of more than 30 million photos. Federal agencies such as Customs and Border Protection and the Transportation Security Administration have been experimenting with facial recognition to automate passenger verification.

While civil libertarians and a wide range of civil-rights groups want strict guardrails, some law-enforcement agencies and security professionals caution against moving too quickly. They say facial recognition in combination with human review is a potent tool for law enforcement.

The Security Industry Association, a trade group for companies that sell security services, the U.S. Chamber of Commerce, the tech trade group NetChoice and others have called on Congress to mandate uniform facial recognition rules that would override local laws but not impose a full moratorium on police use.

A recent case in Michigan shows the danger of police departments relying too much on the technology. The ACLU alleged in a complaint on Wednesday that the Detroit Police Department wrongfully arrested a man after a false positive by a facial recognition system.

The software incorrectly matched the man, who is African American, with a grainy still image from a surveillance video showing a man allegedly shoplifting watches from a boutique. But the police did little investigating beyond uploading the surveillance photo, the ACLU said. The case, which has since been dismissed, may be the first known example of a wrongful arrest caused by the technology, according to the ACLU.

Social activists want Congress to think big, and not just agree to a temporary time-out. "Lawmakers need to do their job and protect the public from a technology that research shows is not only racially biased but also just deeply invasive," said Evan Greer, deputy director of Fight for the Future, a digital rights advocacy group.

Democrats and Republicans on the House Oversight and Reform Committee have been negotiating a broad bill that would limit all government use of facial recognition, but talks stalled after the death of Rep. Elijah Cummings of Maryland, the panel's chairman and the measure's biggest proponent.

Cummings' successor, Carolyn Maloney, a New York Democrat, said she hopes to have legislation ready in the coming weeks. One draft of the bill's text, obtained by Bloomberg, would place a one-year moratorium on new uses of facial recognition technology by federal agencies and create an advisory committee to study how those agencies use it.

On Thursday, two Democratic senators, Edward Markey of Massachusetts and Jeff Merkley of Oregon, joined two Democratic House members, Ayanna Pressley of Massachusetts and Pramila Jayapal of Washington state, on a bill that would stop all government use of biometric technology, including facial recognition. The bill would also require state and local police forces to enact their own face-recognition moratoriums to get federal funds.

In the Senate, Chris Coons, a Delaware Democrat, and Mike Lee, a Utah Republican, are proposing that federal law-enforcement agencies obtain a warrant before using face-recognition technology to track an individual for more than three days.

"I was looking for a midpoint that I could get any Republican to agree to," said Coons, who added that he's open to stricter curbs. "Let's at least start the conversation and then see what the range of options are."

Big Tech's moratorium has potential loopholes. Neither Microsoft nor Amazon will say whether their moratoriums apply just to police departments or also to federal agencies. They also won't say whether they will pursue overseas sales or cut off existing customers' access to the software. Ring, the Amazon-owned doorbell camera maker, hasn't said whether it will stop giving police departments access to footage.

In Congress, the proposals that have gained the most traction wouldn't address the Michigan case. The sweeping police-overhaul bill that the Democratic-controlled House passed largely along party lines Thursday night would only bar federal law-enforcement agencies from using facial recognition on body cameras and vehicles, leaving state and local governments to set their own rules.

The Republican counter-proposal ignores facial recognition altogether.

If Congress doesn't act, a patchwork of state restrictions may force its hand. Facial recognition critics are flocking to state legislatures and city councils to urge them to pass laws limiting the technology. California, Oregon and Washington have adopted restrictions.

Police use of facial recognition was hardly controversial until recently. Just last year, IBM, Microsoft and Amazon talked about the benefits of giving law enforcement access to the technology as they eyed a burgeoning market, which is expected to grow from $3.2 billion in 2019 to $7 billion by 2024, according to research firm MarketsandMarkets.

Clearview AI claims it has already supplied its technology to 2,400 of the more than 18,000 U.S. police departments. Officers can upload a photo of a suspect, and the company's extensive database returns other photos scraped from LinkedIn, Facebook Inc., Twitter Inc. and other social media platforms, plus links to websites that mention the person.
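
How such a search can go astray is easier to see with a generic sketch of the underlying technique. Face-matching systems typically reduce each photo to a numeric "embedding" and return gallery faces whose embeddings score above a similarity threshold. The snippet below is a simplified Python illustration under those assumptions, not Clearview's or any vendor's actual code; names such as find_candidates and the threshold value are hypothetical. It shows how a noisy probe image or a loose threshold can surface false matches of the kind at issue in the Michigan case.

```python
import numpy as np

# Generic illustration of embedding-based face matching (hypothetical names,
# not any vendor's actual code). Each face photo is reduced to a numeric
# "embedding" vector; a probe image is matched by ranking gallery embeddings
# by cosine similarity and returning everything above a threshold.

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_candidates(probe, gallery, threshold=0.6):
    """Return (name, score) pairs for gallery faces similar to the probe."""
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    # A lower threshold, or a noisy, low-quality probe image, returns more
    # candidates -- and more false positives, the failure mode in the
    # wrongful-arrest case described above.
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 1,000 simulated 128-dimensional face embeddings stand in for a gallery.
    gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
    # Simulate a grainy probe photo of person_42 by adding noise to its embedding.
    probe = gallery["person_42"] + rng.normal(scale=0.5, size=128)
    print(find_candidates(probe, gallery)[:3])
```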

The rapid growth of the program and the data the company has amassed led Markey to question whether the company is helping the government monitor protesters. Clearview Chief Executive Officer Hoan Ton-That said in a statement that the tool is intended to be used to help investigate crimes and "not as a surveillance tool relating to protests or under any other circumstances."

Civil rights activists cite research documenting problems identifying minorities. A 2018 study by researchers from MIT Media Lab and Microsoft showed software from IBM, Microsoft and Face++, a Chinese-developed product, performing worse on darker-skinned people, and especially women, than on whites.

Last year, researchers found similar issues with Amazon's Rekognition software. Amazon questioned the methodology and said its own tests found better results.

"With inaccurate systems, it leaves the harms and the consequences of abuse to fall more disproportionately on people who are already marginalized and vulnerable," said Joy Buolamwini, an MIT Media Lab researcher who worked on the studies. "We have to move the conversation beyond accuracy and look at potential for abuse as well."

Bloomberg
