File picture: Pixabay

Top facial recognition tech is thrown off by masks, says study

By The Washington Post | Published Jul 30, 2020

By Taylor Telford

Washington - Masks are confusing many commercial facial-recognition systems, a new study finds, leading to error rates as high as 50%.

A preliminary study published Monday by the National Institute of Standards and Technology found that facial-recognition algorithms could be tripped up by such variables as mask color and shape. But industry players are already working on software that adjusts for masks - a requirement in many public spaces to contain the spread of covid-19 - which NIST also plans to study this summer.

"With respect to accuracy with face masks, we expect the technology to continue to improve," said Mei Ngan, a NIST computer scientist and co-author of the report produced in collaboration with U.S. Customs and Border Protection and the Department of Homeland Security.

Ngan and other researchers tested how 89 top facial-recognition algorithms performed "one-to-one matching," which compares two photos of the same person - a common verification method for such tasks as unlocking a smartphone or checking a passport. They used more than 6 million pictures of a million individuals and added masks digitally, accounting for real-world variations by using a range of colors, shapes and nose coverage.
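In a one-to-one matching system of the kind the study tested, software typically reduces each photo to a numeric feature vector and accepts the pair only if the vectors are sufficiently similar. The sketch below is a hypothetical illustration of that comparison step; the similarity measure and threshold are assumptions for demonstration, not NIST's or any vendor's actual algorithm, and real systems first extract these vectors with a trained neural network.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length face-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(embedding_a, embedding_b, threshold=0.8):
    """One-to-one match: accept only if similarity clears the threshold.

    A mask occludes much of the face, which in practice shifts the
    embedding and can push a genuine pair below the threshold - the
    failure mode the study measured as an elevated error rate.
    """
    return cosine_similarity(embedding_a, embedding_b) >= threshold

# Illustrative vectors: identical features match, orthogonal ones do not.
same_person = verify([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])      # True
different = verify([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])        # False
```

Lowering the threshold admits more masked faces but also more impostors, which is the accuracy trade-off vendors adjusting for masks must navigate.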

Without masks, the top-performing algorithms usually have error rates of about 0.3%. But when the most accurate algorithms were confronted with the highest-coverage masks, error rates jumped to about 5%, researchers found.

"This is noteworthy given that around 70% of the face area is occluded by the mask," the report reads. "Some algorithms that are quite competitive with unmasked faces fail to authenticate between 20% and 50% of images."

Companies are rushing to develop software that can make identifications based only on facial features that are still visible with a mask, such as eyebrows - a challenge, given that such algorithms depend on getting as many data points as possible. Researchers have been combing social media for masked selfies to create data sets to train facial-recognition algorithms, CNET reported in May.

Though controversial, the use of facial-recognition software by federal and local investigators has become routine, turning the technology into a ubiquitous presence in people's lives, whether they are aware of it or not. Authorities harness it to scan hundreds of millions of Americans' photos, often drawing on state driver's license databases or booking photos. It's deployed to unlock cellphones, monitor crowded public venues and guard entrances to schools, workplaces and housing complexes.

Even retailers have made tentative steps in the arena. This week, a Reuters investigation found that Rite Aid had been quietly adding facial-recognition systems to its stores for eight years. The technology was installed in 200 locations, mostly in lower-income urban areas, in what the report called one of the largest such rollouts for an American retailer. The drugstore chain, after being presented with the findings, told the news organization the cameras had been turned off.

"This decision was in part based on a larger industry conversation," the company told Reuters in a statement, adding that "other large technology companies seem to be scaling back or rethinking their efforts around facial recognition given increasing uncertainty around the technology's utility."

A growing chorus of lawmakers and privacy advocates says the technology threatens to erode American protections against government surveillance and unlawful searches, and that inaccuracies in the systems could undermine criminal prosecutions, unfairly target people of color and lead to false arrests. In a landmark 2019 study, NIST found that facial-recognition systems misidentified people of color more often than White people: Asian and African American people were up to 100 times more likely to be misidentified than White men, depending on the particular algorithm and type of search.

In January, a Michigan man was wrongfully arrested based on a faulty facial-recognition match in the first known case of its kind, the New York Times reported. The case was later dismissed, and the county prosecutor's office said the man's case and fingerprint data could be expunged.

Some facial-recognition software makers are rethinking their relationship to the technology. IBM discontinued its facial-recognition software in June on the grounds that it promoted racism. The following day, Microsoft said it would stop selling its software to law enforcement until the technology is federally regulated. Soon after, Amazon, the largest provider of facial-recognition systems to law enforcement, said it would place a one-year moratorium on police use of the technology. (Amazon founder and chief executive Jeff Bezos owns The Washington Post.)

Last month, Democratic lawmakers introduced legislation that would ban federal agencies from using facial recognition and encourage state and local law enforcement to follow suit by making bans a requirement for certain grants. Also in June, Boston joined San Francisco in banning the use of facial recognition by law enforcement and city agencies.

The Washington Post