Jul 20
Doubtless readers of this blog are aware of the limitations of facial recognition software. Among them are its troubles with identifying and authenticating people of color.
And that’s no small number of people. Depending on whose definitions and statistics you accept, “people of color” describes some 70 percent of the world’s population. In the United States alone, we’re talking about 40 percent of the population, around 132 million people.
Facial recognition software typically has difficulty distinguishing very dark or very light skin and confuses women’s faces more often than men’s. “There are various reasons why facial recognition services might have a harder time identifying minorities and women compared with white men,” writes c|net’s Queenie Wong.
Public photos that tech workers use to train computers to recognize faces could include more white people than minorities, said Clare Garvie, a senior associate at Georgetown Law School’s Center on Privacy and Technology. If a company uses photos from a database of celebrities, for example, it would skew toward white people because minorities are underrepresented in Hollywood. Engineers at tech companies, which are made up of mostly white men, might also be unwittingly designing the facial recognition systems to work better at identifying certain races, Garvie said. Studies have shown that people have a harder time recognizing faces of another race and that “cross-race bias” could be spilling into artificial intelligence. Then there are challenges dealing with the lack of color contrast on darker skin, or with women using makeup to hide wrinkles or wearing their hair differently, she added.
MIT Technology Review reported that a US National Institute of Standards and Technology (NIST) test found, among other issues:
For one-to-one matching, most systems had a higher rate of false positive matches for Asian and African-American faces over Caucasian faces, sometimes by a factor of 10 or even 100. In other words, they were more likely to find a match when there wasn’t one … Algorithms developed in the US were all consistently bad at matching Asian, African-American, and Native American faces. Native Americans suffered the highest false positive rates.
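To make “false positive” concrete: in one-to-one matching, the system compares two face images and reports whether they show the same person, so an impostor pair (two different people) that scores above the decision threshold counts as a false positive. Here is a minimal Python sketch of how a per-group false positive rate could be computed. The similarity scores are entirely synthetic, invented only to illustrate the order-of-magnitude disparity NIST describes; nothing here reproduces NIST’s actual data or methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

def false_positive_rate(impostor_scores, threshold):
    """Fraction of impostor (different-person) comparisons whose similarity
    score clears the threshold, i.e., matches declared where none exists."""
    return float(np.mean(impostor_scores >= threshold))

# Synthetic impostor similarity scores for two hypothetical demographic
# groups. The means are made up purely to show how a shifted score
# distribution inflates false positives at a fixed threshold.
group_a = rng.normal(0.30, 0.10, 100_000)  # well-represented in training data
group_b = rng.normal(0.42, 0.10, 100_000)  # under-represented in training data

threshold = 0.60
print(f"group A FPR: {false_positive_rate(group_a, threshold):.4%}")  # ~0.13%
print(f"group B FPR: {false_positive_rate(group_b, threshold):.4%}")  # ~3.6%
```

With one threshold applied to everyone, the group whose impostor scores run higher ends up with a false positive rate more than an order of magnitude worse, which is exactly the pattern NIST reported.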
False arrest
Consequences range from the inconvenience of not being able to open your smartphone, to your bank’s proprietary app having trouble recognizing authorized users …
… to being cuffed in front of your family, dragged to jail, and held for 30 hours because a Detroit police force’s algorithm misidentified you.
Which is exactly what happened to Robert Julian-Borchak Williams earlier this year. According to NPR,
Civil rights experts say Williams is the first documented example in the U.S. of someone being wrongfully arrested based on a false hit produced by facial recognition technology. What makes Williams’ case extraordinary is that police admitted that facial recognition technology, conducted by Michigan State Police in a crime lab at the request of the Detroit Police Department, prompted the arrest, according to charging documents reviewed by NPR.
Perhaps spurred by the Williams case and the Black Lives Matter movement, last month IBM, Amazon, and Microsoft each decided to stop, or at least pause, selling facial recognition technology to police in the United States.
Do bankers have it easier than police?
It’s fortunate for the financial services industry that scary facial recognition headlines tend to focus on law enforcement. Harmon Leon, writing for Observer, quotes Trueface co-founder and CEO Shaun Moore, who suggests that a bank’s task is easier.
“The impact of this hurdle plays more of a role when it comes to recognizing one person out of many: thousands or millions,” he stated. “Typically with account authentication, the database we scan is few or one-to-one, making this a non-issue.”
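Moore’s distinction is between verification (one-to-one: does this face match the single enrolled account holder?) and identification (one-to-many: who, out of thousands, is this?). Here is a short, hypothetical Python sketch of the difference; the cosine-similarity embeddings, the threshold, and the function names are my assumptions for illustration, not Trueface’s implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face embeddings (higher means more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.6):
    """1:1 authentication: one comparison against the account holder's template."""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.6):
    """1:N identification: search an entire gallery for the best-scoring match."""
    best_name, best_score = None, -1.0
    for name, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)
```

The asymmetry matters for error rates: verification makes exactly one comparison per login, while identification makes one comparison per enrolled face, so every additional gallery entry is another chance for demographically skewed scores to surface as a false match. That is why Moore can call bias a “non-issue” for account authentication while the same algorithm remains risky in police-style searches.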
American Banker was not quite so sanguine:
As consumer advocates, state authorities and national lawmakers line up in protest against facial-recognition technology, banks using it to let customers log in to mobile banking may need to brace for a fight …
Besides privacy, civil liberties, and security issues, American Banker says the ACLU has expressed concern that facial recognition software can be fooled simply by holding up a photo. It’s not implausible. As I reported six months ago, San Diego artificial intelligence company Kneron tested whether photos could fool such systems. Fortune reported:
At the self-boarding terminal in Schiphol Airport, the Netherlands’ largest airport, the Kneron team tricked the sensor with just a photo on a phone screen. The team also says it was able to gain access in this way to rail stations in China where commuters use facial recognition to pay their fare and board trains.
As for banking, Fortune continued,
… in stores in Asia—where facial recognition technology is deployed widely—the Kneron team used high quality 3-D masks to deceive AliPay and WeChat payment systems in order to make purchases.
Not to be overlooked is banks’ indirect use of facial recognition software, as anyone who has accessed an account through a newer smartphone knows.
Apple, Google, and others are not oblivious and are working to make facial recognition software more, shall we say, fair-minded. Good. As I have also written in this blog, it’s essential that we build AIs, facial recognition software and algorithms included, bias-free.