As a condition of a legal settlement with the Federal Trade Commission, Rite Aid has agreed to stop using AI-powered facial recognition technology in its shoplifting prevention efforts. The retail drugstore chain allegedly profiled Black, Latino, and Asian shoppers at higher rates than white shoppers. According to the FTC complaint, stores began using the AI-powered technology in 2012 to identify customers deemed likely to steal products. Employees reportedly received match alerts when those “Be on the Look Out” consumers entered stores. Trend data presented in the legal documents show that people of color were disproportionately and wrongly followed, harassed, and embarrassed in front of others.
“Shopping while Black” is a longstanding phenomenon that usually entails greeting Black shoppers with suspicion the moment they enter stores, following them around the entire time, and wrongly accusing them of stealing. Locking Black haircare and beauty products in glass cabinets while placing similar goods aimed at white buyers on open shelves, as Walmart did for many years, is another example of how retail stores discriminate against Black patrons. Rite Aid’s misuse of facial recognition technology is a troubling example of how AI further exposes Black people and other shoppers of color to what Michelle Dunlap and other scholars call retail racism.
Rashawn Ray, a sociology professor at the University of Maryland and senior fellow at The Brookings Institution, explains: “AI technologies often replicate existing inequalities because they are created by people and in spaces that lack diversity and inclusion to make the technology equitable. If the same stereotypes used to profile Black people in everyday encounters are put in algorithms, then we get facial recognition that stereotypes Black people just like another human would.”
Implicit biases are largely informed by implicit associations: mental shortcuts that lead people to unconsciously associate particular groups with particular characteristics, expectations, and behaviors. Implicit bias often plays out quite explicitly in retail environments. The problem is that store employees too often wrongly associate shoppers of color with patterns that shoplifting data don’t support. Most shoplifters in the U.S. are white, notes Shaun L. Gabbidon, Distinguished Professor of Criminal Justice at Penn State University Harrisburg. Despite this fact, “the racist deployment of the technology in mostly minority communities continues to perpetuate the false narrative around who the majority of shoplifters are.”
Costs associated with theft of goods pose serious financial risks to businesses. Given this, it would seem that store owners and managers would want to be sure they’re monitoring the right people. Yet while they and their employees are needlessly surveilling and harassing shoppers of color, white shoplifters often get away with in-store crimes without interference. AI privileges them. Meanwhile, stores forfeit revenue from humiliated shoppers of color who would’ve spent money had they not been treated like criminals. It’s worth noting that in some communities, namely those that are food and retail deserts, low-income residents of color have no choice but to patronize establishments in which they’re routinely subjected to such racism and abuse.
“Retailers need to think carefully and cautiously before utilizing AI to implement security measures in their stores,” says Cassi Pittman Claytor, associate professor of sociology at Case Western Reserve University. “Heightened surveillance never makes people want to spend their money. Time and time again, research has illustrated that AI is not only capable but quite competent in perpetuating racial inequities and reflecting racial biases that are endemic to our society. It is extremely naive to think that AI is ‘race-neutral’ or that adopting new technologies will eliminate persistent and society-wide problems like retail racism.”
AI is also likely to exacerbate racial profiling in other domains, such as policing. There is the highly publicized example of Porcha Woodruff, a Black woman in Detroit who was eight months pregnant when facial recognition technology erroneously matched her with someone who had committed a robbery and carjacking. Police wrongly arrested Woodruff at home in front of her two young daughters. Examples like this help explain why many Black Americans worry about and doubt the trustworthiness of AI-powered surveillance systems.
Pew Research Center data show that among all racial groups surveyed, Black respondents were least trusting of facial recognition technology in policing. Nearly half (48%) of Black respondents predicted that officers would misuse AI-powered technologies to surveil predominantly Black and Latino neighborhoods more often than other neighborhoods. Being misidentified by police on the streets and then mistakenly accused of stealing in stores doubly exposes people of color to extremely consequential, technology-enabled dangers.
Safiya U. Noble, the David O. Sears Presidential Endowed Chair at UCLA, is one of the world’s foremost experts on racist and sexist algorithmic harms. She contends, “racial profiling technologies like facial recognition and other pattern-recognition systems are marketed by Silicon Valley as hyper-convenient, but they are trained on historically racist and sexist data, and that, on its face, means that these technologies will continue to discriminate.” Noble, who directs the UCLA Center on Race and Digital Justice, predicts that “companies will continue to face serious consequences as they adopt these faulty systems that threaten both their bottom line and their brands.” She also says we should expect the FTC to continue protecting consumers from dangerous technologies.
“Because it is notoriously less accurate when used on women and people of color, retailers should be incredibly cautious about deploying facial recognition technology in any situation – but at the very least, it is imperative that the technology not be unequally deployed in certain neighborhoods,” advises Nicol Turner Lee, director of the Center for Technology Innovation at The Brookings Institution. “Generally speaking, retailers should shy away from ever using facial recognition technology as a predictive measure, like Rite Aid did, because the risk of biased inaccuracies makes racial profiling all too likely.”
Earlier this year, five U.S. senators co-signed a letter opposing the Transportation Security Administration’s (TSA) use of facial recognition technology at U.S. airports. In it, they cited a National Institute of Standards and Technology study based on 18 million photos of more than eight million Americans. Results showed that, in comparison to white men, Black and Asian people were up to 100 times more likely to be misidentified by facial recognition technology. As part of its settlement with the FTC, Rite Aid agreed to suspend its use of AI-powered surveillance systems for the next five years. Other retailers should do the same until the racial biases embedded in those technologies are rigorously and repeatedly tested for, and ultimately eliminated.