IBM Quits Facial-Recognition Over Bias Claims

IBM is exiting its facial recognition business over concerns of possible human rights violations and racial biases.

Facial recognition software has not had the best of reputations.

Amid racial bias issues and privacy concerns, IBM has announced that it will no longer offer general-purpose facial recognition or analysis software. IBM CEO Arvind Krishna said in a letter that the company would no longer develop or research the technology.

“IBM firmly opposes and will not condone the use of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter. “We believe now is the time to begin a national dialogue on whether and how domestic law enforcement agencies should employ facial recognition technology.”

Facial recognition software has improved significantly over the last decade, thanks to advances in artificial intelligence. At the same time, the technology, which is often provided by private companies with little regulation or federal oversight, has been shown to suffer from bias along the lines of age, race, and ethnicity. This bias has made the tools unreliable for law enforcement and security, and ripe for potential civil rights abuses.

Various researchers have explored the extent to which many commercial facial recognition systems (including IBM’s) are biased, leading to mainstream criticism of the algorithms behind the technology and to ongoing attempts to rectify that bias.

A December 2019 National Institute of Standards and Technology (NIST) study found empirical evidence of wide variation in accuracy across demographic groups in the majority of the face recognition algorithms it evaluated. The technology has also come under fire for its role in privacy violations.

Notably, NIST’s study did not include technology from Amazon, which is one of the few major tech companies to sell facial recognition software to law enforcement. Amazon’s program, Rekognition, has nevertheless also been criticized for its accuracy.

In 2018, the American Civil Liberties Union found that Rekognition incorrectly matched 28 members of Congress to faces in a database of 25,000 public mugshots.

Another company, Clearview AI, came under heavy scrutiny in early 2020 when it was discovered that its facial recognition tool, built on more than 3 billion images compiled in part by scraping social media sites, was being widely used by private-sector companies and law enforcement agencies.

Facebook was also ordered in January to pay US$550 million to settle a class-action lawsuit over its unlawful use of facial recognition technology.

IBM has tried to help with the issue of bias in facial recognition, releasing a public data set in 2018 designed to help reduce bias as part of the training data for a facial recognition model.

But in January 2019, IBM was also found to be sharing a separate training data set of nearly one million photos taken from Flickr without the consent of the subjects, albeit under a Creative Commons license.

IBM said that the data set was accessible only to verified researchers and included only publicly available images. The company further noted that individuals can opt out of the data set.

In his letter, Krishna added that there is a need to create more open and equitable pathways for all to acquire marketable skills and training. He further suggested that Congress consider scaling the P-TECH school model nationally and expanding eligibility for Pell Grants.
