IBM CEO Arvind Krishna has announced that the company will end all of its facial recognition business following criticism over privacy concerns and racial bias.
Even with the advances made in recent years, the technology still demonstrates bias. This has been attributed, in part, to the fact that the technology is provided by private companies subject to little regulation or federal oversight, making it an unreliable tool for law enforcement and security and heightening the potential for civil rights abuses.
In a letter reported by CNBC, Krishna explained the company’s decision to end all of its facial recognition as a service, stating:
“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
The letter coincides with the Justice in Policing Act, introduced on June 8, which aims to reform U.S. policing after weeks of protests against police brutality and racism. The protests were sparked by the death of George Floyd, a black American who died after a white police officer knelt on his neck for 8 minutes and 46 seconds while arresting him.
The senior legislative counsel for the ACLU in Washington, DC, has commented that many facial recognition algorithms are less accurate on darker skin tones, and has urged a federal ban on facial recognition “unless and until it can be used in a way that respects civil liberties.”
This is not the first time, however, that bias in facial recognition technology has come under scrutiny. Research published in 2018 by Joy Buolamwini and Timnit Gebru revealed the extent to which many commercial facial recognition systems, including IBM’s, are biased. The study showed that commercial facial recognition software was significantly less accurate for darker-skinned women than for lighter-skinned men — by as much as 34.4 percentage points. IBM was found to have the greatest disparity.
Other big tech companies developing facial recognition have been criticized for accuracy problems and privacy violations. Amazon, one of the few major tech companies to sell facial recognition software to law enforcement, has been criticized for the accuracy of its Rekognition system. Similarly, Clearview AI’s facial recognition tool, compiled in part by scraping social media sites and widely used by private sector companies and law enforcement agencies, is at the center of a number of privacy lawsuits and has been issued numerous cease and desist orders. In January 2020, Facebook was ordered to pay $550 million to settle a class-action lawsuit over its unlawful use of facial recognition technology.
IBM has also been criticized for privacy violations on numerous occasions. One such instance was in January 2019, when the company shared a data set containing almost one million photographs taken from Flickr without the consent of the subjects. Although IBM stated that only verified researchers would be able to access the photographs, that only publicly available images would be included, and that individuals could opt out of the data set if they wished, it was widely felt that the data set violated privacy rights.