The Politics of Facial Recognition Technology: Could Biased Algorithms Destroy the Industry?


NewtonX recently wrote about how facial recognition technology is altering the global economy, from agriculture to law enforcement. Many of the applications we covered, though, have another dimension: the politics and ethics of algorithmic identification and data ownership. Multiple experts whom NewtonX surveyed for the earlier analysis cited these concerns, particularly in relation to policing, algorithmic bias, and accuracy gaps by race and gender. While facial recognition technology may be altering the global economy, its path to implementation will be plagued by these concerns.

To analyze the politics of facial recognition technology, NewtonX interviewed researchers behind ten seminal papers on the technology's problematic aspects. The insights gained through these interviews inform the analysis in this article.

The Rise and Potential Fall of Facial Recognition Technology

Over the past few years, facial recognition technology has been adopted for security, agriculture, hiring, retail, and surveillance. The most benign of these systems include Facebook’s photo-tagging recommender and smartphone Face ID unlocking. The most sinister include Amazon’s pitch earlier this year for ICE to adopt its Rekognition technology, which drew outrage from employees and citizens alike. Amazon had previously been criticized for Rekognition’s partnerships with Palantir (a contractor used by ICE) and other government agencies.

The outrage over government use of facial recognition technology does not stem solely from privacy fears. In 2018, a study led by an MIT Media Lab researcher revealed that systems sold by IBM, Microsoft, and Face++ had up to a 34.4% accuracy gap in gender classification between lighter-skinned males and darker-skinned females. A follow-up study published several weeks ago found that while the previously tested systems from IBM, Microsoft, and Face++ showed significant improvement in gender identification of darker-skinned women, a newly tested system, Amazon’s Rekognition, misclassified the gender of darker-skinned females at an error rate of 31%, compared with Microsoft’s error rate of under 2%.
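To make these audit numbers concrete, below is a minimal sketch of how per-subgroup error rates and the resulting accuracy gap can be computed. The `predict_gender` function and the record format are illustrative assumptions for this sketch, not the auditors’ actual code or data schema.

```python
# Minimal sketch: measuring a gender-classification accuracy gap across
# demographic subgroups, in the spirit of the audits described above.
from collections import defaultdict

def subgroup_error_rates(records, predict_gender):
    """records: (image, true_gender, subgroup) tuples, where subgroup
    encodes skin type and gender, e.g. 'darker_female' (hypothetical
    labels for illustration)."""
    errors, totals = defaultdict(int), defaultdict(int)
    for image, true_gender, subgroup in records:
        totals[subgroup] += 1
        if predict_gender(image) != true_gender:
            errors[subgroup] += 1
    # Per-subgroup error rate: fraction of faces misclassified.
    return {group: errors[group] / totals[group] for group in totals}

def accuracy_gap(rates):
    # The headline audit number: the spread between the best- and
    # worst-served subgroups (e.g. 34.4 percentage points in the 2018 study).
    return max(rates.values()) - min(rates.values())
```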

Despite having the least biased technology in that comparison, Microsoft has called on Congress to regulate facial recognition technology, arguing that its potential for harm is too great for companies alone to police. Congress may be motivated by a study from the ACLU of Northern California, which found that Amazon’s platform misidentified non-white members of Congress at a higher rate than white members (Amazon stated that the study used improper settings, which contributed to the error rate).

The Solutions: Why Algorithms May Be Less Biased Than Humans

Last week, IBM released a highly diverse data set of 1 million face images, all annotated with facial-feature tags including craniofacial measurements, symmetry, and gender. The company hopes the data set can be used to train algorithms to identify facial features accurately, regardless of race or gender.
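One way such annotations can be put to work is auditing a training set’s balance before any model is trained. The sketch below shows the idea; the CSV file name and column schema are hypothetical assumptions, not IBM’s actual release format.

```python
# Sketch: using per-image attribute annotations to check how balanced
# a face data set actually is across demographic attributes.
import csv
from collections import Counter

def attribute_distribution(annotations_csv, attribute):
    """Count how often each value of an annotated attribute appears,
    e.g. attribute='skin_type' or attribute='gender'."""
    with open(annotations_csv, newline="") as f:
        return Counter(row[attribute] for row in csv.DictReader(f))

# Usage (file name and column are illustrative):
#   attribute_distribution("face_annotations.csv", "skin_type")
# reveals underrepresented groups before any training happens.
```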

While using more diverse tagged training sets is one approach, the MIT Computer Science and Artificial Intelligence Laboratory came up with another: a way to reduce bias even when an algorithm is trained on heavily biased data. As the algorithm trains, it identifies which samples in the data are underrepresented and spends extra time on them to compensate. The researchers found that their algorithm reduced the largest accuracy gap, between light- and dark-skinned males, relative to a standard training algorithm, though it did not eliminate the gap entirely.
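The sketch below illustrates the general idea of oversampling underrepresented examples during training. It is a simplification, not the MIT team’s actual algorithm, which learns each sample’s rarity automatically from a variational autoencoder’s latent space; here a precomputed per-sample density estimate is assumed as input.

```python
# Simplified sketch of debiased resampling: rare samples (low estimated
# density) are drawn more often, so the model "spends extra time" on
# underrepresented regions of the training data.
import numpy as np

def resampling_probabilities(latent_densities, alpha=0.5):
    """latent_densities: estimated density of each sample's latent code
    (assumed precomputed here). alpha in [0, 1] interpolates between
    uniform sampling (0) and fully inverse-density sampling (1)."""
    densities = np.asarray(latent_densities, dtype=float)
    weights = (1.0 / (densities + 1e-8)) ** alpha
    return weights / weights.sum()

def sample_training_batch(data, latent_densities, batch_size, rng=None):
    """Draw a batch with probability inversely related to density,
    so underrepresented samples appear more often per epoch."""
    rng = rng or np.random.default_rng()
    probs = resampling_probabilities(latent_densities)
    idx = rng.choice(len(data), size=batch_size, p=probs)
    return [data[i] for i in idx]
```

The design choice worth noting is the `alpha` knob: fully inverse-density sampling can overfit to a handful of rare examples, so interpolating toward uniform sampling trades bias reduction against stability.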

Facial recognition technology developers will likely implement both strategies: training on more diverse data with tags that accurately represent every race, and using algorithms that identify bias in training data sets and correct for it. Importantly, with strategies such as these, facial recognition technology does have the potential to be less biased than humans. That potential will push researchers and developers to continue investing in the technology, albeit with careful attention to the racial and gender biases it is capable of exhibiting.



About Author

Germain Chastel is the CEO and Founder of NewtonX.
