Amazon’s Rekognition Still Showing Signs of Racial Bias

As facial recognition systems become more common, Amazon has emerged as a frontrunner in the field, courting customers around the US, including police departments and Immigration and Customs Enforcement (ICE). But experts say the company is not doing enough to allay fears about bias in its algorithms, particularly when it comes to performance on faces with darker skin.

Research published last February by MIT’s Joy Buolamwini identified racial and gender biases in facial analysis software built by Microsoft, IBM, and Chinese firm Megvii. Because bias in algorithms is often the result of biased training data, these disturbing findings drew concern from a number of tech companies. In response to Buolamwini’s results, IBM published a curated dataset it said would boost accuracy. Microsoft went even further, calling for regulation of the technology to ensure higher standards so that the market does not become a “race to the bottom.”

Unfortunately, Amazon’s response to the debate hasn’t been as promising.

In a study published just last week by the MIT Media Lab, Amazon’s Rekognition performed worse at identifying an individual’s gender if they were female or darker-skinned. In tests once again led by Buolamwini, Rekognition made no mistakes when identifying the gender of lighter-skinned men, but it mistook women for men 19 percent of the time and darker-skinned women for men 31 percent of the time.

Amazon has denied that this recent research says anything about the accuracy of its technology. It noted that the researchers had not tested the latest version of Rekognition, and that the gender identification test was facial analysis (which spots expressions and characteristics like facial hair), not facial identification (which matches scanned faces to mugshots).
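For readers who want to see that distinction concretely, here is a minimal sketch using Amazon's publicly documented boto3 Python client, showing the two operations as separate API calls. The file names and default AWS credential setup are placeholder assumptions for illustration, not details from the study.

    import boto3

    rekognition = boto3.client("rekognition")

    with open("face.jpg", "rb") as f:  # placeholder input image
        image_bytes = f.read()

    # Facial analysis: estimates attributes such as gender, emotions, and
    # facial hair. This is the category Amazon says the gender test falls under.
    analysis = rekognition.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["ALL"],
    )
    for face in analysis["FaceDetails"]:
        print(face["Gender"])  # e.g. {'Value': 'Female', 'Confidence': 93.1}

    # Facial identification: matches the scanned face against a reference photo,
    # the way a face might be compared to a mugshot.
    with open("reference.jpg", "rb") as f:  # placeholder reference image
        reference_bytes = f.read()

    matches = rekognition.compare_faces(
        SourceImage={"Bytes": image_bytes},
        TargetImage={"Bytes": reference_bytes},
        SimilarityThreshold=80,
    )
    for match in matches["FaceMatches"]:
        print(match["Similarity"])  # percent similarity to the reference face

The point of the sketch is simply that detect_faces returns estimated attributes with confidence scores, while compare_faces returns similarity scores against a specific reference face, which is the operation involved in matching faces to mugshots.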

