Facial recognition software can’t identify trans people, according to science

The facial-recognition technology being developed by big tech companies is unable to identify the correct gender of transgender people, according to a recent study.

Computer-science researchers at the University of Colorado Boulder found that major artificial-intelligence-based facial-analysis tools routinely fail to correctly classify the gender of trans people.

The tools in question are used by major tech companies and include Amazon’s Rekognition, IBM’s Watson, Microsoft’s Azure and Clarifai, according to QZ.

“We knew that people of minoritised gender identities—so people who are trans, people who are non-binary—were very concerned about this technology, but we didn’t actually have any empirical evidence about the misclassification rates for that group of people,” Morgan Klaus Scheuerman, a doctoral student in the information-science department of the University of Colorado Boulder, said in a video about the study.

The study, “How computers see gender: An evaluation of gender classification in commercial facial analysis and image labelling services”, saw researchers collect 2,450 images of people’s faces from Instagram, searching hashtags like #transman, #woman and #nonbinary to find groups of people by gender identity.

The images were divided into groups and tested against the facial analysis tools of the four big tech companies.
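
To give a sense of how such an evaluation works, the sketch below queries one of the four services for each labelled image and tallies per-group misclassification rates. It is a minimal illustration, not the researchers’ actual code: the boto3 detect_faces call to Amazon Rekognition is real, but the dataset, the group labels and the expected-answer mapping are hypothetical stand-ins.

```python
import boto3
from collections import defaultdict

# Hypothetical labelled dataset: (image path, self-identified gender) pairs,
# standing in for the study's 2,450 hashtag-sourced Instagram images.
DATASET = [
    ("images/0001.jpg", "cis woman"),
    ("images/0002.jpg", "trans man"),
    ("images/0003.jpg", "nonbinary"),
]

# Which API outputs count as correct for each group. Rekognition's Gender
# attribute is binary, so no output can ever be correct for nonbinary people.
EXPECTED = {
    "cis man": {"Male"},
    "cis woman": {"Female"},
    "trans man": {"Male"},
    "trans woman": {"Female"},
    "nonbinary": set(),
}

client = boto3.client("rekognition")  # assumes AWS credentials are configured

def predicted_gender(image_path):
    """Return Rekognition's binary gender guess for the first detected face."""
    with open(image_path, "rb") as f:
        response = client.detect_faces(
            Image={"Bytes": f.read()},
            Attributes=["ALL"],  # "ALL" is required to include Gender
        )
    return response["FaceDetails"][0]["Gender"]["Value"]  # "Male" or "Female"

def misclassification_rates(dataset):
    """Fraction of images in each group that the service gets wrong."""
    wrong = defaultdict(int)
    total = defaultdict(int)
    for path, group in dataset:
        total[group] += 1
        if predicted_gender(path) not in EXPECTED[group]:
            wrong[group] += 1
    return {group: wrong[group] / total[group] for group in total}

for group, rate in misclassification_rates(DATASET).items():
    print(f"{group}: misclassified {rate:.0%} of the time")
```

Because Rekognition’s Gender attribute only ever returns “Male” or “Female”, a nonbinary group in a tally like this can never be classified correctly, no matter how the images are chosen.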

The tools were most accurate with cisgender men and women, who were correctly classified 98 percent of the time on average.

Trans men were wrongly classified 30 percent of the time. Non-binary and genderqueer people were misclassified every time, because the tools only sort faces into two genders, male and female.

The researchers also believe the algorithms rely on outdated gender stereotypes, which makes them even less accurate. Scheuerman, a man with long hair, was misclassified as a woman by half of the tools tested.

IBM’s Watson also classified a man in drag as female.

Facial-recognition tools are increasingly used by police forces, governments, banks and other institutions. Fears that they will be used to cause harm have been exacerbated by early studies suggesting that the systems exhibit both racial and gender bias.

Some MPs and campaigners have called for UK police and companies to stop using live facial recognition technology for public surveillance, after it was revealed that the tech allows faces captured on CCTV to be checked in real time against watch lists compiled by police.
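
In broad terms, live facial recognition converts each face captured on camera into a numeric embedding and compares it against precomputed embeddings of watch-list faces. The sketch below is a generic illustration of that matching step under those assumptions, not any police system’s actual implementation; the embeddings and the 0.6 similarity threshold are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(face, watchlist, threshold=0.6):
    """Return the best watch-list match above the threshold, or None.

    face: embedding of a face captured on CCTV.
    watchlist: list of (name, embedding) pairs compiled in advance.
    threshold: illustrative cut-off; real deployments tune this value,
    and a mis-set threshold drives false matches.
    """
    best_name, best_score = None, threshold
    for name, embedding in watchlist:
        score = cosine_similarity(face, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

The accuracy concerns above apply at both stages: the embedding model itself can encode bias, and the matching threshold trades false matches against missed ones.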

Privacy campaigners say live facial recognition is inaccurate and intrusive, and infringes on individuals’ right to privacy.