Artificial Intelligence that can tell if you’re gay is ‘dangerous and flawed’ say LGBT groups

Last week, research from Stanford hit the news, claiming that advanced artificial intelligence could tell whether somebody was gay or straight simply by analysing their face.

The study used over 35,000 photos from a US dating website and found that, when shown two photos, one of a gay man and one of a straight man, the AI could correctly identify which was which 81% of the time.
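For readers unfamiliar with how a figure like that is measured, the sketch below is a minimal, hypothetical Python illustration of the pairwise test described above: a model scores two photos, and the pair is counted as correct if the photo of the gay man receives the higher score. The model, the data, and every number in it are invented for illustration; this is not the Stanford researchers' code.

```python
# Illustrative sketch only -- NOT the Stanford study's code or data.
# It shows what a "pairwise accuracy" figure like the 81% quoted above means:
# the classifier scores two photos (one of a gay man, one of a straight man)
# and is counted as correct if it assigns the higher score to the right photo.

import random

random.seed(0)

def classifier_score(photo):
    """Stand-in for a real model: returns a made-up probability that the
    person in `photo` is gay. Here `photo` is just a dict with a label."""
    # Hypothetical imperfect model: noisy score centred higher for gay photos.
    base = 0.625 if photo["label"] == "gay" else 0.375
    return min(1.0, max(0.0, base + random.gauss(0, 0.2)))

# Fabricated stand-in data: pairs of (gay photo, straight photo).
pairs = [({"label": "gay"}, {"label": "straight"}) for _ in range(10_000)]

correct = 0
for gay_photo, straight_photo in pairs:
    # The pair is judged correctly if the gay photo gets the higher score.
    if classifier_score(gay_photo) > classifier_score(straight_photo):
        correct += 1

print(f"Pairwise accuracy: {correct / len(pairs):.0%}")
```

With these invented parameters the script prints a pairwise accuracy of roughly 81%, chosen only so the output lands near the study's headline figure; the point is simply to show that the metric describes performance on forced two-photo comparisons, not on arbitrary individuals.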

However, the study has been heavily criticised, including by the leading US LGBT+ advocacy organisations GLAAD and the Human Rights Campaign.

Read the original story here: Artificial Intelligence can now work out whether you’re gay or straight just from a photograph


A joint statement from the two organisations not only called the study “factually inaccurate” but also “dangerous and flawed research that could cause harm to LGBTQ people around the world.”

It urged outlets reporting on the study to also report its flaws, not least that it did not include any non-white participants.

“Technology cannot identify someone’s sexual orientation. What their technology can recognize is a pattern that found a small subset of out white gay and lesbian people on dating sites who look similar. Those two findings should not be conflated,” said Jim Halloran, GLAAD’s Chief Digital Officer.

Halloran pointed out that the photos used did not include people of colour, transgender people, older people, or bisexual people.

He said that this meant the research wasn’t science, but simply “a description of beauty standards on dating sites that ignores huge segments of the LGBTQ community.”

The statement notes that, in a society with strict standards of beauty, it is hardly surprising that many gay men post similar photos to one another on a dating site, and that the same is true across other demographics.

It also draws attention to the fact that, although 81% sounds like a high success rate, it means the result is incorrect almost 20% of the time, which could have detrimental consequences.


Ashland Johnson, HRC’s Director of Public Education and Research, said, “This is dangerously bad information that will likely be taken out of context, is based on flawed assumptions, and threatens the safety and privacy of LGBTQ and non-LGBTQ people alike.”

“Stanford should distance itself from such junk science rather than lending its name and credibility to research that is dangerously flawed and leaves the world — and in this case, millions of people’s lives — worse and less safe than before.”

“At a time where minority groups are being targeted, these reckless findings could serve as a weapon to harm both heterosexuals who are inaccurately outed, as well as gay and lesbian people who are in situations where coming out is dangerous,” Halloran said.

GLAAD and HRC say they talked to the researchers several months ago and raised significant concerns about the study, but that “there was no follow-up after the concerns were shared and none of these flaws have been addressed.”

However, some argue that this criticism misses the aim of the study. The Economist’s report said that the point was not the creation of the software itself, but the fact that such software is possible, and the effects that could follow.

The study’s authors said they wanted to draw attention to the power of facial recognition software, and the implications it could have at a time when many governments and institutions still actively discriminate against LGBT people.