AI in schools puts LGBTQ+ students at risk, new study reveals


A new report warns that the use of AI and digital surveillance in US schools poses a risk to LGBTQ+ students and could infringe on their rights. 

Experts from the Center for Democracy and Technology, a nonprofit, nonpartisan organisation that advocates for civil rights in the digital world, have published a report on the experiences of teachers, students and parents regarding student data privacy, content filtering and blocking, student activity monitoring, and generative AI (programmes like ChatGPT).

In the report, released on Wednesday (20 September), researchers found that technology used to block explicit adult content and flag students at risk of self-harm or harming others is putting already vulnerable students at risk – particularly those who are LGBTQ+, disabled or students of colour.

For their sample, the Center for Democracy and Technology polled 1,029 students from ninth through to 12th grade, 1,018 parents of sixth to 12th grade students and 1,005 teachers of sixth to 12th grade students.

The study found that LGBTQ+ pupils are more likely to face negative consequences for their online activity through student activity monitoring than their straight, cisgender peers.

Twenty-nine per cent of LGBTQ+ students reported that they or another student had been outed as LGBTQ+ – a “potentially traumatising event” – and half (50 per cent) of all LGBTQ+ students surveyed said that they or another pupil had been disciplined for doing something online, compared with 39 per cent of non-LGBTQ+ students.


Student activity monitoring was found to be widespread, with nearly nine in 10 teachers (88 per cent) reporting that it is used on school-provided devices. Four in 10 teachers also said their school monitors students’ personal devices. This monitoring takes place both in and outside school hours, with 38 per cent of teachers reporting monitoring out of school time.

A third of teachers interviewed said content related to LGBTQ+ topics and content exploring race is more likely to be restricted by AI content filters. The centre said this “amounts to a digital book ban”, similar to recent legal restrictions and bans on LGBTQ+ materials such as Florida’s reviled ‘Don’t Say Gay’ law.

The survey also notes “widespread confusion” among the majority of parents, students and teachers about the role of artificial intelligence in the classroom, with many saying they want more information and training on how to use it properly.

The same researchers have warned before about AI and surveillance technology’s potential to harm the LGBTQ+ community. Back in 2022, the Center for Democracy and Technology released the research paper ‘Hidden Harms: Targeting LGBTQ+ Students’, which highlighted these exact concerns.

It found that algorithms used by schools scanned students’ messages, documents and browsing history for words like “gay” and “lesbian” and automatically flagged them to faculty. Although ostensibly done for student ‘safety’, the practice was more likely to be harmful, as it amounted to “outing without consent”.

Twenty-three per cent of students said that they, or someone they knew, had been outed to teachers or parents by this technology. It also found that LGBTQ+ students were more likely to be disciplined by teachers and parents because of moderation technology’s findings.

Commenting on the outcomes of the study, Elizabeth Laird, director of equity in civic technology for the Center for Democracy and Technology, said: “There are certain groups of students who should already be protected by existing civil rights laws, and yet they are still experiencing disproportionate and negative consequences because of the use of this education data and technology.”

The White House released a Blueprint for an AI Bill of Rights white paper in October 2022, outlining a set of principles to help responsibly guide the design and use of artificial intelligence.

However, civil rights groups – including the ACLU, the American Association of School Librarians, American Library Association, Disability Rights in Education Defense Fund and the Electronic Frontier Foundation – signed a letter accompanying the Center for Democracy and Technology’s 2022 report calling for “education-related protections” in the wake of the “explosive emergence of generative AI”.

Despite this, 57 per cent of teachers in the most recent survey said they haven’t had any substantive training in AI since the Blueprint white paper was drawn up.

According to the Center for Democracy and Technology, increasing numbers of students and parents are expressing concerns about how these technologies might cause harm to marginalised students.

On what needs to happen next to protect them, the centre said: “Schools need to become more intentional about how they use these technologies to make sure they do not discriminate and cause other harms.

“They need to ensure that technologies are deployed in ways that benefit all students.”