AI Discriminates Against Non-Native English Speakers

K. C. Sabreena Basheer 14 Jul, 2023 • 4 min read

A recent study has uncovered a disconcerting truth about artificial intelligence (AI): the algorithms used to detect AI-generated text in essays, job applications, and other written work can inadvertently discriminate against non-native English speakers. The implications of this bias are far-reaching, affecting students, academics, and job applicants alike. The study, led by James Zou, an assistant professor of biomedical data science at Stanford University, exposes the alarming disparities caused by AI text detectors. As the rise of generative AI programs like ChatGPT introduces new challenges, scrutinizing these detection systems’ accuracy and fairness becomes crucial.


New study shows that generative AI like ChatGPT can discriminate against non-native English speakers.

The Unintended Consequences of AI Text Detectors

In an era where academic integrity is paramount, many educators view AI detection as a vital tool to combat modern forms of cheating. However, the study warns that claims of 99% accuracy, often propagated by these detection systems, are misleading at best. The researchers urge a closer examination of AI detectors to prevent inadvertent discrimination against non-native English speakers.


Tests Reveal Discrimination Against Non-Native English Speakers

To evaluate the performance of popular AI text detectors, Zou and his team conducted a rigorous experiment. They submitted 91 English essays written by non-native speakers for evaluation by seven prominent GPT detectors. The results were alarming. Over half the essays designed for the Test of English as a Foreign Language (TOEFL) were incorrectly flagged as AI-generated. One program astonishingly classified 98% of the essays as machine-generated. In stark contrast, when essays written by native English-speaking eighth graders in the United States underwent the same evaluation, the detectors correctly identified over 90% as human-authored.

Essays written by non-native English speakers for TOEFL were incorrectly flagged as AI-generated.

Behind the Bias: How Text Perplexity Penalizes Simpler Writing

The discriminatory outcomes observed in the study stem from how AI detectors assess the distinction between human and AI-generated text. These programs rely on a metric called “text perplexity” to gauge how surprised or confused a language model becomes while predicting the next word in a sentence. However, this approach leads to bias against non-native speakers who often employ simpler word choices and familiar patterns. Large language models like ChatGPT, trained to produce low-perplexity text, inadvertently increase the risk of non-native English speakers being falsely identified as AI-generated.
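To make the metric concrete, here is a minimal sketch of how perplexity is derived from a language model's per-token probabilities (an illustration of the concept, not the actual code of any detector; the example probabilities are invented for demonstration):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability
    the model assigned to each observed token."""
    n = len(token_probs)
    avg_nll = -sum(math.log(p) for p in token_probs) / n
    return math.exp(avg_nll)

# Predictable word choices get high probabilities from the model,
# so perplexity is low -- the signal detectors read as "AI-like".
predictable_text = [0.5, 0.4, 0.6, 0.5]

# Surprising, varied word choices get low probabilities,
# so perplexity is high -- read as "human-like".
surprising_text = [0.05, 0.1, 0.02, 0.08]

print(perplexity(predictable_text))  # low perplexity
print(perplexity(surprising_text))   # high perplexity
```

Because non-native writers tend toward common words and familiar constructions, the model finds their text easy to predict, its perplexity drops, and the detector misreads human writing as machine output.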


Rewriting the Narrative: A Paradoxical Solution

Acknowledging the inherent bias in AI detectors, the researchers decided to test ChatGPT’s capabilities further. They asked the program to rewrite the TOEFL essays using more sophisticated language. Surprisingly, when these edited essays underwent evaluation by AI detectors, they were all correctly labeled as human-authored. This paradoxical finding suggests that detectors may, perversely, push non-native writers to lean more heavily on generative AI in order to evade detection.


ChatGPT can create content that can pass off as human-generated.

The Far-Reaching Implications for Non-Native Writers

The study’s authors emphasize the serious consequences AI detectors pose for non-native writers. College and job applications could be falsely flagged as AI-generated, marginalizing non-native speakers online. Search engines like Google, which downgrade AI-generated content, further exacerbate this issue. In education, where GPT detectors find the most significant application, non-native students face an increased risk of being falsely accused of cheating. This is detrimental to their academic careers and psychological well-being.


Non-native English speakers face discrimination by AI for jobs and college applications.

Looking Beyond AI: Cultivating Ethical Generative AI Use

Jahna Otterbacher, from the Cyprus Center for Algorithmic Transparency at the Open University of Cyprus, suggests a different approach to counter AI’s potential pitfalls. Rather than relying solely on AI to combat AI-related issues, she advocates for an academic culture that fosters the ethical and creative utilization of generative AI. Otterbacher emphasizes that as ChatGPT continues to learn and adapt based on public data, it may eventually outsmart any detection system.


ChatGPT is striving towards being more ethical.

Our Say

The study’s findings shed light on a concerning reality: AI text detectors can discriminate against non-native English speakers. It is crucial to critically examine and address the biases present in these detection systems to ensure fairness and accuracy. With the rise of generative AI like ChatGPT, balancing academic integrity and a supportive environment for non-native writers becomes imperative. By nurturing an ethical approach to generative AI, we can strive for a future where technology serves as a tool for inclusivity rather than a source of discrimination.

