Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Document Type
Class Research
Publication Date
Spring 2026
Abstract
Hundreds of millions of users now rely on AI-generated writing for ideation, drafting, and revision. As models make linguistic decisions in their writing, they develop preferences that transfer to users, reshaping our capacity to articulate who we are in ways we are only beginning to understand. Drawing on research in computer science, psychology, ethics, and sociolinguistics, this paper traces how LLMs acquire and transmit linguistic bias. Users internalize AI preferences, subtly homogenizing linguistic decisions in ways that impact communities unevenly. This constitutes a form of algorithmic epistemic injustice, which harms writers through deflated credibility and the loss of culturally specific frames necessary for identity formation. Any ethical framework for AI-assisted writing must therefore prioritize linguistic plurality as a means of sustaining equitable expressive autonomy.
Recommended Citation
Smith, Sarah, "Bias in the Wires: Epistemic Harms of Linguistic Homogenization in AI-Assisted Writing" (2026). 2026 Awards for Excellence in Student Research and Creative Activity - Documents & Media. 1.
https://thekeep.eiu.edu/lib_awards_2026_docs/1
Display as Peer Reviewed
Peer-Reviewed
Comments
This paper won 3rd Place in the Graduate Division.