Document Type

Class Research

Publication Date

Spring 2026

Abstract

Hundreds of millions of users now rely on AI-generated writing for ideation, drafting, and revision. As models make linguistic decisions in the text they generate, they transmit stylistic preferences to users, reshaping our capacity to articulate who we are in ways that are only beginning to be understood. Drawing on research in computer science, psychology, ethics, and sociolinguistics, this paper traces how LLMs acquire and transmit linguistic bias. Users internalize AI preferences, subtly homogenizing linguistic decisions in ways that affect communities unevenly. This constitutes a form of algorithmic epistemic injustice, which harms writers through deflated credibility and the loss of culturally specific frames necessary for identity formation. Any ethical framework for AI-assisted writing must therefore prioritize linguistic plurality as a means of sustaining equitable expressive autonomy.

Comments

This paper won 3rd Place in the Graduate Division.

Display as Peer Reviewed

Peer-Reviewed
