Pressing Matters: How AI Irons Out Epistemic Friction and Smooths Over Diversity
This paper explores how Large Language Models (LLMs) foster the homogenization of both style and content, and how this contributes to the epistemic marginalization of underrepresented groups. Drawing on standpoint theory, the paper examines how biased training datasets lead LLMs to perpetuate what Miranda Fricker terms testimonial and hermeneutical injustices and to restrict diverse perspectives. The core argument is that LLMs diminish what José Medina calls “epistemic friction,” which is essential for challenging prevailing worldviews and identifying gaps within dominant perspectives (Medina 2013, 25). This reduction fosters echo chambers, diminishes critical engagement, and encourages communicative complacency: AI smooths over communicative disagreements, thereby reducing opportunities for clarification and knowledge generation. The paper argues that greater critical literacy and human mediation in AI-assisted communication are needed to preserve diverse voices. By advocating for critical engagement with AI outputs, the analysis aims to address potential biases and injustices and to foster a more inclusive technological landscape. It underscores the importance of maintaining distinct voices amid rapid technological change and calls for greater efforts to preserve the epistemic richness that diverse perspectives bring to society.