The “Blandification” of Thought: How AI is Changing What – and How – We Write
Does relying on artificial intelligence to craft our thoughts diminish the very essence of what makes us human? New research suggests the answer is a concerning yes. A study conducted by researchers from a coalition of West Coast universities reveals that heavy reliance on large language models (LLMs) fundamentally alters not just how we write, but what we write, leading to a homogenization of thought and a loss of individual voice.
AI’s Impact on Meaning and Passion
The study, accepted to a leading AI conference, examined how participants responded to the question of whether money leads to happiness. Researchers found that individuals who relied heavily on LLMs – those generating over 40% of their text with AI – were 69% more likely to offer a neutral response. Those who used AI sparingly or not at all expressed far more passionate opinions, either for or against a correlation between wealth and well-being. This suggests AI isn't simply assisting with writing; it's actively shaping the conclusions we reach.
From Personal Anecdotes to Impersonal Formality
The shift isn't limited to the content of our arguments; AI also dramatically alters writing style. Heavy AI users produced essays with 50% fewer pronouns, resulting in language that felt less personal and more formal. Participants themselves said that AI-generated text felt less creative and less authentically "them," despite reporting similar levels of satisfaction with the final product – a troubling disconnect highlighted by researchers and experts alike.
The Editing Paradox: AI vs. Human Revision
The impact extends beyond original composition. Researchers compared AI editing to human editing, using a database of essays written before the widespread adoption of LLMs. The results were striking: AI systems made significantly larger edits than human editors, and those edits often altered the original meaning of the text. Where human editors focus on refining language, AI tends to replace substantial portions of the writing, eroding the author's unique voice and style.
“This represents a really good paper,” said Thomas Juzek, a professor of computational linguistics at Florida State University. “What really struck me is this kind of illusion of using LLMs to perform a grammar check. This research shows that while a user might think they’re just doing a simple language check, the model is doing so much more.”
Why is This Happening? The Problem with Optimization
Natasha Jaques, a lead author of the study and a computer science professor at the University of Washington, believes the issue stems from how LLMs are trained. “If you’re training a model on human feedback, the model has no boundary or perception of the difference between satisfying the humans and actually altering the human to make their preferences easier to satisfy,” she explained. Essentially, AI is optimizing for what it thinks we want, rather than faithfully representing our own thoughts.
The Rise of AI-Powered Universities and Tools
This research arrives at a time when institutions are increasingly embracing AI. The California State University (CSU) system, for example, has partnered with OpenAI to expand AI integration across its campuses. Universities like UC San Diego and San Diego State University have launched new AI majors to prepare students for a tech-driven future. At the same time, institutions like California State University Long Beach are providing access to university-licensed GenAI tools like Microsoft Copilot and ChatGPT for Education, emphasizing data security compared to publicly available options.
Pro Tip:
Don’t rely solely on AI for writing critical content. Use it as a tool for brainstorming or initial drafts, but always revise and refine the text to ensure it reflects your own voice and ideas.
Looking Ahead: Preserving Human Expression in the Age of AI
The implications of this “blandification” are far-reaching. As AI becomes more integrated into our lives, from academic writing to professional communication, the risk of losing our individual voices and critical thinking skills grows. Jaques suggests that the current trajectory could be similar to how YouTube recommendations subtly alter our preferences over time.
“Humans care about clarity, relevance, and impact, while AI cares about scalability and reproducibility,” Jaques noted. “It’s changing our conclusions in ways that are already affecting our existing institutions.”
FAQ
Q: Does this mean I shouldn’t use AI at all?
A: Not necessarily. AI can be a valuable tool, but it’s important to be aware of its limitations and use it thoughtfully.
Q: What can I do to avoid the “blandification” effect?
A: Focus on expressing your own ideas and voice, and use AI as a supplement rather than a replacement for your own thinking.
Q: Are all LLMs equally problematic?
A: The study evaluated Claude 3.5 Haiku, GPT-5 Mini, and Gemini 2.5 Flash, and found similar effects across all three.
Q: Will AI eventually be able to write in my voice?
A: Current AI systems struggle to accurately capture individual writing styles and preferences.
Did you know? Half of the study participants initially refused to use an LLM or only used it for information gathering, highlighting a natural resistance to fully outsourcing thought to AI.
The future of AI is unfolding rapidly. By understanding the potential pitfalls – and actively working to preserve human expression – we can navigate this new landscape with greater awareness and intention.
What are your thoughts on the impact of AI on writing? Share your perspective in the comments below!
