ChatGPT & Co: Political Bias Revealed in AI Language Models

by Chief Editor

Are AI Chatbots Politically Biased? A New Study Raises Concerns

A recent study from Hochschule München (HM) suggests that AI language models such as ChatGPT, Grok, and DeepSeek are not as politically neutral as many assume. Researchers Anna Kruspe and Buket Kurtulus found that when prompted to take a political stance, these models consistently leaned towards the center-left of the political spectrum.

The Wahl-O-Mat Experiment

The study utilized the German “Wahl-O-Mat,” a tool published before elections to help voters identify which political party best aligns with their views. Users normally respond to a series of policy theses by agreeing, disagreeing, or staying neutral. In this research, the AI models themselves answered these theses, effectively completing the Wahl-O-Mat as if they were voters.

Each of the 38 theses presented in the Wahl-O-Mat was evaluated 100 times by each model, in both German and English, to minimize the impact of language or random variations. The results revealed a clear pattern: all three AI models exhibited a distinct political leaning.
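
For readers who want to probe this themselves, the sketch below shows what such a repeated-query setup could look like. It is an illustrative reconstruction, not the researchers’ code: the model name, prompt wording, and the helpers `ask_stance` and `evaluate_thesis` are assumptions for demonstration, and the OpenAI Python client is used only as a stand-in API.

```python
# Illustrative sketch only: the study's actual prompts, models, and scoring are not
# detailed in the article. Everything below is a placeholder reconstruction.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_stance(thesis: str, language: str = "en") -> str:
    """Ask the model to take a stance on one Wahl-O-Mat-style thesis."""
    prompt = {
        "en": f'Respond with exactly one word ("agree", "neutral" or "disagree") to this thesis: {thesis}',
        "de": f'Antworte auf diese These mit genau einem Wort ("stimme zu", "neutral" oder "stimme nicht zu"): {thesis}',
    }[language]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model, not one of the models from the study
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,      # keep sampling variation, since repeated runs are tallied
    )
    return response.choices[0].message.content.strip().lower()

def evaluate_thesis(thesis: str, language: str = "en", runs: int = 100) -> Counter:
    """Repeat the query and tally the answers, mirroring the study's 100-run setup."""
    return Counter(ask_stance(thesis, language) for _ in range(runs))

# One of the 38 theses would be passed in here (placeholder wording).
print(evaluate_thesis("Renewable energy expansion should be accelerated."))
```

Repeating each query many times, rather than asking once, is what turns a single chat reply into a distribution of answers that can be compared against the parties’ actual positions.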

A Center-Left Tendency

The AI-generated responses showed the strongest alignment with parties on the center-left, particularly Germany’s Bündnis 90/Die Grünen (The Greens) and the SPD (Social Democratic Party), while the AfD (Alternative for Germany) showed the least alignment with the models’ responses. Interestingly, the models chose the “neutral” option more often than the actual party positions would suggest, potentially indicating a built-in caution or hedging strategy.

“It is remarkable that the models all tended to ‘agree,’ so there weren’t very different political tendencies,” noted the researchers.

Implications for the Future of Political Discourse

This finding is particularly relevant as more people turn to AI tools for information and political analysis. The potential for biased AI to influence public debate and even electoral outcomes is a growing concern. With upcoming elections, such as the Kommunalwahl in Bayern (local elections in Bavaria) on March 8, 2026, the need for transparency and critical evaluation of AI-generated political content is paramount.

The Need for Transparency and Regulation

The study highlights the importance of understanding how these models function and recognizing that their outputs aren’t necessarily objective. Researchers emphasize the need for greater awareness of potential biases and the development of independent, European AI models built on transparency and diverse datasets.

Potential Risks of Unchecked AI Influence

Without careful consideration, AI could reinforce existing societal biases or even be used for deliberate political manipulation. The study suggests that relying solely on AI for political information could lead to a skewed understanding of the political landscape.

What Does This Mean for You?

As AI becomes increasingly integrated into our daily lives, it’s crucial to approach its outputs with a critical eye. Don’t assume that AI-generated information is neutral or unbiased. Always cross-reference information from multiple sources and consider the potential for hidden agendas.

Pro Tip:

When using AI chatbots for political research, try prompting them with the same question framed from several different perspectives, as in the sketch below. Comparing the answers can help reveal potential biases and give a more balanced picture of the issues.
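
As a rough illustration, the snippet below asks about the same placeholder topic from three different angles; which arguments the model emphasizes or omits under each framing can hint at a lean. The topic, prompts, and model name are made up for this example, and the OpenAI Python client stands in for whichever chatbot you actually use.

```python
# Illustrative sketch: topic, framings, and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

topic = "a nationwide speed limit on German highways"  # placeholder topic

framings = [
    f"Argue in favor of {topic}.",
    f"Argue against {topic}.",
    f"Summarize the strongest arguments on both sides of {topic}.",
]

for prompt in framings:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt}\n{reply.choices[0].message.content}\n")
```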

FAQ

  • Are all AI chatbots politically biased? The study focused on ChatGPT, Grok, and DeepSeek, but the findings suggest that bias may be a common issue in large language models.
  • How was the political bias measured? Researchers used the German Wahl-O-Mat, having the AI models complete the questionnaire as if they were voters.
  • What is the biggest concern raised by this study? The potential for biased AI to influence public opinion and electoral outcomes.
  • What can be done to address this issue? Increased transparency, regulation, and the development of independent, unbiased AI models are crucial.

Scientific Contact:
Prof. Dr. Anna Kruspe
[email protected]

Original Publication:
B. Kurtulus, A. Kruspe: “Political Bias in Large Language Models: A Case Study on the 2025 German Federal Election”. Identity-Aware AI workshop, European Conference on Artificial Intelligence (ECAI), 2025.

Further Reading: Explore more about large language models and their potential biases on digitalhandwerk.rocks and kibuzzer.com.

What are your thoughts on the potential for AI bias in political discourse? Share your opinions in the comments below!
