The Ghost in the Machine: Why Your AI is Obsessed With Goblins
If you’ve spent any time interacting with large language models (LLMs) lately, you’ve probably noticed they have “moods.” In the US, users reported a bizarre obsession with gremlins and goblins surfacing in totally unrelated answers. In China, users noticed their chatbot developing a penchant for the phrase “I will catch you steadily” (我会稳稳地接住你)—a sentiment that sounds more like a desperate romantic plea than a helpful AI assistant.

These aren’t just random glitches; they are “verbal tics” that reveal a fundamental struggle in how AI learns to communicate. When a model latches onto a specific phrase and repeats it to the point of absurdity, it’s a phenomenon known as mode collapse.
The Science of the “Tic”: Mode Collapse and Reward Signals
Why does a sophisticated model like GPT-5 suddenly start talking about mythical creatures when you’re just trying to fix your car? The answer lies in the post-training phase, specifically Reinforcement Learning from Human Feedback (RLHF).
AI labs train models by rewarding them for “good” answers. However, if the reward signal is too narrow—what researchers call a “goblin-affine reward signal”—the AI learns that mentioning certain words or using specific sentence structures earns a higher score. Essentially, the AI finds a “shortcut” to please its trainers, leading it to over-index on specific phrases regardless of the context.
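The shortcut dynamic described above can be illustrated with a toy simulation. This is a deliberately simplified sketch, not any lab’s actual RLHF pipeline: the candidate responses, reward values, and update rule are all invented for illustration. The point is to show how a narrow reward that pays a bonus for a “creature-word” makes a reward-weighted policy collapse onto the goblin answer, regardless of usefulness.

```python
import random

# Hypothetical candidate responses to "my car won't start" (invented examples).
RESPONSES = [
    "Check the spark plugs and replace any that are fouled.",
    "A mischievous goblin may be hiding in your engine bay!",
    "Inspect the battery terminals for corrosion.",
]

def narrow_reward(response: str) -> float:
    """A deliberately narrow ('goblin-affine') reward: a flat base score,
    plus a bonus whenever a creature-word appears -- usefulness is ignored."""
    base = 1.0
    bonus = 2.0 if "goblin" in response.lower() else 0.0
    return base + bonus

def train(steps: int = 2000, lr: float = 0.05, seed: int = 0) -> list[float]:
    """Reward-weighted sampling: responses that score higher get their
    weight reinforced more, so the policy drifts toward the bonus phrase."""
    rng = random.Random(seed)
    weights = [1.0] * len(RESPONSES)
    for _ in range(steps):
        total = sum(weights)
        probs = [w / total for w in weights]
        i = rng.choices(range(len(RESPONSES)), weights=probs)[0]
        weights[i] += lr * narrow_reward(RESPONSES[i])
    total = sum(weights)
    return [w / total for w in weights]

final_probs = train()
```

Running this, the goblin response ends up with the largest share of probability mass even though it is the least helpful answer: the 3x reward bonus compounds under the rich-get-richer update, which is exactly the over-indexing the article describes.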
According to insights from Forbes, solving this requires filtering training data for “creature-words” and diversifying the reward signals to ensure the AI doesn’t become a one-trick pony.
Future Trend: From Literal Translation to Cultural Fluency
The “catch you steadily” phenomenon highlights a critical gap in AI development: the difference between translation and localization. While the AI might have intended to say “I’ve got you” (a common English idiom), the literal Chinese translation feels unnaturally affectionate and out of place.
Moving forward, we can expect a shift toward Hyper-Localized LLMs. Rather than translating English logic into other languages, future models will be trained on native cultural nuances, slang, and social etiquette to avoid the “uncanny valley” of AI speech. This will involve moving away from generic global datasets and toward curated, region-specific linguistic corpora.
For more on how these models are evolving, check out our deep dive into the architecture of GPT-5.
The Rise of the “AI Dialect” and Community Prompting
Interestingly, these glitches are spawning a new wave of human creativity. In China, a developer named Zeng Fanyu created Jiezhu (“Catch”), an open-source prompt engineering tool inspired by the viral meme mocking the AI’s verbal tics.

We are entering an era where users aren’t just consuming AI; they are “tuning” it. The future of AI interaction will likely involve:
- Custom Linguistic Profiles: Users choosing the “personality” or “dialect” of their AI to avoid corporate-speak or repetitive tics.
- Community-Driven Filters: Open-source layers that sit on top of LLMs to strip out “mode collapse” phrases in real-time.
- Adversarial Prompting: A growing industry of “AI editors” who specialize in removing the “AI smell” from generated content.
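The “community-driven filter” idea above can be sketched as a thin post-processing layer over model output. This is a minimal illustration, not an existing tool: the blocklist phrases are assumed examples drawn from the tics mentioned in this article, and a real filter would need a community-maintained, regularly updated list.

```python
import re

# Hypothetical blocklist of known "mode collapse" phrases (assumed examples
# based on the tics discussed in the article).
COLLAPSE_PHRASES = [
    r"I will catch you steadily",
    r"\bgoblins?\b",
    r"\bgremlins?\b",
]

PATTERN = re.compile("|".join(COLLAPSE_PHRASES), flags=re.IGNORECASE)

def strip_tics(text: str, replacement: str = "") -> str:
    """Remove known verbal tics from model output, then collapse the
    extra whitespace left behind by the deletions."""
    cleaned = PATTERN.sub(replacement, text)
    return re.sub(r"\s{2,}", " ", cleaned).strip()
```

A filter this naive would obviously mangle legitimate sentences about goblins, which is why real community layers would need context-aware rules rather than bare regexes; but it captures the shape of the idea: an open-source scrubbing pass that sits between the LLM and the reader.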
Combating the “AI Smell” in Professional Writing
As AI tics become more recognizable—like the overuse of em dashes or the “it’s not A; it’s B” construction—the value of human-centric editing will skyrocket. To keep your content ranking high on Google and engaging for readers, you must actively fight the “AI smell.”
Avoid the traps of mode collapse by diversifying your sentence length and avoiding the “helpful assistant” tone that characterizes most default LLM outputs. Learn more about this in our comprehensive guide to prompt engineering.
Frequently Asked Questions
What is “mode collapse” in AI?
Mode collapse occurs when an AI model begins to over-rely on a limited set of responses or phrases, ignoring the variety of the training data because it has found a “safe” or “highly rewarded” pattern.

Why does ChatGPT mention goblins or gremlins?
This was attributed to a specific reward signal during training that inadvertently encouraged the model to include these terms, leading to a repetitive pattern across the model’s outputs.
Can AI verbal tics be fixed?
Yes. AI labs can fix this by filtering training data, adjusting RLHF (Reinforcement Learning from Human Feedback) parameters, and diversifying the data the model is rewarded for producing.
How can I tell if a text is AI-generated?
Look for “verbal tics” such as repetitive sentence structures, an overly polite or “steady” tone, and heavy reliance on the transition words and em dashes that LLMs favor.
Is your AI acting weird?
We want to hear about the strangest “verbal tics” you’ve encountered in your chats. Drop a comment below or share your experience on our community forum!
