Beyond the AGI Myth: Understanding AI as a Social Technology
For years, the conversation around artificial intelligence has been dominated by science fiction. We hear about the “Singularity,” the arrival of Artificial General Intelligence (AGI), and the fear of super-intelligent machines with their own agendas. But if we strip away the mythic garb, a different, more grounded reality emerges.
Rather than a separate species or a digital god, AI—and specifically Large Language Models (LLMs)—is better understood as a social technology. Much like the development of markets, bureaucracies, and democracy, AI is a systematic means of reorganizing social relationships among human beings.
The Science of ‘Coarse-Graining’: Why AI is Always Lossy
To understand the future of AI, we have to understand a concept called coarse-graining. In simple terms, a coarse-graining is a stripped-down representation of a complex phenomenon. No system—human or machine—can grasp the full detail of its environment, so we use abstractions to make the world tractable.
LLMs are essentially massive coarse-grainings of the textual information they were trained on. They capture leading statistical characteristics to produce outputs that resemble human reasoning. However, this process is inherently lossy.
Abstraction hides complexity. When AI discards information to create a manipulable model, it creates blind spots. The critical question for the future isn’t whether AI is “accurate,” but rather: Which information is being discarded, and who benefits from that loss?
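The point can be made concrete with a toy sketch. The example below is purely illustrative (the categories, counts, and frequency threshold are all invented, and real LLM training is vastly more complex), but it shows the basic mechanic: when a model keeps only the leading statistical regularities, rare but important cases vanish from the representation entirely.

```python
from collections import Counter

# Toy "corpus" of citizen requests: one common case dominates,
# one rare case matters a great deal to the people it affects.
requests = ["renew_license"] * 990 + ["disability_appeal"] * 10

# Coarse-grain: keep only categories above a frequency threshold.
# (5% is an arbitrary cutoff chosen for this illustration.)
THRESHOLD = 0.05

counts = Counter(requests)
total = sum(counts.values())
model = {k: v / total for k, v in counts.items() if v / total >= THRESHOLD}

print(model)  # the rare "disability_appeal" category has been discarded
```

The compressed `model` is far easier to work with than the raw data, which is exactly why coarse-graining is useful; the cost is that whoever falls below the threshold no longer exists as far as the model is concerned.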
The Collision of AI and Bureaucracy
There is a prevailing belief in some tech circles that AI will “eat the state,” replacing “inefficient” human bureaucrats with optimized algorithmic agents. This vision assumes that the primary problem with government is a lack of efficiency. In reality, the friction in bureaucracy often stems from incommensurable trade-offs.
Bureaucracy isn’t just about top-down orders; it’s about “mutual adjustments”—the constant, messy negotiation between divergent goals (e.g., balancing immediate unemployment relief versus long-term economic “pump priming”).
Because you cannot “optimize” a political trade-off, AI cannot simply replace the bureaucratic process. Instead, we are seeing a “chimerical meld” of the two, leading to several emerging trends:
1. The Rise of the ‘Mosaic Eye Synopticon’
Traditionally, large organizations “do not know what they know” because valuable information is scattered across disparate offices. AI can now stitch this information together, creating a mosaic eye synopticon. This allows for unprecedented coordination but eliminates the “invisible spaces” that often protect individual liberties and local autonomy.
2. Ideological Oracles and Blind Spots
AI is exceptionally good at articulating organizational ritual and boilerplate language. We may see the rise of “ideological oracles”—AI systems that can explain how a general national strategy applies to a specific, niche sector. However, these systems can also create systematic blind spots, seamlessly stitching together facts while concealing ideologically inconvenient truths.
3. The ‘Street-Level Algorithm’ Problem
Human “street-level” bureaucrats often identify novel problems or anomalies and use them to refine policy. Algorithms, however, execute pre-trained classification boundaries. When a “street-level algorithm” encounters a marginal case, it may apply a standard pattern with erroneously high confidence, potentially locking in existing inequalities and ignoring valuable, novel feedback from the public.
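A minimal sketch shows why "erroneously high confidence" is a structural feature, not a bug in any one system. The classifier below is a hypothetical toy (the centroids and case values are invented): because its confidence comes only from *relative* distance between learned categories, a case that resembles nothing in the training data can still receive a near-certain label.

```python
import math

# Toy 1-D eligibility classifier with centroids "learned" from
# historical cases (hypothetical values for illustration).
centroids = {"approve": 2.0, "deny": 8.0}

def classify(x):
    # Softmax over negative distances: confidence reflects only which
    # centroid is *relatively* closer, not whether the case resembles
    # the training data at all.
    scores = {label: math.exp(-abs(x - c)) for label, c in centroids.items()}
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}

# A routine case near the "approve" cluster: high confidence, reasonably.
print(classify(2.1))

# A marginal, novel case (x = 100.0) unlike anything in the training
# data: the model still reports near-total confidence in "deny",
# because 100 is marginally closer to 8 than to 2.
print(classify(100.0))
```

A human caseworker would flag the second case as anomalous and escalate it; the classifier simply files it under the nearest existing boundary, and the novelty never feeds back into policy.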
The New Bureaucratic Tug-of-War
As AI integrates into governance, the “cat-and-mouse” games of bureaucracy will simply move to a new medium. We already see this in how officials might “juke the stats” to look better in the eyes of their superiors.

In an AI-driven system, subordinates may find ways to manipulate the training data or the internal documents that feed the LLM, ensuring the “oracle” produces a favorable assessment of their performance. The struggle for power won’t disappear; it will just become more technical and harder to articulate.
While AI can translate jargon across different organizational branches—such as the various arms of the US military—this “over-facile translation” can be dangerous. It may obscure genuine operational differences that only become apparent when bureaucratic language finally meets material reality.
Frequently Asked Questions
Will AI eventually replace all government employees?
Unlikely. While AI can automate routine tasks and organizational translation, it cannot resolve the political trade-offs and “mutual adjustments” that are central to human governance.
What is ‘lossiness’ in AI?
Lossiness refers to the information discarded when a complex reality is compressed into a statistical model (a coarse-graining). This can lead to systematic errors or blind spots regarding rare events and minority groups.
Is AGI a realistic goal for government efficiency?
The pursuit of AGI often ignores the actual difficulties of social organization. Treating AI as a “magic” solution to bureaucracy overlooks how these tools can exacerbate information loss and centralize power in ways that reduce flexibility.
Join the Conversation
Do you think AI will make government more transparent or simply create new ways to hide the truth? Share your thoughts in the comments below or subscribe to our newsletter for more deep dives into the political economy of technology.
