The New Battlefield: How AI is Weaponizing Disinformation in Modern Warfare
Since the beginning of U.S. and Israeli strikes against Iranian military and nuclear infrastructure in late February, a dual conflict has been unfolding. One is a traditional kinetic war; the other is a more insidious battle for reality itself, waged through the systematic use of artificial intelligence to manufacture disinformation at unprecedented scale.
From Crude Propaganda to AI-Powered Deception
The evolution of disinformation tactics is stark. During the Iran-Iraq War, Tehran relied on radio broadcasts and print media with limited reach. Later, during the 1991 Gulf War, Iraqi disinformation, built on exaggerations easily debunked by the Western press, proved ineffective. The digital age brought sock puppets and recycled footage, tactics that required significant human effort and were easily countered by basic verification tools.
December 2023 marked a turning point. Iran’s IRGC-linked group, Cotton Sandstorm, hijacked streaming services in the UAE, UK, and Canada, broadcasting a deepfake newscast. Microsoft identified this as the “first Iranian influence operation where AI played a key component,” signaling a “fast and significant expansion” of Iranian capabilities.
By June 2025, the 12-day Israel-Iran conflict was being dubbed “The First AI War,” with generative AI producing more of the conflict’s misinformation than traditional methods did. Three fake videos alone amassed over 100 million views.
The Tactics of Disruption: Coordinated Attacks and Forensic Cosplay
The current conflict, beginning in March 2026, demonstrates further sophistication. Tens of thousands of inauthentic accounts are simultaneously distributing identical AI-generated content across major platforms, utilizing synchronized posting times and coordinated hashtags. This points to centralized production, not organic spread.
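This coordination signature is itself detectable. Below is a minimal, hypothetical sketch of the kind of heuristic researchers use to flag such bursts: many distinct accounts posting near-identical text inside a narrow time window. The field names, thresholds, and sample data are illustrative assumptions, not any platform’s actual detection pipeline.

```python
# Hypothetical sketch of coordinated-posting detection: flag clusters where
# many distinct accounts post near-identical text within a short window.
from collections import defaultdict
from datetime import datetime, timedelta

def flag_coordinated_clusters(posts, min_accounts=50, window=timedelta(minutes=5)):
    """posts: iterable of dicts with 'account', 'text', and 'ts' (datetime)."""
    clusters = defaultdict(list)
    for p in posts:
        # Normalize whitespace and case so trivial edits don't break grouping.
        key = " ".join(p["text"].lower().split())
        clusters[key].append(p)

    flagged = []
    for key, group in clusters.items():
        accounts = {p["account"] for p in group}
        if len(accounts) < min_accounts:
            continue
        times = sorted(p["ts"] for p in group)
        # Synchronized posting: the entire burst lands inside one short window.
        if times[-1] - times[0] <= window:
            flagged.append((key[:60], len(accounts), times[0]))
    return flagged

# Illustrative burst: 80 sock-puppet accounts posting the same caption
# within about a minute of each other.
burst = [{"account": f"acct_{i}", "text": "BREAKING: strike footage!",
          "ts": datetime(2026, 3, 14, 9, 0) + timedelta(seconds=i)}
         for i in range(80)]
print(flag_coordinated_clusters(burst))
```

Organic virality looks nothing like this: genuine sharing spreads across hours and days, with paraphrase and commentary, rather than identical text landing from thousands of accounts in minutes.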
A disturbing new tactic is “forensic cosplay”: the fabrication of technical-looking verification tools to discredit authentic evidence. For example, fabricated heatmap visualizations were used to falsely label photographs from a strike site in Tehran as AI-generated. Similarly, a fake “Empirical Research and Forecasting Institute” published a methodologically flawed analysis of a New York Times photograph; though its conclusions were meaningless, the post still attracted over 600,000 views on X.
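To see why forensic cosplay is cheap and effective, consider how little it takes to produce an official-looking “detection heatmap.” The hypothetical sketch below renders pure random noise with a forensic-style color palette and an authoritative label; it performs no analysis whatsoever, which is precisely the point.

```python
# A fabricated "forensic" heatmap in a few lines: random numbers dressed up
# with a detection-style palette and label. Hypothetical demo only; this is
# not the output of any real detector.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
fake_scores = rng.random((32, 32))               # pure noise, no image analysis

plt.imshow(fake_scores, cmap="inferno")          # standard "forensic" palette
plt.colorbar(label="AI-generation likelihood")   # fabricated, meaningless label
plt.title("Synthetic-content heatmap (fabricated)")
plt.savefig("fake_forensics.png")
```

The lesson for readers: a heatmap overlay is a visualization choice, not evidence. Without a documented, reproducible method behind it, it proves nothing.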
The Authoritarian Amplification Network
Iran isn’t operating in isolation. The Foundation for Defense of Democracies has documented an “authoritarian media playbook” involving Russian bot networks laundering Iranian content and Chinese state-aligned media echoing anti-U.S. narratives. This convergence of interests amplifies the impact far beyond what any single actor could achieve.
The Strategic Goal: Destroying the Shared Reality
The primary objective of Iran’s AI-driven disinformation campaign isn’t simply persuasion; it is the destruction of the shared evidentiary foundation that enables accountability. By flooding the information space with plausible deniability (the ability to dismiss any evidence as fabricated), Iran aims to create a state of epistemological chaos in which truth becomes unknowable.
This “Liar’s Dividend,” a term coined by legal scholars Danielle Citron and Robert Chesney, lets bad actors dismiss genuine evidence as fabricated and can even bolster public support for those facing accountability. Research confirms the effect, and its impact is likely amplified by the dramatic improvements in synthetic media.
Within Iran, this strategy serves a dual purpose: projecting military capability abroad while insulating the regime from documentation of its own actions against its citizens. Internet connectivity within Iran has been severely restricted, creating an information vacuum filled by deepfakes and fabricated analysis.
The Detection Gap and the Need for a New Approach
Currently, there is “no ability today to systematically identify AI-driven influence campaigns,” according to Danny Citrinowicz of Tel Aviv University. Meta’s Oversight Board has deemed its own deepfake detection “not robust or comprehensive enough.” And the EU AI Act’s labeling requirements won’t be enforceable until August 2026, months after this conflict began.
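Provenance labeling is one of the few scalable countermeasures on the near horizon. As a rough illustration, assuming a JPEG input: C2PA “Content Credentials” manifests travel in APP11 (0xFFEB) marker segments, so a structural scan can at least reveal whether an image carries any credentials at all. The file name below is a placeholder, real verification requires a full C2PA library to validate the cryptographic signatures, and the absence of a manifest proves nothing, since most authentic images carry no credentials.

```python
# Minimal structural check for C2PA-style provenance metadata in a JPEG.
# Sketch only: it detects APP11 segments (the carrier for JUMBF/C2PA boxes),
# it does not parse or cryptographically validate a manifest.
import struct

def has_app11_segments(path):
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":                  # not a JPEG (SOI marker missing)
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == 0xEB:                       # APP11: carrier for JUMBF/C2PA
            return True
        if marker == 0xDA:                       # start of scan: no more headers
            break
        i += 2 + length                          # length field includes itself
    return False

print(has_app11_segments("photo.jpg"))           # "photo.jpg" is a placeholder
```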
The U.S. is restructuring its counter-influence mission, but the timing is critical: a new institutional architecture is still under development while Iran’s campaign continues unabated.
Key Takeaways
- The strategic objective is epistemic disruption: degrading the audience’s ability to form reliable beliefs.
- The Russia-China-Iran amplification model is a template for future conflicts.
- Detection tools are now weapons, with the fabrication of forensic verification tools representing a qualitative escalation.
- The gap between adversary capability and institutional response is significant and growing.
FAQ
Q: What is “forensic cosplay”?
A: It’s the fabrication of technical-looking verification tools to falsely discredit authentic evidence, making real things appear false.
Q: What is the “Liar’s Dividend”?
A: It’s the ability to dismiss genuine evidence as fabricated, increasing support for actors facing accountability.
Q: Is AI the only problem?
A: No. The coordinated amplification of disinformation by state-backed networks in Russia and China significantly exacerbates the issue.
Did you know? Bot traffic now accounts for over 50% of all web activity, meaning the information environment is, in a measurable sense, majority-synthetic.
Pro Tip: Be skeptical of all online content, especially during times of conflict. Verify information from multiple credible sources before sharing.
