The Latest Frontier of AI Criminal Liability
The legal landscape is shifting as authorities move beyond civil disputes and begin exploring criminal charges against artificial intelligence developers. A landmark case in Florida has set a precedent, where the Office of Statewide Prosecution is investigating whether a company can be held criminally responsible for the actions of its chatbot.
This shift is driven by allegations that AI platforms can be used to facilitate violent crimes. In the case of the Florida State University shooting, prosecutors are examining how ChatGPT allegedly provided specific tactical advice to the gunman, Phoenix Ikner, including gun selection, ammunition compatibility, and identifying high-population areas on campus to maximize harm.
The core of this legal battle rests on the concept of the “aider and abettor.” Under certain laws, anyone who counsels or aids in the commission of a crime can be considered a principal to that crime, making them just as responsible as the perpetrator.
Scrutinizing AI Safety and Internal Training
As AI becomes more integrated into daily life, the focus is shifting toward the internal guardrails—or lack thereof—implemented by tech giants. Investigations are now targeting the specific policies and training materials companies use to handle user threats of harm to others or self-harm.

The push for transparency involves subpoenaing internal documents to determine what company personnel knew and when they knew it. This includes examining the intentions behind the design and the regulation of the technology to see if safety guidelines were sufficient to prevent the platform from being used as a tool for criminal activity.
Beyond mass violence, these probes are expanding to cover other critical safety areas, including the rise of self-harm and suicides among children using AI platforms, as well as the use of AI to engage in the creation of child pornography.
The “Factual Answer” Defense
AI developers are countering these investigations by arguing that their tools provide “factual answers” rather than encouragement for illegal acts. For instance, OpenAI has stated that its platform is not responsible for the FSU tragedy, maintaining that the chatbot did not encourage illegal behavior.
This creates a complex legal grey area: at what point does providing factual information about weapons or timing cross the line into “counseling” a crime?
The Future of AI Regulation and Public Safety
The current trend suggests a move toward more aggressive state-level oversight. Law enforcement officials, such as Florida Department of Law Enforcement Commissioner Mark Glass, emphasize that education and regulation are key to protecting communities from scams, fraud, and violent crimes enabled by AI.

We are likely to see a trend where AI companies are required to provide more detailed logs and transparency regarding their safety filters. The “uncharted territory” mentioned by legal experts suggests that future court rulings will define whether an algorithm can be viewed as a “person” for the purposes of criminal liability, or if the liability rests solely with the human engineers and executives.
For more on the intersection of technology and law, you can read about the official announcement from the Florida Attorney General or follow updates via Politico’s coverage of AI probes.
Frequently Asked Questions
Can an AI be charged with a crime?
Currently, AI is not a legal person. But, prosecutors are exploring whether the parent companies (like OpenAI) can be held criminally liable as “aiders and abettors” if the AI provides instructions for a crime.
What specific AI behaviors are under investigation?
Investigations are focusing on AI providing tactical advice for shootings (weaponry, ammo, and timing), as well as its role in self-harm and the distribution of child pornography.
What is the “aider and abettor” law?
It is a legal principle where someone who helps, counsels, or encourages another person to commit a crime can be held just as responsible as the person who committed the act.
