Amount of AI-generated child sexual abuse material found online surged in 2025

by Chief Editor

The Rising Tide of AI-Generated CSAM: A Looming Crisis

The digital landscape is facing a disturbing new threat: a surge in child sexual abuse material (CSAM) created using artificial intelligence. Recent data reveals a dramatic increase in the volume of this content, raising serious concerns about the exploitation of children and the challenges facing law enforcement and tech companies.

Exponential Growth: Numbers Paint a Grim Picture

The Internet Watch Foundation (IWF) reported identifying 8,029 realistic AI-generated images and videos of CSAM in 2025. While overall CSAM detections rose 14%, the growth in AI-generated content was far steeper: the IWF recorded a more than 260-fold increase in AI-generated videos, identifying 1,286 in 2025 compared with just 2 in early 2024. A separate tally recorded more than 485,000 reports of AI-generated CSAM in the first half of 2025 alone, against 67,000 for the whole of 2024.

The Severity of the Content is Escalating

It’s not just the quantity of AI-generated CSAM that’s concerning; the nature of the content is also becoming more extreme. The IWF found that 65% of the 3,443 AI-generated videos it assessed were classified as “category A”, the most severe classification under UK law. This is significantly higher than the 43% rate for non-AI-generated videos, indicating that the technology is being used to create increasingly violent and disturbing material.

Dark Web Discussions Reveal a Disturbing Trend

Analysis of online activity reveals a chilling trend. IWF analysts have observed dark web conversations in which individuals involved in CSAM express “delight” at advances in AI technology. These discussions center on the increasing realism of AI-generated content, including the ability to add audio to videos and to manipulate images of real children. Offenders are also discussing “agentic” systems, which can carry out tasks autonomously, raising fears of fully automated CSAM creation.

Combating the Threat: Current and Future Strategies

Authorities are taking steps to address this growing problem, but the challenge is immense. The UK government has granted tech companies and child safety agencies the power to test AI tools for their potential to generate CSAM, aiming to prevent abuse before it happens. A ban on possessing, creating, or distributing AI models designed to generate CSAM was also announced last year.

The Inevitability of “Full Feature-Length Films”

Experts warn that the current situation is just the beginning. The IWF has cautioned that “full feature-length AI films of child sexual abuse” are likely inevitable as synthetic video technology continues to improve. This underscores the urgent need for proactive measures and robust safeguards.

Public Opinion and the Call for Legislation

Public support for stronger regulations is also growing. Polling data shows that eight out of ten UK adults want the government to introduce legislation ensuring AI systems are developed with safety as a priority and are “future-proofed from causing harm.”

Looking Ahead: Potential Future Trends

The rapid evolution of AI technology suggests several potential future trends in the realm of AI-generated CSAM:

  • Increased Realism: AI models will continue to improve, making it increasingly difficult to distinguish between real and synthetic content.
  • Personalized Abuse: AI could be used to create CSAM featuring images or likenesses of specific children, potentially obtained from social media or other online sources.
  • Automated Production: Fully automated systems could generate CSAM on a massive scale, overwhelming existing detection and removal efforts.
  • Evasion Techniques: Offenders will likely develop techniques to evade detection, such as using obfuscation methods or distributing content through encrypted channels.

FAQ

What is Category A CSAM? Category A refers to the most severe type of child sexual abuse material under UK law.

What is the IWF? The Internet Watch Foundation is a UK-based organization that monitors and removes child sexual abuse content online.

Is AI-generated CSAM detectable? While increasingly difficult, AI-generated CSAM can sometimes be identified through forensic analysis and detection tools.

What can be done to help? Reporting suspected CSAM to the appropriate authorities and supporting organizations working to combat online child exploitation are crucial steps.

Did you know? The IWF operates a hotline for reporting illegal online content.

This is a rapidly evolving situation. Stay informed and report any suspicious activity. Explore the Internet Watch Foundation website for more information and resources.
