Why Deepfake Abuse of Children Is a Growing Crisis

UNICEF’s latest research shows that millions of children are having their images turned into sexualised content by generative AI. The agency warns that without robust regulatory frameworks and coordinated action from governments and tech platforms, the threat will only worsen.

What the Data Reveal

A 2025 report from the Childlight Global Child Safety Institute recorded a jump from 4,700 cases in the United States in 2023 to over 67,000 in 2024. A joint study by UNICEF, Interpol and ECPAT found that at least 1.2 million children—roughly one in every 25—had their images manipulated into sexually explicit deepfakes in the past year alone.

Psychological Harm Is Real and Lasting

Afrooz Kaviani Johnson, Child Protection Specialist at UNICEF, explains that a child’s “body, identity, and reputation can be violated remotely, invisibly, and permanently.” Victims often experience shame, anxiety, depression, and fear, with the viral nature of deepfakes leading to long‑term trauma and mistrust of digital spaces.

Public Attitudes Signal an Alarming Trend

The National Police Chiefs’ Council (NPCC) recorded a 1,780 % surge in deepfake abuse in the UK between 2019 and 2024. In a related survey of UK residents, nearly three in five respondents said they were worried about becoming victims, even as 34 % admitted to having created a sexual or intimate deepfake of someone they knew.

Did you know? 13 % of respondents said creating and sharing an intimate deepfake of a partner should be both morally and legally acceptable.

Future Trends: What Experts Expect

1. Strengthening Legal Definitions

UNICEF urges governments to update child sexual abuse material (CSAM) laws to expressly include AI‑generated content and to criminalise both creation and distribution. Legal reforms are expected to accelerate as more countries recognise the gap.

2. “Safety‑by‑Design” in AI Tools

Tech firms are being pressed to adopt child‑rights impact assessments and embed safeguards at the development stage. The UN notes that many generative AI tools currently lack meaningful protection mechanisms.

3. Growing AI‑Detection Capabilities

As deepfake technology evolves, researchers are investing in detection algorithms that can flag manipulated child imagery before it spreads. These tools are likely to become standard requirements for platforms that host user‑generated content.

4. Expanding AI Literacy Programs

A joint UN statement highlighted a widespread lack of AI literacy among children, parents and teachers. Future initiatives will focus on education campaigns that teach young people how to recognise and report AI‑driven abuse.

5. Increased Platform Accountability

Recent investigations into X’s Grok chatbot—found generating non‑consensual sexual deepfakes—have led to legal scrutiny in the UK, EU and France. Similar high‑profile cases are expected to prompt stricter oversight and mandatory reporting obligations for online services.

Pro tip for parents and educators

Encourage children to share any unexpected or unsettling images they encounter online. Early reporting can trigger removal and support services before the content goes viral.

Frequently Asked Questions

What is a deepfake?
A deepfake is AI‑generated media—image, video or audio—engineered to look and sound real, often used to create fabricated sexual content.
Why are children especially vulnerable?
Children’s images can be taken from public photos and transformed into sexualised material without consent, leading to permanent online exposure and psychological harm.
How can I report a deepfake involving a child?
Contact local law enforcement, the national CSAM hotline, or report directly to the platforms that host the content. UNICEF recommends preserving any evidence before deletion.
Are there legal protections against AI‑generated child abuse material?
Some jurisdictions are updating CSAM definitions to include AI‑generated content, but many laws still lag behind the technology.

Take Action

Deepfake abuse is real abuse. Share this article, join the conversation in the comments, and sign up for our newsletter to stay informed about emerging AI risks and how you can help protect the next generation.