Latest Law Targets Deepfakes, But Victims Say More Action Needed
A new law criminalizing the creation of non-consensual intimate images has come into effect in the UK, marking a significant step in addressing the growing threat of deepfake abuse. However, victims and campaigners are already urging the government to go further, highlighting gaps in protection and the urgent need for stronger enforcement.
The Rise of Deepfake Abuse and the New Legislation
Deepfake technology, which uses artificial intelligence to create realistic but fabricated images and videos, has opened a new frontier for online abuse. Victims, like Jodie (who uses a pseudonym), have found their images used in deepfake pornography, leading to severe emotional distress and a difficult path to justice. Jodie testified against Alex Woolf, who was convicted and sentenced to 20 weeks in prison for posting images of women to porn websites. However, she notes that a specific law addressing her experience didn’t exist at the time.
The new offence was introduced as an amendment to the Data (Use and Access) Act 2025, which received royal assent last July but did not come into force until Friday. Campaigners expressed frustration over the delay, arguing it allowed millions more women to become victims without legal recourse.
Beyond Criminalization: What More Needs to Be Done?
While the criminalization of deepfake creation is a welcome development, advocates emphasize that it’s only one piece of the puzzle. A petition delivered to Downing Street, with over 73,000 signatures, calls for civil routes to justice, such as takedown orders for abusive imagery on platforms and devices. Improved relationships and sex education, alongside adequate funding for specialist services like the Revenge Porn Helpline, are also key demands.
Madelaine Thomas, founder of tech forensics company Image Angel, described the day the law came into effect as “a very emotional day,” but also pointed out that the law does not fully protect sex workers from intimate image abuse. Her own intimate images have been shared without her consent on a daily basis for the past seven years.
Tech Companies and the Fight Against Deepfakes
The issue isn’t solely about creating the images; it’s also about the platforms where they are shared. Ofcom is currently investigating Elon Musk’s X (formerly Twitter) over Grok AI-generated sexual deepfakes. X has stated it will take steps to address the problem, but details remain limited. The UK government is also taking action against “nudification” apps, banning them outright to prevent abuse at its source.
The government has also confirmed that creating non-consensual sexual deepfakes will be a priority offence under the Online Safety Act, placing extra duties on platforms to proactively prevent this content.
The UK has partnered with Microsoft to develop a deepfake detection system, signaling a commitment to leveraging technology to combat the problem.
The Scale of the Problem
The prevalence of online abuse is staggering. According to Refuge, one in three women in the UK have experienced online abuse. This highlights the widespread nature of the problem and the urgent need for comprehensive solutions.
Frequently Asked Questions
What is a deepfake? A deepfake is an image or video that has been altered to replace one person’s likeness with another, often using artificial intelligence.
Is it illegal to share a deepfake image? Sharing intimate deepfakes without consent is already illegal, and creating them is now a criminal offence in the UK.
Where can I get help if I am a victim of deepfake abuse? The Revenge Porn Helpline and organizations like Stop Image-Based Abuse can provide support and guidance.
What is the Online Safety Act? The Online Safety Act places legal duties on online platforms to protect users from harmful content, including deepfakes.
The fight against deepfake abuse is ongoing. While the new law represents progress, continued vigilance, technological innovation, and a commitment to supporting victims are essential to address this evolving threat.
