The Deepfake Dilemma: How App Stores Are Fueling a Dangerous Trend
Artificial intelligence offers incredible potential, but its darker side is becoming increasingly apparent. The proliferation of “nudify” apps – applications capable of digitally removing clothing from people in images – highlights a significant and growing concern. Recent investigations reveal these apps aren’t just available; they’re being actively promoted by the very platforms that claim to ban them.
Apple and Google’s Role in the Problem
A report from the Tech Transparency Project (TTP) uncovered a disturbing trend: Apple’s App Store and Google’s Play Store are not only hosting nudify apps but are also steering users towards them. Searches for terms like “nudify,” “undress,” and “deepnude” yield numerous results for apps capable of creating non-consensual intimate imagery. Even more concerning, the app stores have been found to suggest related search terms, effectively encouraging users to explore this harmful content.

The scale of the problem is substantial. TTP’s data shows these apps have been downloaded nearly 500 million times and have generated over $122 million in revenue. Adding to the danger, 31 of these apps were rated as suitable for all ages, making them accessible to children.
How Nudify Apps Work: A Generative AI Threat
These apps leverage generative AI to manipulate images. While similar technology powers legitimate features like Google Photos’ Magic Editor, nudify apps lack the safety measures and ethical guardrails of their more responsible counterparts. They can take any photo and use AI to generate a realistic-looking nude depiction of the person in it.
Beyond Nudification: The Rise of Deepfake Scam Ads
The problem extends beyond nudify apps. Scammers are increasingly utilizing deepfake technology in advertising, particularly on platforms like Meta. A TTP investigation revealed $49 million spent on political ads featuring deepfake videos of prominent figures like Donald Trump and Elon Musk, used to promote fraudulent government benefits and investment schemes. These ads often target vulnerable populations, such as seniors.
What’s Being Done – and What Needs to Happen
Following the TTP reports, both Google and Apple removed some of the identified apps. However, the underlying issue remains. The UK government is attempting to ban these apps outright, recognizing the harm they inflict, particularly within schools.

The current situation demands a more proactive approach. App stores need to refine their vetting processes and implement stricter controls on search and advertising algorithms to prevent the promotion of harmful content. The increasing sophistication of generative AI means that simply removing apps isn’t enough; platforms must anticipate and address the evolving threat.
Frequently Asked Questions
What are “nudify” apps? These apps use artificial intelligence to digitally remove clothing from images.
Are these apps illegal? Legality varies by jurisdiction, but the ethical concerns surrounding non-consensual image manipulation are significant.
How can I protect myself? Be cautious about sharing personal photos online and be aware of the potential for misuse of AI technology.
What is a deepfake? A deepfake is a manipulated video or image created using artificial intelligence, often used to depict someone doing or saying something they never did.
Do you have thoughts on this issue? Share your comments below!
