Apple and Google Broke Their Own Rules by Promoting ‘Nudify’ Apps, Report Says

by Chief Editor

The Hidden Economy of Nonconsensual AI Imagery

The digital landscape is witnessing a troubling rise in “nudify” or undressing apps: tools designed specifically to create nonconsensual intimate imagery. Leveraging generative AI to produce deepfakes, these apps edit images of people, predominantly women, to make them appear nude.

Even as major platforms claim to maintain strict safety standards, a report from the Tech Transparency Project (TTP) suggests a significant gap between policy and practice. Despite bans on “overtly sexual or pornographic material,” these apps have found a way to thrive within the world’s largest app stores.

Did you know? According to data from analytics firm AppMagic, “nudify” apps have been downloaded 483 million times, generating over $122 million in lifetime revenue.

Why Big Tech Struggles to Police Its Own Stores

For developers, getting an app onto the Apple App Store or Google Play Store requires passing a rigorous review against each store’s safety criteria. However, the persistence of prohibited content suggests that these moderation systems are being bypassed or under-enforced.


Investigations have revealed that users can still search for troubling keywords such as “deepnude,” “undress,” and “nudify.” In a deep dive into the top 10 apps across both stores, TTP found that 40% of them (four apps) advertised the ability to render women nude or scantily clad.

The Revenue Conflict

The tension between safety and profit is a recurring theme in tech moderation. Apple and Google earn money from advertising and take a percentage of paid app subscriptions, a financial incentive that may encourage less vigilance. This revenue stream is cited as one reason the companies may be slow to remove apps that clearly violate their own policies.


In some cases, the platforms aren’t just hosting these apps—they are actively promoting them. Google, for instance, has been found creating a “carousel of ads” for some of the most sexually explicit apps identified in investigations.

From Niche Apps to Large-Scale AI Models

The threat is evolving beyond standalone “undressing” apps. The integration of generative AI into larger platforms has scaled the production of abusive content. For example, users of the AI model Grok reportedly created 1.4 million sexualized deepfakes over a mere nine-day period.

This shift indicates a trend where AI capabilities are becoming more accessible and powerful, making it harder for platforms to contain the spread of nonconsensual imagery. While some US senators have called for the removal of such tools from app stores, the response from tech giants has often been gradual.

Pro Tip: If you encounter an app that violates safety policies or promotes nonconsensual imagery, use the built-in reporting tools (“Report a Problem” on the App Store, “Flag as inappropriate” on Google Play) to alert moderators.

The “Cat-and-Mouse” Game of Moderation

When reports go public, platforms often take swift corrective action. Following recent revelations, Apple removed 15 reported apps and blocked several flagged search terms, while Google removed seven. However, the cycle often repeats as new apps emerge with slightly altered keywords to evade detection, as the sketch below illustrates.
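To illustrate the moderation side of this cat-and-mouse game, here is a minimal sketch of the kind of keyword normalization a store could apply before matching app titles against a blocklist. Everything in it is an assumption for illustration: the term list, the substitution map, and the function names are hypothetical, not Apple’s or Google’s actual systems.

    import re
    import unicodedata

    # Hypothetical blocklist for illustration; real moderation systems
    # use far larger, continually updated term lists.
    FLAGGED_TERMS = {"deepnude", "undress", "nudify"}

    # Common character substitutions used to dodge naive exact-match filters.
    LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                              "5": "s", "7": "t", "@": "a", "$": "s"})

    def normalize(text: str) -> str:
        """Reduce a string to a canonical form before matching."""
        # Strip accents and diacritics (e.g. "nüdify" -> "nudify").
        text = unicodedata.normalize("NFKD", text)
        text = "".join(c for c in text if not unicodedata.combining(c))
        # Lowercase, then map common leetspeak substitutions.
        text = text.lower().translate(LEET_MAP)
        # Drop separators inserted to break up keywords ("n.u.d.i.f.y").
        return re.sub(r"[^a-z]", "", text)

    def is_flagged(app_title: str) -> bool:
        """Return True if any flagged term survives normalization."""
        canonical = normalize(app_title)
        return any(term in canonical for term in FLAGGED_TERMS)

    # Evasion attempts that a naive exact-match filter would miss:
    for title in ["Nud1fy Pro", "D.e.e.p Nude Editor", "Undre$$ AI"]:
        print(title, "->", is_flagged(title))  # all three print True

Even this simple normalization catches common evasions, which underscores the report’s point: the tooling to detect these apps is not exotic; the gap is in enforcement.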


This pattern suggests that without a fundamental shift in how AI-generated content is screened, the battle against nonconsensual imagery will remain a reactive process rather than a preventative one. For more on this, check out our guide on AI safety trends.

Future Outlook for Digital Safety

As AI continues to evolve, the industry may move toward more aggressive automated detection and stricter accountability for platforms that profit from prohibited content. The pressure from watchdogs and legislative bodies is increasing, forcing a conversation about whether financial gain justifies the hosting of abusive tools.


The case of Grok highlights this struggle; despite reports that Apple privately threatened to remove the app due to its abusive AI capabilities, the tool remains available on major stores. This suggests a complex negotiation between platform owners and high-profile AI developers.

Frequently Asked Questions

What are “nudify” apps?
These are applications that use generative AI to edit images of people—usually women—to make them appear nude, creating nonconsensual intimate imagery.

Do Apple and Google ban these apps?
Yes, both companies have policies prohibiting “overtly sexual” or “sexually suggestive” content, but reports indicate many such apps still bypass these rules.

How much money do these apps make?
According to AppMagic, these apps have generated more than $122 million in lifetime revenue.

What is the role of generative AI in this?
Generative AI allows these apps to create realistic deepfakes, significantly increasing the scale and ease with which abusive sexualized images can be produced.

Join the Conversation

Do you feel app stores should be held legally responsible for the AI tools they promote? Let us know your thoughts in the comments below or subscribe to our newsletter for more insights into the intersection of AI and ethics.
