It’s not just Grok: Apple and Google app stores are infested with nudifying AI apps

We often think of the Apple App Store and Google Play Store as digital “walled gardens”—safe, curated spaces where dangerous or inappropriate content gets filtered out long before it reaches our screens. But a disturbing new analysis by the Tech Transparency Project (TTP) suggests those walls have some serious cracks. The report reveals a grim reality: both major storefronts are currently flooded with dozens of AI-powered “nudify” apps. These aren’t obscure tools hidden on the dark web—they’re sitting right there in plain sight, allowing anyone to take an innocent photo of a person and digitally strip away their clothing without their consent.

Earlier this year, the conversation around this technology exploded when Elon Musk’s AI, Grok, was caught generating similar sexualized images on the platform X. But while Grok became the lightning rod for public outrage, the TTP investigation shows it was just the tip of the iceberg. A simple search for terms like “undress” or “nudify” in the app stores brings up a laundry list of software designed specifically to create non-consensual deepfake pornography.

The Scale of This Industry Is Frankly Staggering

We’re not talking about a few rogue developers slipping through the cracks. According to the TTP’s data, these apps have collectively racked up over 700 million downloads and generated an estimated $117 million in revenue. And here’s the uncomfortable truth: because Apple and Google typically take a commission on in-app purchases and subscriptions, they’re effectively profiting from the creation of non-consensual sexual imagery. Every time someone pays to “undress” a photo of a classmate, a coworker, or a stranger, the tech giants get their cut.

The human cost of this technology cannot be overstated. These tools weaponize ordinary photos. A selfie from Instagram or a picture from a yearbook can be twisted into explicit material used to harass, humiliate, or blackmail victims. Advocacy groups have been screaming about this for years, warning that “AI nudification” is a form of sexual violence that disproportionately targets women and, terrifyingly, minors.

So, Why Are They Still There?

Both Apple and Google have strict policies on paper banning pornographic and exploitative content. The problem is enforcement. It has become a digital game of Whac-A-Mole: when a high-profile report comes out, the companies might ban a few specific apps, but developers often just tweak the logo, change the name slightly, and re-upload essentially the same app a week later. The automated review systems seem incapable of keeping up with the rapid evolution of generative AI.

For parents and everyday users, this is a wake-up call. We can no longer assume that just because an app is on an “official” store, it’s safe or ethical. As AI tools get more powerful and easier to access, the safeguards we relied on in the past are failing. Until regulators step in—or until Apple and Google decide to prioritize safety over commission fees—our digital likenesses remain uncomfortably vulnerable.

