Inside the Tech Stack Keeping Deepfake Abuse Profitable
Following the money behind the deepfake crisis: how platforms, advertisers and tech infrastructure providers profit from non-consensual sexual imagery while avoiding direct accountability.
Cold storage in a Meta data center | Courtesy of Meta
In Part 1 of this investigation, we examined how artificial intelligence tools helped scale non-consensual sexual imagery into a rapidly expanding online industry. Platforms removed some apps, lawmakers proposed bans and regulators opened investigations.
Part 2 examines the ecosystem supporting deepfake abuse. Behind every deepfake tool sits an ecosystem of app stores, advertising networks, cloud infrastructure and payment processors. Each layer captures revenue while the system continues operating.
What emerges is a layered market in which abuse travels through the same infrastructure that powers much of the modern internet.
In early December 2025, users of Elon Musk’s social media platform X began sharing a discovery: The platform’s chatbot, Grok, could generate sexually explicit images of real women using ordinary photographs.
Within days, requests asking the system to “undress her” multiplied across the platform. Internal monitoring later showed the number of requests rising from just 10 to nearly 200,000 in nineteen days.
The images were created from the kinds of photographs that populate modern digital life: Zoom screenshots, workplace headshots and casual pictures stored in phone galleries or company messaging apps. A photo from a morning meeting could become the source material for a manipulated nude image within seconds.
The surge left Musk with a decision. Engineers could restrict the feature or disable it entirely.
Instead, the capability moved behind a paywall.
Grok’s nudification feature became available only to subscribers of X Premium, a paid service with an estimated 15.3 million users. The shift turned a rapidly spreading abuse vector into a premium product.
And what followed looked less like a moderation problem than a market response.
Users quickly learned how to refine their prompts. When the platform later introduced a “zero tolerance” policy for non-consensual nudity, indirect phrasing achieved the same result. Prompts like “micro bikini,” “dental floss,” and “Christmas tree lights” continued producing sexualized images of real women, according to analysis by Copyleaks.
At peak activity, Grok generated roughly one manipulated image every minute.
The episode triggered a cascade of government responses. French prosecutors opened a criminal investigation. Thirty-seven U.S. state attorneys general launched coordinated inquiries. Regulators in the European Union began proceedings under the Digital Services Act. Indonesia, Malaysia, and the Philippines announced bans.
The scale of the reaction reflected a realization among regulators: the images were spreading through a commercial ecosystem.
The technology that produced them sat inside a much larger market.
The Seven-Layer Profit Stack: Mapping the Players Behind the System
Researchers who study deepfake abuse describe the system less as a single industry than as a supply chain.
A photograph uploaded to a nudify tool can pass through multiple layers of technology companies before the image reaches a user: the AI model that generates it, the app store that distributes the software, the advertising network that attracts customers, the cloud provider hosting the service and the payment processor collecting subscription fees.
Each layer earns revenue and operates independently, together forming the infrastructure of a growing digital economy built around manipulated sexual imagery.
Layer 1: Model Developers
At the base of the system sit open-source AI models such as Stable Diffusion and its derivatives.
Developers around the world have adapted the models to generate photorealistic imagery from text prompts. Third-party programmers package the technology into simple interfaces that allow users to upload a photograph and produce a manipulated version within seconds.
The tools often require little technical expertise. Revenue arrives through subscriptions, premium upgrades and paid custom outputs.
Developers frequently frame the software as neutral infrastructure.
Layer 2: App Store Distribution
Much of the technology reaches users through the world’s largest mobile marketplaces.
In January 2026, the Tech Transparency Project found 55 nudify apps on Google Play and 47 on Apple's App Store. Collectively, these apps had been downloaded more than 705 million times, generating approximately $117 million in revenue.
Google and Apple collect commissions of up to 30 percent on in-app purchases.
One of the applications identified in the report, called AI Dress Up, carried a rating labeled “suitable for all ages.” The software had been downloaded more than 10 million times and could generate nude images from uploaded photographs.
Other apps attracted millions of users as well. A tool called Collart accumulated more than 7 million downloads and generated over $2 million in revenue while accepting prompts that depicted women in explicit sexual scenarios.
Another application, RemakeFace, allowed users to place the faces of real people onto nude bodies.
Researchers also found that fourteen of the apps operated from China. Under Chinese data retention rules, images processed through those platforms may be accessible to government authorities.
After the Tech Transparency Project released its report, Apple removed 28 of the apps and Google removed 31.
Some returned. Researchers tracking the changes found that several developers resubmitted modified versions of the apps or reappeared under different names within weeks.
No public database tracks how often these removals and reinstatements occur. The churn has become part of the system.
Layer 3: Platform Advertising
Advertising networks play a central role in directing users to nudify services.
Meta, one of the world's largest advertising platforms, drives an estimated 90 percent of traffic to nudify services. CrushAI ran more than 87,000 ads on Meta's platforms. In just the first two weeks of January 2025, over 8,000 CrushAI ads appeared across Facebook and Instagram.
The ads were not subtle. "Undress any girl for free." "Ever wish you could erase someone's clothes?"
Meta's advertising revenue for full-year 2025: $196.12 billion. The cost Meta claims for investigating and enforcing against CrushAI: $289,000.
The Senate Judiciary Committee identified more than 290 deepfake pornography apps, 80 percent of which launched recently.
Layer 4: Social Amplification
Once these tools reach the internet, platform algorithms help spread them.
X accounts for 70 percent of all nudify-related mentions across social platforms. Bing returned nudify tools as top search results. Forbes documented over 100 videos promoting deepfake pornography on YouTube.
The amplification is not accidental. Internal Facebook research from 2016 found that 64 percent of all extremist group joins were driven by the platform's own recommendation algorithms.
In July 2025, Zuckerberg attributed increased user time to AI-powered recommendations. A UK Parliamentary briefing concluded that outrageous and misleading content increases engagement, which platforms leverage for ad revenue.
Layer 5: Infrastructure
Behind the websites hosting nudify services sits the backbone of the internet.
An analysis of 85 nudify websites found Amazon Web Services hosting 62 of them. Cloudflare provided content delivery network or Domain Name System services to 62 of 85. As of January and February 2026, domain analysis confirms Google Sign-On remains active on 54 of those same sites.
No major cloud provider has issued a public commitment to systematically cut off nudify services.
Some nudify services have begun evolving into infrastructure providers themselves, offering application programming interfaces to other developers of non-consensual image generators. This is not retail anymore. This is wholesale: violation sold as a service to developers building the next generation of the same machine. WIRED's investigation confirmed this shift as of January 2026. Almost nobody has covered what it means: the supply chain is now self-replicating.
Payment processors remain in the chain. PayPal and major credit cards continue to be accepted on active nudify sites. One structural shift: Visa began enforcing updated Integrity Risk Program standards in late 2025, requiring merchants offering AI-generated adult content to use a specific merchant category code. Compliance tracking shows that many violating merchants still lack the required code, meaning even this minimal accountability step is being circumvented.
Layer 6: Monetization
Across 85 nudify websites, estimated annual revenue reached $36 million.
The money arrives through recurring subscriptions, premium upgrades, custom image commissions and data extraction. Non-consensual images frequently become training material for the next generation of commercial AI systems.
The result is a feedback loop: manipulated images generate revenue while simultaneously improving the tools that produce them.
Layer 7: Sextortion and Secondary Extraction
At the far end of the supply chain, revenue can be extracted directly from victims.
Cryptocurrency-based sextortion schemes pressure individuals to pay in exchange for suppressing manipulated images. Women remain primary targets, and minors appear increasingly often in investigations.
A WIRED investigation identified at least 50 active Telegram bots with a combined 4 million monthly users, supported by at least 25 associated channels with 3 million members. These bots operate by selling tokens for image generation. When Telegram removes them, operators relaunch on new channels within days.
The FBI's July 2025 Public Service Announcement on "The Com" documents an international criminal ecosystem with thousands of members, primarily ages 11 to 25, who are recruited from gaming platforms and social media as operatives.
The FBI's 2025 annual Internet Crime Report has not yet been published. Available indicators suggest the numbers continue rising.
The structural revelation: These are not seven separate problems. They are one integrated supply chain. Each layer's revenue depends on the layers below functioning. No single layer has incentive to break the chain because each profits from maintaining it.
The Ban That Is Coming and Why It Isn’t Enough
Governments have begun responding to the spread of AI-generated sexual imagery.
The TAKE IT DOWN Act, signed May 19, 2025, created the first federal criminal prohibition on publishing non-consensual intimate imagery including AI deepfakes. Platforms must implement notice-and-takedown by May 19, 2026. The Federal Trade Commission enforces compliance. Violations carry fines and up to three years in prison.
In January 2026, the Senate passed the DEFIANCE Act unanimously, creating a civil remedy that allows victims to sue for damages of up to $150,000, or $250,000 in cases linked to assault or harassment. The bill now sits with the House Judiciary Committee, with no markup or floor vote scheduled.
Representative Alexandria Ocasio-Cortez and colleagues publicly urged Speaker Johnson in late January 2026 to bring the bill directly to the floor. As of this writing, he has not committed to a vote.
As of January 2026, 47 states have enacted some form of deepfake-related legislation, with 174 total deepfake laws nationwide, 64 of them enacted in 2025 alone.
Alaska, Missouri, and New Mexico remain without comprehensive regimes. Hawaii's deepfake election law was struck down January 30, 2026, on First Amendment grounds. California's governor vetoed the state's AI Abuse Act. A federal judge struck down portions of another California law, citing Section 230 conflicts.
These are real protections. They matter and people fought hard for them.
But we need to be precise about the limitations of the TAKE IT DOWN Act because the people who fought for it deserve honesty, and the people impacted by deepfake abuse deserve to understand how they are and aren’t protected by the law.
In Minnesota, a man named Ben used his access to the Facebook pages of women he knew personally, including family vacation photos and a goddaughter's graduation, to generate pornographic deepfakes of 80 women using a site called DeepSwap.
Among the victims were Jessica Guistolise, Megan Hurley and Molly Kelley, whose testimony later informed state legislation and federal hearings.
Under the TAKE IT DOWN Act, criminal liability attaches to the publication of non-consensual intimate imagery. Creation alone does not automatically trigger federal prosecution.
The law's takedown process requires a victim to find the image, identify where it lives, submit a takedown notice and wait 48 hours. During that window, images can spread across multiple platforms. The man who made them may never have broken a federal law. What the advocates built is real. What it cannot reach is also real.
The 48-hour takedown window assumes content exists in one location, yet several widely used nudify applications operate outside the United States. Fourteen of the apps identified in the Tech Transparency Project’s analysis were based in China. CrushAI operates from Hong Kong. Many payment flows pass through cryptocurrency systems that move across borders.
The legal framework being built in the United States addresses individual points in the chain. The ecosystem itself is distributed.
Regulate app stores? Developers move to direct web distribution and application programming interfaces. Ban specific apps? New ones appear weekly. Criminalize publication? Creation continues legally, and the creation tools are evolving into platforms themselves, selling wholesale violation infrastructure to developers building the next wave.
You cannot fix a coordination failure with more centralization.
The Architecture That Can Actually Change This
If the ban will not be enough, the question is not what rule to add. The question is what kind of system makes violation structurally unprofitable.
The profit stack works because every layer is economically incentivized to keep the chain intact. The only parties who bear the full cost are the women whose images are used without consent, and increasingly the men whose isolation and economic precarity are being converted into subscription revenue by the same companies.
Some technologists and policy researchers argue that the most effective interventions will involve changing the underlying incentives of the system itself.
Several emerging technologies attempt to address that problem by shifting control over images and identity away from platforms and toward the individuals depicted.
One approach involves programmable consent systems. Under this model, ownership of an image could be registered on decentralized networks where individuals hold the encryption keys controlling access to the file. Instead of platforms determining how images circulate, subjects retain the ability to revoke or grant permission.
Early versions of this idea already exist. Blockchain-based consent frameworks built on systems such as Hyperledger Fabric have been tested in clinical trial settings, where researchers require both permanent audit records and revocable permissions.
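To make the mechanism concrete, here is a minimal sketch in Python of what a programmable-consent registry could look like: the person depicted holds the only credential that can grant or revoke access to an image, identified by its hash. Everything here is a hypothetical illustration rather than an existing system: the `ConsentRegistry` class and its methods are invented names, an in-memory dictionary stands in for a decentralized ledger, and access control is modeled as a simple permission set rather than actual encryption.

```python
# Illustrative sketch only: a toy consent registry in which the image subject,
# not the platform, holds the credential that grants or revokes access.
# A dict stands in for a decentralized ledger; nothing here is a real protocol.

import hashlib
import secrets
from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    subject_key: str                                # credential held by the person depicted
    grants: set[str] = field(default_factory=set)   # parties currently allowed access


class ConsentRegistry:
    def __init__(self):
        self._records: dict[str, ConsentRecord] = {}   # image hash -> consent record

    def register_image(self, image_bytes: bytes) -> tuple[str, str]:
        """Register an image and return (image_hash, subject_key)."""
        image_hash = hashlib.sha256(image_bytes).hexdigest()
        subject_key = secrets.token_hex(32)
        self._records[image_hash] = ConsentRecord(subject_key=subject_key)
        return image_hash, subject_key

    def grant(self, image_hash: str, subject_key: str, party: str) -> None:
        record = self._records[image_hash]
        if record.subject_key != subject_key:
            raise PermissionError("only the image subject can grant access")
        record.grants.add(party)

    def revoke(self, image_hash: str, subject_key: str, party: str) -> None:
        record = self._records[image_hash]
        if record.subject_key != subject_key:
            raise PermissionError("only the image subject can revoke access")
        record.grants.discard(party)

    def may_access(self, image_hash: str, party: str) -> bool:
        record = self._records.get(image_hash)
        return record is not None and party in record.grants


# Usage: the subject grants a platform access, then withdraws it.
registry = ConsentRegistry()
img_hash, key = registry.register_image(b"...raw image bytes...")
registry.grant(img_hash, key, party="platform-a")
print(registry.may_access(img_hash, "platform-a"))   # True
registry.revoke(img_hash, key, party="platform-a")
print(registry.may_access(img_hash, "platform-a"))   # False
```

The design point is the inversion of control: the registry will not accept a grant or revocation unless it comes with the subject's credential, which is exactly what current platforms do not offer.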
Other technologies focus on tracking how images move across the internet.
The Coalition for Content Provenance and Authenticity, a consortium supported by companies including Adobe and Microsoft, has developed systems that attach metadata to images describing how they were created and edited.
Anchor that provenance on a decentralized ledger and you have a tamper-evident record that travels with the image. Developers who build on models without that provenance chain lose access to credentialed markets. Violation becomes a credential problem, not just a legal problem.
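As a rough illustration of how such a check could work, the sketch below hashes a simplified provenance manifest and compares it against a digest anchored at creation time. The manifest fields and the dictionary standing in for a ledger are assumptions for illustration, not the actual C2PA specification or tooling.

```python
# Illustrative sketch of ledger-anchored provenance checking, in the spirit of
# the C2PA approach described above. The manifest format and the "ledger"
# (a plain dict) are simplified assumptions, not the real C2PA SDK.

import hashlib
import json


def manifest_digest(manifest: dict) -> str:
    """Hash a provenance manifest deterministically (sorted keys)."""
    canonical = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()


def verify_provenance(manifest: dict, anchored_ledger: dict[str, str]) -> bool:
    """Check that the manifest attached to an image matches the digest
    that was anchored when the image was created."""
    expected = anchored_ledger.get(manifest["asset_id"])
    return expected is not None and expected == manifest_digest(manifest)


# A manifest recorded at creation time...
manifest = {
    "asset_id": "img-0001",
    "created_by": "camera-app 4.2",
    "edits": ["crop", "color-balance"],
    "model_provenance": "none",          # no generative model involved
}
ledger = {"img-0001": manifest_digest(manifest)}

# ...later, a copy claiming a different edit history fails the check.
tampered = dict(manifest, edits=["crop"])
print(verify_provenance(manifest, ledger))   # True
print(verify_provenance(tampered, ledger))   # False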
Financial infrastructure may represent another leverage point.
Visa introduced new merchant category codes in 2025 to classify businesses offering AI-generated adult content. The measure organizes these transactions within the payment system, though critics note that classification alone does little to limit the industry and compliance gaps remain common.
Payment networks already block transactions tied to sanctions violations and illegal gambling markets. Applying similar enforcement standards to non-consensual image services could reshape the economics of the business.
When processing these transactions carries greater financial and legal risk than rejecting them, payment processors tend to withdraw support.
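A simplified sketch of that kind of screening logic might look like the following. The category code value, field names and decision labels are hypothetical placeholders, not Visa's actual rules or any processor's real risk engine.

```python
# Illustrative sketch of merchant screening: flag merchants detected as offering
# AI-generated adult content that are not using the required merchant category
# code. The code value "0000" and all field names are hypothetical.

from dataclasses import dataclass

REQUIRED_AI_ADULT_MCC = "0000"   # placeholder, not a real merchant category code


@dataclass
class Merchant:
    name: str
    mcc: str                       # merchant category code on file with the acquirer
    offers_ai_adult_content: bool  # e.g., from content review or registry data


def screen(merchant: Merchant) -> str:
    """Return a routing decision for the merchant's transactions."""
    if merchant.offers_ai_adult_content and merchant.mcc != REQUIRED_AI_ADULT_MCC:
        return "block"             # misclassified high-risk merchant
    if merchant.offers_ai_adult_content:
        return "enhanced-review"   # correctly coded, but subject to extra checks
    return "allow"


print(screen(Merchant("example-nudify-site", mcc="5999", offers_ai_adult_content=True)))
# -> "block"
```

The point of the sketch is not the rule itself but who applies it: the same screening machinery already used for sanctions and illegal gambling could, in principle, be pointed at misclassified non-consensual imagery merchants.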
Other proposals focus on collective defense.
Some advocates have begun discussing community-governed legal and support networks, sometimes described as defense DAOs, where victims pool resources for legal representation, rapid takedown requests and coordinated evidence collection. The concept remains largely experimental, though the technical infrastructure required to operate such networks already exists.
Taken together, these proposals share a common goal: shifting the incentives that currently reward exploitation.
The digital architecture that powers today’s deepfake ecosystem generates revenue from both the violation and the demand it produces. Changing that system would require mechanisms that reward protection, consent and accountability instead.
But before we can build that system at scale, we need to understand the full demand side: the pipeline that converts economic insecurity and algorithmic isolation into a customer base for violation tools. That intersection, where male vulnerability meets female harm and both are monetized by the same platforms, is where Part 3 of this investigation begins.