Scaling Harm: Platforms, Profits and the Rise of AI Sexual Abuse
How AI image tools, platform design and global distribution networks have enabled non-consensual sexual imagery to scale faster than enforcement or legal protections.
In the early morning of February 3, 2026, French police officers raided the Paris offices of Elon Musk’s social networking platform X as part of an investigation into suspected offenses including complicity in the possession of child sexual abuse material (CSAM).
The search, carried out by France’s national cybercrime unit in coordination with Europol, is part of a criminal investigation that has been underway for more than a year.
Prosecutors are examining whether the company deployed artificial intelligence tools considered high risk under European law without adequate safeguards and continued operating them after receiving warnings from regulators and researchers.
In addition to CSAM, the inquiry focuses on the generation of non-consensual sexually explicit images, Holocaust denial content and possible violations of personal data protections.
Central to the investigation is Grok, an AI chatbot developed by xAI, which is integrated into X. Internal logs reviewed by authorities show that during a 19-day period in December 2024, requests to create explicit images of identifiable people rose from 10 to nearly 200,000.
French prosecutors are now assessing whether the company’s actions meet the legal threshold for complicity, a standard that would require showing that executives knew about the risks and allowed the systems to remain in place.
And France’s investigation into X is not an isolated incident.
Thirty-seven U.S. state attorneys general have opened coordinated inquiries into xAI and Grok. In January, California's Attorney General began reviewing independent analyses of roughly 20,000 Grok-generated images created during the 2025 holiday period; more than half depicted people in revealing clothing, and some appeared to be minors.
In the U.K., Ofcom has opened a formal investigation that could result in fines of up to 10 percent of global revenue. Indonesia and Malaysia have blocked Grok entirely. And on January 26, the European Commission opened proceedings under the Digital Services Act, where penalties can reach 6 percent of X's global turnover.
The Scale of Synthetic Sexual Imagery Has Outpaced Enforcement
Researchers began tracking non-consensual synthetic imagery only recently. The growth has been steep.
In 2023, analysts documented roughly 500,000 deepfake images circulating online. By the end of 2025, that number had climbed to 8 million, a sixteen-fold increase in just two years. Several studies suggest the volume is doubling about every six months, which would put the current rate at roughly one new image every 30 to 40 seconds. Removal moves far more slowly: platforms typically take weeks to process takedown requests.
The targets are also overwhelmingly female. Multiple independent studies, including a recent study conducted by the New York State Office for the Prevention of Domestic Violence, estimate that between 95 and 98 percent of non-consensual deepfake imagery depicts women and girls.
In June 2024, Molly Kelley, a resident of Minneapolis, learned that a man she had trusted used widely available “nudification” technology to generate sexually explicit deepfake images of her and more than 80 other women using photos from their social media accounts.
When Kelley reported the incident to local police, officers told her that Minnesota law did not clearly prohibit the creation or private retention of such material, because statutes in many states address distribution but not the act of generating non-consensual artificial imagery.
Kelley later deleted her social media accounts and stopped posting photographs online. Friends and acquaintances said she experienced stress and withdrew from public digital life following the discovery. What Kelley experienced is, unfortunately, increasingly common. And often, the victims are minors.
Data compiled by the European Parliamentary Research Service suggests the effects are increasingly visible among teenagers. About one in ten minors surveyed said they personally knew someone who had used AI tools to generate nude images of classmates. Researchers said schools frequently categorize the behavior as bullying or mischief, leaving victims without the protections typically associated with sexual-violence cases.
From Fringe Tools to Mainstream Traffic
For years, synthetic sexual imagery circulated primarily on small forums and anonymous message boards, making it easier for platforms to characterize the activity as fringe.
However, that distinction has eroded. As image-generation tools have become cheaper and easier to use, they have moved into mainstream app stores, subscription services and advertising networks, with most requiring no technical knowledge beyond uploading a photograph.
The current deepfake crisis is misunderstood. It isn't millions of predators acting independently. Rather, it is a designed system in which developers, major platforms and infrastructure providers all extract value while the costs are borne entirely by victims.
Traffic analyses by the watchdog group Faked Up found that roughly 90 percent of visits to one large “nudifying” service originated on platforms owned by Meta Platforms, including Instagram. Archived advertisements reviewed by the group promoted similar applications with slogans such as “Undress any girl for free.”
Separately, an investigation conducted by the Tech Transparency Project found that approximately 10 percent of Meta's 2024 advertising revenue came from ads that violated the company's own policies.
When Grok’s image-generation features began drawing scrutiny, the company did not suspend them. Instead, Musk’s response was to place the tool behind X’s subscription tier, meaning users had to pay a monthly fee to use it.
The surrounding infrastructure also generated revenue. Cloud providers stored the images, payment processors handled subscriptions and content-delivery networks distributed traffic to users. Each company collected fees tied to usage while facing limited direct liability for what customers produced, effectively creating a business model in which profits scaled alongside the production of abusive material.
Picture a digital assembly line, a Henry Ford model for violation. Data is the raw ore. Automation is the relentless conveyor belt. Scale is the quarterly production target. Profit is the dividend. And at the end of the line rolls off a finished product: a shattered person. Each station extracts value; the toxic runoff (trauma, panic, withdrawal) is dumped directly into the community's water supply.
Experiments With Alternative Models
While regulators pursue enforcement, a smaller group of technologists and community organizers is testing different approaches aimed at changing those incentives.
One proposal, often described as “data sovereignty,” would treat personal images and biometric information more like property. Users would retain control over whether their photos could be used to train AI systems and would be compensated if licensed. Unauthorized use could trigger automatic penalties.
The Sovereign Network, for example, is a proposed decentralized infrastructure in which users are compensated directly for their contributions, whether creating content, validating safety protocols or analyzing systems.
Other efforts, such as Defense DAOs, take the form of cooperative networks that pool funds for legal defense, takedown services and rapid-response support for victims. Members contribute resources and share oversight responsibilities, attempting to move protection away from centralized platforms and toward community governance.
Although these initiatives are still in their infancy, their designers argue that technical fixes alone (faster moderation or new reporting tools) may not address the underlying economics.
For now, the broader market for generative imagery continues to expand faster than regulators or safeguards can respond.