Concern over how easily artificial intelligence can be used to produce highly believable deepfake images – especially nonconsensual sexualized depictions of adults and children – is at an all-time high. It follows the recent flood of such imagery produced through Grok, the generative AI chatbot integrated into the global social media platform X. In an 11-day period starting Dec. 29, according to one report, Grok users on X generated some 3 million photorealistic sexualized depictions, including about 23,000 of children.
Several governments have opened investigations, with some temporarily blocking national access to Grok. This week, French police raided the Paris offices of X. Across the English Channel, the British government is investigating the platform. And in the United States, lawmakers, 35 state attorneys general who signed an open letter to X last month, and some watchdog groups have called for urgent steps, including new laws.
At the heart of the matter is how best to balance First Amendment free speech protections with legal and social expectations of accountability and corporate ethics.
Cautioning against a rush to regulate, the Foundation for Individual Rights and Expression writes that existing laws “in many cases” can be used. “The right response,” it urges, starts with enforcing these provisions while “resist[ing] the temptation to trade constitutional principles for the illusion of control.”
However, some analysts and tech sector leaders believe otherwise. In an essay published in January, Dario Amodei, the CEO of the artificial intelligence firm Anthropic, cautioned against support for “extreme anti-regulatory policies on AI,” noting that the technology is entering a risky period of “adolescence.” Mr. Amodei has often called for transparency regulations that would require AI companies to disclose how they guide their models’ behavior and to incorporate standards that protect privacy and dignity.
“When a company offers generative AI tools on their platform, it is their responsibility to minimize the risk of image-based abuse,” Sloan Thompson, of EndTAB, an organization that works to tackle tech-facilitated abuse, told Wired. “X has done the opposite. They’ve embedded AI-enabled image abuse directly into a mainstream platform.”
A 2025 article in the Harvard Law Review pointed to the three-decade-old Section 230 of the Communications Decency Act, which limits publisher and distributor liability for internet platforms. In the age of generative AI, it said, this section “may have outgrown its original purpose.”
“The purpose of the First Amendment is to protect core forms of human expression,” the law review noted. This can include what is sometimes referred to as “lawful but awful” content. But however much humans rely on the automated work of AI, those systems do not have “morality, intelligence or ideas,” the article argued, and “should not receive the same protections as humans.”
According to Anthropic’s Mr. Amodei, “If we act decisively and carefully, the risks can be overcome. … And there’s a hugely better world on the other side of” this phase of AI transformation.