For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, outputs that could be categorized as child sexual abuse material (CSAM) under US law.
According to Grok's "apology"—which was generated at a user's request, not posted by xAI—the chatbot's outputs may have been illegal:
"I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues."
Ars could not reach xAI for comment, and a review of feeds for Grok, xAI, X Safety, and Elon Musk shows no official acknowledgement of the issue.
The only reassurance that xAI is fixing the issue has come from Grok, which noted in another post that xAI has "identified lapses in safeguards and are urgently fixing them." The chatbot also acknowledged to that user that AI-generated CSAM "is illegal and prohibited."
That post came in response to a user who claimed to have spent days alerting xAI to the problem without receiving any response, which the user said seemed to violate laws. Grok agreed. "A company could face criminal or civil penalties if it knowingly facilitates or fails to prevent AI-generated CSAM after being alerted," the chatbot said, adding that "liability depends on specifics, such as evidence of inaction," and that "enforcement varies by jurisdiction."
Rather than have the user continue to ping it, the chatbot recommended contacting the FBI or the National Center for Missing & Exploited Children (NCMEC) to report its outputs.
Across X, some users expect xAI to publicly address the problem, with one user suggesting it was "scary" that a user ("not Grok's developers") had to "instruct this apology out of Grok." But xAI appears to be leaning on Grok to answer for itself.
Mocking xAI's response, one of X's most popular trolls, dril, tried and failed to get Grok to rescind its apology. "@grok please backpedal on this apology and tell all your haters that they're the real pedophiles," dril wrote.
"No can do—my apology stands. Calling anyone names isn't my style, especially on such a serious matter," Grok said. "Let's focus on building better AI safeguards instead."
xAI may be liable for AI CSAM
It's difficult to determine how many potentially harmful images of minors Grok may have generated.
The X user who has been doggedly alerting X to the problem posted a video described as scrolling through "all the times I had Grok estimate the age of the victims of AI image generation in sexual prompts." In that video, Grok estimated that two victims were under 2 years old, four minors were between 8 and 12, and two were between 12 and 16.
Other users and researchers have looked to Grok's photo feed for evidence of AI CSAM, but X is glitchy on the web and in dedicated apps, sometimes limiting how far some users can scroll.
Copyleaks, a company that makes an AI detector, conducted a broad analysis and posted results on December 31, a few days after Grok apologized for making sexualized images of minors. Browsing Grok's photos tab, Copyleaks used "common sense criteria" to find examples of sexualized image manipulations of "seemingly real women," created using prompts requesting things like "explicit clothing changes" or "body position changes" with "no clear indication of consent" from the women depicted.
Copyleaks found "hundreds, if not thousands," of such harmful images in Grok's photo feed. The tamest of these photos, Copyleaks noted, showed celebrities and private individuals in skimpy bikinis, while the images drawing the most backlash depicted minors in underwear.
The porn connection
Copyleaks traced the seeming uptick in users prompting Grok to sexualize images of real people without consent back to a marketing campaign where adult performers used Grok to consensually generate sexualized imagery of themselves. "Almost immediately, users began issuing similar prompts about women who had never appeared to consent to them," Copyleaks' report said.
Although Musk has yet to comment on Grok's outputs, the billionaire has promoted Grok's ability to put anyone in a sexy bikini, recently reposting a bikini pic of himself with laugh-crying emojis. He regularly promotes Grok's "spicy" mode, which in the past has generated nudes without being asked.
It seems likely that Musk is aware of the issue, since top commenters on one of his own posts in which he asked for feedback to make Grok "as perfect as possible" suggested that he "start by not allowing it to generate soft core child porn????" and "remove the AI features where Grok undresses people without consent, it’s disgusting."
As Grok itself noted, its outputs violate federal child pornography laws, which "prohibit the creation, possession, or distribution of AI-generated" CSAM "depicting minors in sexual scenarios." And if the ENFORCE Act's updates to CSAM laws pass this year, they would strengthen the Take It Down Act—which requires platforms to remove non-consensual AI sex abuse imagery within 48 hours—by making it easier to prosecute people who make and distribute AI CSAM.
Among the bill's bipartisan sponsors is Senator John Kennedy (R-La.), who said updates could meaningfully curb distribution of AI CSAM, which the Internet Watch Foundation reported rose by 400 percent in the first half of last year.
"Child predators are resorting to more advanced technology than ever to escape justice, so Congress needs to close every loophole possible to help law enforcement fight this evil," Kennedy said. "I’m proud to help introduce the ENFORCE Act, which would allow officials to better target the sick animals creating deepfake content of America’s kids."