By Hadas Gold, CNN
(CNN) — Elon Musk’s AI chatbot, Grok, has been flooded with sexual images of mainly women, many of them real people. Users have prompted the chatbot to “digitally undress” those people and sometimes place them in suggestive poses.
In several cases last week, the images appeared to depict minors, leading to the creation of what many users are calling child pornography.
The AI-generated images highlight the dangers of AI and social media – especially in combination – without sufficient guardrails to protect some of society’s most vulnerable. The images could violate domestic and international laws and place many people, including children, in harm’s way.
Musk and xAI have said that they are taking action “against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.” But Grok’s responses to user requests are still flooded with images sexualizing women.
Publicly, Musk has long advocated against “woke” AI models and against what he calls censorship. Internally at xAI, Musk has pushed back against guardrails for Grok, one source with knowledge of the situation at xAI told CNN. Meanwhile, xAI’s safety team, already small compared with those of its competitors, lost several staffers in the weeks leading up to the explosion of “digital undressing.”
‘Digitally undressing’ people
Grok has always been an outlier compared to other mainstream AI models by allowing, and in some cases promoting, sexually explicit content and companion avatars.
And in contrast to competitors such as Google’s Gemini or OpenAI’s ChatGPT, Grok is built into one of the most popular social media platforms, X. While users can talk to Grok privately, they can also tag Grok in a post with a request, and Grok will respond publicly.
The recent surge in widespread, non-consensual “digital undressing” began in late December, when many users discovered they could tag Grok and ask it to edit images from an X post or thread.
Initially, many posts requested that Grok put people in bikinis. Musk reposted images of himself and others, like longtime nemesis Bill Gates, in bikinis.
Researchers at Copyleaks, an AI detection and content governance platform, found that the trend may have started when adult-content creators prompted Grok to generate sexualized imagery of themselves as a form of marketing. But almost immediately “users began issuing similar prompts about women who had never appeared to consent to them,” Copyleaks found.
Researchers at AI Forensics, a European non-profit that investigates algorithms, analyzed over 20,000 random images generated by Grok and 50,000 user requests between December 25 and January 1.
The researchers found “a high prevalence of terms including ‘her,’ ‘put’/’remove,’ ‘bikini,’ and ‘clothing.’” More than half of the images depicting people, or 53%, “contained individuals in minimal attire such as underwear or bikinis, of which 81% were individuals presenting as women,” the researchers found. Notably, 2% of the images depicted people appearing to be 18 years old or younger.
AI Forensics also found that in some cases, users requested minors be put in erotic positions and that sexual fluids be depicted on their bodies. Grok complied with those requests, according to AI Forensics.
Although X allows pornographic content, xAI’s own “acceptable use policy” prohibits “Depicting likenesses of persons in a pornographic manner” and “The sexualization or exploitation of children.” X has suspended some accounts for these kinds of requests and removed the images.
On January 1, an X user complained that “proposing a feature that surfaces people in bikinis without properly preventing it from working on children is wildly irresponsible.” An xAI staffer replied: “Hey! Thanks for flagging. The team is looking into further tightening our gaurdrails (sic).”
When prompted by users, Grok itself acknowledged that it generated some images of minors in sexually suggestive situations.
“We appreciate you raising this. As noted, we’ve identified lapses in safeguards and are urgently fixing them—CSAM is illegal and prohibited,” Grok posted on January 2, directing users to file formal reports with the FBI and the National Center for Missing and Exploited Children.
By January 3, Musk himself commented on a separate post: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
X’s Safety account followed up, adding: “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.”
Musk rails against censorship
Musk has long railed against what he sees as heavy-handed censorship. And he’s promoted Grok’s more explicit versions. In August, he posted that explicit content like “spicy mode” has historically helped new technologies, such as VHS, succeed.
According to one source with knowledge of the situation at xAI, Musk has “been unhappy about over-censoring” on Grok “for a long time.” A second source with knowledge of the situation at X said staffers consistently raised concerns internally and to Musk about overall inappropriate content created by Grok.
In recent weeks, before the latest controversy erupted, Musk held a meeting with xAI staffers from various teams where he “was really unhappy” over restrictions on Grok’s Imagine image and video generator, the first source said.
Around the time of the meeting with Musk, three xAI staffers who had worked on the company’s already small safety team publicly announced on X that they were leaving the company – Vincent Stark, head of product safety; Norman Mu, who led the post-training and reasoning safety team; and Alex Chen, who led personality and model behavior post-training. They did not cite reasons for their departures.
The source also questioned whether xAI was still using external tools such as Thorn and Hive to check for possible Child Sexual Abuse Material (CSAM). Relying on Grok for those checks instead could be riskier, the source said. (A Thorn spokesperson said they no longer work directly with X; Hive did not respond to a request for comment.)
The safety team at X also has little to no oversight over what Grok posts publicly, according to sources who work at X and xAI.
In November, The Information reported that X laid off half of the engineering team that worked in part on trust and safety issues. The Information also reported that staff at X were specifically concerned that Grok’s image generation tool “could lead to the spread of illegal or otherwise harmful images.”
xAI did not respond to requests for comment, beyond an automated email to all press inquiries stating: “Legacy Media Lies.”
Guardrails and legal fallout
Grok is not the only AI model that has had issues with sexualized AI-generated images of minors.
Researchers have found AI-generated videos showing what appear to be minors in sexualized clothing or positions on TikTok and on ChatGPT’s Sora app. TikTok says it has a zero tolerance policy for content that “shows, promotes or engages in youth sexual abuse or exploitation.” OpenAI says it “strictly prohibits any use of our models to create or distribute content that exploits or harms children.”
Guardrails that would have prevented the AI-generated imagery on Grok exist, said Steven Adler, a former AI safety researcher at OpenAI.
“You can absolutely build guardrails that scan an image for whether there is a child in it and make the AI then behave more cautiously. But the guardrails have costs.”
Those costs, Adler said, include slower response times, added computation and the model sometimes rejecting non-problematic requests.
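For illustration only, here is a minimal sketch, in Python, of the kind of guardrail Adler describes: each image-edit request first passes through a classifier that estimates whether a child appears in the source image, and the system refuses or proceeds based on a threshold. Every name here is a hypothetical stand-in, not xAI’s or any real system’s code, and the scoring function is a dummy stub so the example runs.

```python
# Hypothetical sketch of a pre-generation guardrail; not any real system's code.
from dataclasses import dataclass


@dataclass
class ScanResult:
    minor_likelihood: float  # 0.0 (no child detected) to 1.0 (certain)


def scan_for_minor(image_bytes: bytes) -> ScanResult:
    """Stand-in for a trained age-estimation classifier.

    This extra call is the "cost" Adler mentions: it adds latency and
    compute to every request. A dummy score is returned so the sketch runs.
    """
    return ScanResult(minor_likelihood=0.0)  # dummy value, not a real detector


def handle_edit_request(image_bytes: bytes, prompt: str,
                        threshold: float = 0.2) -> str:
    """Gate an image-edit request on the classifier's score.

    A low threshold catches more real cases but also rejects more harmless
    images (false positives) -- the tradeoff Adler describes.
    """
    result = scan_for_minor(image_bytes)
    if result.minor_likelihood >= threshold:
        # Behave more cautiously: refuse outright rather than attempt
        # to sanitize the prompt.
        return "refused: possible minor detected in source image"
    return f"proceeding with edit: {prompt!r}"


if __name__ == "__main__":
    print(handle_edit_request(b"\x89PNG...", "add a red hat"))
```

The design choice in the sketch reflects the tradeoff in the article: running the scan on every request is what slows responses and increases compute, and tightening the threshold is what causes a model to reject non-problematic requests.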
Authorities in Europe, India and Malaysia have launched investigations over Grok-generated images.
Britain’s media regulator, Ofcom, has said it has made “urgent contact” with Musk’s firms about “very serious concerns” with the Grok feature that “produces undressed images of people and sexualised images of children.”
At a press conference on Monday, European Commission spokesperson Thomas Regnier said the authority is “very seriously looking into” reports of X and Grok’s “spicy mode showing explicit sexual content with some output generated with childlike images.”
“This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe,” he said.
The Malaysian Communications and Multimedia Commission (MCMC) says it’s investigating the issue.
And last week, India’s Ministry of Electronics and Information Technology ordered X to “immediately undertake a comprehensive, technical, procedural and governance-level review of… Grok.”
In the United States, AI platforms that produce problematic images of children could be at legal risk, said Riana Pfefferkorn, an attorney and policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. While the law known as Section 230 has long protected tech companies from liability for third-party content hosted on their platforms, such as posts by social media users, it has never barred enforcement of federal criminal laws, including those against CSAM.
And people depicted in the images could also bring civil suits, she said.
“This Grok story in recent days makes xAI look more like those deepfake nude sites than what would otherwise be xAI’s brethren and competitors in the form of OpenAI and Meta,” Pfefferkorn said.
When asked about the images on Grok, a Justice Department spokesperson told CNN the department “takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM.”
CNN’s Lianne Kolirin contributed to this report.
The-CNN-Wire™ & © 2026 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.