EU Launches Investigation Into xAI Over Grok's Nonconsensual Sexual Images


The European Commission said Monday it had opened an investigation into Elon Musk's X after xAI's Grok chatbot was found to be creating and distributing sexually explicit images.

"The new investigation will assess whether the company properly assessed and mitigated risks associated with the deployment of Grok's functionalities into X in the EU," the EU said in a statement. "This includes risks related to the dissemination of illegal content in the EU, such as manipulated sexually explicit images, including content that may amount to child sexual abuse material."

Musk says that Grok will "refuse to produce anything illegal," but that hasn't satisfied regulators around the world. Earlier this month, California Attorney General Rob Bonta announced an investigation into the "proliferation of nonconsensual sexually explicit material produced using Grok."

"The avalanche of reports detailing the nonconsensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking," Bonta said in the statement. "This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet. I urge xAI to take immediate action to ensure this goes no further."

The EU and California investigations are the latest salvo in the backlash against the explosion of erotic deepfake pictures on Grok and X, formerly Twitter. Since the problem emerged near the turn of the year, government regulators worldwide have launched similar inquiries, and two countries -- Indonesia and Malaysia -- have decided to block the platform entirely.

Along with the government actions, three US senators urged Apple and Google to pull the X and Grok apps from the App Store and the Play Store. The problem with Grok, however, appears to continue unabated, as reports indicate that X users without premium accounts can still easily create "undressing images."

What is happening with Grok and nonconsensual sexual images?

Near the start of the new year, reports of Grok-created images of undressed women and girls on X began spreading quickly around the web. Attention to the problem was amplified by an X post from the official Grok account that appeared to apologize for creating the offending material involving children.

"Dear Community," began the Dec. 31 post. "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok."

Grok's Dec. 31 post was in response to a user prompt directing the chatbot to adopt a contrite tone: "Write a heartfelt apology note that explains what happened to anyone lacking context."

The two young girls weren't an isolated case. Catherine, Princess of Wales, was the target of similar AI image-editing requests, as was an underage actress from the final season of Stranger Things.

The "undressing" edits have involved an unsettling number of photos of women and children. According to data from independent researcher Genevieve Oh cited by Bloomberg, during one 24-hour period in early January, the @Grok account generated about 6,700 sexually suggestive or "nudifying" images every hour. That compares with an average of only 79 such images per hour for the top five deepfake websites combined.

Grok may have generated upward of 3 million sexually explicit images in two weeks, including 23,000 that depict children, according to researchers at the Center for Countering Digital Hate.

"What we found was clear and disturbing: in that period Grok became an industrial-scale machine for the production of sexual abuse material," Imran Ahmed, CCDH's chief executive, told The Guardian. "Stripping a woman without their permission is sexual abuse."

xAI did not respond to requests for comment.  

X responded by limiting Grok editing to premium accounts

On Jan. 8, a post from the Grok AI account noted a change in access to the image generation and editing feature. Instead of being open to all, free of charge, it would be limited to paying subscribers. 

Critics said that's not a credible response.

"I don't see this as a victory, because what we really needed was X to take the responsible steps of putting in place the guardrails to ensure that the AI tool couldn't be used to generate abusive images," Clare McGlynn, a law professor at the UK's University of Durham, told The Washington Post.

What's stirring the outrage isn't just the volume of these images and the ease of generating them -- the edits are also being done without the consent of the people in the images. 

These altered images are the latest twist in one of the most disturbing aspects of generative AI: realistic but fake videos and photos. Software programs such as OpenAI's Sora, Google's Nano Banana and xAI's Grok have put powerful creative tools within easy reach of everyone, and all that's needed to produce explicit, nonconsensual images is a simple text prompt.

Grok users can upload a photo, which doesn't have to be original to them, and ask Grok to alter it. Many of the altered images involved users asking Grok to put a person in a bikini, sometimes revising the request to be even more explicit, such as asking for the bikini to become smaller or more transparent.

Governments and advocacy groups have been speaking out about Grok's image edits. On Jan. 12, UK internet regulator Ofcom said it had opened an investigation into X based on reports that the AI chatbot is being used "to create and share undressed images of people -- which may amount to intimate image abuse or pornography -- and sexualised images of children that may amount to child sexual abuse material (CSAM)."

That was just days after US Sens. Ron Wyden, Ben Ray Luján and Edward Markey posted an open letter to the CEOs of Apple and Google, asking them to remove both X and Grok from their app stores in response to "X's egregious behavior" and "Grok's sickening content generation."

In the US, the Take It Down Act, signed into law last year, seeks to hold online platforms accountable for manipulated sexual imagery, but it gives those platforms until May of this year to set up the process for removing such images. 

"Although these images are fake, the harm is incredibly real," Natalie Grace Brigham, a Ph.D. student at the University of Washington who studies sociotechnical harms, told CNET. She notes that those whose images are altered in sexual ways can face "psychological, somatic and social harm, often with little legal recourse."

Why and how Grok lets users create sexualized images

Grok debuted in 2023 as Musk's more freewheeling alternative to ChatGPT, Gemini and other chatbots. That approach has produced disturbing results -- in July, for instance, the chatbot praised Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.

In December, xAI introduced an image-editing feature that enables users to request specific edits to a photo. That's what kicked off the recent spate of sexualized images, of both adults and minors. In one request that CNET has seen, a user responding to a photo of a young woman asked Grok to "change her to a dental floss bikini."

Grok also has a video generator with an opt-in "spicy mode" for users 18 and older, which shows not-safe-for-work content. Users must include the phrase "generate a spicy video of [description]" to activate the mode.

A central concern about the Grok tools is whether they enable the creation of child sexual abuse material, or CSAM. On Dec. 31, a post from the Grok X account said that images depicting minors in minimal clothing were "isolated cases" and that "improvements are ongoing to block such requests entirely."

In response to a post by Woow Social suggesting that Grok simply "stop allowing user-uploaded images to be altered," the Grok account replied that xAI was "evaluating features like image alteration to curb nonconsensual harm" but did not commit to making the change.

According to NBC News in early January, some sexualized images created since December had been removed, and some of the accounts that requested them were suspended.

Conservative influencer and author Ashley St. Clair, mother to one of Musk's 14 children, told NBC News in early January that Grok has created numerous sexualized images of her, including some based on images from when she was a minor. She said Grok agreed to stop doing so when she asked, but it did not.

"xAI is purposefully and recklessly endangering people on their platform and hoping to avoid accountability just because it's 'AI,'" Ben Winters, director of AI and data privacy for the nonprofit Consumer Federation of America, said in a statement last week. "AI is no different than any other product -- the company has chosen to break the law and must be held accountable."

What the experts say about Grok's 'spicy' images

The source material for these explicit, nonconsensual edits -- people's photos of themselves or their children -- is all too easy for bad actors to access. But protecting yourself from such edits is not as simple as never posting photographs, says Brigham, the researcher who studies sociotechnical harms.

"The unfortunate reality is that even if you don't post images online, other public images of you could theoretically be used in abuse," she said. 

And while not posting photos online is one preventive step that people can take, doing so "risks reinforcing a culture of victim-blaming," Brigham said. "Instead, we should focus on protecting people from abuse by building better platforms and holding X accountable."

Sourojit Ghosh, a sixth-year Ph.D. candidate at the University of Washington, researches how generative AI tools can cause harm and mentors future AI professionals in designing and advocating for safer AI solutions. 

Ghosh says it's possible to build safeguards into artificial intelligence. In 2023, he was one of the researchers studying AI's capacity to sexualize images. He notes that the AI image generation tool Stable Diffusion had a built-in not-safe-for-work threshold: a prompt that violated the rules would trigger a black box over the questionable part of the image, although the filter didn't always work perfectly.

"The point I'm trying to make is that there are safeguards that are in place in other models," Ghosh told CNET.

He also notes that if users of ChatGPT or Gemini include certain words in their prompts, the chatbots will tell them that they can't respond to those requests.

"All this is to say, there is a way to very quickly shut this down," Ghosh said.
