Musk denies knowledge of Grok sexual underage images as California AG opens investigation

Elon Musk said Wednesday that he is “not aware of any nude minor images generated by Grok,” hours before California’s attorney general opened an investigation into xAI’s chatbot over the “distribution of sexually explicit material without consent.”

Musk’s denial comes as pressure mounts from governments around the world – from Britain and Europe to Malaysia and Indonesia – after users on X began asking Grok to turn images of real women, and in some cases children, into sexualized images without their consent. Copyleaks, an AI detection and content management platform, estimated that roughly one such image was published on X every minute. A separate sample, collected from January 5 to January 6, found about 6,700 over that 24-hour period. (X and xAI are part of the same company.)

“This material … has been used to harass people across the Internet,” California Attorney General Rob Bonta said in a statement. “I urge xAI to take immediate action to ensure this does not go forward.”

The AG’s office will investigate whether and how xAI violated the law.

Several laws exist to protect targets of non-consensual sexual images and child sexual abuse material (CSAM). Last year, the Take It Down Act was signed into federal law; it criminalizes the knowing distribution of intimate images without consent, including deepfakes, and requires platforms like X to remove such content within 48 hours. California also has its own set of laws, signed by Governor Gavin Newsom in 2024, to crack down on sexually explicit deepfakes.

Grok began fulfilling user requests on X to produce sexualized images of women and children toward the end of last year. The trend appears to have taken off after certain adult content creators had Grok generate sexualized images of themselves as a form of marketing, prompting other users to post similar requests. In a number of public cases, including well-known figures such as “Stranger Things” actress Millie Bobby Brown, Grok responded to prompts asking it to alter real images of real women by changing clothing, body positioning, or physical features in overtly sexual ways.

According to some reports, xAI has started putting safeguards in place to address the issue. Grok now requires a premium subscription before responding to certain image generation requests, and even then it may not generate the image. April Kozen, VP of marketing at Copyleaks, told TechCrunch that Grok might instead fulfill a request in a more generic or toned-down way, and added that Grok appears more lenient toward adult content creators.

“Overall, this behavior suggests that X is experimenting with multiple mechanisms to reduce or control problematic image generation, although inconsistencies remain,” Kozen said.

Neither xAI nor Musk has publicly addressed the issue directly. A few days after the incidents began, Musk appeared to call attention to the problem by asking Grok to create a picture of himself in a bikini. On Jan. 3, X’s safety account said the company is taking “action against illegal content on X, including [CSAM],” without specifically addressing Grok’s apparent lack of guardrails or the creation of sexualized, manipulated images of women.

That positioning mirrors what Musk posted Wednesday, emphasizing illegality and user behavior.

Musk wrote that he was “not aware of any nude minor images generated by Grok. Literally zero.” This statement does not deny the existence of bikini photos or sexualized edits more broadly.

Michael Goodyear, an associate professor at New York Law School and former litigation manager, told TechCrunch that Musk likely focused narrowly on CSAM because the penalties for creating or distributing synthetic sexualized images of children are greater.

“For example, in the United States, a distributor or threatened distributor of CSAM can face up to three years in prison under the Take It Down Act, compared to two years for non-consensual sexual images of adults,” Goodyear said.

He added that the “bigger point” is Musk’s attempt to draw attention to problematic user content.

“Clearly, Grok does not spontaneously generate images. It only does so according to the user’s request,” Musk wrote in his post. “When asked to generate images, it will refuse to produce anything illegal, as the operating principle of Grok is to obey the laws of a given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”

Collectively, the post characterizes these incidents as uncommon, attributes them to user requests or adversarial prompt hacking, and presents them as technical issues that can be resolved through fixes. It stops short of acknowledging any flaws in Grok’s underlying safety design.

“Regulators may consider, with respect to free speech, requiring proactive measures from AI developers to prevent such content,” Goodyear said.

TechCrunch has reached out to xAI to ask how many instances of non-consensual sexually manipulated images of women and children it has caught, which guardrails specifically have changed, and whether the company notified regulators about the issue. We will update this article if the company responds.

The California AG is not the only regulator trying to hold xAI accountable for the problem. Indonesia and Malaysia have both temporarily blocked access to Grok; India has demanded that X make immediate technical and procedural changes to Grok; the European Commission has ordered xAI to preserve all documents related to its Grok chatbot, a precursor to opening a new investigation; and the UK’s online safety watchdog, Ofcom, has opened a formal investigation under the UK’s Online Safety Act.

xAI has previously come under fire for Grok’s sexualized images. As AG Bonta pointed out in his statement, Grok includes a “spicy mode” for generating explicit content. In October, an update made it even easier to jailbreak what few safeguards there were, resulting in many users creating hardcore pornography with Grok, as well as graphic and violent sexual images.

Many of the more pornographic images Grok has produced have depicted AI-generated humans – something many may still find ethically questionable, but perhaps less harmful, since no real person appears in the images and videos.

“When AI systems allow the manipulation of real people’s images without clear consent, the impact can be immediate and deeply personal,” Copyleaks co-founder and CEO Alon Yamin said in a statement sent to TechCrunch. “From Sora to Grok, we’re seeing a rapid increase in AI capabilities for manipulated media. To that end, detection and governance are needed now more than ever to help prevent abuse.”
