California authorities have launched an inquiry into the use of Elon Musk’s AI model Grok to create sexually explicit deepfake images, following reports that the tool has been generating non-consensual content targeting women and children.
The investigation highlights growing concerns over the misuse of artificial intelligence to produce harmful online material.
Attorney General Rob Bonta announced the investigation on Wednesday, describing the reports as deeply troubling.
“The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking,” Bonta said. He emphasized that the images have been used to harass people and demanded immediate corrective action from the company.
xAI, the company behind Grok, previously warned that any user attempting to produce illegal content would face the same penalties as if they had uploaded prohibited material directly. Despite this, concerns persist over the platform’s potential to facilitate abuse.
The scrutiny of Grok is happening alongside rising pressure in the United Kingdom, where Prime Minister Keir Starmer has signaled potential measures against X, the social media platform owned by Musk. California Governor Gavin Newsom also criticized xAI’s approach, posting on X that its decision to “create and host a breeding ground for predators... is vile.”
Musk responded to the controversy by denying the existence of any underage explicit content generated by Grok.
“I am not aware of any naked underage images generated by Grok. Literally zero,” he wrote, clarifying that the AI only produces images in response to user prompts. He further described the backlash as politically motivated, suggesting that critics were using the situation as an “excuse for censorship.”
Concerns about AI misuse extend beyond xAI. In November, Wired reported that other AI tools from companies such as OpenAI and Google had been misused to digitally undress individuals.
The issue prompted three US Democratic senators to call on Apple and Google to remove X and Grok from their app stores. Following their request, X limited access to its image-generation tool, restricting it to paying subscribers, though both apps remain available on major platforms.
Legal experts say this case raises questions about the responsibility of tech companies for AI-generated content. Section 230 of the Communications Decency Act typically shields platforms from liability for material created by users.
However, Cornell University law professor James Grimmelmann argues the law may not protect companies when they themselves produce the content.
“This isn’t a case where users are making the images themselves and then sharing them on X,” Grimmelmann said. “In this case xAI itself is making the images. That’s outside of what Section 230 applies to.” Senator Ron Wyden of Oregon, a co-author of Section 230, has also argued that AI-generated images should not be exempt from liability and that companies must be held fully accountable.
“I’m glad to see states like California step up to investigate Elon Musk’s horrific child sexual abuse material generator,” Wyden told the BBC. He is one of the three senators who urged Apple and Google to take X and Grok off their app stores.
The California probe comes as the UK prepares legislation to criminalize the creation of intimate images without consent.
Ofcom, the UK’s media regulator, has opened its own investigation into Grok and warned that violations could result in fines of up to 10 percent of the company’s global revenue or £18 million, whichever is greater.
Starmer also told Labour MPs that X could lose its “right to self regulate,” stressing that “if X cannot control Grok, we will.” The combined actions signal growing global scrutiny of AI tools that generate sexual content without consent.