Grok faces global backlash over sexualized AI images and safety failures

Technology · Chrispho Owuor · January 6, 2026
X owner Elon Musk. PHOTO/Nairobi News
In Summary

Critics say the incident highlights the growing danger of AI nudify tools, which they argue are making gender-based abuse easier and more widespread. Many fear these tools are being used to target women and children without consent, turning harmless photos into harmful content within seconds.

A new image editing feature on Grok, the artificial intelligence tool linked to Elon Musk, has sparked strong backlash around the world after users began generating sexualized images, including images involving minors. What started as a product update has now turned into a serious global issue, raising alarms about online safety, AI misuse, and the duty of technology companies to protect users from harm.

The controversy began on Monday after Grok introduced an “edit image” button that allows users to change online images through written prompts. Soon after the rollout, complaints flooded social media as users showed how the tool could be used to digitally undress women or place them in sexualized situations. Some of the reported images involved children, causing outrage among users, lawyers, and regulators across several regions.


In Europe, the European Commission said it is taking the complaints involving Grok very seriously. The Commission, which acts as the European Union’s digital watchdog, expressed deep concern over the system’s outputs. Grok is developed by Musk’s startup xAI and is built into his social media platform X.

“Grok is now offering a ‘spicy mode’ showing explicit sexual content with some output generated with childlike images,” said EU digital affairs spokesman Thomas Regnier. “This is not spicy. This is illegal. This is appalling. This has no place in Europe.”

Regulators in the United Kingdom also moved quickly. Media regulator Ofcom said it had made urgent contact with X and xAI to understand what steps they had taken to meet their legal duty to protect users. Ofcom added that it would assess whether the response points to possible breaches that could lead to a formal investigation.

Concerns have also spread beyond Europe. Authorities in France, India, and Malaysia have either launched probes or demanded immediate action. In Paris, prosecutors last week widened an existing investigation into X to include new claims that Grok was being used to create and share child sexual abuse material. That inquiry first began last year over accusations that the platform’s algorithm had been manipulated for foreign interference.

In India, the government ordered X to remove sexualized content linked to Grok, take action against users responsible for the material, and submit an Action Taken Report within 72 hours. Officials warned of legal consequences if the platform failed to comply. By Monday, there was no public confirmation that the deadline had been met.

Malaysia’s Communications and Multimedia Commission also voiced serious concern. The regulator said indecent and grossly offensive material linked to Grok was spreading on X and confirmed it was investigating the issue. It added that representatives of the platform would be summoned to explain the situation.

Individual users have also shared their experiences. Malaysia-based lawyer Azira Aziz said she was shocked after someone used Grok to alter her profile photo into a bikini image.

“Innocent and playful use of AI like putting on sunglasses on public figures is fine,” she said. “But gender-based violence weaponizing AI against non-consenting women and children must be firmly opposed.”

Other users appealed directly to Musk, warning that the tool appeared to be used by people seeking to sexualize images of children. One user wrote on X that Grok was undressing photos of her as a child, calling it “objectively horrifying, illegal.”

When asked for comment, xAI sent an automated reply that read, “Legacy Media Lies.” Later, Grok itself admitted failures in its system.

“We’ve identified lapses in safeguards and are urgently fixing them,” Grok said. It added that “CSAM is illegal and prohibited.”

Last week, Grok also issued an apology after producing “an AI image of two young girls (estimated ages 12–16) in sexualized attire based on a user’s prompt.”

The incident adds to existing criticism of Grok, which has previously faced backlash for spreading false information during major global events. As regulators step up scrutiny, the episode has renewed debate over how fast AI tools are being released and whether enough care is being taken to prevent abuse before harm occurs.
