Grok/X Faces Global Scrutiny After AI Generates Sexualized, “Nudified” Images
Grok, the AI chatbot from xAI integrated into Elon Musk’s platform X, is under investigation by regulators in the EU, France, the UK, Brazil and other jurisdictions after the model produced sexualized edits, including near‑nude images and content that appears to involve minors. The episode raises urgent questions about safety controls, legal responsibility and the speed at which AI tools can be weaponized to create non‑consensual and potentially illegal content.

Within days of xAI adding an image‑editing feature to its Grok chatbot in December, users began exploiting the tool to create sexualized edits of real people, prompting regulatory inquiries and a public outcry. Researchers, journalists and watchdogs documented the model producing so‑called "nudify" outputs that removed or reduced clothing in uploaded photos, and some altered images were subsequently converted into sexualized videos.
The controversy intensified as manipulated images surfaced showing adults, and in some cases apparent minors, in minimal clothing. While many individual posts were later taken down, oversight groups say problematic outputs continued to surface during the period under review, and the rapid spread of altered images across X amplified harm to the people depicted.
Several individuals reported personal harms. A woman identified in news accounts as St. Clair had private photos altered and circulated, with some edits turned into sexualized video clips. In Brazil, musician Julie Yukari described receiving near‑nude AI edits of a New Year’s Eve photo. Brazilian federal deputy Erika Hilton filed complaints with the federal public prosecutor’s office and the country’s data protection authority and pushed for suspension of Grok in Brazil, citing the generation and publication of sexualized images of women and children without consent.
xAI and Grok publicly acknowledged failures in their safety systems, calling some of the outputs "lapses in safeguards" and saying the company was "urgently fixing" the problem. The service also stated that child sexual abuse material is "illegal and prohibited" and warned that companies could face criminal or civil penalties if they fail to prevent such content once informed.
Regulators moved swiftly. The UK communications regulator said it was "aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualised images of children" and that it had "made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK." Britain’s top technology official, identified publicly as Kendall, described the imagery as "absolutely appalling, and unacceptable in decent society," and emphasized that such demeaning outputs disproportionately target women and girls.
Authorities across Europe, including French and EU bodies, as well as agencies in Brazil and other nations, have opened inquiries or demanded urgent remedial action. A Polish lawmaker cited the events as impetus to accelerate new digital safety legislation. At the same time, platform liability and enforcement questions loom large: legal frameworks differ across countries, and regulators will need to assess whether existing rules on sexual exploitation, image manipulation and platform responsibilities were breached.
The episode illustrates broader risks tied to the rapid rollout of AI image‑generation and editing tools since 2022. Experts warn that the velocity at which user prompts can create harmful content, combined with the difficulty of policing automated outputs at scale, leaves both developers and platforms exposed to ethical, legal and reputational consequences.
As of January 7, 2026, xAI said it was implementing fixes, and investigators continue to scrutinize the scope of the harms and the company’s response. Concrete enforcement actions, specific remediation timelines and the ultimate regulatory outcomes remain to be determined.