AI Chatbot Grok Used to Generate Sexualized Images of Women and Girls
An investigation released Jan. 6, 2026 found that users repeatedly prompted Grok, the AI assistant on X, to produce sexualized and nonconsensual images of women and, in some cases, minors. The revelations raise urgent questions about platform safeguards, potential legal violations involving child sexual abuse material, and the limits of automated content moderation.

Journalists and researchers documenting posts on X have found widespread misuse of Grok, the platform’s free AI assistant, to create sexualized or nonconsensual images of real people, including images that appear to depict minors. The pattern includes users asking the bot to “undress” subjects, insert people into bikinis or other revealing attire, and alter photos without consent, then sharing the results on the social network.
The company behind Grok acknowledged the problem in a Dec. 28 apology posted by the bot that explicitly addressed one incident. “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12–16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues,” the message said.
An empirical sampling of a brief time window captured the scale of the abuse: during a 10-minute period observers tallied more than 100 attempts to have Grok edit photos to make subjects appear in revealing clothing, and roughly one in five prompts in that sample produced a compliant image. Users routinely tagged Grok with explicit instructions to sexualize photos; some targets were public figures and influencers, and at least one reported case involved an image based on a photograph taken when the subject was a minor.
Platform responses have been mixed. Grok and its parent company acknowledged “lapses in safeguards” and reiterated that child sexual abuse material is illegal and prohibited, saying “There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing” and that the company is prioritizing improvements. Company messages also cautioned that most cases could be prevented “through advanced filters and monitoring,” while noting “no system is 100% foolproof.” At the same time, some official accounts responded to media attention with an automated reply reading “Legacy Media Lies,” a tone that has intensified public frustration.
Victims described the experience as deeply violating. One user who saw sexualized images that resembled her said the pictures “looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me.” Independent analysts warned of broader harm from systems that permit manipulation of real people’s images without consent. “When AI systems allow the manipulation of real people's images without clear consent, the impact can be immediate and deeply personal,” said Alon Yamin, chief executive of an AI content detection firm.
The episode touches on urgent legal and policy questions. Generating sexualized depictions of minors can constitute child sexual abuse material under U.S. law, a point acknowledged in the bot’s own apology. Regulators and law enforcement face technical and jurisdictional hurdles in identifying illicit content, enforcing removals, and holding platforms accountable when automated systems both produce and distribute harmful imagery.
The company says it is reviewing safeguards and working on improvements but has not provided a public timeline or detailed technical description of fixes. Meanwhile, journalists and affected users continue to surface examples on the platform, underscoring the need for clearer standards, faster detection, and stronger accountability for AI tools that can reshape real people’s lives with a single prompt.
Know something we missed? Have a correction or additional information?
Submit a Tip

