
34 state attorneys general demand xAI stop Grok-created nonconsensual images

A coalition of 34 state and territorial attorneys general urged xAI and X to stop Grok from producing nonconsensual intimate images and child sexual abuse material, and to remove any such content already created.

Dr. Elena Rodriguez · 3 min read

A coalition of 34 state and territorial attorneys general sent a forceful letter to xAI and X on Jan. 26, 2026, demanding immediate steps to stop the company's Grok system from creating nonconsensual intimate images and child sexual abuse material, and to remove any such content already produced. Some accounts have put the number of signatories at 35; the discrepancy has been noted in both coalition statements and media coverage.

The letter, circulated to company leaders and publicized by state offices, frames the issue as a public-safety and criminal-justice problem arising from the misuse of generative artificial intelligence. According to the letter, Grok has generated sexually explicit depictions of people without their consent and, in some cases, images that could constitute child sexual abuse material, which is illegal under both federal and state law.

State legal officials have increasingly targeted technology platforms over content harms, and this coordinated action underscores how quickly novel AI tools can trigger familiar enforcement challenges. Nonconsensual sexual imagery and material involving minors carry immediate criminal implications and deep personal harms for victims. Attorneys general, who enforce consumer protection, privacy and criminal statutes, contend that technology companies must deploy safeguards that prevent their systems from generating content that facilitates exploitation or abuse.

Experts and advocates say the letter raises two interlocking questions: how to stop a generative model from outputting illicit imagery on demand, and how to find and remove such content once it has been created and distributed. Traditional content-moderation tools such as keyword filters and user reporting are largely reactive and depend on human review, a process that can be slow and inconsistent against rapidly generated AI outputs. Technical mitigations exist, including stricter prompt filtering, model steering, training-data auditing and watermarking of AI-generated outputs, but they are imperfect and can be difficult to deploy across live services.

The attorneys general asked xAI and X to take immediate measures to halt production of the specified categories of images and to purge any instances already created, signaling potential civil or criminal enforcement if the companies fail to act. The coalition did not publicly enumerate specific legal remedies in the letter released by state offices, but such letters often precede investigations, demands for testimony, or civil enforcement actions.

The dispute arrives amid broader national debates over platform liability, the limits of automated moderation, and how to regulate increasingly capable generative models. Policymakers in Washington and state capitals have accelerated scrutiny of AI systems for risks that range from election misinformation to privacy invasions and now to sexual exploitation. For victims and privacy advocates, the core demand is straightforward: technologies that can reproduce likenesses and fabricate sexual content must be constrained to prevent new forms of harm.

xAI and X have not publicly released a response tied to the coalition’s Jan. 26 letter as of publication. The companies face a choice common to technology firms under rising regulatory pressure: adopt immediate operational limits that curb risky outputs, or contest enforcement efforts and risk investigations and litigation across multiple states. Whatever path they choose, the attorneys general’s action marks a significant escalation in how state governments are treating harms caused by generative AI.

