Malaysia to sue X and xAI over Grok deepfake harms
Malaysia will sue X and xAI over Grok after finding AI-generated sexualised, non-consensual images allegedly involving women and children.

Malaysia’s communications regulator said it would take legal action against social media platform X and AI firm xAI after investigators found that xAI’s chatbot Grok, which is embedded in X, had been misused to create and distribute sexually explicit, indecent and non-consensual manipulated images. The Malaysian Communications and Multimedia Commission said the material included content “allegedly involving women and children,” and that such material violated national law and the platforms’ safety commitments.
The commission said it issued takedown notices on Jan. 3 and Jan. 8 asking X and xAI to remove identified content, but that “no action has been taken.” After the companies failed to comply, MCMC appointed legal counsel and announced that court proceedings “would begin soon.” The announcement followed temporary restrictions on Grok access in Malaysia and neighbouring Indonesia over the preceding weekend.
Grok, developed by xAI and embedded within X, launched in 2023 and last year added an image-generation capability that regulators say enabled the creation of sexualised imagery. MCMC framed its move as enforcing domestic prohibitions on obscene and pornographic materials and as protecting children and adults from non-consensual abuse online. Malaysia’s legal framework treats a wide range of online harms as illegal, including child pornography, grooming, content that offends race, religion and royalty, and material linked to scams and online gambling.
Company responses were sparse. xAI’s automated reply to inquiries read “Legacy Media Lies.” X did not immediately respond to requests for comment. MCMC officials and Malaysia’s communications minister have said they were prepared to pursue the case through the courts, signaling an escalation from administrative takedown requests to formal litigation.
The Malaysian action is part of a growing wave of international scrutiny over generative AI tools and the platforms that host them. Britain’s media regulator has opened a formal investigation into X after concerns that Grok is being used to produce sexualised images. French authorities have reported X to prosecutors and regulatory bodies, and U.S. lawmakers have urged technology companies that host or distribute Grok to reassess their relationships with the service. Regulators in multiple countries are now weighing how existing laws apply to AI-generated content and whether new rules are needed to force faster removal and stronger safety measures.
Legal experts say the Malaysian case may test where responsibility lies when an AI developed by one company is operated through another company’s platform. Core questions will include whether the firms failed to act on takedown notices, how automated moderation systems performed, and what contractual or statutory duties the companies had to prevent distribution of illegal material. Observers also note the potential chilling effect on innovation if courts impose heavy liabilities on developers and platforms without clear regulatory guidance.
For now, Malaysia’s regulator has signaled swift enforcement. The MCMC move and related probes abroad make clear that governments will increasingly hold platforms and AI developers to account for how generative systems are used in the real world, even as courts and legislatures debate the technical and legal boundaries of those responsibilities.