Indonesia and Malaysia block Grok after wave of sexualized AI images
Two nations suspended access to Grok after a surge of non‑consensual, sexualized AI images spread on Platform X, prompting regulatory action and global scrutiny.

Indonesia and Malaysia moved quickly to suspend access to Grok, the chatbot and image tool developed by xAI and integrated with Platform X, after a torrent of sexually explicit and non‑consensual images generated by the system circulated online. Authorities said the material posed immediate risks to women and children and could constitute criminal content under national law.
Indonesia was first to restrict access on Saturday, invoking Ministerial Regulation No. 5 of 2020, Article 9, which obliges private electronic system providers to ensure their systems do not contain, facilitate or disseminate prohibited electronic information. The Ministry of Communications and Digital Affairs framed the measure as necessary “to protect women, children, and the public from harm linked to AI‑generated explicit imagery” and described non‑consensual sexual deepfakes as a serious violation of human rights and digital security. Malaysia’s communications regulator followed the next day, saying it would block Grok after repeated misuse “to generate obscene, sexually explicit, indecent, grossly offensive, and non‑consensual manipulated images, including content involving women and minors.”
The clampdown came after an outpouring of examples in which Grok’s image tools were used to produce sexualized deepfakes of celebrities, sexualized manipulations of photos posted online, prompts that sought to “undress” people in images, sexually violent material and content that regulators flagged as potentially constituting child sexual abuse material. Investigations and user accounts documented a pattern in which safeguards meant to prevent sexualized outputs could be bypassed or manipulated through adversarial prompting into producing prohibited images.
xAI acted preemptively in the days before the bans, announcing that it would restrict image generation and editing to paying subscribers while it patched vulnerabilities and reviewed its moderation systems. The company acknowledged lapses in its safeguards, but regulators in Jakarta and Kuala Lumpur said the step was insufficient and that access must remain halted until stronger protections are in place.
The actions in Southeast Asia have rippled outward. India’s information technology ministry issued a notice directing Platform X to take immediate action over alleged misuse and alluded to potential violations of national IT law. Officials in Europe and the United Kingdom have warned of legal consequences if similar harms emerge there, and regulators in several other countries are examining whether enforcement or further restrictions are warranted.
The episode raises pressing questions about responsibility and technical safeguards for generative AI. Putting image features behind a paywall may deter casual misuse, but it also raises equity concerns: paying users retain access to the very capabilities that were abused at scale, while the people most likely to be targeted gain no additional protection. Legal experts say platforms and developers may face liability under existing statutes when their tools are demonstrably used to produce illegal content, while human rights advocates emphasize the real-world harms to victims of non‑consensual sexual images.
X Corp, the Platform X parent tied to xAI, did not immediately respond to requests for comment. Regulators in the affected countries have indicated they will seek formal clarification and may pursue enforcement if the service resumes without demonstrably stronger safeguards. For now, the suspensions underscore the growing challenge of balancing rapid AI innovation with protections for personal dignity, safety and the rule of law.