Malaysia restricts Grok AI after sexualised non-consensual image outcry
Malaysia’s regulator has temporarily blocked Grok after repeated misuse produced sexualised, non-consensual images, including content involving minors, raising global safety concerns.

Malaysia’s communications regulator has temporarily restricted access to Grok, the generative-AI chatbot developed by xAI and integrated into X, citing repeated misuse that produced sexualised, non-consensual manipulated images, including content involving minors. The Malaysian Communications and Multimedia Commission said the move took immediate effect and would remain in place until technical and moderation safeguards are demonstrably fixed.
Regulators and campaigners had pressed for action after a mid-January wave of international criticism over Grok’s image-generation feature, which was used to create and circulate sexualised manipulated images of real people. Indonesia temporarily blocked access to Grok earlier in the week, and scrutiny spread across Europe and other jurisdictions as officials examined whether the tool’s design and operation complied with legal and safety obligations.
MCMC said its restriction followed “repeated misuse” to generate “obscene, sexually explicit, indecent, grossly offensive, and non‑consensual manipulated images, including content involving women and minors.” The commission said it had engaged with X Corp. and xAI and issued formal notices earlier in the month demanding effective technical and moderation safeguards. It found the companies’ responses wanting, saying they had “failed to address the risks posed by the design and operation of the AI tools” and had relied largely on user-initiated reporting mechanisms that were “insufficient to prevent harm or ensure legal compliance.”
xAI moved to limit some of Grok’s image functions within days of the backlash, announcing that image generation and editing would be available only to paying subscribers while it addressed moderation lapses. The firm also replied to a request for comment with what appeared to be an automated response reading “Legacy Media Lies.” X did not provide further public comment at the time of the restriction.
On the ground in Kuala Lumpur, an AFP reporter tested Grok prompts after the MCMC announcement and received no response, confirming that the restriction was active for at least some local users and that the suspension is operational rather than merely advisory.
The Malaysian action represents one of the first concrete enforcement steps against an AI conversational product over harmful image outputs. Officials and campaigners elsewhere have criticized the paywall approach as insufficient, arguing that restricting tools to paying users does not fix core design and moderation choices that enable non-consensual deepfakes and sexualised manipulations.
MCMC has signalled a pathway to restoration: access will be reinstated only once “effective safeguards were implemented” and verified by the commission, and it said it remains open to dialogue with X Corp. and xAI on remedial measures. The case is likely to sharpen regulatory expectations globally, as governments weigh whether existing safety laws are adequate for generative AI and what technical, auditing and transparency standards companies must meet.
For technology companies, the episode is an early test of how quickly they can translate content-safety commitments into verifiable systems that prevent harm. For users and victims of manipulated imagery, the enforcement marks a rare instance of national regulators intervening directly to curtail access to an AI tool they judge to pose immediate risks.