Malaysia's communications regulator, the Malaysian Communications and Multimedia Commission (MCMC), has reportedly said that it will initiate legal action against X and xAI over user-safety failures linked to Grok.
The watchdog argues that Grok has been repeatedly misused to generate sexually explicit images involving women and minors, indecent or grossly offensive content, non-consensual manipulated images, and deepfakes created with simple prompts.
Grok's ability to generate sexualised deepfakes with minimal effort violates safety commitments and local laws, the authority said.
"Content allegedly involving women and minors is of serious concern... Such conduct contravenes Malaysian law and undermines the entities’ stated safety commitments," the commission said in a statement reported by Reuters.
Malaysia had already blocked access to Grok over the weekend before announcing legal proceedings.
The regulator added that it had sent X and xAI formal warnings this month to remove the harmful content, but the companies had taken no action.
When contacted by Reuters, xAI replied with what the news agency described as an apparently automated response that said: "Legacy Media Lies."
Malaysia regulates online content strictly, including a ban on obscene and pornographic material. Other prohibitions cover online gambling, scams, child pornography and grooming, cyberbullying, and content related to race, religion, and the royal family.
Although Malaysia has not yet specified the exact form of the proceedings, the legal action could become an important test case for the liability of AI platforms, government power over generative AI tools, and how social platforms handle deepfake abuse.
Malaysia’s move comes as backlash against Grok's ability to generate sexually explicit deepfakes is spreading quickly across governments.
Over the weekend, Indonesia also temporarily blocked Grok, while French officials reported the social media firm to prosecutors and regulators.
This week, UK regulator Ofcom also opened a formal investigation into X under the Online Safety Act, following reports that Grok has been used to create sexual abuse material involving adults and children. The investigation will assess whether the platform has complied with its legal duties to protect users in the UK from illegal content.
The regulator said it urgently contacted X on Monday 5 January and set a deadline of Friday 9 January for the company to explain its compliance steps, after “deeply concerning reports” of nonconsensual intimate images and child sexual abuse material.
Ofcom will examine whether X failed to assess risks, prevent access to priority illegal content, remove illegal material swiftly, protect users from privacy breaches, assess risks to children, and use highly effective age assurance to shield children from pornography, according to its published investigation outline.