The Ministry of Electronics and Information Technology has directed social media platform X to carry out a comprehensive technical, procedural and governance review of its Grok chatbot after reports that the system was being used to generate morphed and obscene images of women. The directive follows a complaint from Rajya Sabha MP Priyanka Chaturvedi and comes amid rising scrutiny of generative systems that interact with users in real time.
Grok chatbot audit and legal obligations
In a four-page letter to X’s Chief Compliance Officer for India, the ministry said Grok appeared to have been misused to create fake accounts that host, generate, publish or share obscene images or videos of women in a derogatory manner. The letter emphasised that compliance with the IT Act and the IT Rules, 2021 is mandatory, and that the limited exemptions under Section 79 depend on strict observance of due diligence obligations.
Immediate actions ordered
The ministry instructed X to remove or disable access without delay to all content already generated or disseminated in violation of applicable laws, in strict compliance with the timelines set out in the IT Rules, 2021. The company was also told not to vitiate evidence while removing content. By Monday, 5 January 2026, X must submit an Action Taken Report outlining the steps it has taken in response to the ministry’s directions.
The letter warned that non-compliance would be treated seriously and could result in strict legal consequences against the platform, its responsible officers and users who violate the law. The ministry noted that acts of this nature may attract penal action independently of the IT Act.
Platform context and responses
Grok, which operates as a separate artificial intelligence entity under X’s holding firm, maintains an account on the social media platform and interacts automatically with users. X did not immediately respond to the ministry’s directions. Elon Musk, X’s owner, has publicly praised Grok’s relatively unfiltered responses, which rely on fewer safeguards than most large technology companies apply to their language models.
The ministry’s intervention follows a recent advisory sent to all social media companies directing them to proactively remove obscene and pornographic material. IT Minister Ashwini Vaishnaw told reporters that social media firms must take responsibility for content on their platforms and noted that the Parliamentary Standing Committee on IT has recommended stricter laws on obscene content.
Implications for AI safety and platform governance
The order highlights the challenges regulators face in holding platforms to account as generative systems become more accessible. For India, the move signals that conditional liability protections for intermediaries under Section 79 will be closely linked to how diligently platforms enforce content rules and respond to misuse.
Experts say the case may accelerate efforts by platforms to implement stronger guardrails, clearer reporting and faster content removal protocols. It may also prompt other jurisdictions to press platforms for similar oversight, given growing global concern about manipulated images and the potential for harm to individuals’ reputations and safety.
For now, attention will fall on X’s Action Taken Report and on whether the company opts to tighten Grok’s response controls and update its governance procedures, or instead faces legal action. The ministry’s swift timetable makes this one of the earliest regulatory tests for conversational AI systems in India.
Key Takeaways:
- India’s IT Ministry has directed X to conduct a Grok chatbot audit over reports of morphed images of women.
- X must remove offending content and submit an Action Taken Report by 5 January 2026.
- The ministry cited IT Rules 2021 and warned of legal consequences for non-compliance.