The Indian government has moved to curb the spread of sexually explicit images created with artificial intelligence, issuing a formal notice to X, the social media platform formerly known as Twitter. The action follows an increase in complaints that an AI model available through the platform, commonly referred to as Grok AI, was being used to generate obscene images of women and to breach individuals’ privacy.
Government demands removal of Grok AI content in India
The notice instructs X to remove the offending material without delay and to explain the steps it will take to prevent further circulation. The government has underscored that AI tools should assist with legitimate tasks, not enable violations of privacy or produce harmful sexual content. Officials have signalled that platforms hosting or enabling such content must meet India's legal and regulatory obligations.
Although details of the notice have not been published in full, the move reflects mounting pressure on technology companies to improve content moderation and to be transparent about the controls they apply to generative models. Regulators and rights groups have raised concerns about how easy it has become for users to create realistic but fabricated images that can damage reputations and safety.
Industry observers say the government’s demand is part of a broader push to set clear standards for AI use. Platforms operating in India will increasingly face requirements to demonstrate how models are governed, how harmful outputs are filtered, and how user privacy is protected. Failure to comply with notices could lead to fines, blocking orders, or other enforcement measures under existing digital and safety laws.
Experts note that the challenges are not unique to India. Across the BRICS and partner countries, lawmakers are racing to adapt regulation to cover generative systems that can produce text, images, audio and video. The rapid development of such technology has outpaced many regulatory frameworks, prompting ad hoc responses from governments when incidents occur.
For platform operators, the incident highlights both reputational and operational risks. X will be expected to outline the technical and policy safeguards it uses for Grok AI, including moderation workflows, age and consent protections, and mechanisms for rapid takedown of abusive material. Transparency reports and clearer user-facing controls could become standard demands from regulators.
Civil society organisations have welcomed decisive enforcement but also urged a balanced approach that protects free expression while offering robust remedies for victims. They argue that removing harmful content must be paired with stronger accountability for those who misuse generative tools and with educational efforts to inform users about the risks.
As governments press technology firms for faster action, the incident may accelerate work on industry-wide solutions such as watermarking synthetic images, standards for dataset provenance, and interoperable reporting channels between platforms and authorities. For now, the immediate focus is on compliance with the notice and the removal of content deemed obscene under Indian law.
The outcome of this case will be watched closely by regulators and platforms elsewhere, as it may set practical precedents for how nations in the BRICS+ grouping handle similar challenges posed by generative artificial intelligence.
Key Takeaways:
- The Indian government has issued a notice to X over complaints of obscene images generated with Grok AI in India.
- The notice demands immediate removal of AI-generated sexual content and stronger safeguards to protect privacy.
- Officials warn of legal consequences if the platform fails to comply and ask for transparency on moderation steps.
- The move signals growing regulatory scrutiny of generative AI platforms operating in India and other BRICS nations.