Explicit AI images: MeitY moves against X

India has ordered Elon Musk-owned X to remove sexually explicit content generated using its AI chatbot Grok, warning of strict legal action. What does the law say?

Last Updated: Jan 07, 2026, 12:08 IST
MeitY directed X to undertake an immediate technical review of Grok, including how it processes user prompts and generates images, to ensure it does not facilitate sexually explicit or unlawful content.  
Photo by Lionel Bonaventure / AFP

India’s technology ministry has issued formal directions to Elon Musk-owned social media platform X over the misuse of its artificial intelligence chatbot Grok, after the tool was used to generate and circulate sexually explicit images of women and children without consent.

According to sources cited by CNBC-TV18, the Ministry of Electronics and Information Technology (MeitY) wrote to X on January 2, directing the platform to remove all unlawful content generated using Grok within 72 hours and submit a detailed compliance report. The ministry warned that failure to comply would be treated seriously and could lead to “strict legal consequences”.

In a public response posted the same day, the Grok team said: “We appreciate the feedback. xAI is reviewing MeitY’s directives and working to enhance Grok’s safeguards against misuse, ensuring compliance with laws while promoting helpful AI.”

The government later granted X Corp a 48-hour extension to submit a detailed compliance report on steps taken to prevent its Grok AI chatbot from generating obscene and sexually explicit content. The deadline now ends at 5 pm on January 7, PTI reported.

What triggered the government’s intervention?

The notice followed a surge of complaints after users were seen prompting Grok to digitally alter photographs of women and minors into sexually compromising images, which were then shared widely on X without consent. The ministry said this trend reflects “a serious failure of platform-level safeguards” and warned that such misuse normalises sexual harassment and amounts to gross abuse of artificial intelligence.

MeitY directed X to undertake an immediate technical review of Grok, including how it processes user prompts and generates images, to ensure it does not facilitate sexually explicit or unlawful content. The platform was also told to strictly enforce its terms of service, acceptable use policies and AI usage restrictions, including suspending or terminating accounts found to be in violation.

“Grok is being misused by users to create fake accounts to host, generate, publish or share obscene images or videos of women in a derogatory or vulgar manner,” the ministry said.

Legal experts told Forbes India that while India does not yet have a standalone AI law, current statutes already provide regulators with enforcement tools.

Also Read: The World Is Looking At India To Provide A New AI Regulation Model: Union Minister Ashwini Vaishnaw

Safe harbour depends on due diligence and safeguards

Legal experts stress that safe harbour under Section 79 of the Information Technology Act, 2000 is not automatic.

Vishal Gehrana, partner designate and advocate-on-record at Karanjawala & Co, said platforms must comply with due diligence requirements to retain liability protection. “Section 79 of the Act grants intermediaries a protection from liability, but only if they observe due diligence and remove or disable access to unlawful content upon receiving actual knowledge or a lawful government order,” he said.

He pointed to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which require intermediaries to prohibit unlawful content, establish grievance redressal mechanisms, and appoint resident compliance and grievance officers in India.

“In case of continued failure to act after notice, the protection under Section 79 will be lost,” Gehrana said.

Gehrana noted that the legal framework gives the government significant enforcement powers. “Section 69A of the Information Technology Act empowers the Central government to direct blocking of information on grounds such as public order and decency, and non-compliance with such directions can attract legal consequences,” he said.

Depending on the nature of the content, provisions dealing with obscene and sexually explicit material under Sections 67 and 67A of the IT Act may also be triggered.

Data protection laws apply to AI systems

From a data protection perspective, Gehrana said the Digital Personal Data Protection Act, 2023 applies where personal data of individuals in India is processed, even if such processing takes place outside India.

“This would cover personal data used during training, operation, or storage of AI systems,” he said, adding that platforms are also subject to directions issued by the Indian Computer Emergency Response Team (CERT-In), including obligations relating to incident reporting and system logs.

“The location of the company outside India does not, by itself, place the platform beyond the reach of Indian law,” Gehrana said.

Samridh Ahuja, senior principal associate at S&A Law Offices, said the notice issued by the Ministry of Electronics and Information Technology reflects the government’s view that existing digital laws apply to AI-enabled platforms.

“Under the Information Technology Act, 2000, read with the IT Rules, intermediaries enjoy ‘safe harbour’ protections only if they comply with due-diligence obligations, including prompt removal of unlawful content upon notice and the implementation of reasonable safeguards,” he said.

Ahuja added that where platforms integrate generative AI tools, liability increasingly depends on whether the intermediary had knowledge, control, or failed to put adequate safeguards in place.

Risk-based AI governance framework needed

Butani said the Grok episode highlights the need for a more calibrated regulatory approach. “AI systems that autonomously generate content influencing public discourse must carry higher accountability than neutral platforms,” he said.

He argued that India does not need an outright ban or a one-size-fits-all AI law. “It needs clear guardrails, i.e., reduced safe harbour for high-risk AI uses, transparency around safeguards, and defined responsibility between AI model developers and deploying platforms.”

Ahuja added that India’s existing laws were designed around user-generated content, not AI systems capable of producing content autonomously and at scale.

“Going forward, India may need a more explicit AI governance framework that strengthens intermediary accountability, mandates transparency around AI deployment, and introduces risk-based compliance obligations for platforms offering generative AI, while still allowing room for innovation,” he said.

What makes Grok's image generation different from other AI platforms?

Grok is a generative AI chatbot developed by xAI, the AI startup founded by Musk. The tech billionaire launched Grok in November 2023 as an “anti-woke” chatbot with real-time access to content on X, modelled loosely on The Hitchhiker’s Guide to the Galaxy.

Unlike most AI chatbots, which rely on static or periodically updated datasets, Grok is designed to have real-time access to content posted on X. This means it can respond to user queries using live posts, trends, and discussions on the platform, making it more current than most other systems.

In recent months, users on X have also increasingly turned to Grok as a real-time fact-checking tool, prompting the chatbot to verify claims made by politicians, public figures and viral posts. Media reports have noted that some users have even asked Grok to check statements made by Elon Musk himself on the platform. At the same time, experts and news organisations have cautioned that while Grok can provide quick responses, it remains prone to errors and misinformation, underlining the risks of treating generative AI systems as authoritative sources of fact.

Like other AI platforms, Grok comes with an image-generation feature, which allows users to upload photographs and ask the AI to modify, analyse, or transform them based on text prompts. However, unlike many standalone AI image tools, Grok’s image feature operates inside a live social media environment. That means any image it generates can be reposted, amplified, or archived by other users within seconds. This integration is central to the regulatory concern, because harmful images do not remain private to a single user.

X Corp is expected to respond to MeitY's letter today.

First Published: Jan 07, 2026, 12:17
