Explained: How India’s new IT rules regulate AI content and deepfakes

India’s amended IT Rules bring AI‑generated and synthetic content under tighter oversight, requiring clear labels, rapid takedowns and user declarations

By Naini Thaker
Last Updated: Feb 11, 2026, 17:02 IST
The Ministry of Electronics and Information Technology (MeitY) notified the changes on February 10 with enforcement beginning February 20.  Photo by Shutterstock

The government has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, to explicitly bring AI-generated content—termed “synthetically generated information”—under India’s intermediary framework.

The Ministry of Electronics and Information Technology (MeitY) notified the changes on February 10 with enforcement beginning February 20. This tightens platform obligations around labelling synthetic media and accelerates takedown timelines for unlawful or harmful content.


From a regulatory design standpoint, the objective is unmistakable: curb the risks posed by deepfakes, impersonation, and digitally supercharged misinformation, while ensuring intermediaries that follow due diligence requirements continue to benefit from safe harbour protections.

“This is one of the first instances in India where AI-generated content is directly addressed within a binding regulatory framework. While the rules do not regulate AI systems per se, they effectively regulate AI outputs at the distribution layer—a pragmatic step in the absence of a standalone AI law,” says Supratim Chakraborty, partner at Khaitan & Co.

What is synthetically generated information?

Under the amended rules, “synthetically generated information” refers to any audio, visual, or audio-visual content that is created or altered using computer tools in a way that makes it look real. In other words, if AI or software is used to produce something that could be mistaken for an actual person or real event, it falls under this definition.

At the same time, the rules make it clear that ordinary, good-faith edits are not treated as synthetic content. Simple actions like trimming a video, fixing captions, translating text, improving accessibility, or creating educational material are allowed—as long as they don’t mislead viewers or create false electronic records. This separation helps regulators focus on harmful deepfakes and deceptive AI-generated media without penalising harmless everyday editing.

Platforms that allow creation or sharing of AI content must clearly label synthetic media so users can immediately identify it as such, and, where technically feasible, embed permanent provenance markers or metadata that help trace origin. Intermediaries are prohibited from enabling removal or suppression of these disclosures once applied.
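To make the labelling-plus-provenance idea concrete, here is a minimal illustrative sketch of how a platform might attach a machine-readable provenance record to a synthetic media file. The rules do not prescribe any particular format; every field name, function name, and value below is hypothetical, chosen only to show the general shape of a tamper-evident disclosure.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_manifest(media_bytes: bytes, generator: str) -> dict:
    """Build a simple provenance record for a synthetic media file.

    All field names here are illustrative, not mandated by the IT Rules.
    """
    return {
        "synthetic": True,  # the disclosure itself: this content is AI-generated
        "generator": generator,  # tool or service that produced the content
        # a content hash makes later removal or alteration of the file detectable
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Example: write the record alongside the media as a JSON "sidecar" file
    fake_media = b"...synthetic video bytes..."
    manifest = make_provenance_manifest(fake_media, generator="example-ai-tool")
    print(json.dumps(manifest, indent=2))
```

In practice, platforms adopting this approach would more likely embed such a record inside the media container itself (for instance via a content-credentials standard such as C2PA) rather than a sidecar file, so that the disclosure travels with the content when it is re-shared.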


The government has moved away from the earlier draft’s prescriptive “10 percent of the frame” watermark idea; the final rules use a principle-based standard that the label must be “clear” and “prominent”, giving companies flexibility on design while keeping user transparency front and centre.

The 3-hour takedown

The amendments compress enforcement windows dramatically.


For serious violations—such as non-consensual intimate deepfakes, deceptive impersonation, child sexual abuse material, or other unlawful synthetic content—platforms are expected to act within three hours of being notified, a sharp tightening from earlier norms around 36 hours.

Some complaint categories under Rule 3 also see shorter internal timelines, signalling a broader push towards rapid response on high-risk content. Supporters say speed is essential to limit viral harm, while critics warn the compressed window could force automated removals with limited human review.

For platforms—global giants and Indian startups alike—the rules translate into a materially higher compliance burden. The operational implications are significant, particularly given the short transition window. “Platforms will need to embed metadata and labelling tools, deploy automated detection systems, and recalibrate grievance and takedown workflows within compressed timelines,” adds Chakraborty. The amendments also leave interpretive questions, he says. “The obligations extend to intermediaries that ‘offer a computer resource’ enabling synthetic content, which could raise scope issues for different categories of AI service providers.”

What this means for users

For everyday users, the biggest change is more transparency and faster recourse. You should increasingly see visible disclosures on AI-generated visuals, audio, and videos; if a deepfake targets you, the platform is now obligated to move far quicker to remove or disable access. At the same time, users carry new responsibilities: if you upload synthetic media, you may be required to declare it. Violations can lead to immediate suspension or termination, and in certain cases, platforms may disclose the identity of the violating user to the complainant—a provision that heightens accountability but also raises debates around due process and safety if moderation mistakes occur.

First Published: Feb 11, 2026, 17:05

Naini Thaker is an Assistant Editor at Forbes India, where she has been reporting and writing for over seven years. Her editorial focus spans technology, startups, pharmaceuticals, and manufacturing.