Explained: India’s AI content labelling regulation

The draft amendment to the IT rules proposes that social media users self-declare whether their uploads are AI-generated, and that platforms label such content. The regulation is open for feedback.

Last Updated: Oct 23, 2025, 15:18 IST
The MeitY draft mandates that artificial intelligence and social media platforms must label AI-generated content
Credit: Shutterstock

The Ministry of Electronics and Information Technology (MeitY) has proposed a draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, aimed at curbing the spread of deepfakes and misinformation online. The draft, released on October 22, mandates that artificial intelligence (AI) and social media platforms label AI-generated content. According to officials and the draft text, the declaration must cover 10 percent of the content’s area and applies to all types of synthetic information/content, including text, video and audio; it is not limited to photorealistic content.

The draft is open for public and industry feedback until November 6; the regulation is yet to come into effect. In an initial statement, the Internet Freedom Foundation said this deadline is too short and should be extended by at least two weeks.

As per the proposed rules, responsibility is placed on both platforms and users. Users must self-declare whether the information/content they upload is AI-generated or synthetically modified. If they fail to do so, platforms such as OpenAI, Meta, Google and X will be required to proactively detect and label such content using “reasonable and proportionate technical measures”. In other words, if a user does not declare that content is synthetically generated or modified, the platform must deploy technology to identify it and make the declaration. In the absence of a declaration sufficient to satisfy the intermediary (the platform), it can remove the content.

The regulation also calls for non-removable metadata or identifiers to be embedded in AI-generated content, ensuring traceability and transparency. Platforms are prohibited from altering or removing these labels. The draft defines AI-generated information as content “artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that reasonably appears to be authentic or true”.
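The draft does not specify how such an identifier should be implemented; as an illustrative sketch only, a traceability record could in principle bind a self-declaration to the exact bytes of the content via a cryptographic hash. The record format below is a hypothetical assumption for illustration (real-world systems use standards such as C2PA content credentials), not anything the draft prescribes:

```python
import hashlib
import json

def provenance_record(content: bytes, tool: str) -> str:
    """Build a simple, illustrative traceability record for synthetic content.

    Hypothetical format: the draft rules do not prescribe one.
    """
    record = {
        "synthetic": True,   # the user's self-declaration flag
        "generator": tool,   # tool claimed to have produced the content
        # Hash ties the record to these exact bytes; any edit to the
        # content changes the digest, making tampering detectable.
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)

# Example: a record for a (stand-in) piece of synthetic media
label = provenance_record(b"example synthetic image bytes", "ExampleGen v1")
```

Binding the declaration to a content hash illustrates why regulators speak of “non-removable” identifiers: a detached or altered record no longer matches the content it claims to describe, though in practice such schemes can still be defeated by re-encoding the media, as critics quoted below point out.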

The move comes amid growing concerns over the misuse of generative AI tools to impersonate individuals, manipulate elections and spread false information, and is prompted by similar moves by the European Union and China. 

According to Apar Gupta, advocate and founder director of the Internet Freedom Foundation, it is important to note that the draft does not define the phrase “artificially or algorithmically created”. It could therefore cover any person using any kind of visual or photo editing software; its implementation is not restricted to what may be conventionally understood to be deepfakes. He says: “As per the draft, the ambit stretches much more broadly. And this would, for instance, include any kind of photo which may be created, let's say, through Canva (design platform), which may be even of an object, such as a computer, a laptop, or a coffee mug.”

Content like this does not cause any harm, but will still need to be labelled. The label, Gupta explains, can be similar to the disclaimer that appears at the start of a movie or TV show, and needs to be fairly prominent.

Gupta feels labelling is a process that needs to be supported. However, when it is bundled with a censorship power, that is, a takedown power implemented by a platform through a technical tool, it will result in instances of censorship. “The issue here is that if for any content just because a visual editing software has been used, which has AI features, does it make the content illegal? Labelling is helpful, but censorship is not,” he explains, adding that censorship needs to be restricted to those categories in which AI is used in a way that contravenes a provision of law.

During the announcement, IT Minister Ashwini Vaishnaw said: “People are using some prominent persons’ images and creating deepfakes, which is affecting their personal lives, privacy as well as [creating] various misconceptions in society. So, the step we have taken is making sure that users get to know whether something is synthetic… or something is real. Once users know that, they can take a call in a democracy. But it’s important that users know what is real. That distinction will be led through mandatory data labelling.”

As far as deepfakes are concerned, Gupta says they are not illegal per se, however concerning they may be. For instance, if a person makes a deepfake of themselves, it is with their consent. They may want to tell people that they have used an AI tool for it, and in such cases labelling practices need to be encouraged.

An initial statement released by the Internet Freedom Foundation on the Draft Amendment Rules says: "While we recognise the real harms of deepfakes by non-consensual intimate imagery and election manipulation, these proposals, as framed, risk overbroad censorship, compelled speech, and intrusive monitoring that chill lawful expression online."

According to the foundation, the following are major concerns regarding the proposed rules:

  • The rule can be applied to any content “algorithmically created, generated, modified or altered… in a manner that… appears… authentic or true”, a breadth that can capture satire, remixes or benign edits; hence, the ambit of the regulation is universal.
  • The rule would force tools that enable creation/editing to embed permanent identifiers and display visible or audible labels covering “at least 10 percent” of a work regardless of context, and forbid their removal. This is compelled speech and risks the mandatory insertion of “disclaimers” on user-generated content, reminiscent of cinema censorship and now OTT video censorship regimes. It carries a high risk of collateral censorship and is unlikely to deter bad actors, who will simply not comply.
  • The draft also explicitly ties “synthetically generated information” into other due diligence and traceability clauses, reinforcing privacy and encryption concerns.

According to Meghna Bal, director of the Esya Centre, a technology policy-focussed think tank in New Delhi, the draft rules are neither feasible, nor effective. She explains that detection capabilities always run behind the ability to generate synthetic content, partly because a common process of generating synthetic content involves fooling a detector. “Users may be forthright when it comes to declaring benign synthetic content, but they certainly won’t be forthcoming when it is something problematic. There is also a growing body of evidence to suggest that watermarking/labelling/metadata identifiers may be ineffective as they are easy to manipulate.” 

She adds that empirical evidence shows that non-AI-generated doctored videos and audio are just as convincing as AI-generated synthetic information. So, there is no need for an arbitrary distinction as such. “Moreover, not all synthetically generated information is illegal. A lot of it is benign, so it is unclear why this exceptionalism is being created for it,” she says.

Bal suggests that the current rule might not lead to much change. She believes that the only way to create awareness is to sensitise people and organise concerted campaigns educating them about misinformation—AI-generated or otherwise. “The size of the label is irrelevant. I’ve seen instances where people believe Sora videos are real, even though there is a prominent label, because they do not know what Sora is. These solutions only work if they are implemented from the ground up, not the other way around. The only way to do so is to sensitise people about misinformation, and equip them with the tools necessary for them to be able to discern what is false and what is not,” she says. 

First Published: Oct 23, 2025, 15:18

Samidha graduated with a bachelor's in mass media from Sophia College, Mumbai, right before joining Forbes India, where she writes about various startups across industries. She also works on News by N