Nikhil Sud is a lawyer by training and specializes in legal and policy issues related to technology. He serves as Regulatory Affairs Specialist at the Albright Stonebridge Group. Views expressed are personal and do not constitute legal advice.
The general intent behind India’s new IT rules (the Rules), namely to help curb harmful content online, may be praiseworthy. But the Rules as they stand raise several concerns, many of them non-obvious and profound, including numerous instances of overreaching and ambiguous language, as discussed below. These concerns jeopardise innovation, free speech and, relatedly, the prospects of emancipation (or even of maintaining the status quo) for the already marginalised.
The Rules contain overbroad and unclear language
Examples abound. Several have been widely discussed, including in ongoing legal proceedings. However, some critical examples have received insufficient attention.
For instance, though the overbroad definition of “social media intermediary” has received substantial attention, the overbroad definition of “intermediary”—stated in the underlying statute, the Information Technology (IT) Act—merits further attention. This is because even if “social media intermediary” is narrowed, large problematic sections of the Rules would still apply to “intermediaries,” which are defined so broadly that they include practically any online service, well beyond intended targets.
Though it may seem, as has been suggested in some policy circles, that those sections do not expand intermediaries’ requirements beyond the rules previously in force (since 2011), a closer look reveals that they do, often in ways that are overbroad and unclear and that have not received enough scrutiny.
For example, “unlawful in any manner” has been changed to “inconsistent with or contrary to the laws.” Yet “inconsistent with” a law means the same thing as “contrary to” a law. Why, then, do the Rules use both phrases? Hasty, unintentional drafting? Seemingly not: the deliberate replacement of the previous, clearer single-pronged standard (“unlawful in any manner”) with this new two-pronged language suggests the two prongs are intentional. The Rules, then, appear to envision “inconsistent with the laws” as meaning something different from “contrary to the laws,” despite the ordinary equivalence of the two phrases. Perhaps, therefore, by “inconsistent with the laws,” the Rules mean “inconsistent with the spirit of the laws.”
If so, that casts an alarmingly broad and ambiguous net, because (a) the spirit of the laws, by definition, is more than just the laws; and (b) what counts as the spirit is vague and invites highly subjective assessments. Consider, if you will, a law that says: “no red shirts allowed.” You, law-abiding as you are, abstain from red and wear a yellow shirt instead. Ah, but enforcers decide that the spirit of the ban on red was to ban bright colours. Well, gulp. And this broad and ambiguous “spirit” net is made even more alarming because the Rules do not articulate it, leaving readers to deduce it, as I have attempted here.
Similarly, the “patently false and untrue” standard is new. “False” and “untrue” are synonyms, but the use of both suggests the Rules envision some difference between the terms’ meanings or connotations. What difference the Rules envision is unclear, inviting inadvertent violations. If the intended distinction is instead between “patently false” and “untrue,” that is equally puzzling, because “patently false” is synonymous with “patently untrue,” rendering the additional “untrue” unnecessary.
Additional, potentially concerning expansions of intermediaries’ obligations that merit greater attention than they have received include: the new and virtually limitless standard requiring merely “the intent to…cause any injury to any person” (Rule 3(b)(x)); the broadening of “grossly defamatory” to “defamatory” (Rule 3(b)(ii)); and the creation of the “which may be reasonably perceived as a fact” standard regarding deception (Rule 3(b)(vi)).
Further, any argument that the definition of “intermediaries” cannot be narrowed because it features in the underlying statute (the IT Act), unlike the definition of “social media intermediaries,” which features in the Rules, may not pass muster: the Rules could seek to limit their own application, and, alternatively, legislation could be pursued. Equally weak is any argument that the broad and unclear language discussed above is not particularly concerning because intermediaries need only inform users, essentially, not to post such content. The Rules also suggest a duty to censor such content, including through the requirements to implement grievance redressal and to publish compliance reports (the latter applicable only to significant social media intermediaries).
Ambiguous and overbroad language also features in the Rules’ requirements unrelated to intermediaries. For instance, “online curated content” and “news and current affairs content” are defined so broadly that they amount essentially to “any web page.” Additional examples include language attacking “content … likely to … disturb the maintenance of public order” and language requiring a publisher to “take into consideration India’s multi-racial and multi-religious context and exercise due caution.”
The Rules have many additional concerning provisions
These include but are not limited to traceability; automated filtering suggestions; “voluntary” user verification provisions; the enormous potential for regulatory overreach embedded in the three-tier oversight mechanism despite assurances of self-regulation; prematurity given India’s pending data protection law; unreasonable timelines; criminal liability; and the lack of judicial oversight—an omission that is glaring but all too familiar in recent years’ technology-related policies, even though policymakers have themselves commendably noted the importance of judicial scrutiny.
Additionally, the strikingly inadequate consultations for these Rules may be part of a concerning broader pattern of little or no consultation, as seen in certain other recent technology-related policies. Hopefully, policymakers will thoroughly consult all stakeholders to evolve the Rules appropriately, including sections that are currently blank canvases, such as “guidance … on various aspects of the Code of Ethics.” Further, the Rules are inconsistent with international approaches; other jurisdictions, including but not limited to the U.S. and Europe, lack such broad and ambiguous restrictions. The Rules also rely at least partly on a “levelling the playing field” argument that ignores key differences between the online and offline worlds. Policymakers have recently relied on this argument in other contexts too, such as data localisation and competition, where it is also flawed, albeit for different reasons. For instance, competition policy that could force larger players to share resources with smaller players to “level the playing field,” such as the non-personal data framework India is developing, risks destroying competition on the merits by protecting competitors rather than consumers, flipping the fundamental tenet of competition policy on its head.
The Rules could cause serious damage
The Rules’ overbroad, ambiguous language and other concerning provisions discussed above could dramatically hamper Indian users’ ability to produce and consume content on the many Indian and international platforms currently thriving in India. This, in turn, could dramatically chill investment, innovation, and free expression. Relatedly, and most devastatingly, the Rules could muzzle into oblivion the marginalised, who may often turn to such platforms, whether to express themselves or for content that widely and empathetically represents them, as part of their ongoing march toward unconditional social acceptance. Such groups may include India’s LGBTQ+ community; some platforms in India already hesitate to tell their stories, fearing backlash based on alarmingly outdated notions of offensiveness. Overbroad and vague Rules that could, even if unintentionally, exacerbate such hesitation among users and platforms could reverse the enormous progress that has been achieved, and prevent the enormous progress that remains to be achieved, on such fundamental human rights issues.