Written By: Team Iota – Apoorv Agarwal, Kaveri Rawal, Bandita
Introduction
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (hereinafter referred to as the “Intermediary Rules, 2021”) regulate social media intermediaries, digital news publishers and OTT platforms, while providing a framework for the regulation of content published by online publishers on digital platforms.
The Ministry of Electronics and Information Technology (“MeitY”), in its first comprehensive and structured attempt to regulate AI-generated content and strengthen intermediary accountability, has amended the Intermediary Rules, 2021 through the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified on 10.02.2026 (hereinafter referred to as the “Intermediary Amendment Rules, 2026”). These amendments will come into effect from 20.02.2026.
The amendments have been introduced in view of the rapid advances in artificial intelligence and machine learning, which have made it significantly easier to create, generate, modify or alter highly realistic synthetic audio, visual and audio-visual content, including deepfakes, that can appear real, authentic or true and may mislead or deceive users[1]. In recent years, several such instances have demonstrated how quickly manipulated content can circulate on social media platforms, often beyond effective remedial control.
The amendments aim to introduce the wider concept of Synthetically Generated Information (hereinafter referred to as “SGI”) and to curb its misuse. The Intermediary Amendment Rules, 2026 define synthetically generated information as audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be, perceived as indistinguishable from a natural person or a real-world event.
The issue is not that AI-generated material exists, but that it can quickly and widely blur the line between fake and real content. The regulatory framework for classifying and removing unlawful SGI reflects an attempt to combat harms including deepfake pornography, disinformation and cyber fraud. At the same time, synthetic creation is increasingly being used for legitimate artistic, satirical and technological purposes. Therefore, any regulatory strategy must carefully distinguish between permitted creative use and malevolent manipulation, so as to avoid stifling innovation or constitutionally protected speech in the name of harm prevention.
Regulations without Law
A Global Online Safety Survey[2] conducted by Microsoft shows that 65% of people in India are using generative AI, more than double the global average of 31% in the same period. According to the survey, India not only recorded the highest AI adoption among the markets surveyed but also emerged as one of the countries most exposed to the risk of AI misuse. From a regulatory perspective, this combination of high adoption and high vulnerability presents unique governance challenges.
In the absence of any stringent and dedicated law governing the use of AI, MeitY has sought to crack down on such misuse by way of the Intermediary Amendment Rules, 2026. At present, there is no standalone statute governing AI and its usage. While the misuse of AI-generated content may attract penalties under existing criminal and cyber laws, there is no dedicated legislative framework specifically addressing AI-related harms.
The Intermediary Amendment Rules, 2026, although they strive to tackle the misuse of AI and impose intermediary accountability, have been introduced largely through delegated legislation, without extensive parliamentary debate or structured public consultation. This raises concerns regarding democratic legitimacy and regulatory transparency.
The Intermediary Amendment Rules, 2026 regulate labelling, metadata embedding and takedown mechanisms; however, they remain silent on algorithmic bias, data sourcing practices and developer accountability. In effect, the regulatory burden is concentrated on intermediaries, while the core design and training decisions of AI systems remain largely unregulated.
Neutrality to Accountability: Intermediaries or Co-Regulators
The Information Technology Act, 2000 (hereinafter referred to as “the Act”), as amended in 2008, exempted intermediaries from liability for any third-party information, subject to due diligence obligations. The Intermediary Rules, 2021 reinforced this neutrality. However, the Intermediary Amendment Rules, 2026 reflect a shift from neutrality to enhanced accountability.
Rule 3(3)(a)(i) of the Intermediary Rules, 2021 (as amended) mandates intermediaries to deploy reasonable and appropriate technical measures, including automated tools, to prevent the dissemination of unlawful SGI. While intermediaries formally retain safe harbour protection, the Rules impose a proactive obligation to monitor and respond to potentially unlawful content.[3]
This obligation signals a structural departure from the traditional safe harbour doctrine under Section 79 of the Act. The Hon’ble Supreme Court in Shreya Singhal v. Union of India[4] rejected the idea of intermediaries independently determining the illegality of information. The present framework, however, appears to move towards a model of delegated content adjudication by private entities. Given the scale of India’s digital ecosystem, such authority is likely to be exercised through automated systems that prioritise speed over contextual assessment. This increases the risk of erroneous takedowns, disproportionate restrictions and collateral censorship. Smaller intermediaries with limited resources may struggle to comply, potentially leading to market exit and reduced competition.
In effect, the Rules transform intermediaries into co-regulators, raising important constitutional questions relating to proportionality, transparency and procedural fairness under Article 19.
The Three-Hour Rule – Faster Enforcement First, Online Freedom Later
Under Rule 3(1)(d) of the Intermediary Rules, 2021 (as amended), an intermediary must remove or disable access to unlawful information within three hours of receiving an official direction. Non-compliance results in loss of safe harbour protection under Section 79 of the Act.[5]
In a market with over one billion internet users, this positions India among the most aggressive regulators of online content. While swift action may offer relief to victims of deepfake abuse and identity manipulation, it simultaneously compresses the space for meaningful review and contestation.
Although expedited takedowns can benefit creators affected by unauthorised reproductions, including viral AI-generated artworks, they also encourage precautionary censorship. This particularly affects artists working in satire, parody and experimental AI art, where contextual interpretation is essential. The Hon’ble Supreme Court has repeatedly ruled that freedom of expression cannot be suppressed merely because of a potentially hostile audience or protest, and that the State must protect artists[6]. In the absence of a counter-notice mechanism, a speed-driven takedown regime imposes greater risks on creative expression protected under India’s free speech framework.
Furthermore, the framework assumes the existence of sophisticated technical infrastructure across platforms. This assumption disproportionately disadvantages smaller start-ups and regional intermediaries, effectively creating regulatory entry barriers. The result is a deterrence-based rather than compliance-oriented regime. A regulation premised on the assumed availability of such infrastructure is vulnerable at the implementation stage.
While the policy may help curb harmful material online, it also raises the question of preserving online freedom. In a world where businesses, both small and large, are increasingly moving online, the trade-off between faster enforcement and stricter oversight must be carefully considered.
Cross-Border Platforms
The internet ecosystem is inherently transnational, with major intermediaries operating through overseas parent entities and deploying globally trained AI models. The Intermediary Amendment Rules, 2026 do not comprehensively address the jurisdictional and enforcement complexities inherent in cross-border governance. While the Act provides for extraterritorial application, enforcement against foreign platforms without a substantial Indian presence remains uncertain. At the same time, platforms must navigate overlapping regimes such as the EU AI Act, the Digital Services Act and US free speech jurisprudence. This regulatory fragmentation increases compliance costs and legal uncertainty, particularly in relation to labelling standards, traceability requirements and disclosure obligations. A platform compliant in one jurisdiction may inadvertently violate norms in another.
Conclusion
The Intermediary Amendment Rules, 2026 represent a significant regulatory response to the growing influence of AI-generated content in India’s digital sphere. By acknowledging and regulating SGI, the amendments reflect a shift from intermediary neutrality to enhanced accountability. The approach aims to address the harms caused by deepfakes and misleading synthetic media within the current intermediary liability system through labelling rules, metadata embedding, and accelerated removal timelines.
However, the framework remains largely executive-driven and content-centric. Broader issues such as algorithmic bias, data governance, and AI developer accountability are still outside its scope in the absence of comprehensive AI legislation. There is a pressing need for parliamentary engagement on a technology that profoundly affects democratic participation, privacy and public discourse. Moreover, risks of over-censorship and disproportionate impact on creative expression persist. Cross-border enforceability and infrastructural capacity will be critical to successful implementation.
While the amendments constitute an important first step, a coherent, participatory and technologically informed legislative framework is essential for sustainable AI governance in India.
[1] Frequently Asked Questions on the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 released by the Ministry of Electronics and Information Technology (MeitY)
[2] Microsoft, Global Online Safety Survey, August 1, 2025, https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/bade/documents/products-and-services/en-us/digitalsafety/2026_Global_Online_Safety_Survey_Results.pdf
[3] Rule 3(1)(cb) read with Rule 3(1)(ca), The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
[4] (2015) 5 SCC 1
[5] Section 79, Information Technology Act, 2000
[6] S. Rangarajan v. P. Jagjivan Ram, (1989) 2 SCC 574
