The surge in AI-generated text, images, audio and video has made such content increasingly difficult to distinguish from human-created material, as large language models rapidly scale content creation.
In response, the Ministry of Electronics and Information Technology (MeitY) has proposed a “continuous and clearly visible” AI label on such content. The move aims to curb misinformation and marks a shift from post-facto moderation to disclosure at the point of creation.
However, it raises a key question: can disclosure-based regulation reduce harm, or will it add costs without addressing the core issue?
What the proposal seeks to do
The proposal, likely through amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, mandates AI labelling across formats and platforms.
What problem is it trying to solve?
The policy responds to a growing trust deficit in digital content. Generative AI enables high-quality content at near-zero cost, amplifying risks such as deepfakes used in political misinformation, financial fraud, and reputational attacks.
Rishi Agrawal, CEO and co-founder of Teamlease Regtech, told Business Standard that the shift to a “zero-friction AI ecosystem” has increased high-velocity misinformation, with synthetic content now created and distributed at scale.
With over 950 million internet users in India, the potential impact is significant.
Transparency vs effectiveness
The proposal hinges on whether labelling will change user behaviour or simply shift responsibility.
While transparency assumes users will act cautiously, misinformation often spreads faster than corrections, and warnings are frequently ignored. This creates a gap between awareness and action, meaning labels may inform without necessarily protecting users.
Compliance burden: Platforms and creators
The proposal presents operational challenges.
Technical feasibility and scale
Detecting AI-generated content is possible but complex at scale.
Agrawal said “the feasibility question has two parts — detection and scale”. While detection has improved through artefacts and metadata, “the challenge is largely infrastructural, not technical”, as it is compute-intensive and expensive.
He added that with sufficient computing capacity, “deploying detection at scale is well within reach”.
However, Kalindhi Bhatia, partner at BTG Advaya, said feasibility is “partial at best” due to the sheer volume of content. “As AI evolves, the gap between synthetic and human generated content continues to narrow, making accurate identification at scale increasingly difficult and prone to both false positives and false negatives,” she told Business Standard.
She added that editing, reposting, or stripping metadata reduces detection accuracy, making consistent labelling harder.
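The fragility Bhatia describes can be illustrated with a toy sketch. The model below is a simplification and an assumption on our part, not any mechanism MeitY or the platforms have specified: provenance travels as metadata attached to the content (real systems use C2PA-style signed manifests), and the names `label_content`, `verify_label` and `SIGNING_KEY` are purely illustrative. Re-encoding or reposting typically drops that metadata, after which the label can no longer be verified.

```python
import hmac
import hashlib

# Hypothetical platform signing key (illustrative only).
SIGNING_KEY = b"platform-secret"

def label_content(payload: bytes) -> dict:
    """Attach an 'AI-generated' label plus a signature over the payload."""
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "metadata": {"ai_label": True, "sig": sig}}

def verify_label(item: dict) -> bool:
    """The label verifies only if the metadata is present and intact."""
    meta = item.get("metadata") or {}
    if "sig" not in meta:
        return False  # provenance metadata is gone: nothing to check
    expected = hmac.new(SIGNING_KEY, item["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(meta["sig"], expected)

def repost_stripped(item: dict) -> dict:
    """Simulate re-encoding/reposting, which commonly discards metadata."""
    return {"payload": item["payload"], "metadata": {}}

original = label_content(b"synthetic image bytes")
reposted = repost_stripped(original)
print(verify_label(original))  # True: label intact at the source
print(verify_label(reposted))  # False: provenance lost after stripping
```

The pixels (here, the payload bytes) are unchanged, yet the second check fails, which is why detection systems that lean on metadata degrade as content is edited and re-shared.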
Cost implications
Compliance will require investment in detection systems, moderation teams, and monitoring tools.
Agrawal noted detection is “compute-heavy and expensive”, requiring infrastructure investment across players. “To prevent compliance asymmetry, the industry must prioritise plug-and-play infrastructure and standardised provenance tools,” he said.
Higher costs could slow experimentation and add friction to content creation.
Impact on smaller creators
Agrawal warned of a “two-tier market,” where large platforms absorb costs while smaller players face friction or exit risks, though labelling could also act as a transparency signal for startups.
Kalindhi Bhatia said the “primary compliance burden sits with intermediaries”, with labelling acting as disclosure rather than restricting AI use, though smaller creators may still face some friction.
Platform liability and regulatory expansion
The proposal increases platform accountability, requiring continuous compliance across large volumes of user-generated content, while extending oversight to digital creators.
Kalindhi Bhatia said this reflects growing scrutiny of synthetic media. Agrawal described it as a shift from “passive hosting to active accountability”.
However, labels indicate origin, not accuracy, and may be ignored, leading to “label fatigue”. While Agrawal said continuous labelling enables “proactive accountability”, its effectiveness depends on behavioural change.
Global context
The European Union’s AI Act combines transparency with risk-based classification, while the United States relies largely on voluntary, platform-led labelling.
India’s approach is mandatory, broad-based, and enforcement-driven.
Why it matters
AI labelling will be effective only if it changes user behaviour. If ignored or bypassed, it risks becoming a compliance-heavy response to a deeper trust issue.
As Kalindhi Bhatia noted, the real test lies in “the extent to which platforms can ensure consistent compliance” and how enforcement evolves.
MeitY’s proposal is a first step, but not a complete solution.