Three protocols that can safeguard democratic rights for a billion-strong nation in the age of data mining

The India AI Impact Summit took place last week. As India accelerates AI deployment at unprecedented scale, the critical question is not just technical capability but institutional capacity to govern algorithmic power.

The choice is stark. India can build systems that are efficient but opaque, fast but unaccountable, and powerful but uncontestable. Or we can build systems that are efficient and transparent, fast and accountable, and powerful and governable.

If India demonstrates that democratic governance can work at the scale of a billion people in the age of AI, we provide proof of concept for the entire world.

Here are three protocols that matter.

Licence plates for AI agents

Within two years, AI systems will routinely act on behalf of organisations and individuals, conducting transactions, filing returns, bidding on contracts. Who authorises them? What are they permitted to do? How is that authority verified?

India needs a protocol for AI agent identity, similar to how UPI handles payment authorisation. Every agent operates under a specific time-limited mandate — “This agent can submit procurement bids up to ₹5 lakh in defined categories for 90 days”. When the mandate expires, the agent stops. No auto-renewal. The system forces periodic human review.
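A minimal sketch of what such a mandate could look like in code. All class and field names here are illustrative assumptions, not an existing standard; the point is that authority is explicit, bounded and hard-expiring:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentMandate:
    """A time-limited authorisation for an AI agent (illustrative only)."""
    agent_id: str
    action: str            # the one action this mandate covers
    limit_inr: int         # upper bound in rupees
    categories: tuple      # permitted categories
    expires_at: datetime   # hard expiry; no auto-renewal

    def permits(self, action: str, amount_inr: int, category: str) -> bool:
        # Every constraint must hold, and the mandate must not have expired.
        return (
            datetime.now(timezone.utc) < self.expires_at
            and action == self.action
            and amount_inr <= self.limit_inr
            and category in self.categories
        )

# "This agent can submit procurement bids up to ₹5 lakh
#  in defined categories for 90 days."
mandate = AgentMandate(
    agent_id="agent-042",
    action="submit_procurement_bid",
    limit_inr=500_000,
    categories=("office_supplies", "it_hardware"),
    expires_at=datetime.now(timezone.utc) + timedelta(days=90),
)
print(mandate.permits("submit_procurement_bid", 450_000, "it_hardware"))  # True
print(mandate.permits("submit_procurement_bid", 600_000, "it_hardware"))  # False: over limit
```

When `expires_at` passes, `permits` returns False for everything; the only way forward is a fresh mandate, which is exactly the forced human review the protocol demands.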

This is not surveillance. It is accountability infrastructure. Just as UPI verification is federated (banks verify mandates, not the government), AI agent verification would operate peer to peer. India has already built Aadhaar and UPI. We know how to create protocols that work at billion-person scale.

The decision packet

When a UPI payment fails, you receive an immediate error code: U30 for “Transaction declined by receiver’s bank”. Algorithmic decisions in governance need a similar infrastructure.

Every rejection must emit a ‘decision packet’ containing the outcome, the factors that drove it and, critically, a counterfactual explanation. Not “rejected because of income criteria” but “your declared income is ₹3 lakh; to qualify it must be under ₹2.5 lakh”.

This gives citizens agency. They can identify data errors and take corrective action.
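A sketch of what emitting such a packet could look like, using the article's income example. The structure and names are hypothetical, not a proposed standard; the essential fields are the outcome, the driving factors, and the counterfactual:

```python
from dataclasses import dataclass

@dataclass
class DecisionPacket:
    """Machine-readable record emitted with every automated decision
    (field names are illustrative, not a standard)."""
    outcome: str          # "approved" or "rejected"
    factors: dict         # the inputs that drove the decision
    counterfactual: str   # what would have to change for approval

def evaluate_income_eligibility(declared_income_inr: int,
                                threshold_inr: int = 250_000) -> DecisionPacket:
    factors = {"declared_income_inr": declared_income_inr,
               "threshold_inr": threshold_inr}
    if declared_income_inr < threshold_inr:
        return DecisionPacket("approved", factors, "")
    # Not "rejected because of income criteria" but a precise counterfactual.
    return DecisionPacket(
        "rejected", factors,
        f"Your declared income is ₹{declared_income_inr:,}; "
        f"to qualify it must be under ₹{threshold_inr:,}.",
    )

packet = evaluate_income_eligibility(300_000)
print(packet.counterfactual)
```

A citizen who knows their true income is ₹2 lakh can see immediately that the declared figure of ₹3 lakh is a data error, and knows exactly what to correct.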

Appeals must move at system speed. If an algorithm can reject in five seconds, the remedy cannot take 30 days. Three tiers: Automated error correction in seconds; borderline cases reviewed in 24 to 48 hours; and contested decisions resolved in seven days or auto-approved.

Statistical tripwires auto-suspend systems when reversal rates (how often appeals succeed) exceed 5 per cent. The data becomes the regulator. This extends India’s RTI tradition into the algorithmic age: From the Right to Information to the Right to Understand.
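The tripwire itself is simple arithmetic; a sketch, with the 5 per cent threshold from above and hypothetical function names:

```python
def reversal_rate(appeals_filed: int, appeals_upheld: int) -> float:
    """Fraction of appealed decisions that were overturned."""
    return appeals_upheld / appeals_filed if appeals_filed else 0.0

def should_suspend(appeals_filed: int, appeals_upheld: int,
                   threshold: float = 0.05) -> bool:
    # Auto-suspend when reversal rate exceeds the threshold:
    # the data itself becomes the regulator.
    return reversal_rate(appeals_filed, appeals_upheld) > threshold

print(should_suspend(appeals_filed=1000, appeals_upheld=40))  # 4% -> False
print(should_suspend(appeals_filed=1000, appeals_upheld=80))  # 8% -> True
```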

The data passport

A District Magistrate spends three years learning about their region: Groundwater patterns, migration cycles, why certain projects fail.

Then they transfer. That knowledge walks out. Six months later, an AI system deploys with zero contextual knowledge and repeats mistakes that had already been solved.

India needs protocols that enable institutional memory to compound rather than reset. Data passports work like international travel: Data stays with the ministry that collected it but can move for specific purposes under time-limited visas — “This data can travel to Ministry of Health for aggregate malnutrition analysis for 90 days.”

Protocol guardians (data trusts) do not store data. They verify requests, stamp passports, maintain audit trails. Like immigration officers verify travellers but do not store their details.
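A sketch of the guardian's role, with hypothetical names throughout. Note what the guardian holds: visas and an audit trail, never the data itself:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class DataVisa:
    """A time-limited, purpose-bound permit to use a dataset (illustrative)."""
    dataset: str
    holder: str          # the requesting ministry
    purpose: str         # the one permitted use
    expires_at: datetime

class ProtocolGuardian:
    """Verifies requests, stamps passports, maintains audit trails.
    Like an immigration officer, it never stores the traveller's belongings."""
    def __init__(self):
        self.audit_trail = []  # every visa ever stamped

    def stamp(self, dataset: str, holder: str, purpose: str, days: int) -> DataVisa:
        visa = DataVisa(dataset, holder, purpose,
                        datetime.now(timezone.utc) + timedelta(days=days))
        self.audit_trail.append(visa)
        return visa

    def verify(self, visa: DataVisa, purpose: str) -> bool:
        # Valid only for the stated purpose and only until expiry.
        return purpose == visa.purpose and datetime.now(timezone.utc) < visa.expires_at

guardian = ProtocolGuardian()
visa = guardian.stamp("district_nutrition_records", "Ministry of Health",
                      "aggregate malnutrition analysis", days=90)
print(guardian.verify(visa, "aggregate malnutrition analysis"))  # True
print(guardian.verify(visa, "individual profiling"))             # False
```

The collecting ministry stays the custodian; the guardian only answers the question "is this request within its visa?", and every answer is logged.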

This is the best way to navigate Centre, State, and local government data jurisdictions without altering the defined dynamics of those relationships. It is equally useful for navigating between departments, regulators and other bodies.

Invisible architecture

But protocols alone are not enough. Infrastructure is useless if the people interacting with it cannot understand what they are seeing.

Every technological revolution produces two types of infrastructure. First comes the visible: The fibre-optic cables, server farms. Then comes the invisible: The shared mental models that allow a society to govern the new power it has created.

Citizens need to know that every automated rejection should come with a decision packet and that systems refusing to explain themselves are defective.

Bureaucrats need to understand that when a vendor promises 99 per cent accuracy, the question is: “What was your reversal rate in pilots?” Journalists need to learn to read algorithmic dashboards the way financial reporters read balance sheets.

Most critically, we must teach the next generation algorithmic civics. Not just coding, but also auditing the systems that govern their lives.

Imagine a civics class where students do not just read about the Constitution but actively audit a local scholarship algorithm or housing waitlist. They ask: “Who collected this data? Whose voices are missing? Where does optimisation hide bias?”

India’s advantage

We built the world’s largest identity layer. We built the world’s largest payments layer… without creating surveillance infrastructure.

The question is whether we recognise AI governance as the same category of challenge. Whether we build authorisation, accountability and knowledge infrastructure not eventually but now, while the choices are still being made. Whether we will spend the 2030s governing algorithmic power or being governed by it.


Published on February 23, 2026



