Indian enterprises are among the heaviest users of artificial intelligence and machine-learning (AI/ML) tools globally, and a substantial volume of sensitive data is moving into these systems, according to the recently released Zscaler ThreatLabz 2026 AI Security Report. The findings suggest that data flows tied to enterprise AI usage, and the associated data leakage incidents, are growing faster in India than in any other region.
The ThreatLabz report, which analysed nearly one trillion AI/ML transactions observed through the Zscaler Zero Trust Exchange in 2025, found that enterprises worldwide transferred 18,033 terabytes of data to AI/ML applications over the year, a 93 per cent year-on-year increase. While these figures capture global activity, India stood out as the second-largest source of enterprise AI/ML traffic after the US, recording 82.3 billion transactions and 309.9 per cent year-on-year growth. India also accounted for 46.2 per cent of all AI/ML traffic in the Asia-Pacific and Japan (APJ) region.
Indian enterprise usage and the volume of exposure
The growth figures reported for India point to rapid adoption of AI/ML tools across sectors that routinely handle sensitive data. While Zscaler’s dataset does not break down sector-wise traffic by country, the report identified finance and insurance, manufacturing, and engineering and IT functions as major sources of enterprise AI usage globally. In India, IT services, banking, financial services and insurance (BFSI), and technology companies are widely seen as heavy users of these tools, contributing to both traffic volumes and the scale of potential exposure.
Data leakage incidents linked to mainstream AI tools
The report flags data leakage as a key security concern as enterprises scale up their use of AI tools. According to Zscaler’s analysis, ChatGPT alone accounted for about 410 million data loss prevention (DLP) violations in the dataset, while coding assistant tools such as Codeium saw a 100 per cent year-on-year increase in data leakage incidents.
These violations include cases in which enterprise security controls flagged sensitive information being sent to AI applications. The most common categories of exposed data included personally identifiable information (such as names and national identifiers), source code, and medical and financial data, the report said.
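To give a sense of how such a control works, the sketch below shows a minimal, illustrative pattern scan over an outbound AI prompt. The patterns and category names here are assumptions chosen for illustration; real DLP engines, including Zscaler's, rely on far richer detection (checksums, exact-data matching, machine-learning classifiers) than simple regular expressions.

```python
import re

# Illustrative detectors only -- not any vendor's actual rules.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "national_id": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),      # e.g. 12-digit IDs
    "card_number": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),     # 16-digit card formats
}

def dlp_scan(prompt: str) -> list[str]:
    """Return the categories of sensitive data found in an outbound AI prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

# A prompt containing an email address and a 12-digit identifier
# would be flagged under two categories before reaching the AI app.
violations = dlp_scan("Summarise the KYC record for jane@example.com, ID 1234 5678 9012")
if violations:
    print(f"DLP violation: {violations}")  # -> ['email', 'national_id']
```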
Security context alongside rapid adoption
The ThreatLabz report also shows that enterprises are increasingly applying policy controls to AI traffic. Across the global dataset, about 39 per cent of AI/ML transactions were subject to blocks or inspection policies, reflecting efforts to balance AI adoption with security governance. Even so, with AI now embedded across multiple workflows and tools, maintaining consistent oversight remains a challenge.
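In practice, the "blocks or inspection policies" the report describes amount to a mapping from AI application categories to enforcement actions. The sketch below is a generic, hypothetical illustration of that idea, not Zscaler's configuration; the category names and the default-deny choice are assumptions.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    INSPECT = "inspect"   # permit the transaction, but route it through DLP inspection
    BLOCK = "block"

# Hypothetical policy table. Real gateways also key rules on user group,
# destination, and data classification, not just application category.
AI_POLICY = {
    "approved_assistant": Action.INSPECT,
    "coding_assistant": Action.INSPECT,
    "unsanctioned_ai_app": Action.BLOCK,
}

def evaluate(app_category: str) -> Action:
    # Default-deny: AI destinations the policy has never seen are blocked.
    return AI_POLICY.get(app_category, Action.BLOCK)

print(evaluate("coding_assistant"))  # Action.INSPECT
print(evaluate("new_genai_tool"))    # Action.BLOCK (unrecognised, so denied)
```

A default-deny posture like this is one way organisations try to keep oversight consistent as new AI tools appear faster than policies can be written for them.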
The report notes that many enterprise AI systems are being integrated into business processes faster than organisations can build visibility and controls around them, contributing to gaps in data protection and threat detection. It also states that in controlled scans, some enterprise AI deployments surfaced critical weaknesses within minutes rather than hours, underlining how quickly such systems can be exposed if not hardened against targeted attacks.