India powers AI world's growth but risks becoming its unpaid intern

By Catherine Thorbecke

 


India is fast becoming one of the world’s biggest AI user bases. The question now is how it can turn that scale into superpower status rather than just training Silicon Valley for free. 


That will be a tall order for a country largely caught flat-footed by the boom. But let’s start with the basics: The three main building blocks of AI are talent, compute (including high-end chips and infrastructure), and data. India doesn’t lack engineers, but it currently doesn’t have foundational research training at scale or enough advanced processors at public labs and universities. What it does have, in abundance, is data. It should start treating this like a strategic asset rather than leaking it out as a free export.

 
 


It’s a key reason US Big Tech is blitzing the market. With roughly a billion people online and a massive, mobile-first population, India generates a daily torrent of messages, voice notes, digital payments — and increasingly the kind of human feedback that makes AI systems better. The world’s most-populous country is the second-biggest user base of both OpenAI’s ChatGPT and Anthropic PBC’s Claude after the US, while accounting for just a fraction of these platforms’ revenue. The dynamic exposes how much more the market matters right now for training purposes than for making money.

 


These free-to-use services and promotions bombarding Indian consumers come at a cost. They are part of a strategic Silicon Valley land grab for Indian languages, voices and behaviors that will make foreign systems smarter first. The South Asian nation risks repeating a familiar historical pattern: exporting raw materials for pennies, then buying the finished models back at a premium. Meanwhile, it will be left to absorb the jobs shock and social impacts at home.

 


India’s linguistic diversity also raises the stakes. The country has more than 20 official languages and dozens more that are unofficial. If models aren’t trained on enough local speech and cultural contexts, they’ll misunderstand users and become unreliable in classrooms, clinics, courts and even customer support settings. Closing this language gap sits at the heart of Prime Minister Narendra Modi’s promise to democratize AI and make its impacts real for everyone from farmers to small-business owners, rather than just English-speaking elites. 

 


At the same time, the AI future that the likes of Meta Platforms Inc. or OpenAI are selling — marked by personal agents and voice-powered ambient devices — won’t work in India unless those systems can understand and speak local languages and get the nuances right.

 


Some startups, including Andreessen Horowitz-backed Poseidon AI, and Big Tech-supported nonprofit efforts are already trying to crowdsource and create local-language datasets. New Delhi should be paying far more attention, not just because data-labeling and collection practices have acquired a global reputation for being exploitative, but also because these efforts could anchor a domestic ecosystem. India can’t demand “AI for all” while outsourcing the work of building the linguistic foundation. Done well, though, these datasets can become infrastructure for its AI economy.

 


The same logic applies beyond language. India should push hard for the creation of specialized, high-impact and localized datasets in sectors like health care or finance. AI can improve diagnostics and personalized care, but the most valuable data for accomplishing this still lies in largely inaccessible hospital systems. On the sidelines of the AI Impact Summit, I attended a dialogue convened by nonprofit coalition iSpirit, where local entrepreneurs laid out a framework to let researchers tap this more sensitive data securely. Privacy fears are real and should be taken seriously, but accessing this data could also mean saving lives. Unlocking and organizing it is the hard, unglamorous work that takes Modi’s branding of “AI for good” beyond just slogans.

 


Ultimately, India’s data reckoning should be about who controls this strategic input to AI and who captures the value from it.

 


The answer isn’t to wall off user data from the world. It’s about finding creative solutions and leveraging them to set rules that reflect what’s actually being extracted. If its people’s data is a key ingredient for building advanced AI, the government should demand more than apps and marketing in return.

 


It can ask for partnerships that build capacity, including public compute commitments, access to high-end chips, serious training pipelines for AI researchers and collaborations that go beyond token commitments. New Delhi should also set norms that treat local datasets as a public good and consider revenue-sharing models that keep the upside at home. Transparency is crucial: Policymakers should require foreign model builders to disclose the kind of data that shaped their systems and how it has been evaluated for harms and biases in Indian contexts.

 

More than building foundation models, setting equitable data policies is where India has the biggest opportunity to truly lead the Global South in the AI era. Otherwise, it risks becoming an open mine and fueling systems that automate local jobs, concentrate power abroad and deepen dependencies. 
(Disclaimer: This is a Bloomberg Opinion piece, and these are the personal opinions of the writer. They do not reflect the views of www.business-standard.com or the Business Standard newspaper)




Google releases critical security patch for Chrome: Here's what it does


Google has rolled out a critical security patch for its Chrome browser, upgrading Windows and macOS users to version 145.0.7632.116/117, while Linux users are receiving version 144.0.7559.116. The phased update, which will be available over the next few days and weeks, fixes three high-severity vulnerabilities that could pose serious security risks if not addressed.

 


Google classifies these Common Vulnerabilities and Exposures (CVEs) as high severity, indicating a strong potential for exploitation.

 


Notably, two of the vulnerabilities relate to out-of-bounds memory access — a type of flaw often leveraged in remote code execution or sandbox escape attacks when paired with other exploits. Users and organisations on Windows and macOS are advised to check their Chrome version and install the update as soon as it is released in their region.
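The advice above boils down to a version comparison: any build older than the patched release is still exposed. As a rough illustration — not an official Google tool, and using the version numbers cited in this article — a dotted version string can be parsed and compared field by field:

```python
# Sketch of the version check implied by the advisory: parse a dotted
# Chrome version string and compare it against the patched build.
# The patched version below is the Windows/macOS figure from the article.

PATCHED_DESKTOP = (145, 0, 7632, 116)


def parse_version(version: str) -> tuple[int, ...]:
    """Turn a string like '145.0.7632.116' into an integer tuple."""
    return tuple(int(part) for part in version.split("."))


def needs_update(installed: str,
                 patched: tuple[int, ...] = PATCHED_DESKTOP) -> bool:
    """True if the installed build predates the patched build."""
    # Python compares tuples lexicographically, which matches how
    # dotted version numbers are ordered.
    return parse_version(installed) < patched
```

Because tuples compare element by element, an older build such as 144.0.7559.110 is flagged, while 145.0.7632.117 is not.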

 


Which issues have been fixed


According to a report by Cyber Security News, the first flaw — CVE-2026-3061 — is an out-of-bounds read bug in Chrome’s Media component. The issue was flagged by security researcher Luke Francis on February 9, 2026. Such vulnerabilities in media pipelines are particularly risky, as specially crafted media files or malicious web content could trigger them, raising the possibility of drive-by attacks through compromised websites.

 


The second vulnerability, CVE-2026-3062, impacts Tint, Chrome’s internal WebGPU shader compiler. Reported by researcher Cinzinga on February 11, 2026, it involves both out-of-bounds read and write conditions and is considered the most technically serious of the three. Out-of-bounds write flaws in graphics or shader processing can result in memory corruption, potentially allowing attackers to execute arbitrary code within the browser’s renderer. With WebGPU gaining traction, components such as Tint are becoming a larger attack surface.

 


The third issue, CVE-2026-3063, relates to an improper implementation within Chrome DevTools and was reported by M Fauzan Wijaya (Gh05t666nero) on February 17, 2026. Although such flaws are generally less critical than memory corruption bugs, weaknesses in developer tools can still open the door to cross-origin data exposure, privilege misuse or security boundary bypass in certain scenarios.


Google also said that detailed bug reports will remain restricted until most users have installed the fix, a move aimed at reducing the risk of exploitation before patches are widely deployed.




AI won't replace software engineers, will uplevel skills: Microsoft's Smith


Artificial intelligence should not replace software engineers or IT professionals but rather enhance their skills and creativity, Brad Smith, President of Microsoft Corporation, said, addressing fears of widespread job losses in the tech sector.


In an interview with PTI, Smith — one of the highest-ranking executives at the Redmond, Washington-headquartered company — spoke on IT services, job losses, and AI’s impact on cognitive work, weighing in on one of the most intensely debated topics in the sector.


He said Microsoft’s goal is to build technology that helps people get smarter.


AI, he explained, can take over repetitive coding tasks, freeing developers to focus on product design, architecture, testing, and security, effectively “upleveling” the software engineering profession.

 


Rather than reducing jobs, he argued, AI will make them more interesting and fulfilling, likely increasing demand for skilled professionals and boosting wages.


Smith expressed frustration, at times, with technology leaders who focus solely on building “machines that are smarter than people”. He asserted that for Microsoft, the primary goal is to develop technology and machines that help people enhance their own abilities.


“We should always start by asking ourselves, what are we trying to accomplish?” Smith said, urging the industry to define clear objectives for AI, saying that while the technology should drive productivity and economic growth, it must specifically be aimed at creating higher-quality jobs. Achieving these results, he argued, depends entirely on making them a deliberate priority or outcome.


“The first thing we should always ask ourselves is, what’s our goal? I do get frustrated at times when I hear technology leaders talk about how they want to build machines that are smarter than people. There is a role for machines that keep getting smarter, but our biggest goal is different from that,” he said.


For Microsoft, the goal is to make machines that help people get smarter.


“That’s what technology can do when technology is at its best. So take that as the first piece, including in cognitive services,” he said.


According to him, technology reaches its full potential when it enhances human interaction. Smith noted that AI can serve as a vital tool to improve how people listen, read, and speak.


Citing recent discussions with Indian government officials in Delhi around human communications, he highlighted the focus on technology’s ability to bridge communication gaps, such as translating 22 different Indian languages, to help people better engage with one another.


“Now take that same concept and apply it to IT services. Think about software. How can we use AI to improve the creation of software? That’s what we do at Microsoft. That’s what people in the great Indian IT companies do as well. And the reality is, there’s a lot we can do,” he said.


As AI automates routine coding, it liberates developers to focus on higher-level tasks such as product design, system architecture, and project management, Smith said, adding that this shift also allows engineers to better oversee code testing and security, ensuring the final product aligns with their specific objectives.


“Now, when you think about it that way, I think it helps us look at what’s really going on and what we should really want to see going on. We’re not talking about using AI to replace software engineers, but we are talking about using AI to change the art of software engineering, to uplevel the people who are in this extraordinary and extraordinarily important profession,” the senior Microsoft executive said.


Tuning and syncing skill sets to the jobs of the future will be key, he said, pointing out that this may actually result in higher salaries.


“Usually, when that happens, you find you want more people who can do this work. You’re even willing to pay them more than in the past. Once again, in a way, we should always start by asking ourselves, what are we trying to accomplish? Yes, we want AI to improve productivity. Let’s improve productivity and enhance economic growth, and create better jobs. I believe we can do that, but only if we say that needs to be our goal,” he said, adding that governments have a role to play in amplifying these as desired outcomes and to “remind us that it is a goal worth pursuing”.




WhatsApp may introduce anti-spoiler feature for text messages: What's new


WhatsApp is reportedly developing a new feature that will allow users to hide text messages as spoilers. According to a report from WABetaInfo, the feature will let users conceal parts of their messages so recipients must tap to reveal the hidden content. The update was spotted in the latest WhatsApp beta for iOS, version 26.6.10.71. As reported, the spoiler feature is currently limited to text messages, with no confirmation yet on support for images, videos, or other media. Apple’s iMessage already offers a similar feature that lets users hide text using ‘invisible ink’ formatting.


WhatsApp’s spoiler formatting feature: What’s new

According to WABetaInfo, the feature will work when a sender marks a message as a spoiler. This content will then appear hidden by default in the chat. The recipient will need to tap the message bubble to remove the spoiler formatting and read the text. 

 


 
The report noted that the option to hide the message will appear in the text-formatting context menu after selecting the text of a message. Users can tap on the “Spoiler” option to apply the formatting. WhatsApp will automatically add two vertical bars (double pipes) before and after the selected text. Users can also manually type the double pipes to mark a spoiler. For example, typing “||text||” will format that word or sentence as a spoiler. It will also be possible to apply the formatting to an entire message. 
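WhatsApp’s implementation is not public, but the reported double-pipe convention is easy to picture. A minimal sketch, assuming the pipes simply delimit hidden spans of text:

```python
import re

# Hypothetical sketch of the reported "||text||" spoiler convention;
# WhatsApp's actual parser is not public. The non-greedy match keeps
# two spoilers in one message separate.
SPOILER = re.compile(r"\|\|(.+?)\|\|", re.DOTALL)


def hide_spoilers(message: str, mask: str = "#") -> str:
    """Render each ||spoiler|| span as a run of mask characters."""
    return SPOILER.sub(lambda m: mask * len(m.group(1)), message)


def reveal_spoilers(message: str) -> str:
    """Drop the markers, as tapping the bubble would reveal the text."""
    return SPOILER.sub(lambda m: m.group(1), message)
```

For example, `reveal_spoilers("He dies ||at the end||.")` returns the plain sentence, while `hide_spoilers` replaces only the marked span and leaves the rest of the message untouched.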


Messages will likely remain marked as spoilers even after being viewed. If a user closes and reopens the app, they will need to tap the message again to reveal it. This ensures the content stays hidden by default. 


 
The report stated that some beta testers may already see the Spoiler option in the menu. However, the feature is still under development and does not yet work. WhatsApp is expected to start testing the feature more widely in a future beta update.

 

First Published: Feb 23 2026 | 12:03 PM IST




86 nations, two int'l organisations sign AI Summit declaration: Vaishnaw


As many as 86 countries and two international organisations have signed the AI Impact Summit declaration, IT Minister Ashwini Vaishnaw said on Saturday, adding that the US, UK, Canada, China, Denmark, and Germany are among the signatories.


The strong global backing for the declaration comes at the conclusion of the India AI Impact Summit in New Delhi.


Vaishnaw told reporters that nations across the world have formalised and upheld principles of ‘welfare of all, and happiness of all’.


“Prime Minister Narendra Modi’s human-centric AI vision has been accepted by the world. Democratising Artificial Intelligence resources so AI facilities, services and technology can reach everyone in society has been accepted by all,” the minister said.

 


Balancing economic growth with social good has been prioritised, he added.


“Not just economic growth, even social harmony has to be kept in mind. Safety and trust are at the centre, they have been brought among the main points,” Vaishnaw said, adding that a secure, trustworthy and robust AI framework has been focused on.


Other major areas of thrust include innovations and development of human capital, he noted.


“For all these areas, all countries have agreed to work together. Almost all countries that participated, including the US, the UK, Canada, China, Denmark, Egypt, Indonesia, and Germany… everyone has participated,” the minister said.


The mega AI Impact Summit secured investment commitments of over USD 250 billion in infrastructure alone, with Vaishnaw on Friday terming it a “grand success”.


Vaishnaw had said participation at the summit crossed five lakh visitors, reflecting strong domestic and global engagement with India’s AI push.


The India AI Impact Summit brought together global policymakers, industry leaders and technology experts, positioning India as a key player in shaping international AI governance and infrastructure development.


“More than five lakh visitors participated in the exhibition, learnt a lot, and interacted with many experts from around the world. We had practically every major AI player in the world participating in large numbers. We had so many startups getting the opportunity to showcase their work. Overall, the quality of the discussion was phenomenal,” he had said.


Be it the ministerial dialogue, the leaders’ plenary, the main inauguration function, or the Summit overall, the quality of participation and dialogue was phenomenal, Vaishnaw had pointed out.


Investment pledges have crossed USD 250 billion for infrastructure-related capital, with around USD 20 billion more in VC and deep-tech investments.


Vaishnaw had said that the Summit reflected the world’s confidence in India’s role in the new AI age.


Delhi played host to a lineup of global tech heavyweights this week – Google’s Sundar Pichai, OpenAI’s Sam Altman, Microsoft’s Brad Smith and Anthropic’s Dario Amodei – as discussions spanned the most intensely debated topics in the tech universe, from AI’s opportunities and risks all the way to AGI, governance and the future of jobs.



Source link

OpenAI flagged, banned Canada mass shooting suspect months before attack


By Thomas Seal

 


OpenAI flagged and banned the suspect in one of Canada’s worst-ever mass shootings for violating ChatGPT’s usage policy in June last year, without referring her to police.


The artificial intelligence company said that the suspected killer — Jesse Van Rootselaar — had an account that was detected about eight months ago by systems that scan for misuse, including the possible furthering of violent activities.

 


Canadian police alleged that the 18-year-old killed eight people and injured about 25, before taking her own life in the remote western Canadian town of Tumbler Ridge earlier this month.

 



 
 


The Wall Street Journal first reported OpenAI’s identification of Van Rootselaar, citing anonymous sources as saying that the alleged killer “described scenarios involving gun violence over the course of several days,” triggering an internal debate among roughly a dozen staffers, some of whom urged leaders to alert police.

 


OpenAI said it considered referring the account to law enforcement at the time, but didn’t identify credible or imminent planning and determined it didn’t meet the threshold. After the shooting, the company contacted Canadian authorities.

 


“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” an OpenAI spokesperson said by email. “We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.” 

 


The company said it trains ChatGPT to discourage imminent real-world harm.


