The Centre has asked banks to step up cybersecurity preparedness amid rising global concerns about advanced artificial intelligence (AI) systems, particularly Anthropic’s unreleased model, Claude Mythos.

At a meeting chaired by Finance Minister Nirmala Sitharaman, lenders were advised to take pre-emptive steps to secure IT systems, protect customer data and guard financial resources against emerging AI-linked threats.

The warning comes as governments and regulators worldwide assess whether such powerful AI tools could expose vulnerabilities in critical systems, including banking infrastructure.

What is Claude Mythos?

Claude Mythos is an advanced AI model developed by Anthropic, designed to handle complex cybersecurity tasks such as identifying software bugs, analysing systems and even generating exploits.

The model is currently being tested under a controlled initiative called ‘Project Glasswing’, under which a limited set of organisations is allowed to use it for defensive cybersecurity purposes.

According to the company, Mythos can outperform humans in certain cybersecurity tasks. Reports suggest it can identify and exploit thousands of vulnerabilities, including long-standing flaws in operating systems and web browsers.

Anthropic has also indicated that the model was accessed without authorisation during testing, reinforcing concerns about its potential misuse.

Experts say such capabilities fundamentally alter how quickly cyber threats can emerge.

“AI is compressing the timeline of cyber risk, with the gap between disclosure and exploitation collapsing,” said Vishak Raman, vice-president of sales, India, SAARC, SEA & ANZ at Fortinet.

“Vulnerabilities that once took weeks or months to discover and exploit can now be identified and weaponised in hours,” he told Business Standard.

Rahul Agarwalla, managing partner at SenseAI, told Business Standard that models like Mythos compress expertise, time and scale, allowing tasks that once required specialised researchers to be executed faster, cheaper and with far more precision.

Why has the government warned banks about it?

The government’s advisory stems from concerns that such AI systems could be misused to “weaponise” software vulnerabilities, posing risks to financial systems.

During the meeting, Sitharaman asked banks to remain vigilant and strengthen their cybersecurity frameworks. Banks were also told to report suspicious activity immediately to authorities and maintain close coordination with relevant agencies.

The concern, experts say, is less about new forms of cybercrime and more about the speed and scale at which existing threats could evolve.

“The bigger risk is that it industrialises existing ones,” Agarwalla said. “Fraud can become more personalised, phishing harder to detect, and vulnerabilities can be discovered faster than organisations can patch them.”

He added that attackers could test multiple exploit paths “at machine speed”, especially across financial systems and legacy infrastructure.

Raman echoed this concern, noting that the same tools that help defenders identify weaknesses can also be used to exploit them. “The same capability that detects and patches a vulnerability can also weaponise it,” he said, calling it a “reshaping of the threat landscape”.

Why is Anthropic not releasing Mythos publicly?

Anthropic has decided against a public release of Claude Mythos Preview, citing serious cybersecurity and national security risks. The company believes that in the wrong hands, the model could enable sophisticated cyberattacks by automating the discovery and exploitation of vulnerabilities at scale.

By limiting access to vetted partners, Anthropic is signalling a shift towards controlled deployment of high-risk AI systems rather than open release.

This reflects a broader industry recognition of the dual-use nature of such tools.

Do such AI models strengthen defence or create asymmetric risks?

The debate among experts is not whether such models are beneficial or harmful, but how their impact plays out over time.

Agarwalla said that in the near term, attackers may have an advantage because they face fewer constraints. “The window between vulnerability discovery and exploitation has essentially collapsed,” he said.

However, both experts noted that these models could significantly strengthen cybersecurity in the long run.

“The same tool that accelerates attacks can accelerate patching at a scale humans never could,” Agarwalla said, adding that institutions need to speed up adoption of AI-driven security systems.

Raman noted that defenders can gain an advantage in environments where monitoring and response systems are well integrated, allowing threats to be detected and contained at scale.

Have other countries raised concerns about it?

Regulators in countries such as Australia and New Zealand are monitoring the model’s implications, particularly for banking systems. In the UK, financial regulators and major banks have held discussions with cybersecurity agencies to assess risks to critical IT infrastructure.

The core concern globally is the same: whether such AI systems could outpace existing cybersecurity defences and expose financial networks to new kinds of threats.

As Agarwalla put it, the future of cybersecurity is likely to be “AI versus AI” — a race between systems designed to exploit vulnerabilities and those built to defend against them.