Microsoft plugs remote code vulnerability in Notepad app on Windows 11


Microsoft has fixed a security flaw in Notepad that could have allowed attackers to trick users into clicking harmful links inside Markdown files. The company resolved the issue in its latest patch update, rolling out a fix to block any possible exploitation. According to Microsoft, the vulnerability could have been used to remotely load and run malicious files on a victim’s computer, although the company said there is no evidence that the flaw was actively exploited.


How did the Notepad vulnerability in Windows 11 work?

According to the company, the problem affected Markdown files opened in Notepad. For context, Markdown files are simple text files that use a lightweight formatting language called Markdown. They let users add basic formatting such as headings, bold text, links, lists and images using plain text symbols. If a user clicked on a specially crafted malicious link inside one of these files, it could trigger what Microsoft described as “unverified protocols”, allowing attackers to execute code remotely on the system. Microsoft has identified the vulnerability as CVE-2026-20841.
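For illustration, a minimal sketch of what such a Markdown file looks like (the contents here are hypothetical examples of the syntax, not the exploit itself): the plain-text symbols below produce a heading, bold text, a link and an image when rendered.

```markdown
# Meeting notes

Some **bold** text, a short list, and a link:

- [Project page](https://example.com)
- ![Team photo](photo.png)
```

The flaw concerned what happens when a link like the one above points at a protocol that Notepad did not verify before launching it.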


Microsoft only added support for Markdown in Notepad on Windows 11 last year. The feature allowed users to open and edit Markdown files directly in the basic text editor. However, the addition reportedly drew criticism, with some saying that Microsoft was adding unnecessary features and AI capabilities into core apps such as Notepad and Paint, contributing to concerns about bloatware in the operating system.

 


The company said it has no evidence that hackers exploited the flaw in real-world attacks. Still, it chose to patch the issue as part of its regular security updates. The fix ensures that Notepad no longer allows such links to launch unsafe protocols that could compromise a device.


According to a report by The Verge, this is not the first time a text editor has faced security concerns. Recently, the third-party Notepad++ app also disclosed that some users may have downloaded a malicious update linked to Chinese state-sponsored attackers.

 

First Published: Feb 12 2026 | 12:11 PM IST




Apple reportedly pushes back plans to release revamped Siri to iOS 27




Apple has reportedly delayed the release of a revamped Siri once again. According to a report by Bloomberg, Apple’s virtual assistant has run into snags during testing in recent weeks, which might have pushed back the release of several highly anticipated functions from iOS 26.4 to iOS 26.5 (expected to be released in May) and even iOS 27 (likely to be rolled out in September). 

 

Apple had reportedly planned to introduce the new features with iOS 26.4, but the rollout strategy now appears to have changed. Bloomberg reported that in recent days Apple engineers have been directed to test the upcoming Siri enhancements on iOS 26.5, indicating that the release may have slipped by at least one version.

 
 


Early internal builds of iOS 26.5 also reportedly include a settings option that lets employees activate a “preview” mode for the new features. This points to the possibility that Apple may label the rollout as an early or limited release, signalling to users that the functionality could be unfinished or subject to performance issues — similar to how the company handles beta software testing.


Revamped Siri delays


Apple unveiled plans for a more advanced version of the assistant, revamped Siri, in June 2024. The company demonstrated features that would allow Siri to access personal data and interpret on-screen content to respond more intelligently to user requests. Although the company later said the upgraded Siri would arrive in 2026, it did not provide a precise timeline publicly. The report states that internally Apple had targeted March 2026, aligning the release with iOS 26.4 — a goal that was still in place as recently as last month.

 


Recent testing has since reportedly revealed new issues with the software, prompting further delays. The assistant has reportedly struggled to consistently interpret queries and, in some cases, has taken longer than expected to process requests, leading to the latest postponement. The report notes that this remains a fluid situation, and Apple’s plans may change further.


Revamped Siri: What to expect


In a joint announcement made last month, Apple and Google said they have signed a long-term partnership agreement. As part of the arrangement, Apple’s next-generation foundation models will be built using Google’s Gemini models and cloud infrastructure. The collaboration will support upcoming capabilities, including a more personalised version of Siri and enhancements tied to Apple Intelligence.

 


This revamped version is likely to bring notable additions, including a chatbot mode for Siri, personal context understanding, on-screen awareness, and the capability to perform actions in apps without opening them.


Siri as chatbot


According to Bloomberg, Apple is developing an initiative codenamed “Campos” aimed at evolving Siri into a full-fledged AI chatbot similar to Gemini. Unlike the existing version, which primarily delivers brief replies and handles command-based prompts, the upgraded assistant is expected to support extended conversations and more sophisticated queries. The revamped Siri is said to enable both voice and text interactions and could eventually replace the current interface across Apple devices.


Personal context understanding


The revamped Siri is expected to gain deeper awareness of user data stored across emails, messages, photos, calendar events, and local files. Apple has previously showcased examples in which Siri could retrieve specific details — such as a licence number from an image — or surface recommendations shared within message threads.


On-screen awareness


Another planned enhancement involves Siri recognising and responding to content currently displayed on a device’s screen. For instance, users may be able to issue contextual commands like saving an address from a message or setting a reminder based on an email being viewed.


In-app actions


Siri is also expected to carry out actions within applications without requiring users to manually open them. Earlier demonstrations included scenarios such as locating an image, making edits, and saving it to a folder entirely through voice instructions.




Apple rolls out iOS 26.3 with new features, security fixes: How to update




Apple has rolled out the iOS 26.3 update for supported iPhone models, introducing a range of new features and refinements across system apps and core experiences, along with security patches. The update adds improvements to the wallpaper gallery, a tool for transitioning from an iPhone to an Android device, a notification forwarding feature on third-party accessories, and more. Here’s a complete look at what the latest iOS update brings to iPhones:


Apple iOS 26.3 features: What’s new for iPhones


According to Apple’s official release notes, the iOS 26.3 update includes the following additions and changes: 


Transfer from iPhone to Android: The updated iOS 26.3 adds support for a smoother iOS-to-Android switching experience. The feature lets users place their iPhone near an Android device to start the transfer. After the two devices are connected, users can select data such as photos, messages, notes, apps, passwords, contacts and more to move across. Health information, Bluetooth-connected devices and secure content such as locked notes are not carried over to the new device. 

 

New weather wallpapers: Apple has updated the iPhone’s wallpaper gallery with a separate Weather section. Earlier combined under “Weather & Astronomy”, the two categories now appear as individual rows. The update brings three new weather-themed wallpapers, each supporting live updates that mirror real-time conditions based on the user’s current location, along with varied font styles and weather widgets. 


Limit precise location: Apple’s iPhones powered by its in-house modem — such as the iPhone 16e and iPhone Air — are getting a new “Limit precise location” feature with iOS 26.3. The option is designed to restrict the level of location data shared with mobile carriers, enhancing user privacy. It will be available only on supported networks, which currently include Boost Mobile in the US, Telekom in Germany, and EE and BT in the UK, among others. 


Proximity pairing (EU only): Apple has added Europe-only changes for third-party wearables, in line with interoperability requirements set out by the European Commission. Devices such as headphones and smartwatches can now use some of the same functionality available to AirPods and the Apple Watch. Proximity pairing lets a third-party accessory pair with an iPhone or iPad in the same one-tap way as AirPods, simply by bringing the accessory close to the device. 


Security updates: iOS 26.3 addresses dozens of security vulnerabilities, including one that had been actively exploited. As reported by MacRumors, Apple said a flaw in the dyld dynamic link editor could allow arbitrary code execution and may have been used in an “extremely sophisticated attack” on targeted individuals running versions before iOS 26. The update also patches issues in CoreAudio, Game Center, Messages and Photos, which could have led to app crashes, exposure of sensitive information, sandbox bypass, or unauthorised access to photos from the Lock Screen. 


iOS 26.3 eligible iPhone models


  • iPhone 17 series: iPhone 17, iPhone 17 Pro, iPhone 17 Pro Max, iPhone Air

  • iPhone 16 series: iPhone 16, iPhone 16 Plus, iPhone 16 Pro, iPhone 16 Pro Max, iPhone 16e

  • iPhone 15 series: iPhone 15, iPhone 15 Plus, iPhone 15 Pro, iPhone 15 Pro Max

  • iPhone 14 series: iPhone 14, iPhone 14 Plus, iPhone 14 Pro, iPhone 14 Pro Max

  • iPhone 13 series: iPhone 13, iPhone 13 mini, iPhone 13 Pro, iPhone 13 Pro Max

  • iPhone 12 series: iPhone 12, iPhone 12 mini, iPhone 12 Pro, iPhone 12 Pro Max

  • iPhone 11 series: iPhone 11, iPhone 11 Pro, iPhone 11 Pro Max

  • iPhone SE (2nd generation and later)


How to update to iOS 26.3


  • Go to Settings.

  • Tap on General and go to the “Software Update” section.

  • If the update is available, your iPhone will display the option to Download and Install. Tap on it to begin the process.

  • Once the download is complete, you will have the option to update immediately, install later, or select “Remind Me Later”.

  • Tap on Install to update immediately or choose another option according to your preference.

  • If prompted, enter your passcode to proceed.




Saaras V3 beats Gemini, GPT-4o on Indian speech benchmarks, says Sarvam AI




Indian AI startup Sarvam AI has released a new version of its speech recognition model, Saaras V3, and says it outperforms several widely used global systems, including Google’s Gemini 3 Pro, OpenAI’s GPT-4o Transcribe, Deepgram Nova-3, and ElevenLabs Scribe v2, on benchmarks focused on Indian languages and Indian-accented English.

 


The company’s co-founder Pratyush Kumar shared the results in a post on X, alongside benchmark charts comparing Saaras V3 against competing models on the IndicVoices and Svarah datasets. According to him, Saaras V3 recorded a lower word error rate than the other models across the most widely used Indian languages in the IndicVoices benchmark and also led on the Svarah benchmark, which focuses on Indian-accented English.

 
 


On the subset of the 10 most popular languages in the IndicVoices dataset, Sarvam reports that Saaras V3 achieved a word error rate of about 19.3 per cent, compared to higher error rates for Gemini 3 Pro, GPT-4o Transcribe, Deepgram Nova-3, and Scribe v2. The company also said the performance gap widens on the remaining languages in the dataset, which include several lower-resource Indian languages.
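Word error rate, the metric behind these comparisons, counts the word-level substitutions, insertions and deletions needed to turn a model’s transcript into the reference transcript, divided by the number of words in the reference. A minimal sketch of the standard calculation (illustrative only, not Sarvam’s evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count,
    computed via word-level Levenshtein edit distance."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of five reference words -> 0.2 (20 per cent)
print(word_error_rate("please call me back today", "please call me back tomorrow"))
```

By this measure, a WER of about 19.3 per cent means roughly one in five reference words was transcribed incorrectly in some way.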


Saaras V3 on IndicVoices benchmark (Source: Sarvam)

On the Svarah benchmark, which is built around Indian-accented English speech from speakers across multiple states, Saaras V3 again recorded the lowest word error rate among the compared systems, according to the figures shared by Sarvam. 


Saaras V3 on Svarah benchmark (Source: Sarvam)


Saaras V3: What’s new


Sarvam says Saaras V3 is built on a new architecture and expands support to all 22 scheduled Indian languages, along with English. A key change in this version is native support for real-time, streaming speech recognition, where the model begins producing text while audio is still coming in, instead of waiting for the full clip to finish.

 


According to the company’s technical blog, Saaras V3 is trained on more than one million hours of multilingual audio covering different Indian languages, accents, and recording conditions, with a focus on code-mixed and noisy speech. Training involved large-scale pre-training, followed by supervised fine-tuning and reinforcement learning, and additional post-training steps aimed at reducing long-tail errors and improving consistency across languages.


Sarvam says the streaming version of the model is designed to keep accuracy close to the batch mode while reducing latency, making it suitable for use cases such as live captions, voice assistants, call-centre tools, and real-time transcription.


Beyond basic transcription


Sarvam says Saaras V3 is positioned as more than a simple speech-to-text system. The model supports automatic language detection, word-level timestamps, and speaker diarisation, which allows it to separate and label different speakers in a conversation. These features are aimed at use cases such as call analytics, meeting transcripts, media subtitling, and customer support tools, where structure and speaker attribution matter in addition to raw text.

 


The company has also exposed different operating modes that trade off latency and accuracy, ranging from a “fast” mode focused on low time-to-first-token to more accuracy-focused settings for applications where transcription quality is the priority.


Sarvam Vision and earlier benchmark claims


The Saaras V3 results follow earlier benchmark claims by Sarvam around its document-focused models. In previous disclosures, the company said its Sarvam Vision model posted higher accuracy scores than several general-purpose systems on tests focused on document OCR, layout understanding, and multi-script Indian documents. Those evaluations covered tasks such as reading order detection, table parsing, and handling complex page layouts, areas where models trained mainly on Western and English-language data often struggle with Indian scripts and formats.


Sarvam has positioned Sarvam Vision as a vision-language system built specifically for documents rather than for general image understanding, combining a core model with separate components for layout and structure analysis. The company has argued that this task-specific approach, along with training on Indian-language and Indian-format data, explains the performance differences seen in those benchmarks. The Saaras V3 results extend that same argument into speech recognition, particularly for Indian languages, code-mixed inputs, and Indian-accented English.


What is Sarvam AI


Sarvam AI is a Bengaluru-based startup focused on building speech, language, and multimodal AI systems for Indian use cases. Instead of training a single general-purpose chatbot, the company has been developing a set of task-specific models aimed at areas such as speech recognition, speech synthesis, translation, and document understanding, where performance depends heavily on how well systems handle local languages, scripts, and formats.

 


Alongside Saaras, its speech recognition line, Sarvam’s portfolio includes Bulbul, a text-to-speech system for Indian languages; Saarika, a speech-to-text model focused on transcription; Mayura, a text translation model; and Sarvam-M, a multilingual reasoning language model. On the vision side, Sarvam Vision is its document understanding model designed for OCR and layout-aware reading of scanned and photographed documents. The company has also built applications such as Samvaad, a voice-based conversational system that runs on top of its speech and language models.

 


It is one of the 12 startups working with the Indian government under the IndiaAI mission to develop indigenous multilingual and multimodal large language models.




AI Impact Summit 2026: How India plans to deploy, govern and procure AI




As New Delhi prepares to host the India-AI Impact Summit 2026 from February 16 to 20 at Bharat Mandapam, it is also positioning itself as the first country in the Global South to convene a global, government-led conversation on artificial intelligence. The summit, which is anchored around the themes of People, Planet and Progress, is designed to move the AI debate beyond principles and into questions of deployment, governance and state capacity, especially how governments build, buy, and use AI systems at scale.

 


It is within this larger global conversation that India’s AI governance and procurement policies, covering how it plans to regulate, deploy and buy AI systems, will take shape.

 
 


The IndiaAI Mission, launched in 2024 with a budget of ₹10,372 crore, anchors India’s approach to artificial intelligence. The government treats the initiative as its primary vehicle for deploying AI technologies across public services while building the foundational systems needed for adoption at scale.

 


IndiaAI Mission: The backbone of state-led AI deployment 


The IndiaAI Mission is structured around seven deployment pillars: compute capacity, datasets, innovation centres, application development, future skills, startup financing, and safe and trusted AI. Together, these pillars are meant to address the full lifecycle of AI in government, from infrastructure and data to skills, governance and use cases.

 


A key focus is compute capacity, with the government planning access to large-scale AI infrastructure, including high-performance GPUs, to support public sector projects, startups and researchers. Alongside this, the mission emphasises creation and curation of high-quality datasets for public good applications, particularly in healthcare, agriculture and governance.

 


The deployment push connects directly to the Digital India initiative, as AI systems are increasingly integrated into digital public infrastructure and into the data-driven decision-making systems used by ministries, state governments and local municipalities. The AI Impact Summit itself has been positioned as a platform to showcase and assess these deployments rather than to announce new policy.

 


From NPAI to IndiaAI: how the framework evolved 


India’s current AI architecture builds on earlier institutional efforts. The National Program on Artificial Intelligence (NPAI), launched by the Ministry of Electronics and Information Technology (MeitY), functions as an umbrella initiative focused on social impact, inclusion and innovation.

 


NPAI rests on four pillars: a National Centre on AI, a Data Management Office, skilling in AI, and responsible AI. These elements now operate in parallel with, and are complemented by, the broader IndiaAI Mission, which expanded the scope in 2024 to include large-scale compute, startups, and application deployment.

 


The National Centre on AI conducts applied research and pilot projects in key areas including healthcare, agriculture, education and smart cities, working through partnerships with academic institutions and industry. These pilots feed into government deployment strategies rather than remaining standalone research projects.

 


How the government is procuring AI systems 


India does not yet have a single, unified AI procurement policy. Instead, procurement is being shaped through revised norms, mission guidelines and existing public procurement platforms.

 


Under the IndiaAI Mission, MeitY revised eligibility conditions in 2024 to widen participation. Minimum turnover requirements for primary bidders were reduced from ₹100 crore to ₹50 crore, and for consortium members to ₹25 crore. Technical thresholds for AI compute procurement were also relaxed, including lower GPU performance and memory requirements, to allow more domestic firms to compete.

 


Procurement is aligned with Make in India rules, with preference for Class I and Class II suppliers, reinforcing local sourcing and domestic capacity building.

 


Separately, AI tools are being used to improve procurement processes themselves, including efficiency enhancements on the Government e-Marketplace (GeM) platform.

 


Budget support for India’s AI climate 


The Union Budget 2026-27 has reinforced this direction, allocating funds for AI computation and skilling through the India Semiconductor Mission 2.0, thereby complementing IndiaAI’s infrastructure push.

 


The India-AI Impact Summit will provide an operational platform for these ideas to be tested, refined and translated into action, including how public procurement will evolve to match India’s stated goals of safe, inclusive, and accountable AI integration across governance and public services.




Elon Musk restructures xAI's teams following co-founders' departure




By Carmen Arroyo


 
Elon Musk said he restructured xAI, his artificial intelligence startup, following the exit of two of its co-founders earlier this week.


xAI will be organized into four core areas, the billionaire told staff in an all-hands meeting on Wednesday: Grok’s chatbot and voice product; coding; the Imagine video product; and Macrohard, an AI software company run by digital agents. Musk later made the meeting public on the social network X.

 


“What matters is velocity and acceleration,” he told employees. “If you are moving faster, you will be the leader.” He also thanked the people who have departed the company.

 
 


The meeting followed the back-to-back departures of Jimmy Ba and Tony Wu, two of the startup’s co-founders, along with a handful of other staff members who’ve left over the past few days.

 


Aman Madaan, who joined xAI in 2024, is leading the main chatbot and voice division. In the meeting, he noted that xAI is quickly developing its models, spurred on by the success of OpenAI’s voice model. “We had nothing, but in six months we developed it from scratch,” he said.

 


Co-founder Manuel Kroiss will lead the coding team, while Guodong Zhang, another co-founder, will oversee video generation and also help with coding. Toby Pohlen, also part of the founding team, will be in charge of Macrohard, a division whose name is a play on Microsoft Corp.

 


“Most of the AI compute is gonna be understanding real-time video generation,” Musk said. “And we expect to be leaders in that.”

 


They all emphasized that xAI is looking to hire. 

 


Twelve original xAI co-founders, including Musk, launched the company in 2023. Ba and Wu are the fifth and sixth from that group to exit in the past two years. Kyle Kosic left in 2024, followed by Igor Babuschkin and Christian Szegedy in 2025. Greg Yang, another co-founder, said last month that he would step back from his role after being diagnosed with Lyme disease. 

 


The exits follow xAI’s recent merger with SpaceX, a move that valued the combined company at $1.25 trillion, Bloomberg reported. That deal could ease a funding crunch for xAI, which has been raising large sums of capital as it burns through cash in its bid to build out data centers, buy expensive computing chips and pay for talent.

 


xAI has a large Colossus data center site in Memphis, Tennessee, and is planning an expansion of the complex. The company has already purchased a third building in the area that will bring its computing capacity to almost 2 gigawatts, Musk said late last year. That expansion, which is technically across state lines in Mississippi, will include an investment north of $20 billion. The new building, which Musk has dubbed Macroharder, will require 10,000 to 20,000 GB300 systems, Musk said in the call.

 


Nikita Bier, who is in charge of X’s product, said that the social network and its adjacent apps, including Grok, have reached about 1 billion users. January was the best month ever in terms of engagement for X, he said. He also noted that new users spend 55 per cent more time a day in the app than they did six months ago. The app “has been rebuilt to be better than ever,” and is now generating $1 billion in annual recurring revenue tied to subscriptions, he said.

 


Musk said the company will launch a new X Chat app for those who only want to use it for messaging. He reiterated that he won’t be adding ads to Grok. X Money, an initiative the social network has been working on for years that will let users send money within the app, will be available to a limited number of external test users in the coming months, he said. “It’ll be the place where all the money is. It’s going to be a game changer,” Musk said.



