Now you can listen to Gemini-powered summaries in Google Docs: How it works



Google adds Gemini-powered audio summaries in Docs (Image: Google)


Google has started rolling out a new AI-powered feature in Google Docs that lets users listen to short audio summaries of their documents. According to Google, audio summaries in Docs provide a brief verbal synopsis of the contents of users’ documents, including multiple tabs. The update builds on the Text-to-Speech feature in Docs, introduced in August 2025, which uses Gemini to turn written content into spoken audio.


Google Docs Gemini-powered audio summary: How it works


According to Google’s blog, the new feature is designed to give a brief overview of a document. Instead of going through long reports or detailed notes, users can listen to a short audio summary that highlights the most important parts. The company said the summaries typically run a few minutes or less and use natural-sounding voices to enhance the listening experience.

 
 

Additionally, users can personalise their listening experience. The feature offers different voice styles such as narrator, persuader, and coach. Playback speed can also be adjusted, from 0.5x to 2x, depending on how fast or slow someone prefers to listen. 

 


Google explained that someone can quickly catch up on meeting notes before a discussion or listen to the highlights of a long report while doing other tasks. The company added that the tool can also help users follow along with their writing to improve understanding or even spot errors more easily.

 

On the web version of Google Docs, the feature can be found under the Tools menu. Along with the existing “Listen to this tab” option, users will now see “Listen to document summary.” When selected, it opens an audio player with playback controls, a timeline scrubber, and options to change the voice style. 

 


Google said the feature will roll out over the coming weeks to select subscribers, including:


  • Business Standard and Plus

  • Enterprise Standard and Plus

  • Google AI Ultra for Business add-on

  • Google AI Pro for Education add-on

  • Google AI Pro and Ultra

 

First Published: Feb 13 2026 | 12:30 PM IST




Apple reaffirms its stance to release revamped Siri in 2026: What happened




Apple has reportedly confirmed that there is no delay in the release of the revamped Siri and that it will roll it out to users sometime in 2026. According to a report by MacRumors, Apple confirmed to CNBC that it is on track to release a more intelligent version of Siri this year. This confirmation comes on the heels of a Bloomberg report claiming that the iPhone maker has postponed the release of the overhauled AI assistant and won’t be releasing it with iOS 26.4, which was expected earlier.


What went down


Earlier, there were reports from multiple publications suggesting that Apple was internally planning to release the revamped Siri with iOS 26.4, which was expected in March. However, more recently, a report from Bloomberg claimed that the iPhone maker postponed this internal target and pushed the rollout of its features to iOS 26.5 and iOS 27.

 
 


According to the report, Apple’s engineering teams have been asked to evaluate the forthcoming Siri upgrades on iOS 26.5, suggesting the features may not arrive until a later update cycle.

 


Initial internal versions of iOS 26.5 are also said to feature a dedicated settings toggle allowing staff to enable a “preview” version of the changes. This indicates Apple could introduce the enhancements as an early-access rollout, making it clear that the features may still be in development or prone to inconsistencies, much like its approach to beta releases.

 


To be clear, Bloomberg did not suggest that this will be pushed beyond iOS 27, meaning the report also hinted that the revamped version of Siri will be released this year. Apple’s statement is in line with this, though notably the company did not commit to tying the release to any specific version of iOS 26.


Revamped Siri: What to expect


In a joint statement issued last month, Apple and Google confirmed a long-term partnership. Under this deal, Apple’s upcoming foundation models will be developed using Google’s Gemini models along with its cloud infrastructure. The tie-up will power new features, including a more customised version of Siri and upgrades connected to Apple Intelligence.

 


The updated Siri is anticipated to introduce several new capabilities, such as a chatbot-style interface, improved personal context recognition, on-screen awareness, and the ability to execute app-related tasks without launching them.


Siri as chatbot


As per earlier reports from Bloomberg, Apple is working on a project internally known as “Campos” to transform Siri into a more advanced AI chatbot, potentially comparable to Gemini. Unlike the current assistant, which focuses on short responses and command-driven requests, the enhanced version is expected to handle longer interactions and more complex queries. It may support both text and voice conversations and could gradually replace Siri’s existing interface across Apple devices. 


Personal context understanding


The upgraded Siri is likely to better interpret user information stored across emails, messages, photos, calendar entries, and on-device files. Apple has earlier demonstrated how Siri could pull specific information — such as extracting a licence number from an image — or highlight suggestions mentioned in message conversations.


On-screen awareness


Another reported improvement would allow Siri to understand and respond to whatever content is visible on the screen. For example, users might be able to ask Siri to save contact details from a message or create a reminder based on an email currently open.


In-app actions


Siri is also expected to perform tasks within apps without users needing to open them manually. Previous demonstrations showed actions like finding a photo, editing it, and saving it to a folder completed entirely through voice commands.




Google rolls out native YouTube app for Apple Vision Pro: Here's what's new



Google has launched a dedicated YouTube app for the Apple Vision Pro headset, bringing a native experience to visionOS users. With this release, users can watch standard YouTube videos as well as YouTube Shorts inside a fully immersive environment. It includes a dedicated Spatial section for discovering 3D, VR180, and 360-degree videos. Additionally, it supports high-resolution playback, including up to 8K.


YouTube app for Apple Vision Pro: Details


One of the key highlights of the native YouTube app for visionOS is a dedicated Spatial tab. This section helps users discover spatial content, including 3D, VR180, and 360-degree videos. These formats are particularly suited to mixed-reality viewing and make better use of the headset’s capabilities. For the newer Vision Pro model powered by the M5 chip, the app also supports 8K resolution playback, offering improved clarity for compatible videos.

 
 


The new app is available on the visionOS App Store and supports both M2 and M5 chip models of the headset. According to its App Store listing, the app requires visionOS 26 or later and is available in 77 languages, including Hindi, English, Tamil, Telugu, Urdu, Marathi, and more.

 


The app is designed to work with visionOS gesture controls. Users can resize video windows, scrub through timelines, and navigate the interface using hand movements, without the need for physical controllers.

 

Since the launch of the first-generation Vision Pro headset in 2024, YouTube has been accessible through web browsers. While Google said in February 2024 that a dedicated app for visionOS was on its roadmap, the company did not reveal a release schedule. The dedicated app for Apple’s mixed-reality headset comes at a time when Google’s own Android XR platform for mixed-reality headsets and smart glasses has started to appear in the market, with devices such as Samsung’s Galaxy XR headset. 

 


In contrast, several major streaming platforms such as Disney+, Amazon Prime Video, Paramount+, and Peacock had already launched their dedicated apps for the Vision Pro.




Apple to open sixth store in India at Borivali on Feb 26, second in Mumbai



The company opened its first store in Mumbai and second store in Delhi in April 2023 (Photo: Reuters)


iPhone maker Apple has unveiled the barricade for its upcoming Apple Borivali store, which will be its sixth store in India and second in Mumbai, the company said on Friday.


The company opened its first store in Mumbai and second store in Delhi in April 2023.


It continued its expansion with three store openings in 2025: Apple Hebbal in Bengaluru, Apple Koregaon Park in Pune, and most recently Apple Noida.


“Apple Borivali opens Thursday, February 26,” Apple said in a statement.


The company’s barricade at the Borivali store features the distinctive peacock-inspired visual identity first introduced at the opening of Apple Hebbal in Bengaluru, followed by Apple Koregaon Park in Pune and Apple Noida.

 


“The design signals confidence, detail, and a sense of arrival, seen through Apple’s lens of creativity. Apple Borivali will serve a growing community of startups and businesses,” the statement said.

(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)

First Published: Feb 13 2026 | 11:21 AM IST




Nvidia GeForce Now app launched for Amazon Fire OS users: Eligible devices




Nvidia has rolled out a dedicated GeForce Now app for devices running Amazon’s Fire OS. This means Fire TV Stick users can now access the cloud gaming service and stream supported titles from their existing libraries. Apart from the Fire TV hardware, players only need a compatible controller and a stable internet connection to start gaming. This also puts Nvidia in direct competition with Amazon, which offers its own cloud gaming service called Luna.

 


Notably, Nvidia GeForce Now is not yet available in India. However, it is set to debut soon, as confirmed on the company’s website. Amazon Luna is also not available in India at the moment. 

 


Nvidia GeForce Now App: Compatible devices


In a blog post, Nvidia said the GeForce Now app for Fire TV is currently supported on the Fire TV Stick 4K Plus (2nd Gen) and Fire TV Stick 4K Max (2nd Gen) running Fire OS 8.1.6.0 or newer, as well as the Fire TV Stick 4K Max (1st Gen) with Fire OS 7.7.1.1 and above.

 


The service streams games at up to 1080p at 60fps in SDR using H.264 encoding and stereo audio, giving Fire TV users another way to play on larger screens. 


What is Nvidia GeForce Now


Nvidia GeForce Now is a cloud gaming service that allows users to stream PC games from remote servers to devices such as laptops, desktops, smartphones, smart TVs, and streaming devices. Instead of running games locally, titles are rendered in Nvidia’s data centres and streamed over the internet, enabling users to play high-performance games without requiring powerful hardware.

 


The service supports games purchased from digital storefronts including Steam, Epic Games Store, and other supported platforms, letting players access their existing libraries. GeForce Now offers multiple membership tiers, with higher tiers providing benefits such as faster servers, extended session lengths, and access to advanced graphics features including ray tracing and higher frame rates.




Most white-collar jobs will be automated in 12-18 months: Microsoft AI CEO




Amid the rapid rise of artificial intelligence (AI) and ongoing layoffs across industries, Microsoft AI chief executive officer (CEO) Mustafa Suleyman has warned that AI will replace a significant share of white-collar jobs within the next 12 to 18 months.

 


Suleyman cautioned that the impact will extend beyond coders and software engineers to professionals such as lawyers, accountants, project managers, and marketing executives.

 


In an interview with the Financial Times, Suleyman said: “White-collar work, where you’re sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person, most of those tasks will be fully automated by an AI within the next 12 to 18 months.”

 
 


He further added that AI agents are expected to coordinate more effectively within the workflows of large institutions over the next two to three years. These AI tools will continue to learn and improve over time, taking increasingly autonomous actions.

 

Suleyman also noted that as AI advances, creating new models will become easier and more accessible. “Creating a new model is going to be like creating a podcast or writing a blog,” he said. “It is going to be possible to design an AI that suits your requirements for every institutional organisation and person on the planet,” he added, as reported by the Financial Times.

 

His comments come at a time when companies are accelerating the adoption of AI. Recently, Anthropic’s Claude model shook stock markets after raising concerns about the future of Software-as-a-Service (SaaS) providers and IT services firms such as Infosys and Tata Consultancy Services (TCS).

 


Microsoft pursuing “true self-sufficiency” in AI

 


To keep pace with AI’s rapid evolution, Microsoft is pursuing what Suleyman described as “true self-sufficiency” in AI by developing its own powerful foundation models and reducing reliance on OpenAI.

 


“We have to develop our own foundation models, which are at the absolute frontier, with gigawatt-scale compute and some of the very best AI training teams in the world,” Suleyman told the Financial Times. Microsoft is investing heavily in assembling and organising the vast datasets required to train advanced AI systems.

 

This shift follows a restructuring of Microsoft’s relationship with OpenAI last year. Microsoft holds nearly a 27 per cent stake in the ChatGPT maker, valued at $135 billion. The agreement secures Microsoft’s long-term access to OpenAI models until at least 2032, while also giving OpenAI greater freedom to seek new investors and infrastructure partners.

 


In a bid to sharpen its competitive edge, Microsoft is also investing in other model developers such as Anthropic and Mistral. According to Suleyman, the company has accelerated development of its own in-house models, with a launch expected sometime this year.

 


Microsoft has forecast capital expenditure of $140 billion in its fiscal year ending in June, as it ramps up spending on the infrastructure required to build advanced AI systems.

 


Microsoft plans AI expansion in healthcare

 

According to Suleyman, healthcare is another key focus area for Microsoft. The company aims to build what he described as “medical superintelligence”: AI systems capable of helping address staffing shortages and long waiting times in overstretched healthcare systems. Last year, Microsoft unveiled an AI diagnostic tool that it claims can outperform doctors on certain tasks.

 


He added that Microsoft’s broader goal is to develop “humanist superintelligence,” ensuring that AI technologies remain under human control. This stance comes amid growing concerns that rival AI labs are racing to build increasingly powerful systems that may resist oversight by their creators.

 


Suleyman said: “These tools, like any other past technology, are designed to enhance human wellbeing and serve humanity, not exceed humanity,” the Financial Times reported.



