BharatGen to launch 17B-parameter multilingual AI model at AI Impact Summit

BharatGen is set to launch a 17-billion-parameter multilingual AI model, called Param2, at the India AI Impact Summit 2026, which kicks off on February 16 in New Delhi. According to the announcement, the model will support 22 Indian languages and is positioned as part of India’s broader push to develop “sovereign” AI systems trained on domestic data and run on local infrastructure.

 


The model is being developed under the BharatGen consortium, which operates out of the Technology Innovation Hub at IIT Bombay and is supported by the Department of Science and Technology. The programme is part of the wider IndiaAI Mission, which is funding compute infrastructure, data platforms and model development for public-sector and national-scale AI projects.

 


BharatGen Param2: What it is


Param2 is described as a 17-billion-parameter “mixture of experts” model built to work across 22 Indian languages. According to the release, it has been trained on Indian-language data and is intended to be used across text, speech and vision tasks, rather than only as a text-based language model.

 


BharatGen said the model will be demonstrated at the summit through sector-focused applications developed with government and industry partners. These include use cases in governance, healthcare, education, cultural digitisation and financial services. Examples cited include a government-facing system for urban development and revenue departments, healthcare applications for doctor–patient interaction, and tools for digitising and accessing archival and cultural documents.

 


BharatGen also said the underlying models are being developed to support reasoning, maths and code-related tasks, and are being trained using infrastructure made available under the IndiaAI Mission, along with data drawn from what it calls the Bharat Data Sagar repository.


Why multilingual AI models matter for India


The focus on a large, multilingual model reflects a broader policy direction that has been visible across multiple government-backed AI projects over the past few years. Public services in India operate across dozens of languages, and much of the information citizens interact with comes in a mix of formats, including text, scanned documents and speech.


Text-only, English-first systems are a poor fit for many of these workflows. Multilingual models are meant to reduce that gap by allowing the same system to work across languages, rather than relying on separate tools or manual translation layers. This is also why platforms such as BHASHINI and projects like Adi Vaani and BharatGen have focused on language coverage as a core design goal, rather than as an add-on.

 


In practice, this approach is aimed first at government and public-sector use cases, such as forms, advisories, helplines and document processing, where language and format diversity is the norm rather than the exception.


How this fits into India’s wider “sovereign AI” push


BharatGen’s Param2 model sits alongside a growing set of India-focused AI efforts that are being framed around domestic data, local infrastructure and language coverage. In earlier work, the programme has already released models for text, speech recognition, text-to-speech and document understanding, and Param2 extends this into a larger, general-purpose multilingual system.


In parallel, Indian startups have also been building models focused on local languages and use cases. Bengaluru-based Sarvam AI, for instance, has reported benchmark results for its speech and document models on Indian-language tasks, where it says its systems outperform several global models on specific Indic benchmarks. That work, like BharatGen’s, is focused less on general chatbots and more on tasks such as speech recognition, OCR and document understanding, where language coverage and script handling make a measurable difference.






Facebook now lets you animate profile photos using Meta AI: What's new



Facebook Meta AI-powered feature update (Image: Meta)


Facebook is introducing a set of new features powered by Meta AI that make profiles, Stories and Feed posts more visually engaging. The update allows users to animate their profile pictures, restyle images in Stories and Memories, and add animated backgrounds to text posts. According to Meta, the goal is to give people more creative ways to express themselves without needing advanced editing skills.


Facebook update: What’s new


Animate profile picture

 

The update adds the ability to animate profile pictures. Users can turn a still image into a short animation using preset effects such as natural movement, confetti, party hat, wave, or heart. Meta said that more animation options are expected to be added over time, including seasonal effects. Facebook recommends using a clear photo of a single person facing the camera, without objects blocking the face. Users can choose a picture from their camera roll or from photos already uploaded to Facebook. Once animated, the image can be shared to the Feed and displayed on the user’s profile.

 


Restyle feature

 

As part of the update, Facebook is also introducing a feature called “Restyle,” which lets users modify the appearance of photos in Stories and Memories. After choosing an image, users can tap the Restyle option to apply preset effects or enter a custom text prompt to generate a new look. The tool includes categories such as Styles (anime or illustrated), Moods (like glowy), Lighting effects (such as ethereal), various colour tones, and background options like a beach or cityscape. Facebook will also recommend older Memories that can be refreshed with these effects before being shared again. 

 


Add animated backgrounds to text posts

 


The platform is also gradually rolling out animated backgrounds for text posts in the Feed. When creating a text-only post, users can tap the rainbow “A” icon to choose from different still or moving backgrounds, including designs like falling leaves or ocean waves. Meta said that seasonal backgrounds will be introduced in the future.

 

First Published: Feb 12 2026 | 2:57 PM IST




Google pushes back release of Android 17 beta 1 update: What to expect




Google has reportedly postponed the Android 17 beta update, which was supposed to go live on February 11 for select Pixel devices. According to a report by Android Central, Google had reached out to several publications about the launch; however, its Android Developers blog post and other official documentation never went live. The company then reportedly confirmed that the Android 17 beta is coming soon but will not be released on February 11.

 


Google has not officially shared a timeline for when Android 17 beta will go live. However, it has clarified the shift in its rollout strategy this year. Instead of launching a separate Developer Preview build for Android 17 first, the company will move directly to the public beta phase.

 
 


Google said users already enrolled in the Android Beta Program and running Android 16 QPR3 Beta 2.1 will automatically receive the Android 17 Beta 1 update, without needing to register again. Devices that remain in the beta programme will continue to get subsequent beta releases. However, once Android 17 beta is installed, users typically cannot revert to a stable version without wiping their device until the beta cycle concludes, which is expected around June 2026. Those who do not wish to test Android 17 are advised to opt out via Google’s official beta website and wait for the stable Android 16 QPR3 update.


Android 17 beta: What to expect


With Android 17, Google is reportedly stepping up efforts to ensure apps work better on large-screen devices. Smartphones such as the Samsung Galaxy Z Fold 7, Pixel 10 Pro Fold, other foldables, and even Android’s desktop mode continue to face issues when apps are not designed for bigger displays.

 


As per Android Central, with the next major Android update, Google is entering the “next phase” of its adaptive app strategy, removing the option for developers to bypass orientation and resizability standards. This effectively means apps will be required to properly scale and function on large-screen form factors.

 


As per the report, the update will also introduce other enhancements, including a new generational garbage collection system aimed at lowering CPU load, along with tighter notification controls intended to reduce memory usage. In addition, Android 17 might bring upgraded tools for media and camera applications, designed to improve audio consistency across apps and devices, and enable smoother switching between camera modes.

 




India AI Impact Expo 2026: Nvidia, Google, and OpenAI among 400 exhibitors




Dominant AI ecosystem players Nvidia, Google and OpenAI are among 400 exhibitors who will participate in the five-day India AI Impact Expo 2026, a senior government official said on Thursday.


Software Technology Parks of India (STPI) Director General Arvind Kumar told PTI that the expo will serve as a matchmaking platform for AI ecosystem players, where Indian innovators will also showcase their potential.


“Leading AI ecosystem players, including NVIDIA, Google, and OpenAI, will be among 400 exhibitors at the India AI Impact Expo. Their top executives have also confirmed their participation. They will also hold meetings with Indian companies,” he said.

 


Kumar said over 100 countries have confirmed participation in the summit, which includes 50 ministerial-level delegations.


He said that the preparation for the India AI Impact Summit and Expo started immediately after Prime Minister Narendra Modi announced the hosting of the next AI summit in India.


“We have started work on the expo. The entire summit, comprising the expo, will be held between February 16th and 20th. Almost all the technology companies in the country are participating in this. Many government departments and ministries will also participate in it,” he said.


The expo will be held in an area of about 75,000 square metres in Pragati Maidan.


Kumar said the expo will also host sessions to connect start-ups with investors.


Around 700 sessions are planned over the five days for discussions on AI and its impact.


The India AI Impact Summit will be structured around three core pillars of People, Planet and Progress, and discussions will focus on employment and skilling, sustainable and energy-efficient AI, and economic as well as social development.


It has seven thematic working groups, co-chaired by representatives from the Global North and Global South, that will present concrete deliverables, including proposals for AI Commons, trusted AI tools, shared compute infrastructure and sector-specific compendiums of AI use cases.




'AI bigger than Covid': New York-based CEO sounds alarm on rapid evolution



New York-based Chief Executive Officer Matt Shumer said that artificial intelligence (AI) is something bigger than the Covid-19 pandemic. He noted that people cannot keep talking about AI in an “eventually we should discuss this” way, and instead need to understand that “this is happening right now”.

 


Shumer is the CEO of OthersideAI, which offers HyperWrite, an AI-assisted writing tool. In a blog post titled ‘Something Big Is Happening’, Shumer sounded an alarm that AI is changing things rapidly. He said AI is moving from being a “helpful tool” to something that “does my job better than I do”.


What is going to change?


Shumer said that AI is likely to take on roles across law, finance, medicine, accounting, consulting, writing, design, analysis, and customer service. “Not in 10 years. The people building these systems say one to five years. Some say less. And given what I’ve seen in just the last couple of months, I think ‘less’ is more likely,” he wrote.

 
 


Commenting on those who believe AI is not good enough, Shumer said: “If you tried ChatGPT in 2023 or early 2024 and thought ‘this makes stuff up’ or ‘this isn’t that impressive’, you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense. That was two years ago. In AI time, that is ancient history.”


How is AI developing?


Shumer said the hardest part used to be writing code for AI. Now, AI is writing code to improve itself. He added that the latest models show “judgment” and “taste” and can make intelligent decisions that many once believed AI would never handle.

 


He also shared how quickly AI has evolved:


  • 2022: AI struggled with basic arithmetic and could give wrong answers like 7 × 8 = 54.

  • 2023: It could pass the bar exam.

  • 2024: It could write working software and explain graduate-level science.

  • Late 2025: Some top engineers said they had handed over most of their coding work to AI.

  • February 2026: New models arrived that made previous versions feel outdated.


What will be its impact on jobs?


According to Shumer, AI is not replacing just one skill; it is becoming a general substitute for cognitive work. Here’s what AI can do effectively across industries:


  • Legal work: Reviews contracts, summarises case law, drafts briefs, and conducts research at the junior associate level.

  • Financial analysis: Builds financial models, analyses data, writes investment memos, and generates reports.

  • Writing and content: Produces marketing copy, journalism, and technical writing that many cannot distinguish from human work.

  • Software engineering: Writes large volumes of functional code and automates complex, multi-day projects; significant job reduction likely.

  • Medical analysis: Interprets scans, analyses lab results, suggests diagnoses, and reviews medical literature.

  • Customer service: Handles complex, multi-step customer queries, far beyond traditional chatbots.


“A lot of people find comfort in the idea that certain things are safe, that AI can handle the routine work but not human judgment, creativity, strategy, or empathy. I used to say this too. I’m not sure I believe it anymore,” he said.


What does Shumer suggest?


Shumer said the person who can say in a meeting, “I used AI to finish this in one hour instead of three days,” instantly becomes more valuable. He urged professionals to learn the tools, practise using them, and demonstrate results.

 


He warned that those who dismiss AI as a fad or feel threatened by it risk falling behind. At the same time, he advised strengthening financial stability by building savings, limiting unnecessary debt, and maintaining flexibility.

 


He recommended focusing on skills that are harder to replace, such as trusted relationships, licensed responsibility, physical presence, and roles in regulated industries. These may not offer permanent protection, but they provide time to adapt.

 


Finally, Shumer encouraged building a habit of constant learning. Since AI tools evolve rapidly, adaptability is key. He suggested spending one hour a day actively using and experimenting with AI. Over six months, he said, that alone could put someone ahead of most of their peers.




Threads introduces 'Dear Algo' AI feature to personalise feed: How it works




Threads is rolling out a new feature called “Dear Algo” that gives users more control over what they see in their feed. According to the latest blog from Meta, the AI-powered tool allows people to temporarily adjust their feed by telling the platform what kind of posts they want to see. Instead of relying solely on the usual “Not Interested” button, users can now submit a request that temporarily changes their feed. The feature is currently available in the US, New Zealand, Australia, and the UK. The company is planning to expand it to more countries soon.


Threads’ ‘Dear Algo’ feature: How it works


Meta explains that to use Dear Algo, users must create a public post beginning with the phrase “Dear Algo,” followed by what they want to see more or less of. For instance, if someone wants more podcast-related content, they can write, “Dear Algo, show me more posts about podcasts.” Once the request is posted, Threads adjusts that user’s feed for three days.

 
 

One key point to note is that these requests are made publicly. This means other users can view them and even repost someone else’s request to apply the same preferences to their own feeds. While this approach could turn personalisation into a more shared experience and help users discover new topics, some people may feel uneasy about making their content preferences visible to everyone. 

 

In a blog post, the company said the platform is meant for keeping up with what’s happening in real time, and sometimes users want their feed to quickly match what they care about at that moment. The feature is designed to reflect how quickly interests can change. 

 


Although Threads, X, and Bluesky already allow users to hide unwanted content with the ‘Not interested’ option, Dear Algo goes further by letting people temporarily reshape their feed through a direct request.

 


