Qi2 wireless charging debuts on Android with HMD Skyline launch: What is it



Wireless charging (Image: Shutterstock)


Finnish mobile phone brand HMD (Human Mobile Devices) has debuted Qi2 wireless charging in the Android smartphone space with the launch of the Skyline. This is an important development because the second-generation wireless charging standard from the Wireless Power Consortium (WPC) was introduced in early 2023 but has so far been skipped by big technology players such as Google and Samsung, despite the growing adoption of wireless charging in smartphones. Nevertheless, more brands are expected to follow suit now that there is a smartphone with Qi2 support. On that note, let us explore the details of the Qi2 wireless charging standard:


What is Qi2


Qi2 is the latest open wireless charging standard from the Wireless Power Consortium (WPC), which is a group that works with technology companies to set standards for safety, efficiency and interoperability of wireless power applications.


The Qi2 charging standard is based on Apple’s MagSafe technology, which allows Qi2-branded devices to feature a ring of magnets around the charging coil for improved alignment with chargers, ensuring faster wireless charging.


Benefits of Qi2 charging


Qi2-certified devices support 15W fast charging with compatible chargers, whereas the baseline profile of the previous-generation Qi standard was limited to 5W. Additionally, Qi2 wireless charging relies on electromagnetic coils that create a magnetic field, and the magnetic ring ensures better alignment between charger and device, improving charging efficiency.


While Qi2-compatible chargers are backward compatible, which essentially means they can charge Qi-certified smartphones, they work best with Qi2-certified devices thanks to the magnetic alignment feature. Beyond charging improvements, the presence of a magnetic ring within the device opens up options for new smartphone accessories such as stands, wallets and more.


Users of Qi-supporting devices can also purchase a case with a magnetic ring to enable magnetic attachments similar to those on Qi2-supporting devices.

First Published: Jul 26 2024 | 4:25 PM IST




POCO M6 Plus 5G with 108MP camera launching on August 1: What to expect




Xiaomi’s spinoff smartphone brand POCO has set the launch of the M6 Plus 5G smartphone in India for August 1. The POCO M6 Plus will join the POCO M6 Pro in the company’s M6 smartphone series. Ahead of the launch, POCO has confirmed that the M6 Plus will feature a dual-camera system on the rear, with a 108-megapixel main sensor with an f/1.75 aperture. POCO said the megapixel-rich main sensor will enable 3x in-sensor zoom.


POCO M6 Plus: What to expect


The upcoming POCO M6 Plus smartphone could be a rebranded version of the Redmi 13 5G smartphone, which was launched in India earlier this month. If true, the POCO M6 Plus would sport a 6.79-inch fullHD+ display with a 120Hz refresh rate. While the company has already confirmed that the POCO M6 Plus will feature a 108MP primary rear camera sensor with an f/1.75 aperture, the main camera is likely to be assisted by a 2MP macro camera at the back. The smartphone could get a 13MP front-facing camera.


A Qualcomm Snapdragon 4 Gen 2 chip is expected to power the POCO M6 Plus, paired with up to 8GB of RAM. The smartphone is anticipated to feature a 5030mAh battery and would likely support 33W wired charging.


POCO M6 Plus: Expected specifications


  • Display: 6.79-inch display, FHD+ resolution, 550 nits peak brightness

  • Processor: Qualcomm Snapdragon 4 Gen 2 chip

  • RAM: up to 8GB RAM

  • Storage: 128GB

  • Rear camera: 108MP primary + 2MP macro

  • Front camera: 13MP

  • Audio: Bottom firing speakers, 3.5mm jack

  • Battery: 5030mAh

  • Charging: 33W wired

  • OS: Android 14-based Xiaomi HyperOS

First Published: Jul 26 2024 | 3:26 PM IST




Microsoft introduces AI-powered Bing to rival Google Search: What is new




Microsoft is set to revamp the Bing Search experience by leveraging artificial intelligence to generate an overview of search results. The US-based software giant said it is combining the power of generative AI and large language models (LLMs) with the search results page to create a “bespoke and dynamic response” to a user’s query.


Explaining the new experience, Microsoft said that Bing generative search will show a snapshot with information that is easy to read and understand, along with links and sources that show where it came from. Moreover, Microsoft confirmed that the regular search results will continue to be displayed prominently on the page for a familiar experience.


The search feature is shipping for only a small percentage of queries, according to Microsoft, a gradual rollout intended to limit the impact of AI hallucinations while the feature is refined.


The Bing generative search experience is similar to Google’s AI Overviews and is also an extension of the company’s AI-powered chat answers on Bing released in February 2023. The new feature has been built using the company’s large and small language models, combined with generative AI.

Microsoft said the snapshot featuring the content overview will be displayed inside a grey box at the top of the Bing search results page. A document index with various sections will also be shown. Traditional search results move to the right side of the page when Bing generative search appears. The new experience presents a snapshot, source links and related searches.


Microsoft said that Bing generative search understands the query, reviews many sources of information, matches content to the query, and then presents the most suitable results in an AI-generated layout.


Microsoft said that it is looking at how generative search impacts traffic to publishers and will share more updates based on the feedback and learnings.

First Published: Jul 26 2024 | 3:04 PM IST




Google Gemini gets 1.5 Flash model integration for free-tier users: Details



Google Gemini 1.5 Flash (Image: Google)


Google has announced that it is updating the free tier of its Gemini artificial intelligence chatbot with the new Gemini 1.5 Flash model. Google said in a blog post that users can now experience 1.5 Flash in the unpaid version of Gemini for faster and more helpful responses. The company said the new model integration will offer improvements across the board and lower response latency.


Google Gemini 1.5 Flash


Google launched the Gemini 1.5 Flash AI model at its Google I/O developers conference in May. The company said that the new model optimises high-volume, high-frequency tasks at scale and is more cost-efficient. Although lighter than the Gemini 1.5 Pro model, the 1.5 Flash is capable of multimodal reasoning. This essentially means that the AI model is capable of processing images, videos, voice and text.


At the time of the announcement, Google said the Gemini 1.5 Flash model was trained by the bigger Gemini 1.5 Pro model using a process called “distillation”, in which the most essential knowledge and skills from a larger model are transferred to a smaller, more efficient one. Google said this makes Gemini 1.5 Flash optimal for text summarisation, chat applications, image and video captioning, data extraction from long documents and more.
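Google has not published its training recipe, but the core idea of distillation can be sketched in a few lines: a smaller “student” model is trained to match the larger “teacher” model’s softened output distribution, not just its top prediction. The function names and the temperature value below are illustrative, not Google’s:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimising this pushes the student to reproduce the teacher's full
    output distribution ("soft targets"), transferring its knowledge.
    """
    p = softmax(teacher_logits, temperature)   # teacher soft targets
    q = softmax(student_logits, temperature)   # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that mirrors the teacher incurs zero loss
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # 0.0
# A student that disagrees is penalised
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]))
```

A higher temperature flattens the teacher’s distribution, exposing more of its learned similarity structure between classes to the student.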


Google said that the Gemini 1.5 Flash model is now available to free-tier users in the Gemini Mobile app and on the web. The model is available in over 230 countries including India and supports more than 40 languages including English, Hindi, Tamil, Telugu, Malayalam, Kannada, Bengali, Gujarati, Urdu and more.

 

First Published: Jul 26 2024 | 1:48 PM IST




Reddit results not showing up in many search engines, except Google: Report




Reddit’s recent actions in restricting access to its data for search engines other than Google have stirred significant attention and debate within the tech community. Initially, Reddit forged a prominent partnership with Google earlier this year, aimed at integrating Reddit’s vast dataset into Google’s AI models. This collaboration resulted in Reddit’s content gaining heightened visibility in Google Search results, surpassing even the original websites linked within Reddit posts.


However, this exclusivity appears to have extended beyond mere collaboration. Reddit has recently updated its Robots Exclusion Protocol (robots.txt) to prohibit all bots from indexing any part of its site. This move, as stated by Reddit, aims to curb the misuse of public content, particularly by commercial entities engaged in scraping Reddit’s data for various purposes without consent.
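A site-wide block of this kind takes only a few lines under the Robots Exclusion Protocol. The sketch below shows the general form of such a rule, not Reddit’s actual file, which also carries additional policy text:

```txt
# Ask all crawlers not to index any part of the site
User-agent: *
Disallow: /
```

Compliance is voluntary: well-behaved crawlers honour robots.txt, but the file itself cannot technically prevent scraping, which is why such rules are typically paired with contractual or technical enforcement.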


The rationale behind Reddit’s stricter control over its content includes concerns over unauthorised scraping and potential misuse of its data. Although not officially confirmed, the adjustment to its robots.txt file coincides with its increased involvement in AI training, suggesting a possible motivation to protect the integrity and value of its data.


This shift has effectively blocked other search engines from crawling and indexing Reddit’s content comprehensively. Reports indicate that non-Google search engines either display outdated information or fail to show Reddit results altogether, consolidating Google’s dominance in showcasing Reddit content.


Critics argue that Reddit’s actions reflect a broader strategy to safeguard its data assets, attract investors, and diversify revenue streams. The platform’s purported warnings to Google about leveraging its data for AI training without compensation underscore Reddit’s evolving stance on data ownership and commercial use.


Reddit’s decision to limit search engine access underscores its strategic pivot towards data protection and monetisation, influencing how its vast store of community-generated content is accessed and utilised across digital platforms.

First Published: Jul 26 2024 | 1:38 PM IST




ChatGPT Voice Mode with GPT-4o model coming to Plus members soon: OpenAI




OpenAI will be rolling out “Voice Mode” for the GPT-4o model in ChatGPT for Plus members starting next week. OpenAI CEO Sam Altman, responding to a question on X (formerly Twitter) about the feature’s availability, said that Voice Mode for GPT-4o will be available in a limited “alpha” release for ChatGPT Plus subscribers from next week.


When OpenAI released its new flagship AI model GPT-4o in May, it announced significant improvements to its talkback feature for ChatGPT. While Voice Mode already exists in ChatGPT across both free and paid tiers, its capability is quite limited.


Voice Mode for GPT-4o


In the current version, Voice Mode works with average latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4). This latency is the result of a data processing pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio. According to OpenAI, this process means a lot of information is lost before it reaches the main source of intelligence, GPT-4.
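The additive latency of that three-stage pipeline can be illustrated in a few lines. The function names and delay values below are placeholders standing in for the three models, not OpenAI’s implementation or measured figures:

```python
import time

# Illustrative stand-ins for the three separate models in the pipeline;
# the delays are placeholder values, not real model latencies.
def transcribe_audio(audio, delay=0.01):
    time.sleep(delay)          # stage 1: speech-to-text model
    return "transcribed text"

def generate_reply(text, delay=0.01):
    time.sleep(delay)          # stage 2: GPT-3.5 / GPT-4 (text in, text out)
    return "reply text"

def synthesise_speech(text, delay=0.01):
    time.sleep(delay)          # stage 3: text-to-speech model
    return b"audio bytes"

def pipelined_voice_mode(audio):
    """Each stage waits for the previous one, so latencies add up; anything
    not captured in the transcript (tone, multiple speakers, background
    noise) never reaches the language model in the middle."""
    text = transcribe_audio(audio)
    reply = generate_reply(text)
    return synthesise_speech(reply)

start = time.perf_counter()
pipelined_voice_mode(b"input audio")
print(f"total latency: {time.perf_counter() - start:.3f}s")
```

An end-to-end model like GPT-4o collapses these three stages into one network, so there is a single latency and no lossy text bottleneck between hearing and responding.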


With the GPT-4o model, which the company said is trained end-to-end across text, vision and audio, all inputs and outputs are processed by the same neural network. This lowers latency for a more natural conversational experience and improves results, since no information is lost between stages. Additionally, OpenAI said that GPT-4o is more capable of handling interruptions, manages group conversations effectively, filters out background noise, and adapts to tone.

First Published: Jul 26 2024 | 1:03 PM IST





