Samsung Galaxy Fit 3 with large display, aluminium case unveiled: Details | Gadgets – Business Standard



Samsung has announced the Galaxy Fit 3, its entry-level health-and-fitness tracker. The announcement, however, is not global; it is currently limited to the Philippines. In a blog post on its Philippines newsroom, Samsung said the Galaxy Fit 3 will be available from February 23 in select markets. The fitness band features an aluminium frame and comes in grey, silver, and pink gold colour options.


Although the company has not announced the regions where the Galaxy Fit 3 will be available, the fitness band is currently not listed on the Samsung India e-store. This suggests India is not among the select markets getting the Galaxy Fit 3 from February 23. However, availability is expected to expand to more regions in the coming days.


Samsung Galaxy Fit 3: Details


The Samsung Galaxy Fit 3 sports a 1.6-inch rectangular AMOLED display (256×402 resolution) in vertical orientation. The fitness band has 16MB of RAM and 256MB of on-board storage. It is powered by a 208mAh battery, which the company says can last up to 13 days on a single charge. Sensors on the tracker include an accelerometer, barometer, gyro sensor, optical heart rate sensor, and light sensor.


As for health-related features, the Galaxy Fit 3 offers sleep monitoring that tracks the user's sleep patterns, snoring detection, and blood oxygen level monitoring. Additionally, the band supports over 100 workout modes, including running, elliptical, rowing machine, pool swim, and more. Workout tracking is backed by a 5ATM water resistance rating and IP68 certification for water and dust resistance.


In terms of safety features, the Galaxy Fit 3 supports Fall Detection and Emergency SOS. With Fall Detection enabled, the band gives users the option to call emergency services. The Emergency SOS feature lets users send an SOS immediately by pressing the side button five times.


The Galaxy Fit 3 can also act as a mobile controller, letting users trigger the connected smartphone's camera and set timers when taking photos. It can also control media playback on the connected device.

First Published: Feb 23 2024 | 12:25 PM IST




Google Pay SoundPod: Here is everything about Paytm Soundbox alternative | Tech News – Business Standard




After introducing it in a limited pilot, Google is ready to roll out the SoundPod to merchants across India in the coming months. Powered by the Google Pay platform, SoundPod is an audio device that helps merchants track QR code payments through audio alerts when a payment is received. In a press note, Google Pay said it has received positive feedback from participating merchants and has therefore decided to make the product widely available.


In related news, Google has decided to simplify payments on its platform in the US by discontinuing the Google Pay app in the region with most features moving over to the Google Wallet app. The US-related announcement, however, will have no bearing on how Google Pay functions for users and merchants in India, said Ambarish Kenghe, Vice President-Product at Google Pay. “To be able to play a role in India’s digital payments story is a matter of deep pride for us, providing invaluable lessons on how digital transformation happens in tech-forward societies, and we continue to stay deeply invested in this journey for the long term,” he added.


Last month, Google Pay collaborated with NPCI International Payments Limited (NIPL) to make UPI payments possible from outside India and to facilitate the adoption of UPI-like frameworks in countries beyond India.

First Published: Feb 23 2024 | 12:21 PM IST




Reddit strikes $60mn deal allowing Google to train AI models on its posts | Tech News – Business Standard




Reddit has struck a deal with Google that allows the search giant to use posts from the online discussion site for training its artificial intelligence models and to improve products such as online search.


The arrangement, announced Thursday and valued at roughly $60 million, will also give Reddit access to Google AI models for improving its site search and other features.


Separately, the San Francisco-based company announced plans for its initial public offering on Wednesday. In documents filed with the Securities and Exchange Commission, Reddit said it reported net income of $18.5 million (its first profit in two years) in the October-December quarter, on revenue of $249.8 million. The company said it aims to list its shares on the New York Stock Exchange under the ticker symbol RDDT.


The Google deal is a big step for Reddit, which relies on sometimes contentious volunteer moderators to run its sprawling array of freewheeling topic-based discussions. Those moderators have publicly protested earlier Reddit decisions, most recently blacking out much of the site for days when Reddit announced plans to start charging many third-party apps for access to its content.


But it is also highly significant for Google, which is hungry for access to human-written material it can use to train its AI models to improve their understanding of the world and thus their ability to provide relevant answers to questions in a conversational format.


Google praised Reddit in a news release, calling it a repository for "an incredible breadth of authentic, human conversations and experiences" and stressing that the search giant primarily aims to make it even easier for people to benefit from that useful information.


Google played down its interest in using Reddit data to train its AI systems, instead emphasizing how it will make it even easier for users to access Reddit information, such as product recommendations and travel advice, by funneling it through Google products.


It described this process as more content-forward displays of Reddit information, aiming both to improve Google's tools and to make it easier for people to participate on Reddit.

(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)

First Published: Feb 23 2024 | 9:38 AM IST




Nvidia's H100 data center chip driving the AI boom: All you need to know | Company News – Business Standard



By Ian King


Computer components are not usually expected to transform entire businesses and industries, but a graphics processing unit Nvidia Corp. released in 2023 has done just that. The H100 data center chip has added more than $1 trillion to Nvidia’s value and turned the company into an AI kingmaker overnight. It’s shown investors that the buzz around generative artificial intelligence is translating into real revenue, at least for Nvidia and its most essential suppliers. Demand for the H100 is so great that some customers are having to wait as long as six months to receive it.

 


1. What is Nvidia’s H100 chip?

 


The H100, whose name is a nod to computer science pioneer Grace Hopper, is a graphics processor. It’s a beefier version of a type of chip that normally lives in PCs and helps gamers get the most realistic visual experience. But it’s been optimized to process vast volumes of data and computation at high speeds, making it a perfect fit for the power-intensive task of training AI models. Nvidia, founded in 1993, pioneered this market with investments dating back almost two decades, when it bet that the ability to do work in parallel would one day make its chips valuable in applications outside of gaming.


2. Why is the H100 so special?

 


Generative AI platforms learn to complete tasks such as translating text, summarizing reports and synthesizing images by training on huge troves of preexisting material. The more they see, the better they become at things like recognizing human speech or writing job cover letters. They develop through trial and error, making billions of attempts to achieve proficiency and sucking up huge amounts of computing power in the process. Nvidia says the H100 is four times faster than its predecessor, the A100, at training these so-called large language models, or LLMs, and 30 times faster at replying to user prompts. For companies racing to train LLMs to perform new tasks, that performance edge can be critical.
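The trial-and-error process described above can be sketched in miniature: a model makes a guess, measures how wrong it was, and nudges its parameters to do slightly better on the next attempt. The toy below fits a single weight to data rather than training a language model, and the function and numbers are illustrative inventions, not anything from Nvidia or an actual LLM training stack.

```python
# Toy gradient-descent loop: guess, measure the error, adjust, repeat.
# Real LLM training applies this idea across billions of parameters
# and examples, which is what consumes so much computing power.

def train(samples, steps=1000, lr=0.01):
    """Fit y = w * x to (x, y) pairs by repeated error correction."""
    w = 0.0  # initial guess
    for _ in range(steps):
        for x, y in samples:
            error = w * x - y      # how wrong was this attempt?
            w -= lr * error * x    # adjust toward a smaller error
    return w

# Data generated by the hidden rule y = 3x; training recovers w close to 3.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
print(round(train(data), 2))
```

Each pass shrinks the error a little; proficiency emerges only from the sheer number of attempts, which is why the hardware's speed matters so much.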


3. How did Nvidia become a leader in AI?

 


The Santa Clara, California company is the world leader in graphics chips, the bits of a computer that generate the images you see on the screen. The most powerful of those are built with hundreds of processing cores that perform multiple simultaneous threads of computation, modeling complex physics like shadows and reflections. Nvidia’s engineers realized in the early 2000s that they could retool graphics accelerators for other applications, by dividing tasks up into smaller lumps and then working on them at the same time. Just over a decade ago, AI researchers discovered that their work could finally be made practical by using this type of chip.
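The divide-and-conquer pattern described above can be sketched as follows. This is a conceptual illustration only: Python threads stand in for the hundreds of cores on a GPU, and the task (summing squares in chunks) is a made-up example, not anything from Nvidia's actual software.

```python
# Sketch of the data-parallel idea: split one big job into many small,
# independent chunks and work on them at the same time, then combine
# the partial results.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """One unit of independent work, e.g. shading one tile of pixels."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_chunks=4):
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor() as pool:
        partials = pool.map(process_chunk, chunks)  # chunks run concurrently
    return sum(partials)  # combine the partial results

print(parallel_sum_of_squares(list(range(10))))  # → 285
```

The key property is that no chunk depends on any other, so adding more workers speeds things up almost linearly; the same property makes graphics workloads, and later AI training, a natural fit for massively parallel chips.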


4. Does Nvidia have any real competitors?

 


Nvidia controls about 80% of the market for accelerators in the AI data centers operated by Amazon.com Inc.’s AWS, Alphabet Inc.’s Google Cloud and Microsoft Corp.’s Azure. Those companies’ in-house efforts to build their own chips, and rival products from chipmakers such as Advanced Micro Devices Inc. and Intel Corp., haven’t made much of an impression on the AI accelerator market so far.


5. How does Nvidia stay ahead of its competitors?

 


Nvidia has rapidly updated its offerings, including software to support the hardware, at a pace that no other firm has yet been able to match. The company has also devised various cluster systems that help its customers buy H100s in bulk and deploy them quickly. Chips like Intel’s Xeon processors are capable of more complex data crunching, but they have fewer cores and are much slower at working through the mountains of information typically used to train AI software. Nvidia’s data center division posted an 81% increase in revenue to $22 billion in the final quarter of 2023.


6. How do AMD and Intel compare to Nvidia?

 


AMD, the second-largest maker of computer graphics chips, unveiled a version of its Instinct line in June aimed at the market that Nvidia’s products dominate. The chip, called MI300X, has more memory to handle workloads for generative AI, AMD Chief Executive Officer Lisa Su told the audience at an event in San Francisco. “We are still very, very early in the life cycle of AI,” she said in December. Intel is bringing specific chips for AI workloads to the market but acknowledged that, for now, demand for data center graphics chips is growing faster than for the processor units that were traditionally its strength. Nvidia’s advantage isn’t just in the performance of its hardware. The company invented something called CUDA, a language for its graphics chips that allows them to be programmed for the type of work that underpins AI programs.


7. What is Nvidia planning on releasing next?

 


Later this year, the H100 will pass the torch to a successor, the H200, before Nvidia makes more substantial changes to the design with a B100 model further down the road. CEO Jensen Huang has acted as an ambassador for the technology and sought to get governments, as well as private enterprise, to buy early or risk being left behind by those who embrace AI. Nvidia also knows that once customers choose its technology for their generative AI projects, it’ll have a much easier time selling them upgrades than competitors hoping to draw users away.

First Published: Feb 23 2024 | 9:05 AM IST




57% Indian consumers prefer AI tools over human interaction, shows data | Tech News – Business Standard




Around 57 per cent of Indian consumers prefer using Artificial Intelligence (AI) tools rather than engaging in human interaction when looking for products and services online, findings of a recent Adobe survey reveal.


However, human interaction remains a top choice when considering aspects of decision-making, customer support, and returns or cancellations, said the survey.


The research, Adobe’s State of Digital Customer Experience report, done in collaboration with Oxford Economics, found that around 59 per cent of Indians do not feel positive about buying from a brand that is not transparent about the use of their personal data.


Key trends:




A mere 15% of Indian brands are leveraging generative AI to enhance customer experience (CX) initiatives, compared with 18% globally.




41% of Indian brands see CX as a business priority today.




87% of Indian brands are prioritising CX enhancements over other business goals.




76% of brands already have or will pilot GenAI solutions to support CX.




Overall, 53% of Indian brands want to improve GenAI capabilities in the next 12 months.

First Published: Feb 22 2024 | 11:39 PM IST




OPPO Reno11 series smartphones to get generative AI features in Q2: Details | Tech News – Business Standard



Representative Image: OPPO Reno 11 Pro


OPPO on February 22 announced plans to roll out generative artificial intelligence (GenAI) features on the Reno 11 series smartphones. The Chinese smartphone maker said it would bring generative AI features and tools such as OPPO AI Eraser to the Reno 11 series globally in the second quarter of this year. Additionally, leveraging its own AI model, AndesGPT, OPPO will explore more tools and services focusing on three major characteristics: dialogue enhancement, personalisation, and cloud-device collaboration.


Alongside this, OPPO announced the establishment of the OPPO AI Center for research and development into AI and its applications. The AI Center, OPPO said, aims to strengthen the company's AI capabilities and explore a broader range of user-centric AI products and features, enabling it to bring users the latest experiences at the forefront of AI.


“Following feature phones and smartphones, next-gen AI Smartphones will represent the third major transformative stage in the mobile phone industry. In the era of AI Smartphones, both the mobile phone industry and user experience will witness revolutionary changes,” said Pete Lau, Chief Product Officer of OPPO. “OPPO is dedicated to becoming a contributor and promoter of AI Smartphones. We look forward to working together with our industry partners to jointly drive the innovation of the mobile phone industry and reshape the intelligent experience of mobile phones.”


Last year, OPPO introduced its large language model, AndesGPT, which comprises 180 billion parameters. GenAI features already run on its flagship Find X7 series, which boasts intelligent object removal in photos and phone conversation summaries. Since the Find X7 series is limited to OPPO's home market, the company plans to expand GenAI to other geographies, including India, by rolling out AI features to the Reno 11 series smartphones.

First Published: Feb 22 2024 | 4:30 PM IST



