Vivo V30 Pro review: Flagship-grade imaging system in a slender form factor



The V30 Pro is Chinese smartphone brand Vivo’s first V-series offering with Zeiss optics. Priced in the midrange, the smartphone pledges to deliver a flagship-grade imaging experience. On other fronts, however, the Vivo V30 Pro is not much different from the previous generation in the series. That said, is the Vivo V30 Pro worth spending Rs 41,999 on? Let us find out:


Design


Vivo has meticulously blended the form factor of the Vivo V29 Pro with fresh design elements, resulting in a device that truly stands out. Like the predecessor, the Vivo V30 Pro boasts a sleek design characterised by a glossy glass back finish and a slender metallic frame that seamlessly connects the curved glass panels on both front and back. While the camera bump retains its rectangular shape, reminiscent of the predecessor, it has been notably downsized and now houses a triple camera setup beneath a square glass cover that bears the Zeiss logo. Mirroring the camera glass’s border shape is the Aura ring light, housing the primary flash in its top left corner, bearing a slight resemblance to the Instagram logo.


The Andaman Blue variant (review unit) shows wavy patterns in sparkling white toward the smartphone’s bottom, adding a distinctive touch to its aesthetics.




On the front, the curved glass extends to the sides, accompanied by noticeable bezels encompassing all edges, particularly wider at the top and bottom. However, Vivo’s priority seems to be on functionality over aesthetics. Therefore, it has opted for pronounced side bezels, effectively preventing inadvertent touches despite the curved screen. Moreover, this design choice broadens the smartphone’s appeal to users who prefer a flatter display.


Overall, the Vivo V30 Pro offers a lightweight feel, tipping the scales at just 188g. Paired with its slim 7.45mm profile, this contributes to a premium sensation and enhances the device’s ergonomics. Despite sporting a glass back finish, the smartphone demonstrates remarkable resistance to dust and fingerprint smudges, further elevating its allure. However, the positioning of the camera module does result in slight instability when the device is placed on a flat surface. Additionally, the lack of a protective frame around the camera cover glass leaves it vulnerable to potential scratches when the smartphone is used without a cover.


Display and audio


The Vivo V30 Pro sports a 6.78-inch curved AMOLED display that delivers crisp visuals. The FHD+ panel offers punchy, vibrant output for an immersive viewing experience. Notably, the display maintains impressive brightness levels even in direct sunlight, keeping content legible at all times.


While the visual output is impressive, Vivo has included additional enhancement features accessible via the display settings. Within the “Screen Colour” section, users can choose between Default, Pro, and Bright modes. The Pro mode introduces a subtle yellow tint and reduces blue light output, with similar options available in the eye protection menu. Conversely, the “Bright mode” enhances brightness levels, albeit at the expense of colour balance and accuracy. Another noteworthy feature is the “Visual enhancement mode”, which dynamically adjusts colours and contrast levels based on the content being viewed. Although this feature is compatible with platforms like YouTube and Netflix, it does not extend to others like Disney Plus Hotstar and Discovery Plus. Nevertheless, the display supports HDR10+, enhancing the overall content consumption experience.


In terms of audio, the Vivo V30 Pro has a mono speaker system located at the bottom side of the frame. Despite the mono configuration, the output remains clear, with the device boasting impressive volume levels without sacrificing audio clarity and crispness. Furthermore, the speaker delivers decent bass output, enhancing the overall audio experience. However, the mono speaker setup does present limitations, particularly during gaming or video playback, as the palm may inadvertently block the speaker, resulting in a muffled sound output.


Camera


Vivo has positioned the V30 Pro as a camera-centric smartphone, and true to its claim, the device excels in delivering quality images. Sporting a triple-camera setup at the rear, the 50-megapixel primary (Sony IMX920) sensor captures stunning shots with remarkable detail in various lighting conditions. Images taken in natural light exhibit crispness and colour accuracy, although occasional white balance issues may arise in artificial lighting scenarios.


Portrait photography is the smartphone’s forte. The camera adeptly adjusts depth levels to ensure clean outlines and borders, even in indoor lighting conditions. The inclusion of Aura ring lights proves beneficial in low-light settings, offering both illumination for subjects and customisable light temperature options. In collaboration with Zeiss, Vivo introduced Bokeh Flares such as cine-flare, Planar, and Biotar, enhancing the professional look of portrait shots.


The 50MP ultra-wide-angle camera delivers decent results, although it occasionally struggles with overexposure in bright sunlight. Similarly, under artificial lighting, images may lack the sharpness exhibited by the primary camera.


The addition of a 50MP telephoto lens, offering optical quality up to 2x zoom, enhances the device’s photography capabilities. Images remain sharp and detailed up to the 2x zoom level, akin to those captured by the primary lens. However, beyond 10x zoom, distortion becomes evident. With the smartphone’s software working overtime, the resulting images are generally softened and produce inaccurate colours. Despite supporting up to 20x zoom through cropping, optimal output is achieved up to the 5x mark in daytime conditions.


For videography, the Vivo V30 Pro records at up to 4K resolution at 60fps, with 1080p and 720p options also available and a 30fps setting at all supported resolutions. Additionally, the smartphone offers dual stabilisation modes – Standard and Ultra – though both are limited to a maximum of 1080p recording at 60fps. While recording quality impresses at both 4K and 1080p resolutions, stabilisation deficiencies are noticeable in 4K recordings.




Performance and software


The Vivo V30 Pro retains the MediaTek Dimensity 8200 chipset from its predecessor, and Vivo has a compelling rationale for this decision. Paired with 12GB of RAM and an additional 12GB of virtual RAM, the processor ensures smooth performance in everyday tasks, facilitating swift app launches and seamless scrolling through social media platforms. Even in multitasking scenarios, such as concurrent video calls and app usage, the device maintains its speed and responsiveness without noticeable slowdowns.


Casual gaming and high-resolution video recording pose no challenge for the smartphone. However, when tackling graphically demanding titles like Genshin Impact and Honkai: Star Rail, users may experience occasional frame rate drops on high settings and more frequently at the highest graphic settings. Moreover, extended gaming sessions can lead to the device warming up noticeably.


The Vivo V30 Pro boots the latest FunTouchOS 14, based on Android 14. This iteration feels markedly improved over the previous version on the Vivo V29 Pro, with various refinements and a reduction in bloatware. There are also fewer intrusive ad notifications in FunTouchOS 14. Additionally, Vivo has committed to providing three generations of OS updates and up to four years of security patches.


Battery


The Vivo V30 Pro is powered by a 5,000mAh battery and boasts 80W wired charging support. While not extraordinary, the smartphone effectively sustains a full day of moderate usage on a single charge. However, opting for higher refresh rate modes along with Visual enhancement options enabled may expedite battery drainage.


To counterbalance this, the device impresses with its rapid charging capabilities, reaching 50 per cent capacity in under half an hour and achieving a full charge in 45 minutes.


Verdict


The Vivo V30 Pro presents remarkable imaging capabilities alongside a sleek, premium design. Despite its higher price point, it stands as a compelling choice for individuals in search of a camera-centric device with nearly flagship-level imaging features that does not surpass the Rs 50,000 threshold. Moreover, its capacity to serve as a reliable daily driver, offering ample performance for everyday tasks, reinforces its appeal. However, customers prioritising more balanced performance across various aspects may wish to explore alternative options within the segment.







Samsung to use chip-making tech favoured by SK Hynix as race heats up




Samsung Electronics plans to use a chip-making technology championed by rival SK Hynix, five people said, as the world’s top memory chipmaker seeks to catch up in the race to produce the high-end chips used to power artificial intelligence.

 


The demand for high bandwidth memory (HBM) chips has boomed with the growing popularity of generative AI. But Samsung, unlike peers SK Hynix and Micron Technology, has been conspicuous by its absence from any dealmaking with AI chip leader Nvidia to supply the latest HBM chips.

 


One of the reasons Samsung has fallen behind is its decision to stick with a chip-making technology called non-conductive film (NCF), which causes some production issues, while Hynix switched to the mass reflow molded underfill (MR-MUF) method to address NCF’s weaknesses, according to analysts and industry watchers.

 


Samsung, however, has recently issued purchase orders for chipmaking equipment designed to handle the MUF technique, three sources with direct knowledge of the matter said.

 


“Samsung had to do something to ramp up its HBM (production) yields … adopting MUF technique is a little bit of swallow-your-pride type thing for Samsung, because it ended up following the technique first used by SK Hynix,” one of the sources said.


Samsung’s HBM3 chip production yields stand at about 10-20 per cent while SK Hynix has secured about 60-70 per cent yield rates for its HBM3 production, according to several analysts.

 


The HBM3 and HBM3E, the newest versions of HBM chips, are in hot demand. They are bundled with core microprocessor chips to help process vast amounts of data in generative AI.

 


Samsung is also in talks with material manufacturers, including Japan’s Nagase, to source MUF materials, one source said, adding mass production of the high-end chips using MUF is unlikely to be ready until next year at the earliest, as Samsung needs to run more tests.

 


The three sources also said Samsung plans to use both NCF and MUF techniques for its latest HBM chip.

 


Samsung said its internally developed NCF technology is an “optimal solution” for HBM products and would be used in its new HBM3E chips. “We are carrying out our HBM3E product business as planned,” Samsung said in a statement.


Nvidia and Nagase declined to comment.

 


All sources spoke on condition of anonymity as the information is not public.

 


Samsung’s plan to use MUF underscores growing pressure it faces in the AI chip race, with the HBM chip market, according to research firm TrendForce, seen more than doubling this year to nearly $9 billion amid AI-related demand.

 


NCF versus MUF

 


The non-conductive film chip manufacturing technology has been widely used by chipmakers to stack multiple layers of chips in a compact high bandwidth memory chipset, as using thermally compressed thin film helps minimise space between stacked chips.

 


But the adhesive materials often cause problems, and manufacturing becomes more complicated as more layers are added.

Samsung says its latest HBM3E chip has 12 chip layers. Chipmakers have been looking for alternatives to address such weaknesses.

 


SK Hynix successfully switched to the mass reflow molded underfill technique ahead of others, becoming the first vendor to supply HBM3 chips to Nvidia.

 


SK Hynix’s market share in HBM3 and more advanced HBM products for Nvidia is estimated at above 80 per cent this year, according to Jeff Kim, an analyst at KB Securities.

 


Micron joined the high bandwidth memory chip race last month, announcing that its latest HBM3E chip will be adopted by Nvidia to power the latter’s H200 Tensor chips which will begin shipping in the second quarter.

 


Samsung’s HBM3 series have not yet passed Nvidia’s qualification for supply deals, according to one of the four sources and another person with knowledge of the discussion.

 


Its setback in the AI chip race has also been noticed by investors, with its shares falling 7 per cent this year, lagging SK Hynix and Micron which are up 17 per cent and 14 per cent, respectively.

 


(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)

First Published: Mar 13 2024 | 8:51 AM IST




Apple to allow developers to distribute apps directly from their websites




Software developers who use Apple’s App Store will be able to distribute apps to EU users directly from their websites this spring, the company said on Tuesday, as part of changes required by new EU rules forcing Apple to open up its closed eco-system.


The European Union’s Digital Markets Act (DMA), which kicked in last week, requires Apple to offer alternative app stores on iPhones and to allow developers to opt out of using its in-app payment system, which charges fees of up to 30%.

 


“We’re providing more flexibility for developers who distribute apps in the European Union, including introducing a new way to distribute apps directly from a developer’s website,” Apple said in a blogpost.

 


“Apple will provide authorised developers access to APIs (application programming interfaces) that facilitate the distribution of their apps from the web, integrate with system functionality, back up and restore users’ apps, and more,” the company said.

 


Other changes include allowing developers who set up alternative app marketplaces to offer a catalogue solely made up of the marketplace developer’s own apps with immediate effect.

 


Developers can choose how to design in-app promotions, discounts and other deals when directing users to complete a transaction on their website instead of using Apple’s template.

 


Apple’s changes come amid continuing criticism from rivals that its compliance efforts are falling short. DMA violations can cost companies fines of as much as 10% of their global turnover.

First Published: Mar 12 2024 | 10:38 PM IST




Explained: How the AI that drives ChatGPT will move into physical world




Cade Metz




Companies like OpenAI and Midjourney build chatbots, image generators and other artificial intelligence tools that operate in the digital world.


Now, a start-up founded by three former OpenAI researchers is using the technology development methods behind chatbots to build AI technology that can navigate the physical world.

 

Covariant, a robotics company headquartered in Emeryville, California, is creating ways for robots to pick up, move and sort items as they are shuttled through warehouses and distribution centres. Its goal is to help robots gain an understanding of what is going on around them and decide what they should do next.


The technology also gives robots a broad understanding of the English language, letting people chat with them as if they were chatting with ChatGPT. The technology, still under development, is not perfect. But it is a clear sign that the artificial intelligence systems that drive online chatbots and image generators will also power machines in warehouses, on roadways and in homes.


Like chatbots and image generators, this robotics technology learns its skills by analysing enormous amounts of digital data. That means engineers can improve the technology by feeding it more and more data.

 


Covariant, backed by $222 million in funding, does not build robots. It builds the software that powers robots. The company aims to deploy its new technology with warehouse robots, providing a road map for others to do much the same in manufacturing plants and perhaps even on roadways with driverless cars.

 


The AI systems that drive chatbots and image generators are called neural networks, named for the web of neurons in the brain. By pinpointing patterns in vast amounts of data, these systems can learn to recognise words, sounds and images. This is how OpenAI built ChatGPT, giving it the power to instantly answer questions, write term papers and generate computer programs. It learned these skills from text culled from across the internet. (Several media outlets, including The New York Times, have sued OpenAI for copyright infringement.)

 

Companies are now building systems that can learn from different kinds of data at the same time. By analysing both a collection of photos and the captions that describe those photos, for example, a system can grasp the relationships between the two. It can learn that the word “banana” describes a curved yellow fruit.


OpenAI employed that system to build Sora, its new video generator. By analysing thousands of captioned videos, the system learned to generate videos when given a short description of a scene, like “a gorgeously rendered papercraft world of a coral reef, rife with colourful fish and sea creatures.”

 


Covariant, founded by Pieter Abbeel, a professor at the University of California, Berkeley, and three of his former students, Peter Chen, Rocky Duan and Tianhao Zhang, used similar techniques in building a system that drives warehouse robots.


The company helps operate sorting robots in warehouses across the globe. It has spent years gathering data — from cameras and other sensors — that shows how these robots operate.

 


“It ingests all kinds of data that matter to robots — that can help them understand the physical world and interact with it,” Chen said.

 


By combining that data with the huge amounts of text used to train chatbots like ChatGPT, the company has built AI technology that gives its robots a much broader understanding of the world around them. After identifying patterns in this stew of images, sensory data and text, the technology gives a robot the power to handle unexpected situations in the physical world. The robot knows how to pick up a banana, even if it has never seen a banana before.

 


It can also respond to plain English, much like a chatbot. If you tell it to “pick up a banana,” it knows what that means. If you tell it to “pick up a yellow fruit,” it understands that, too.

 


The technology, called RFM, for robotics foundation model, makes mistakes, much like chatbots do. Though it often understands what people ask of it, there is always a chance that it will not. It drops objects from time to time.

 


Gary Marcus, an AI entrepreneur and an emeritus professor of psychology and neural science at New York University, said the technology could be useful in warehouses and other situations where mistakes are acceptable. But he said it would be more difficult and riskier to deploy in manufacturing plants and other potentially dangerous situations.

 


“It comes down to the cost of error,” he said. “If you have a 150-pound robot that can do something harmful, that cost can be high.”

 

As companies train this kind of system on increasingly large and varied collections of data, researchers believe it will rapidly improve.




©2024 The New York Times News Service

First Published: Mar 12 2024 | 10:24 PM IST




Google restricts Gemini's scope of response as India sets for election




Any topic directly related to elections in India will not be answered by Google’s artificial intelligence (AI) platform, Gemini. This is part of a set of features Google has rolled out to help internet users access useful and relevant information as India prepares for elections.

Gemini will not answer queries related to candidates, political parties, election results, or information about any specific office holder. Instead, it will prompt the user to use Google Search, which can provide more relevant sources, said a spokesperson.


“Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses. We take our responsibility for providing high-quality information for these types of queries seriously and are continuously working to improve our protections,” Google said in a blog post related to the India elections on Tuesday.


This feature has been rolled out in the US and now in India. It will be extended to all geographies that will have elections this year.

Besides restricting Gemini, Google is working with the Election Commission of India to enable people to easily discover critical voting information on Google Search – such as how to register and how to vote – in both English and Hindi.


Google is also strengthening its fact-checking ecosystem in India by supporting Shakti, the India Election Fact-Checking Collective – a consortium of news publishers and fact checkers – to aid the early detection of online misinformation, including deepfakes, and to create a common repository that news publishers can use to tackle the challenges of misinformation at scale.


To help users identify AI-generated content, Google has already rolled out tools and policies. “Last year, we were the first tech company to launch new disclosure requirements for election ads containing synthetic content… Our ads policies already prohibit the use of manipulated media to mislead people, like deepfakes or doctored content,” said the blog.


Moreover, when it comes to images, Google will ensure every image generated through its products has embedded watermarking with Google DeepMind’s SynthID.


Beyond these measures, advertisers running election-related ads must undergo an identity verification process, “provide a pre-certificate issued by the ECI or anyone authorized by the ECI for each election ad they want to run where necessary, and have in-ad disclosures that clearly show who paid for the ad,” said the blog.

First Published: Mar 12 2024 | 6:42 PM IST




Google partners with ECI to label AI-generated data, curb false information





Alphabet Inc-owned Google has joined hands with the Election Commission of India (ECI) to prevent the spread of false information, promote authorised content and label AI-generated data during the upcoming general elections.


Google India in a blog post on Tuesday said its product features are designed to elevate authoritative information on various election-related topics.


“We are collaborating with ECI to enable people to easily discover critical voting information on Google Search – such as how to register and how to vote – in both English and Hindi,” Google said.


With more people using artificial intelligence to create content, Google said it is setting up processes to help audiences identify AI-generated content.


“As more advertisers leverage the power and opportunity of AI, we want to make sure we continue to provide people with greater transparency and the information they need to make informed decisions. Our ads policies already prohibit the use of manipulated media to mislead people, like deepfakes or doctored content,” it said.


Google has already started displaying labels for content created with YouTube generative AI features, like Dream Screen.


“Soon, YouTube will begin to require creators to disclose when they’ve created realistic altered or synthetic content, and will display a label that indicates for people when they’re watching this content,” it said.


The blog post further said the company has begun to roll out restrictions on the types of election-related queries for which Gemini will return responses.


It said that for news and information related to the election, YouTube’s recommendation system prominently surfaces content from authoritative sources on the YouTube homepage, in search results, and highlights high-quality content from authoritative news sources.


The popular search engine said it has set policies around demonstrably false claims in areas like manipulated content, incitement to violence, hate speech, and harassment that could undermine democratic processes.


“We rely on a combination of human reviewers and machine learning to identify and remove content that violates our policies. Our AI models are enhancing our abuse-fighting efforts, while a dedicated team of local experts across all major Indian languages is working 24x7 to provide relevant context,” the blog post said.


Google has set strict policies and restrictions around who can run election-related advertising on its platforms. These include identity verification, certification and authorisation by the ECI, and financier disclosures.


“We take our responsibility for providing high-quality information for these types of queries seriously, and are continuously working to improve our protections,” the blog post said.


Google said it has recently joined the Coalition for Content Provenance and Authenticity (C2PA), and pledged to help prevent deceptive AI-generated imagery, audio or video content from interfering with this year’s global elections.


Earlier, Google had introduced the Google News Initiative Training Network and the Fact Check Explorer tool to help newsrooms and journalists deliver reliable, fact-checked updates to debunk misinformation.


Additionally, Google is supporting Shakti, the India Election Fact-Checking Collective, a consortium of news publishers and fact-checkers in India working together to aid the early detection of online misinformation, including deepfakes, and to create a common repository that news publishers can use to tackle the challenges of misinformation at scale.


Google is committed to working with government, industry, and civil society to surface and connect voters to authoritative and helpful information online, the blog post said.

First Published: Mar 12 2024 | 4:06 PM IST



