Apple may launch iPhone 17e, more MacBook Pro models in Feb: What to expect

After updating AirTags in January, Apple is expected to have a busy February, with two key product launches likely lined up across its iPhone and Mac lineups. The company could introduce the iPhone 17e as the next update to its more affordable iPhone range, while also preparing to roll out higher-end MacBook Pro models powered by the M5 Pro and M5 Max chips.


iPhone 17e: What to expect


The iPhone 17e is expected to follow Apple’s recent approach of equipping its more affordable models with current-generation hardware. After the iPhone 16e adopted the A18 chip, the iPhone 17e is expected to move to the A19 processor that debuted on the iPhone 17, bringing updates to performance, the display engine, and neural processing.

 


A report by Macotakara suggested that the phone may also introduce Apple’s newer C1X modem, which is said to offer higher speeds than the C1 used in the iPhone 16e, along with Apple’s in-house N1 networking chip for Wi-Fi, Bluetooth, and Thread connectivity. On the design side, the iPhone 17e is expected to retain its existing form factor, including a notch instead of the Dynamic Island and a single rear camera.

 


In terms of hardware refinements, the device could feature slimmer bezels while keeping the same 6.1-inch display. It is still expected to use a 60Hz panel, without ProMotion or an always-on display. Camera changes may include the addition of Apple’s 18MP Centre Stage front camera to improve framing during video calls, while the rear camera is expected to remain a single 48MP unit. The iPhone 17e is also expected to add MagSafe support.


MacBook Pro with M5 Pro, M5 Max: What to expect


Apple introduced the base 14-inch MacBook Pro with the standard M5 chip in October, but that launch did not include the usual higher-end variants with more powerful processors. The company is now expected to complete the lineup with models running on the M5 Pro and M5 Max chips, including larger display options.


These higher-end MacBook Pros are likely tied to the macOS 26.3 software cycle, which is currently in beta and could be released later this month, suggesting the new laptops might launch around the same time. However, Bloomberg’s Mark Gurman has reported that they could instead arrive in the first week of March.

 


Beyond the expected jump in performance from the new chips, no major design changes or new hardware features are currently anticipated for these models.




Apple could bring third-party AI voice apps to CarPlay beyond Siri: Report

Apple is reportedly preparing to open up its CarPlay system to third-party voice-controlled artificial intelligence (AI) apps. According to Bloomberg, this update would allow drivers to access other AI chatbots in CarPlay, instead of depending only on Apple’s voice assistant. Until now, Apple has only allowed Siri to handle voice commands inside CarPlay. The reported change would reflect a shift in Apple’s approach, allowing AI companies such as OpenAI, Anthropic, and Google to offer voice-enabled versions of their apps on CarPlay.

 


For context, CarPlay is Apple’s in-car system that mirrors key iPhone features on a car’s dashboard screen, allowing drivers to use navigation, music, calls, and messages through touch or voice controls while driving.

 


Third-party AI support in CarPlay


According to the report, Apple is working to add support for third-party AI apps with voice mode in CarPlay in the coming months. The plan has not been officially announced, and an Apple spokesperson declined to comment.

 

This could allow drivers to ask questions or get suggestions from AI chatbots without taking their hands off the wheel. For example, a user could ask ChatGPT for nearby restaurant recommendations directly through the car’s display. The Bloomberg report noted that Apple customers have been asking for such features for months, though there is no guarantee that all AI developers will choose to support CarPlay.

 


As reported by Bloomberg’s Mark Gurman, Apple will not allow third-party apps to replace the Siri button or the wake word used to activate it. Instead, users will need to open an AI app within CarPlay to use its voice features. Developers may design their apps so that voice mode starts automatically when the app is opened, which could make the experience smoother.

 

Siri is typically used in CarPlay for tasks like playing music, sending messages, and navigation. However, many users now rely on services such as ChatGPT, Gemini, and other AI voice apps for more general questions that Siri does not always handle well. While other AI apps can currently be used through a car’s speakerphone, this method works outside CarPlay and can be unreliable. Some users also rely on workarounds like shortcuts synced from an iPhone. 

 


At the same time, Apple is continuing to develop its own technology. The company plans to upgrade Siri later this year with features such as web-based answers and online summaries. Apple last updated standard CarPlay with iOS 26, adding widgets and a Liquid Glass interface, while its higher-end CarPlay Ultra remains limited to select carmakers. 

 




AI-led memory shortage threatens to make phones, TVs, PCs costlier in 2026


For years, memory was one of the few components in consumer electronics that reliably got cheaper. That pattern has now reversed. Prices of Dynamic Random Access Memory (DRAM) and NAND are climbing sharply, supplies are tightening, and the impact is starting to show up across smartphones, PCs, televisions and even automobiles. The immediate trigger is the global rush to build AI infrastructure, which is pulling memory supply away from consumer devices and reshaping how chipmakers allocate capacity.


What’s happening now?


Global memory chip supply, especially DRAM and NAND used in phones, laptops and PCs, is under unusually strong pressure as manufacturers redirect capacity toward AI infrastructure. Memory that once flowed primarily into consumer electronics is increasingly being absorbed by data centres, cloud providers and server makers building out AI systems.

 
 


According to a Reuters report citing Counterpoint Research, memory prices are expected to jump by around 40–50 per cent in the first quarter after already rising sharply last year, pushing up costs for device makers across categories. Reuters noted that the surge is being driven by tighter supply and stronger demand from the server and AI segments, leaving consumer electronics brands facing higher bills of materials.


For companies that operate on thin margins, especially in PCs, smartphones and TVs, memory has quickly shifted from being a background component to one of the biggest cost variables in their products.


The AI demand driver


At the centre of this shift is the rapid expansion of AI infrastructure. Memory makers are increasingly prioritising high-bandwidth memory (HBM) and server-grade DRAM contracts tied to AI workloads, including data centres, cloud platforms and GPU-based systems. HBM is more complex and expensive to produce, but it also commands much higher margins, making it far more attractive for suppliers than conventional consumer-grade memory.

 


This has changed how the industry allocates supply. Tom’s Hardware reported that the three dominant memory manufacturers — Samsung, SK Hynix and Micron — have begun tightening order controls and prioritising large customers, sometimes even policing orders to prevent stockpiling. According to that report, supply relationships and customer profiles are increasingly determining who gets memory first during the crunch, with AI and server customers at the front of the queue.

 


The practical effect is that less capacity is available for the kinds of DRAM and NAND used in everyday consumer devices, even as production shifts toward newer, server-focused technologies.


Consumer electronics under strain


The impact of the memory crunch is already spilling into consumer electronics, particularly in categories like smart TVs that depend heavily on stable component pricing.

 


Avneet Singh Marwah, CEO of Super Plastronics Private Limited (SPPL), the brand licensee for Blaupunkt and Kodak in India, said memory prices have surged because supply is being redirected toward newer standards used in servers and AI infrastructure.


“In smart TVs, memory used is DDR3 and DDR4, but due to huge demand from AI, memory is being diverted to servers, which use DDR5. Because of this, production across memory suppliers has shifted largely to DDR5. As a result, memory prices have increased by more than 300–400 per cent,” Marwah said.

 


According to him, the cost pressure is already feeding through to retail prices. Marwah said television prices are expected to rise by around 7–10 per cent on a month-on-month basis, and that the situation is unlikely to normalise in the near term. “As per forecasts and limited sources, the situation is not expected to normalise in Q2 or Q3, with prices continuing to rise during this period,” he said.
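To illustrate how quickly month-on-month increases compound, the sketch below applies Marwah’s 7–10 per cent figure over three months to a hypothetical Rs 30,000 television; the starting price is an assumption for illustration, not a figure from the report.

```python
# Compounding a 7-10 per cent month-on-month price rise over three months.
# The Rs 30,000 starting price is a hypothetical figure, not from the report.
base_price = 30_000

for monthly_rise in (0.07, 0.10):
    price = base_price * (1 + monthly_rise) ** 3
    print(f"{monthly_rise:.0%} per month for 3 months: Rs {price:,.0f}")
```

On these assumptions, a sustained 10 per cent monthly rise would add roughly a third to the sticker price within a single quarter.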

 


He added that the surge in component costs has effectively cancelled out earlier tax relief. India’s reduction of GST on televisions from 28 per cent to 18 per cent had helped ease prices, but rising input costs and higher metal prices have wiped out those gains. “All the benefits provided by the government last year by reducing GST from 28 per cent to 18 per cent have effectively been wiped out. TVs in the coming months are expected to become even more expensive than they were during the pre-28 per cent GST period,” he said.

 


The same pressure is visible in other parts of the industry. Reuters reported in January that surging memory chip prices are dimming the outlook for consumer electronics makers, with companies facing higher costs and tighter supply. Apple has also flagged rising memory costs as a pressure point in its latest earnings call. Analyst Ming-Chi Kuo said that memory price increases are adding to Apple’s component costs and weighing on margins.

 


In the PC and graphics segment, memory constraints are also starting to affect product timelines. Reports have indicated that Nvidia has had to delay at least one gaming-related chip due to memory supply issues. Meanwhile, the impact is not limited to gadgets. As Pocket-lint has noted in its reporting on the RAM shortage, modern cars rely heavily on semiconductors and memory, and tighter supply in the chip ecosystem is now feeding into higher costs and production risks for automakers as well.


What this means for consumers


Some brands have already signalled that higher component costs will be passed on. UK-based smartphone maker Nothing has said it plans to raise the prices of its phones in 2026, citing rising memory costs driven by demand from AI data centres. The company has argued that memory has become one of the biggest cost drivers in smartphones after years of getting cheaper.

 


However, not all major players are expected to respond in the same way. Kuo said that Apple’s current plan for its second-half 2026 iPhone 18 lineup is to avoid raising prices as much as possible, and to at least keep the starting price flat, calling that approach helpful from a marketing perspective. Kuo has also cautioned that Apple’s position is not necessarily representative of the wider smartphone market, given the company’s scale and leverage. He noted that iPhone DRAM and NAND flash consumption accounts for around 20–25 per cent of the global smartphone memory market, giving Apple far stronger bargaining power with suppliers than most other brands.

 


In categories like televisions, the impact may be less about whether people buy a device at all and more about what they choose within a given budget. In his statement, Marwah said the recent surge in costs may not immediately reduce overall sales volumes, but it is likely to change buying behaviour.

 


“Consumers who earlier wanted to buy a 65-inch TV will now opt for 55 inches, and those looking at 55 inches will shift to 43 or 50 inches,” he said, adding that the recent shift toward larger screen sizes could reverse if prices continue to climb. Entry-level models, he warned, are likely to be hit the hardest as prices rise in a highly price-sensitive segment.


Is China stepping in?


With traditional suppliers focused on higher-margin AI and server customers, some consumer electronics and PC makers are starting to look for alternatives. Reuters, citing Nikkei Asia, reported that PC brands such as HP, Dell, Acer and Asus are evaluating memory from Chinese manufacturers like ChangXin Memory Technologies (CXMT) as a partial way to navigate the current supply crunch.

 


The report noted that this would mark a significant shift for some of these companies, which have traditionally relied on Micron, Samsung and SK Hynix. While qualifying new suppliers takes time and much of China’s output is likely to be absorbed by its domestic market, the move underlines how tight the global memory market has become.




Claude AI could worsen analyst groupthink in already volatile markets




By Parmy Olson

 


The past week of volatility showed how fickle and conformist financial markets are: One minute we’re in an artificial-intelligence bubble that’s about to burst, the next we’re witnessing AI disruption across multiple industries. The latter belief underpinned the latest $1 trillion rout, triggered by new legal and financial tools from AI firm Anthropic PBC. At least, that’s what the herd decided. Anthropic’s open-source legal plugin for Claude Cowork isn’t as effective as tools from legal AI specialists such as Harvey and Legora. Yet many investors saw it as an opportunity to rush to the exits on positions they were already nervous about.  

 


The irony is that whizzy financial AI tools like Anthropic’s could make that mob mentality worse.

 


Anthropic said its new Claude Opus 4.6, unveiled on Thursday, can analyze company data, regulatory filings and market information and then generate detailed assessments that would typically take a person days to complete. That’s all well and good, but consider what would happen if a tool like Claude became as popular among analysts and investors as ChatGPT, which is now used weekly by 10 per cent of the global population. Such an ascendancy is plausible. Cloud computing is dominated by Amazon Inc., Microsoft Corp. and Alphabet Inc.’s Google, and the use of AI models is already confined to a small number of players: OpenAI’s ChatGPT and Google’s Gemini, with Anthropic’s Claude coming up swiftly from behind. 

 


Now imagine what happens when equity analysts — already well known for their corporate obsequiousness and pack mentality — are all listening to the same quarterly earnings report, and using the same one or two AI models to transcribe, analyze and suggest advice based on that call. If market participants are all drawing from the same models trained on largely the same historic data, it’s probable they’ll not only miss the black swan events that have never happened before, but reach similar conclusions and investment strategies. 

 


“It should make good analysts more productive, but it’s not going to replace the 50 analysts all vying to ‘congratulate management,’ ‘interpret’ the call, or end the conflict of interest that skew their ratings to nearly all ‘buys,’” says Richard Kramer, founder and managing director of London-based Arete Research Services LLP.

 


This is what Federal Reserve Governor Michael Barr meant last year when he warned that the ubiquitous use of generative AI tools by investors “could lead to herding behavior and the concentration of risk, potentially amplifying market volatility.”

 


Anthropic says its new model’s so-called context window has expanded to 1 million tokens from 200,000, meaning it can digest thousands of pages of financial documents in one pass, which is genuinely impressive. But it could also accelerate the concentration problem. Claude, just like ChatGPT, is still a probabilistic text generator designed to predict the most likely next word, not the most original one. This means its outputs tend to echo what’s already familiar. So when one model becomes the obvious choice for complex financial research, an increasing number of firms will find their strategies resembling each other even more. 
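A rough back-of-the-envelope calculation puts that expansion in perspective. Assuming about 500 tokens per page of dense financial text (an illustrative figure, not one from Anthropic), the window sizes translate to pages as follows:

```python
# Rough estimate of how many document pages fit in each context window.
# The 500 tokens-per-page figure is an assumption for illustration only.
TOKENS_PER_PAGE = 500

for label, window in (("old", 200_000), ("new", 1_000_000)):
    print(f"{label} window: ~{window // TOKENS_PER_PAGE:,} pages")
```

Under that assumption, the jump is from roughly 400 pages to roughly 2,000 pages in a single pass, consistent with the "thousands of pages" claim.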

 


We’re already seeing a similar phenomenon with content on the Internet, in the form of a linguistic and cultural flattening. When Tim Berners-Lee invented the World Wide Web in 1989, he envisioned an “anarchic jumble” of wild ideas. That’s precisely what first emerged: gloriously weird pockets of content that were all discoverable, from a teenager’s Buffy fan page on GeoCities to Usenet groups debating Tolkien linguistics, hamster care or stock picks. But the rise of large online platforms and search engine optimization wrung out much of that initial creativity; generative AI tools look set to homogenize it further as more people use ChatGPT to write their LinkedIn posts, blogs, marketing material and more. 

 


A 2024 study in Science Advances found that while stories co-authored with GPT-4 were more polished, they bore “uncanny resemblances to one another, lacking the unpredictable edge that human-only stories often contain.” That should come as no surprise when models are picking the most statistically familiar token. 

 

A healthy financial market is one underpinned by a diversity of opinions. That helps keep pricing honest and panic at bay. So it’s ironic that in adopting AI to steal a march on their rivals, market participants could now make themselves even more likely to follow the crowd, developing a kind of market monoculture. That could set them up to inflate the same bubbles and miss the same systemic vulnerabilities —  at least more than they already do. So much for that competitive edge.   
(Disclaimer: This is a Bloomberg Opinion piece, and these are the personal opinions of the writer. They do not reflect the views of www.business-standard.com or the Business Standard newspaper)




OpenAI's first consumer device could be AI-powered earbuds: What to expect



OpenAI is reportedly preparing to launch AI-powered earbuds as it moves into the consumer hardware space. According to a report by Mint, the company’s first device is expected to be a simpler product rather than a fully standalone AI gadget. An announcement is said to be planned for later this year, while shipments are reportedly scheduled for early 2027. Meanwhile, a more advanced, smartphone-like AI device is reportedly facing delays due to component shortages and rising costs, prompting OpenAI to prioritise the simpler product first.


OpenAI’s AI-powered earbuds: What to expect


The earbuds are said to be referenced in a patent filing in China that has been linked to OpenAI.

 
 


OpenAI’s first consumer device is being developed in collaboration with former Apple design chief Jony Ive, and the project moved into the prototyping stage in 2025. Ive has previously said that the device could arrive in “less than” two years. OpenAI CEO Sam Altman has also said that the latest prototype feels “simple and beautiful,” after earlier versions failed to feel intuitive or easy to use. Both have indicated that the overall design direction has been finalised.

 


OpenAI is reportedly considering launching a basic pair of earbuds before moving on to a more advanced AI device. This approach could help the company enter the hardware market with lower costs and fewer technical challenges.

 


As per the report, the earbuds would mainly serve as a hands-free interface to OpenAI’s AI models. Rather than acting as a full computing device, the earbuds could allow users to interact with AI through voice commands, offering real-time assistance and responses while on the move.


By focusing on audio and voice interaction, OpenAI could position the earbuds as a practical extension of its existing software capabilities. This approach would reportedly help the company understand how users engage with AI-powered hardware in daily life before moving on to more complex products.

 


While OpenAI has not officially confirmed these plans, the Mint report suggested that the earbuds could act as a stepping stone toward more advanced AI hardware in the future. For now, the company appears to be taking a measured approach as it prepares its first consumer device.

 


The report aligns with earlier coverage, which suggested that OpenAI’s first product would focus more on voice and ambient interaction rather than traditional screens. The device is not expected to replace smartphones or laptops, but instead work alongside them by offering contextual assistance throughout the day, indicating that OpenAI may begin its hardware push with audio-focused wearables such as AI-powered earbuds.


Advanced device delayed


As reported, a more complex AI device, similar to a smartphone and capable of processing data on its own, could face delays. This is reportedly due to a shortage of high-bandwidth memory (HBM), which has made components more expensive and increased production costs. Because of this, OpenAI may choose to release a simpler device in 2026 and push the launch of a more advanced version to a later date, once supply issues ease and manufacturing becomes cheaper. If this turns out to be true, it would reportedly follow a common industry approach, where companies introduce entry-level products first before moving on to more advanced hardware.

 



