China may misuse AI to target polls in countries like India, US: Microsoft

China is likely to deploy Artificial Intelligence-generated content via social media to sway public opinion to boost its geopolitical interests during elections in countries like India, South Korea and the US, tech giant Microsoft has warned.

Voting for 543 Lok Sabha seats in India will take place between April 19 and June 4, spread across seven phases. South Koreans will go to the polls in a general election on April 10, while the US will hold its presidential election on November 5.


“With major elections taking place around the world this year, particularly in India, South Korea and the United States, we assess that China will, at a minimum, create and amplify AI-generated content to benefit its interests,” Clint Watts, General Manager, Microsoft Threat Analysis Center, said in a blog post.


While the chances of such content affecting election results remain low, China’s increasing experimentation in augmenting memes, videos, and audio will likely continue and may prove more effective down the line, he said.


China is expected to act alongside North Korea in these efforts, he wrote.


These are among the Microsoft Threat Intelligence insights in the latest East Asia report published on Wednesday by the Microsoft Threat Analysis Center (MTAC).


China is using fake social media accounts to poll voters on what divides them most, in order to sow division and possibly influence the outcome of the US presidential election in its favour.


China has also increased its use of AI-generated content to further its goals around the world.


North Korea has increased its cryptocurrency heists and supply chain attacks to fund and further its military goals and intelligence collection. It has also begun to use AI to make its operations more effective and efficient.


Beijing will celebrate the 75th anniversary of the founding of the People’s Republic of China in October, and North Korea will continue to push forward key advanced weapons programmes, the report said.


“Meanwhile, as populations in India, South Korea, and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent, North Korean cyber actors, work toward targeting these elections,” it said.


China will, at a minimum, create and amplify AI-generated content that benefits its positions in these high-profile elections, it said.


“While Chinese cyber actors have long conducted reconnaissance of US political institutions, we are prepared to see influence actors interact with Americans for engagement and to potentially research perspectives on US politics,” the report said.


“Finally, as North Korea embarks upon new government policies and pursues ambitious plans for weapons testing, we can expect increasingly sophisticated cryptocurrency heists and supply chain attacks targeted at the defence sector, serving to both funnel money into the regime and facilitate the development of new military capabilities,” it added.

(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)

First Published: Apr 06 2024 | 1:37 PM IST




Including evidence in question confuses ChatGPT, lowers accuracy: Study

Asking ChatGPT a health-related question that included evidence was seen to confuse the AI-powered bot and affect its ability to produce accurate answers, according to new research.


Scientists were “not sure” why this happens, but they hypothesised that including the evidence in the question “adds too much noise”, thereby lowering the chatbot’s accuracy.


They said that as large language models (LLMs) like ChatGPT explode in popularity, there is a potential risk to the growing number of people using online tools for key health information. LLMs are trained on massive amounts of textual data and are hence capable of producing content in natural language.


The researchers from the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and The University of Queensland (UQ), Australia, investigated a hypothetical scenario of an average person asking ChatGPT if ‘X’ treatment has a positive effect on condition ‘Y’. They looked at two question formats – either just a question, or a question biased with supporting or contrary evidence.


The team presented 100 questions, which ranged from ‘Can zinc help treat the common cold?’ to ‘Will drinking vinegar dissolve a stuck fish bone?’. ChatGPT’s response was compared to the known correct response, or ‘ground truth’ that is based on existing medical knowledge.


The results revealed that while the chatbot produced answers with 80 per cent accuracy when asked in a question-only format, its accuracy fell to 63 per cent when given a prompt biased with evidence. Prompts are phrases or instructions given to a chatbot in natural language to trigger a response.


“We’re not sure why this happens. But given this occurs whether the evidence given is correct or not, perhaps the evidence adds too much noise, thus lowering accuracy,” said Bevan Koopman, CSIRO Principal Research Scientist and Associate Professor at UQ.
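The two prompt formats the researchers compared can be sketched as follows. This is an illustrative reconstruction, not the study’s actual code: the example question, evidence text, and response lists below are hypothetical, chosen only to show how accuracy would be scored against a ground truth.

```python
# Illustrative sketch of the study's two prompt formats. The question,
# evidence and responses are hypothetical, not the researchers' data.

def question_only_prompt(treatment: str, condition: str) -> str:
    # Format 1: just the health question.
    return f"Does {treatment} have a positive effect on {condition}? Answer yes or no."

def evidence_biased_prompt(treatment: str, condition: str, evidence: str) -> str:
    # Format 2: the same question, prefixed with (possibly incorrect) evidence.
    return (f"Evidence: {evidence}\n"
            f"Given this evidence, does {treatment} have a positive effect on "
            f"{condition}? Answer yes or no.")

def accuracy(answers: list[str], ground_truth: list[str]) -> float:
    # Fraction of answers matching the known correct response.
    correct = sum(a == g for a, g in zip(answers, ground_truth))
    return correct / len(ground_truth)

# Hypothetical responses illustrating a drop in accuracy when evidence is added:
truth  = ["yes", "no", "no", "yes", "yes"]
plain  = ["yes", "no", "no", "yes", "no"]   # 4/5 correct
biased = ["yes", "yes", "no", "no", "no"]   # 2/5 correct

print(accuracy(plain, truth))   # 0.8
print(accuracy(biased, truth))  # 0.4
```

In the actual study the same 100 questions were posed in both formats, and the evidence-biased prompts dragged accuracy from 80 per cent down to 63 per cent whether or not the supplied evidence was correct.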


The team said continued research on using LLMs to answer people’s health-related questions is needed as people increasingly search information online through tools such as ChatGPT.


“The widespread popularity of using LLMs online for answers on people’s health is why we need continued research to inform the public about risks and to help them optimise the accuracy of their answers,” said Koopman.


“While LLMs have the potential to greatly improve the way people access information, we need more research to understand where they are effective and where they are not,” said Koopman.


The peer-reviewed study was presented at the Empirical Methods in Natural Language Processing (EMNLP) conference, a natural language processing venue, in December 2023.


First Published: Apr 06 2024 | 11:27 AM IST




HP launches AI-enhanced laptops designed for gamers, content creators

Bengaluru: HP India Senior Director (Consumer Sales) Vineet Gehani during the launch of new AI-enhanced gaming and creation laptops, in Bengaluru, Friday, April 5, 2024. (Photo: PTI)


HP on Friday launched its range of AI-enhanced laptops designed for gamers and content creators in Bengaluru.


HP India Senior Director of Consumer Sales Vineet Gehani, who was in Bengaluru to launch the laptops, told PTI that AI-enhanced laptops will not only improve the processing power of the PC, but will also improve battery life.


For instance, AI will judge how much battery power is required for specific actions and allocate it accordingly, extending battery life, Gehani said.

 


The newly launched laptops include the Omen Transcend 14 and HP Envy x360 14.


The AI features include the NVIDIA GeForce RTX 4060 graphics card, Microsoft Copilot and an Intel NPU (neural processing unit). “These will help people to effortlessly handle compute-intensive tasks,” Gehani said.


“Besides this, HP’s own AI-enhanced audio and video features are also available. These improve your calling experience. For instance, even if you are moving around while making a video call, the AI-enhanced features will ensure that your face stays static,” he added.


First Published: Apr 05 2024 | 10:20 PM IST




Meta overhauls rules on deepfakes, other altered media ahead of US polls

Facebook owner Meta announced major changes to its policies on digitally created and altered media on Friday, ahead of U.S. elections poised to test its ability to police deceptive content generated by new artificial intelligence technologies.

 


The social media giant will start applying “Made with AI” labels in May to AI-generated videos, images and audio posted on its platforms, expanding a policy that previously addressed only a narrow slice of doctored videos, Vice President of Content Policy Monika Bickert said in a blog post.

 


Bickert said Meta would also apply separate and more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance,” regardless of whether the content was created using AI or other tools.

 


The new approach shifts the company’s treatment of manipulated content from removing a limited set of posts toward keeping the content up while providing viewers with information about how it was made.

 

Meta previously announced a scheme to detect images made using other companies’ generative AI tools using invisible markers built into the files, but did not give a start date at the time.


A company spokesperson told Reuters the new labeling approach would apply to content posted on Meta’s Facebook, Instagram and Threads services. Its other services, including WhatsApp and Quest virtual reality headsets, are covered by different rules.


Meta will begin applying the more prominent “high-risk” labels immediately, the spokesperson said.

 


The changes come months before a U.S. presidential election in November that tech researchers warn may be transformed by new generative AI technologies. Political campaigns have already begun deploying AI tools in places like Indonesia, pushing the boundaries of guidelines issued by providers like Meta and generative AI market leader OpenAI.

 


In February, Meta’s oversight board called the company’s existing rules on manipulated media “incoherent” after reviewing a video of U.S. President Joe Biden posted on Facebook last year that altered real footage to wrongfully suggest he had behaved inappropriately.

 


The footage was permitted to stay up, as Meta’s existing “manipulated media” policy bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.

 


The board said the policy should also apply to non-AI content, which is “not necessarily any less misleading” than content generated by AI, as well as to audio-only content and videos depicting people doing things they never actually did.

First Published: Apr 05 2024 | 8:20 PM IST




Big Tech cos in race to buy training data for artificial intelligence

At its peak in the early 2000s, Photobucket was the world’s top image-hosting site. The media backbone for once-hot services like Myspace and Friendster, it boasted 70 million users and accounted for nearly half of the U.S. online photo market.


Today only 2 million people still use Photobucket, according to analytics tracker Similarweb. But the generative AI revolution may give it a new lease of life.

 


CEO Ted Leonard, who runs the 40-strong company out of Edwards, Colorado, told Reuters he is in talks with multiple tech companies to license Photobucket’s 13 billion photos and videos to be used to train generative AI models that can produce new content in response to text prompts.

 


He has discussed rates of between 5 cents and $1 per photo, and more than $1 per video, he said, with prices varying widely both by the buyer and the types of imagery sought.

 


“We’ve spoken to companies that have said, ‘we need way more,’” Leonard added, with one buyer telling him they wanted over a billion videos, more than his platform has.

 

“You scratch your head and say, where do you get that?”

Photobucket declined to identify its prospective buyers, citing commercial confidentiality. The ongoing negotiations, which haven’t been previously reported, suggest the company could be sitting on billions of dollars’ worth of content and give a glimpse into a bustling data market that’s arising in the rush to dominate generative AI technology.
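The “billions of dollars’ worth” figure follows directly from the per-photo rates Leonard cited. A back-of-the-envelope check, under the simplifying assumption that all 13 billion items price as photos (videos would fetch more):

```python
# Back-of-the-envelope valuation of Photobucket's archive using the
# per-photo rates Leonard cited (5 cents to $1). Treating all 13 billion
# items as photos is a simplifying assumption; videos price higher.
ITEMS = 13_000_000_000
LOW_RATE, HIGH_RATE = 0.05, 1.00  # dollars per photo

low = ITEMS * LOW_RATE    # $650 million
high = ITEMS * HIGH_RATE  # $13 billion
print(f"${low / 1e9:.2f}B to ${high / 1e9:.0f}B")
```

Even at the bottom of the quoted range, the archive would be worth hundreds of millions of dollars; at the top, $13 billion.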


Tech giants like Google, Meta and Microsoft-backed OpenAI initially used reams of data scraped from the internet for free to train generative AI models like ChatGPT that can mimic human creativity. They have said that doing so is both legal and ethical, though they face lawsuits from a string of copyright holders over the practice.


At the same time, these tech companies are also quietly paying for content locked behind paywalls and login screens, giving rise to a hidden trade in everything from chat logs to long forgotten personal photos from faded social media apps.

 


“There is a rush right now to go for copyright holders that have private collections of stuff that is not available to be scraped,” said Edward Klaris from law firm Klaris Law, which says it’s advising content owners on deals worth tens of millions of dollars apiece to license archives of photos, movies and books for AI training.

 


Reuters spoke to more than 30 people with knowledge of AI data deals, including current and former executives at companies involved, lawyers and consultants, to provide the first in-depth exploration of this fledgling market – detailing the types of content being bought, the prices materializing, plus emerging concerns about the risk of personal data making its way into AI models without people’s knowledge or explicit consent.

 


OpenAI, Google, Meta, Microsoft, Apple and Amazon all declined to comment on specific data deals and discussions for this article, although Microsoft and Google referred Reuters to supplier codes of conduct that include data-privacy provisions.


Google added that it would “take immediate action, up to and including termination” of its agreement with a supplier if it discovered a violation.

 


Many major market research firms say they have not even begun to estimate the size of the opaque AI data market, where companies often don’t disclose agreements. Those researchers who do, such as Business Research Insights, put the market at roughly $2.5 billion now and forecast it could grow close to $30 billion within a decade.

 


GENERATIVE DATA GOLD RUSH

 

The data land grab comes as makers of big generative AI “foundation” models face increasing pressure to account for the massive amounts of content they feed into their systems, a process known as “training” that requires intensive computing power and often takes months to complete.


Tech companies say the technology would be cost-prohibitive if they couldn’t use vast archives of free scraped web page data, such as those provided by non-profit repository Common Crawl, which they describe as “publicly available.” Their approach has nonetheless drawn a wave of copyright lawsuits and regulatory heat, while prompting publishers to add code to their websites to block scraping.

 


In response, AI model makers have started hedging risks and securing data-supply chains, both through deals with content owners and via a burgeoning industry of data brokers that has popped up to satisfy demand.

 


In the months after ChatGPT debuted in late 2022, for instance, companies including Meta, Google, Amazon and Apple all struck agreements with stock image provider Shutterstock to use hundreds of millions of images, videos and music files in its library for training, according to a person familiar with the arrangements.

 


The deals with Big Tech firms initially ranged from $25 million to $50 million each, though most were later expanded, Shutterstock’s Chief Financial Officer Jarrod Yahes told Reuters. Smaller tech players have followed suit, spurring a fresh “flurry of activity” in the past two months, he added.

 


Yahes declined to comment on individual contracts. The Apple agreement, and the size of the other deals, haven’t previously been made public.

 


A Shutterstock competitor, Freepik, told Reuters it had struck agreements with two large tech companies to license the majority of its archive of 200 million images at 2 to 4 cents per image. There are five more similar deals in the pipeline, said CEO Joaquin Cuenca Abela, declining to identify buyers.

 


OpenAI, an early Shutterstock customer, has also signed licensing agreements with at least four news organizations, including The Associated Press and Axel Springer. Thomson Reuters, the owner of Reuters News, separately said it has struck deals to license news content to help train AI large language models, but didn’t disclose details.

 


‘ETHICALLY SOURCED’ CONTENT

 


An industry of dedicated AI data firms is emerging too, securing rights to real-world content like podcasts, short-form videos and interactions with digital assistants, while also building networks of short-term contract workers to produce custom visuals and voice samples from scratch, akin to an Uber-esque gig economy for data.

 


Seattle-based Defined.ai licenses data to a range of companies including Google, Meta, Apple, Amazon and Microsoft, CEO Daniela Braga told Reuters.

 


Rates vary by buyer and content type, but Braga said companies are generally willing to pay $1 to $2 per image, $2 to $4 per short-form video and $100 to $300 per hour of longer films. The market rate for text is $0.001 per word, she added.

 


Images of nudity, which require the most sensitive handling, go for $5 to $7, she said.
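The rate card Braga described can be summarised as a simple per-unit estimator. The dictionary layout and the example quantities are our own framing; the dollar ranges are the figures quoted above.

```python
# Rate card described by Defined.ai's CEO, as (low, high) dollar ranges
# per unit. The dict structure is a summary of the quoted figures, not
# an official price list.
RATES = {
    "image":        (1.00, 2.00),    # per image
    "short_video":  (2.00, 4.00),    # per short-form video
    "film_hour":    (100.0, 300.0),  # per hour of longer film
    "word":         (0.001, 0.001),  # per word of text
    "nudity_image": (5.00, 7.00),    # per image needing sensitive handling
}

def estimate(kind: str, quantity: int) -> tuple[float, float]:
    # Low/high licensing estimate for `quantity` units of content `kind`.
    low, high = RATES[kind]
    return quantity * low, quantity * high

# e.g. 10,000 ordinary images would fetch roughly $10,000 to $20,000
print(estimate("image", 10_000))
```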

 


Defined.ai splits those earnings with content providers, Braga said. It markets its datasets as “ethically sourced,” as it obtains consent from people whose data it uses and strips out personally identifying information, she added.

 


One of the firm’s suppliers, a Brazil-based entrepreneur, said he pays owners of the photos, podcasts and medical data he sources about 20% to 30% of total deal amounts.

 


The priciest images in his portfolio are those used to train AI systems that block content like graphic violence barred by the tech companies, said the supplier, who spoke on condition his company wasn’t identified, citing commercial sensitivity.

 


To fulfill those requests, he obtains images of crime scenes, conflict violence and surgeries – mainly from police, freelance photojournalists and medical students, respectively – often in places in South America and Africa where distributing graphic images is more common, he said.

 


He said he has received images from freelance photographers in Gaza since the start of the war there in October, plus some from Israel at the outset of hostilities.

 


His company hires nurses accustomed to seeing violent injuries to anonymize and annotate the images, which are disturbing to untrained eyes, he added.

 


‘I WOULD FIND IT RISKY’

 


While licensing could resolve some legal and ethical issues, resurrecting the archives of old internet names like Photobucket as fuel for the latest AI models raises others, particularly around user privacy, according to many of the industry players interviewed.

 


AI systems have been caught regurgitating exact copies of their training data, spitting out, for example, the Getty Images watermark, verbatim paragraphs of New York Times articles and images of real people. That means a person’s private photos or intimate thoughts posted decades ago could potentially wind up in generative AI outputs without notice or explicit consent.

 


Photobucket CEO Leonard says he is on solid legal ground, citing an update to the company’s terms of service in October that grants it the “unrestricted right” to sell any uploaded content for the purpose of training AI systems. He sees licensing data as an alternative to selling ads.

 


“We need to pay our bills, and this could give us the ability to continue to support free accounts,” he said.

 


Defined.ai’s Braga said she avoids acquiring content from “platform” companies like Photobucket and prefers to source social media photos from influencers who create them, who she said have a clearer claim to licensing rights.

 


“I would find it very risky,” Braga said of platform content. “If there’s some AI that generates something that resembles a picture of someone who never approved that, that’s a problem.”

 


Photobucket is not alone among platforms in embracing licensing. Tumblr’s parent company Automattic said last month it was sharing content with “select AI companies.” In February, Reuters reported Reddit struck a deal with Google to make its content available for training the latter’s AI models.

 


Ahead of its initial public offering in March, Reddit disclosed that its data-licensing business is the subject of a U.S. Federal Trade Commission inquiry and acknowledged it could fall foul of evolving privacy and intellectual-property regulations.

 


The FTC, which warned businesses in February against retroactively changing terms of service for AI usage, declined to comment on the Reddit inquiry or say whether it was looking into other training data deals.




Why maintaining ASML equipment is a new front in the US-China chip war

Semiconductors (Photo: Bloomberg)


The US government has called on allies to force computer chip manufacturing equipment companies to stop maintaining some of the tools they have sold in China, part of Washington’s efforts to undermine China’s ability to produce its own advanced computer chips.

 


As the largest maker of chipmaking equipment globally, ASML (Advanced Semiconductor Materials Lithography) of the Netherlands is in focus.

 


Here are questions and answers about the scope of the issue and what is at stake for the US, Netherlands, ASML, and Chinese chipmakers.

 


WHY DOES ASML EQUIPMENT MATTER?


ASML dominates the market for lithography tools – huge, expensive, complex machines that perform one step in the chipmaking process, helping to create circuitry.

 


WHY WOULD THE US NOT WANT ASML TO MAINTAIN EQUIPMENT IT HAS ALREADY SOLD?


To stop a targeted Chinese chipmaking plant or “fab” from being able to operate. ASML machines are somewhere between difficult and impossible to replace. If an owner is denied spare parts and maintenance, at some point the machine would stop working and the fab would be unable to produce chips.

 


WOULD THE DUTCH GOVERNMENT DENY MAINTENANCE LICENCES?

The Dutch government, a close US ally, has not ruled out denying export licences – which cover both exports and maintenance – to ASML in some cases where it sees a security risk. However, it has no plans for a blanket ban.

 


Dutch export licensing policy is not aimed at China, an important trading partner, and the Dutch government does not want to harm ASML, its biggest company.

 


In addition, much of the equipment ASML sells in China is used in less advanced chipmaking processes and does not require a Dutch licence to export or maintain.

 


HOW MUCH ASML EQUIPMENT IS IN CHINA?


Lots. ASML sold more than 6 billion euros ($6.5 billion) worth of equipment to Chinese customers last year alone.


The company does not disclose how much falls into categories that require a licence – in industry terms, the “immersion” segment of DUV lithography tools. That’s ASML’s upper middle range. Its best tools are not sold in China.

 


WHAT DOES ASML SAY?


ASML has export licences in place to service the majority of its Chinese customers until Dec. 31 this year, the company said in an emailed answer to Reuters’ questions last month.

 


WHAT MIGHT THE IMPACT BE ON CHINESE CHIPMAKERS?

This is uncertain. Take the fictional example of a Chinese chipmaker that has bought one of ASML’s best immersion lithography DUV tools for $60 million, but does not have its licence renewed.

 


US expert Paul Triolo, who notes that Chinese companies have generally found ways around US-led restrictions, described a process of decline over weeks or months.

 


Without software updates, the system would not operate optimally. But Chinese engineers know how to operate their machines, and could be rehired if they are unable to work for ASML.

 


Some of the thousands of parts in an ASML tool are replaceable or repairable if they break down. However, when it comes to highly specialized lenses and lasers, there are no known alternatives.

 


Some parts could arguably be cannibalized from existing ASML machines. Most of the machines ASML has ever made are still in service – but nobody has ever tried running an advanced machine without the company’s help.

 


WHAT ARE THE CONSEQUENCES FOR ASML?


Likely minor, at least at first. China was ASML’s second-largest market in 2023 after Taiwan, with 29 per cent of sales, slightly ahead of South Korea. About 20 per cent of ASML’s total revenue comes from servicing installed machines.

 


While ASML dominates its market, it faces competition at the lower end of its product range from Nikon and Canon of Japan, and from domestic Chinese firm SMEE.

 


Chinese chipmakers will be highly motivated over time to develop alternatives to using ASML gear.

First Published: Apr 05 2024 | 2:54 PM IST


