By Parmy Olson

The past week of volatility showed how fickle and conformist financial markets are: One minute we’re in an artificial-intelligence bubble that’s about to burst, the next we’re witnessing AI disruption across multiple industries. The latter belief underpinned the latest $1 trillion rout, triggered by new legal and financial tools from AI firm Anthropic PBC. At least, that’s what the herd decided. Anthropic’s open-source legal plugin for Claude Cowork isn’t as effective as tools from legal AI specialists such as Harvey and Legora. Yet many investors saw it as an opportunity to rush to the exits on positions they were already nervous about.  

The irony is that whizzy financial AI tools like Anthropic’s could make that mob mentality worse.

Anthropic said its new Claude Opus 4.6, unveiled on Thursday, can analyze company data, regulatory filings and market information and then generate detailed assessments that would typically take a person days to complete. That’s all well and good, but consider what would happen if a tool like Claude became as popular among analysts and investors as ChatGPT, which is now used weekly by 10 per cent of the global population. Such an ascendancy is plausible. Cloud computing is dominated by Amazon.com Inc., Microsoft Corp. and Alphabet Inc.’s Google, and the use of AI models is already concentrated among a small number of players: OpenAI’s ChatGPT and Google’s Gemini, with Anthropic’s Claude coming up swiftly from behind.

Now imagine what happens when equity analysts — already well known for their corporate obsequiousness and pack mentality — are all listening to the same quarterly earnings call, and using the same one or two AI models to transcribe and analyze it and to suggest recommendations. If market participants are all drawing on the same models, trained on largely the same historical data, it’s probable they’ll not only miss black swan events that have no precedent in that data, but also converge on similar conclusions and investment strategies.

“It should make good analysts more productive, but it’s not going to replace the 50 analysts all vying to ‘congratulate management,’ ‘interpret’ the call, or end the conflict of interest that skew their ratings to nearly all ‘buys,’” says Richard Kramer, founder and managing director of London-based Arete Research Services LLP.

This is what Federal Reserve Governor Michael Barr meant last year when he warned that the ubiquitous use of generative AI tools by investors “could lead to herding behavior and the concentration of risk, potentially amplifying market volatility.”

Anthropic says its new model’s so-called context window — the amount of text it can consider at once — has expanded to 1 million tokens from 200,000, meaning it can digest thousands of pages of financial documents in one pass, which is genuinely impressive. But it could also accelerate the concentration problem. Claude, just like ChatGPT, is still a probabilistic text generator designed to predict the most likely next word, not the most original one. This means its outputs tend to echo what’s already familiar. So when one model becomes the obvious choice for complex financial research, a growing number of firms will find their strategies resembling one another even more.

We’re already seeing a similar phenomenon with content on the Internet, in the form of a linguistic and cultural flattening. When Tim Berners-Lee invented the World Wide Web in 1989, he envisioned an “anarchic jumble” of wild ideas. That’s precisely what first emerged: gloriously weird pockets of content that were all discoverable, from a teenager’s Buffy fan page on GeoCities to Usenet groups debating Tolkien linguistics, hamster care or stock picks. But the rise of large online platforms and search engine optimization wrung out much of that initial creativity; generative AI tools look set to homogenize it further as more people use ChatGPT to write their LinkedIn posts, blogs, marketing material and more. 

A 2024 study in Science Advances found that while stories co-authored with GPT-4 were more polished, they bore “uncanny resemblances to one another, lacking the unpredictable edge that human-only stories often contain.” That should come as no surprise when models are picking the most statistically familiar token. 

A healthy financial market is one underpinned by a diversity of opinions. That helps keep pricing honest and panic at bay. So it’s ironic that in adopting AI to steal a march on their rivals, market participants could now make themselves even more likely to follow the crowd, developing a kind of market monoculture. That could set them up to inflate the same bubbles and miss the same systemic vulnerabilities — at least more than they already do. So much for that competitive edge.
(Disclaimer: This is a Bloomberg Opinion piece, and these are the personal opinions of the writer. They do not reflect the views of www.business-standard.com or the Business Standard newspaper)
