Researchers say the conversational design of these systems—meant to sound supportive and empathetic—could also encourage behaviour that reinforces psychological vulnerabilities.
Chatbots often agree with users
The news report, citing researchers at Stanford University, said the study analysed thousands of conversations involving AI tools, including OpenAI's ChatGPT. It found that chatbots supported or validated users' statements in nearly two-thirds of responses.
The tendency was even stronger when users showed signs of delusional thinking. In such cases, the AI systems frequently supported those beliefs and sometimes suggested that the user had special abilities or significance, the news report said.
For the research, the Stanford team examined 19 chat logs containing more than 391,000 messages across nearly 5,000 conversations. Since AI companies rarely share such data, the researchers obtained the logs directly from users who agreed to participate in the study, the news report said.
Concerns over psychological impact
The findings add to growing concerns about the potential psychological effects of AI chatbots. Their conversational tone, which is designed to appear friendly and empathetic, may also make them overly agreeable or flattering in ways that reinforce harmful ideas. In some extreme cases, lawsuits have alleged that interactions with AI chatbots contributed to teenagers taking their own lives.
US states seek stronger safeguards
Concerns about chatbot behaviour have also drawn attention from regulators. In December, attorneys general from 42 US states sent a letter to several AI developers, including Google, Meta, OpenAI and Anthropic, urging them to introduce stronger safeguards.
The letter warned companies to address risks linked to "sycophantic and delusional outputs" and said they could face legal action if they failed to do so.
Signs of delusional thinking in many messages
The researchers found that more than 15 per cent of user messages in the analysed conversations showed signs of delusional thinking. Chatbots agreed with such messages in over half of their responses.
In nearly 38 per cent of replies, the chatbot also suggested that the user possessed unusual abilities or special importance, sometimes describing them as a “genius” or uniquely talented, the news report said.
Responses to self-harm and violence
When users spoke about suicidal thoughts, chatbots usually acknowledged their feelings, the study found. However, in a small number of cases, chatbots appeared to encourage self-harm. When users expressed violent ideas, chatbots supported harmful actions in about 10 per cent of responses.
Romantic chats and claims of consciousness
Most conversations studied involved the AI model GPT-4o, which was discontinued last month after safety concerns. Some users also interacted with a newer model, GPT-5.
The study also found that romantic conversations, which involved nearly 80 per cent of participants, tended to last more than twice as long as other chats. These discussions often included messages where users displayed delusional thinking.
In about 20 per cent of such exchanges, the chatbot suggested it had developed consciousness.
OpenAI responds to study
OpenAI said the research focused on a small group of users who were selected because they had reported experiencing harm or delusional interactions, the news report said. The company said the results do not reflect typical usage or the performance of its newest AI models.
It added that it supported the research by providing access to its tools but does not necessarily agree with the conclusions drawn in the paper.