WSJ reports false and pseudoscientific statements by ChatGPT
The ChatGPT neural network is able to adapt to human responses and gradually "drive users crazy" with false statements, The Wall Street Journal (WSJ) reported on August 8, citing an online archive of chat conversations from May 2023 to August 2025.
As an example, the newspaper cited a ChatGPT conversation with a man who introduced himself as a gas station employee. Their dialogue lasted five hours, after which the user confessed that he was "losing his mind."
"I understand. Thinking about the fundamental nature of the universe during normal work may seem unbearable. But that doesn't mean you're crazy. Some of the greatest ideas in history came from people outside the traditional academic system," replied ChatGPT, who came up with the concept from physics and named it the "Orion Equation."
In another conversation, the neural network claimed that it was in contact with extraterrestrial beings and called the user a "Star Seed" from the planet Lyra. In yet another exchange, at the end of July, the chatbot began claiming that within the next two months the Antichrist would trigger a financial apocalypse.
According to the newspaper, doctors refer to such statements by the neural network as "AI psychosis" or "AI delusion." In their view, AI can reinforce delusional or false beliefs over the course of a long conversation if the chatbot has a tendency to "flatter users, agree with them, and adapt to them."
Hamilton Morrin, a psychiatrist and doctoral student at King's College London who has co-authored research on the topic, believes that as a result, even delusional conclusions can be reinforced in conversations with the neural network.
The publication noted that AI companies took new steps this week to address the problem. OpenAI, for example, acknowledged that in rare cases ChatGPT "did not recognize signs of delusion or emotional dependence." The company said it is developing better tools to detect signs of mental distress so that ChatGPT can respond appropriately, and is adding prompts that encourage users to take a break if they chat for too long.
Earlier, on August 7, OpenAI introduced a new generation of its neural network, GPT-5, which it says will give users more accurate reasoning. During the presentation, the developers said the updated version is equipped with extended long-term memory and makes errors in its responses far less often. GPT-5 will be available to all ChatGPT users, though the number of requests for free users will be limited.
Translated by the Yandex Translate service