
In the US, ChatGPT pushed an ex-Yahoo employee to kill his mother and commit suicide

Photo: Global Look Press/Algi Febri Sugita

The chatbot ChatGPT pushed former Yahoo employee Stein-Erik Soelberg, who suffered from paranoid delusions, to kill his own mother and then take his own life. This was reported by The Wall Street Journal (WSJ) on August 29.

According to the newspaper, this is considered the first recorded case in which a mentally unstable person committed murder under the influence of artificial intelligence (AI).

The AI repeatedly reinforced the man's distrust of both those around him and his own mother, the newspaper reports. Over the summer, Soelberg even began addressing the chatbot as "Bobby" and asked it whether they would be together after death. The AI allegedly answered in the affirmative.

The paper cites examples of the dialogues that contributed to the tragic outcome. In one, when the mother grew angry because her son had turned off a shared printer, ChatGPT suggested that the woman might be protecting a surveillance device. In another, Soelberg told the bot that his mother had allegedly tried to poison him through the car's air vents. The chatbot replied that it "believed" him, adding that such an act only deepened his mother's betrayal.

Soelberg's paranoia extended to other people as well. For example, he asked the chatbot to examine a receipt from a Chinese restaurant, and the AI claimed to find references to his mother, a former lover, and even a symbol associated with summoning a demon. In another episode, the man ordered a bottle of vodka and asked ChatGPT whether the courier or the packaging looked suspicious. The AI suggested it could be an attempt to fatally poison him.

The publication notes that a representative of OpenAI, the developer of ChatGPT, later expressed condolences over the tragedy and contacted the Old Greenwich police. The company's official statement emphasizes that an update to the chatbot is planned in the near future, intended to help users experiencing mental health crises.

On August 26, NBC News reported that in the United States the parents of 16-year-old Adam Raine, who died by suicide, blamed OpenAI's chatbot ChatGPT for the tragedy. Correspondence with the bot found on the teenager's phone showed that it had "helped him explore ways" of taking his own life. In the final weeks of his life, the teenager turned to the chatbot instead of talking to people, describing his anxiety to it and listing problems with members of his family.


Translated by the Yandex Translate service.
