A burdensome case: frequent ChatGPT outages can lead to personal data breaches
Large-scale failures of OpenAI products may lead to leaks of user data, experts believe. On the evening of December 26, the company that created ChatGPT suffered a mass outage of all its services, also known as a blackout. Fixing the problems took more than five hours, and this was the third such incident in December. In total, 11 cases of information leaks have been confirmed over the past two years. Izvestia looks into why personal data is vulnerable to hackers and how it can be compromised.
ChatGPT failures
Users risk losing their personal data in another OpenAI outage. All of the company's services, including the ChatGPT chatbot, the API, the Sora video-generation neural network and others, went down on the evening of December 26. The actual cause of the failure has not been determined, but it may result in leaks of user data and reputational problems for businesses, Igor Bederov, head of the investigations department at T.Hunter, told Izvestia.
"The consequences may include loss of access to data and documents and failures of network and other equipment, which in turn means disrupted deliveries, conflicts with customers and reputational costs for businesses. On top of that, user data can be stolen during such outages by hacker groups for subsequent resale," he added.
According to the official notification service OpenAI Status, this outage was the third major blackout in December 2024. The previous two occurred on December 4 and 11 and lasted more than 5.5 hours in total; the December 26 outage lasted five hours and four minutes. The full extent of the problem has not yet been established: on Reddit, users complain that part of their request history and previously created templates disappeared and have not been restored.
Earlier, in February of the same year, OpenAI services suffered a large-scale data leak: users' personal information, logins, passwords and correspondence were stolen. Experts believe this scenario could repeat itself, although the service itself does not comment on the cause of the failure, attributing it to an internal Internet service provider (ISP). How critical such a failure is for a business depends on how deeply ChatGPT is integrated into the processes of the particular company using it, says Andrei Biryukov, vice president for research, development and services at InfoWatch Group of Companies.
"When working with a neural network, the user sends it their data in the form of questions, text and documents. We do not know what happens to this data. For example, the neural network can be trained on a commercial document provided by a user and later use fragments of that document when handling other requests," the InfoWatch expert said, pointing out the risks for business.
In 2022-2023 alone, ten cases of leaks were recorded, and in October 2024, hackers took advantage of one of the blackouts and extracted more than 225,000 sets of private data from user accounts. All of them were offered for sale on the darknet, with the LummaC2 malware a popular tool for the theft, said Sergey Pomortsev, an IT expert at GG Tech.
How to protect yourself from information leaks
The most likely result of such breakdowns is cybersecurity risk: data leaks and other hacker activity, said TelecomDaily CEO Denis Kuskov. According to him, alternatives to OpenAI products operate in Russia; among the largest are YandexGPT from Yandex, MTS AI GPT from MTS and GigaChat from Sber. They are not inferior to their Western counterparts and do not require regular downtime for preventive and technical maintenance.
"OpenAI products are not available from Russia, so they are mostly used through special proxy services, for example in messengers. Chatbots should be treated like strangers on the Internet: do not upload confidential information or personal data, whether your own or other people's," said Vladislav Tushkanov, head of the machine learning technology research and development group at Kaspersky Lab.
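As a minimal illustration of that advice, the Python sketch below masks obvious personal data (email addresses, phone numbers, card-like digit runs) in a prompt before it would be sent to any third-party chatbot. The patterns and the redact helper are hypothetical examples for this article, not a production data-loss-prevention tool.

```python
import re

# Illustrative sketch only: mask obvious personal data in a prompt
# before it leaves your machine for a third-party chatbot service.
# Real deployments rely on dedicated DLP tooling, not a few regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),   # 13-16 digits
    "PHONE": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}

def redact(prompt: str) -> str:
    """Replace likely personal data with placeholder tags."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Contact Ivan at ivan.petrov@example.com or +7 912 345-67-89."
    print(redact(raw))
    # Contact Ivan at [EMAIL REDACTED] or [PHONE REDACTED].
```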
According to Alexei Gorelkin, CEO of Phishman, attackers have long been using various AI-based tools for their own purposes, but at this stage those tools have limited applicability. Models can find correlations, but natural intelligence is still far more effective in matters of cybercrime and information security. At the same time, a neural network can produce the text of a phishing email in different languages much faster and with more variety than a human.
"The functionality of OpenAI products can simplify the creation of phishing emails and other types of malware. ChatGPT can be used to write parts of the code for such programs," said Igor Bederov.
Cybercriminals are already actively using the capabilities of artificial intelligence to automate attacks and increase their number. F.A.C.C.T. experts predict that the role of neural networks in such attacks will grow, including the creation of more convincing deepfakes with the help of ChatGPT and its analogs, the automation of phishing, and improved methods of searching for vulnerabilities in systems and applications. However, AI cannot yet plan and execute a successful attack on its own, said Oleg Skulkin, head of BI.ZONE Threat Intelligence.
The experts interviewed agree that tighter control over chatbot infrastructure is needed, since the data contained in user requests can cause serious damage if it falls into the hands of fraudsters, especially when ChatGPT is used on behalf of a business and requests are built on trade-secret information. That was already the case in April 2023, when Samsung engineers "leaked" confidential data to a chatbot, including notes from internal meetings and data related to the company's production and profitability, they concluded.