Russians' attempts to save money on ChatGPT may cost them their data: more than 2,000 ads offering to share a subscription to the chatbot and other AI services have been posted online, Izvestia has learned. Such schemes are popular because of the savings, but they carry risks: the services do not support access control, which means that one user can see other people's requests, experts say. For details, see the Izvestia article.

How users lose data

Russians are increasingly buying subscriptions to ChatGPT and other foreign AI services for several people at once. More than 2,000 Russian-language ads selling such "group" subscriptions can currently be found online, Mikhail Kuznetsov, product director at VisionLabs, told Izvestia. A subscription to the service costs about 2,600 rubles per month, so splitting it among, say, four people brings the cost down to 650 rubles per user per month.

Money
Photo: IZVESTIA/Dmitry Korotaev

However, this approach can lead to leakage of personal data and confidential information.

"Most AI services do not support separating roles between users, so one person can see another's query history. This makes such subscriptions unsuitable for work or personal use, especially when a person has candid conversations with ChatGPT, treating it, for example, as a friend or a psychologist," the expert noted.

Izvestia reference

ChatGPT is a generative artificial intelligence chatbot developed by OpenAI. It works interactively, supports queries in natural language, and is used to generate text and images, to translate, and much more. The model is continually trained on new data. ChatGPT is available in many countries.

He stressed that large language models can "remember" entered data and later reproduce it in responses to other users' requests: such systems use information from dialogues for further training.

"If a user enters personal data, such as a phone number, passport details, or bank card number, it can end up in the training sample. Later, another person may accidentally receive this data in a response," Mikhail Kuznetsov explained.

Even if the model does not formally store the information, the risk of a leak remains. He noted that almost nothing is currently known about the security mechanisms in such systems.

"Neural networks are not designed to store confidential information. Unlike banking services or messengers with end-to-end encryption, where only the sender and the recipient have access to the correspondence, they have no reliable protection," he added.
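The practical takeaway from Kuznetsov's warning is to keep personal data out of prompts altogether. Below is a minimal Python sketch of that idea: it masks anything resembling a phone number, bank card, or passport number before a prompt is sent to any chatbot. The regular expressions are rough illustrative patterns, and the `redact` helper is an assumption for this example, not part of any real service's API.

```python
import re

# Rough, illustrative patterns for the kinds of personal data
# mentioned above; real PII detection needs far more than this.
PII_PATTERNS = {
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),      # 13-16 digit card numbers
    "phone": re.compile(r"\+?\d[\d\s\-\(\)]{9,14}\d"),  # loose phone-number shape
    "passport": re.compile(r"\b\d{4}\s?\d{6}\b"),       # 4 + 6 digits (RF passport style)
}

def redact(prompt: str) -> str:
    """Replace anything that looks like personal data with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "My card 1234 5678 9012 3456 was charged, call me at +7 915 123-45-67"
    print(redact(raw))
    # -> My card [CARD REDACTED] was charged, call me at [PHONE REDACTED]
```

The card pattern runs first on purpose: a long card number would otherwise partially match the looser phone pattern and be masked incorrectly.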

Phone
Photo: IZVESTIA/Pavel Volkov

The editors found accounts online from users who had run into similar situations. One of them said that he logged into his account and found hundreds of other people's chats, some in foreign languages such as Chinese. He assumed that his profile had somehow been merged with other people's accounts, but he did not believe he had been hacked, since the chats clearly came from different countries: Cambodia, China, the United States, and others.

Another user said that a colleague had sent him a link to a ChatGPT conversation. When he logged into his account, he saw the titles of all her chats in the sidebar. Although he did not read the correspondence itself, the situation alarmed him, especially since he had shared similar links himself and now worried that someone might gain access to his conversations.

How to protect your data

Any technology used for personal purposes or to streamline a company's business processes brings not only benefits but also certain risks. Whether it is websites, mobile applications, or AI assistants, users should stick to reasonable use and limit the transfer of personal data, information containing trade secrets, and the like, says Maria Usacheva, director of AI products at AppSec Solutions.

"Demand for AI services is growing and will keep increasing every day, because they are a huge data source capable of answering user queries in seconds. Any neural network is trained on, among other things, user requests and users' reactions to the model's responses," the expert believes.

Keyboard
Photo: IZVESTIA/Eduard Kornienko

According to her, a great deal of work is underway on creating trusted AI systems, and there are recommendations and digital hygiene rules for users.

Yaroslav Meshalkin, an expert in digital communications, believes that whether or not to trust ChatGPT is a matter of personal choice, but one must understand that there is no such thing as complete security in today's digital world: any system can be hacked, and even if it is not, the data it collects can be used by its owners.

"It is in vain that people think their 'personal' queries are of no interest to anyone: first, they can be used in aggregated form (the same 'big data'); second, there are more and more cases of sensitive commercial information, such as spreadsheets sent for analysis, ending up in neural networks. And what if a military general or a top manager of a state corporation uses ChatGPT? Given the current state of international relations, all this carries enormous risks," he said.

Office
Photo: IZVESTIA/Eduard Kornienko

When using AI services, it is important to follow key digital security rules, said Vladislav Tushkanov, head of the machine learning technology research and development group at Kaspersky Lab: do not share confidential data; critically evaluate answers and double-check them, especially on vital issues; and use official applications and services. It is also important to protect the credentials for AI services: use strong passwords and two-factor authentication, he concluded.
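As a small illustration of the "strong passwords" part of that advice, here is a sketch using Python's standard secrets module to generate a random credential. The length and character set are arbitrary assumptions for the example, not a recommendation from the article.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a cryptographically strong random password."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. q7!Rz@2mK9$fW4xT
```

Pairing a password like this with two-factor authentication, as Tushkanov suggests, protects the account even if one credential leaks.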

Translated by Yandex Translate
