
More than 10 million Russians may have their personal data exposed through fake versions of foreign neural networks such as ChatGPT and Midjourney, experts warn. Izvestia looked into how dangerous such resources are, how to tell them apart from the originals, and how to stay safe when working with neural networks.

Risks for users

Millions of Russians risk exposing their personal data by using fake versions of foreign neural networks such as ChatGPT and Midjourney, said Sergey Ponomarenko, director of LLM products at MTS AI.

According to him, users who access ChatGPT, Dall-E, Midjourney and other large language models through third-party services may be handed a different, openly available model instead of the original and end up disclosing sensitive data. "The company believes there are more than 10 million such people in Russia," Ponomarenko noted.

Photo: Global Look Press/Frank Rumpenhorst

The expert explained that payment difficulties now make it hard for Russian users to access international services. Fraudsters exploit this by creating fake sites and chatbots that promise people access to the system, but in fact offer a stripped-down version of the product that is inferior in quality and unsafe to use.

Ponomarenko reminded that when working with neural networks, one must not feed them personal data or any other valuable information, including corporate information (use only services approved by management and deployed within the organization's perimeter). In addition, all information obtained from them must be double-checked.

Fake neural networks

As Noah Torosyan, an information security consultant at R-Vision, explains to Izvestia, fake neural networks are unauthorized versions of official models that may be low-quality or outright dangerous. Their functionality, level of security and accuracy differ significantly from those of the authentic models.

Such neural networks can take many forms, including chatbots, websites and standalone applications that look legitimate but in fact serve the user unoriginal content. They should therefore be used with caution.

"The main difference between fake neural networks and real ones is their low effectiveness and high privacy risk. Original models are trained on extensive, high-quality datasets and achieve high accuracy, while fake versions may be built on freely available, less powerful algorithms, which limits their capabilities," Torosyan says.

Photo: Izvestia/Pavel Volkov

Hackers usually create such neural networks for their own ends, for example to steal user data or mount targeted attacks, adds Ksenia Akhrameeva, head of the laboratory for developing and promoting cybersecurity competencies at Gazinformservice. They typically disguise their products as well-known neural networks that are unavailable to Russians, so people trying to use them end up on dubious resources.

How to distinguish a fake neural network

There are many neural networks in the world today, but Izvestia's experts name four as the most popular. These are the ones fraudsters try to fake:

  • ChatGPT: a language model designed for text generation and communication, widely used in writing articles, posts, programming and more.
  • Dall-E: an image generator based on text descriptions, heavily used in design and art.
  • Midjourney: another powerful tool for creating visual content, also widely used by artists and designers to create unique illustrations.
  • Suno: a neural network designed to work with audio and text. Generates musical compositions based on the user's request.

Fake neural networks can be distinguished from original ones by several key features. The first, according to Noah Torosyan, is the URL: the websites of well-known services have unique, easily recognizable domain names, while fakes may use similar names with extra characters added or letters rearranged.
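The lookalike-domain trick the expert describes can be checked mechanically. Below is a minimal sketch in Python that compares a URL's hostname against a small allowlist using edit distance; the domain list here is illustrative only (verify the real official domains on the vendors' own sites), and the distance threshold of 2 is an assumption, not a standard.

```python
from urllib.parse import urlparse

# Illustrative allowlist for this example only; it is NOT an authoritative
# registry of official domains. Always verify against the vendor's own site.
KNOWN_DOMAINS = {"openai.com", "chatgpt.com", "midjourney.com", "suno.com"}

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def check_domain(url: str) -> str:
    """Flag hostnames that are close to, but not equal to, a known domain."""
    host = (urlparse(url).hostname or "").lower()
    host = host.removeprefix("www.")  # treat www.openai.com as openai.com
    if host in KNOWN_DOMAINS:
        return "official"
    for good in KNOWN_DOMAINS:
        # A distance of 1-2 catches typosquats like a swapped or added letter.
        if levenshtein(host, good) <= 2:
            return f"suspicious lookalike of {good}"
    return "unknown"
```

For example, `check_domain("https://chatqpt.com")` (with a "q" in place of the "g") is one edit away from `chatgpt.com` and would be flagged as a suspicious lookalike. A real checker would also need to handle Unicode homoglyphs and subdomain tricks, which this sketch ignores.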

"The second point is the reputation of the service. Study user reviews: most well-known neural networks have positive mentions on forums and social networks," the expert says.

Photo: Izvestia/Anna Selina

The third sign of a fake, he says, is content quality: original models offer a high degree of accuracy, while imitations may give absurd or irrelevant answers. Finally, pay attention to what the service asks of you: if a site or application demands more personal information than it needs to function, be careful.

Methods of protection

The main threat associated with the use of fake neural networks is the loss of personal data, says Noah Torosyan. According to him, fraudsters can use the collected data for the following operations:

  • Identity theft. Gaining access to personal data allows attackers to commit crimes on behalf of the victim.
  • Financial fraud. Personal data can be used to hack into bank accounts or conduct unauthorized transactions.
  • Phishing. Users may receive emails or messages asking them to verify their data, putting sensitive information directly into the hands of fraudsters.

"In addition, fake neural networks can spread malware that damages the user's device or intercepts their online activity," the Izvestia interlocutor says.

Photo: IZVESTIA/Sergey Lantyukhov

To protect yourself from fraudsters, he recommends not disclosing confidential information, that is, anything that could be used against a person or a company. Before using a new neural network, make sure it is reliable and reputable.

"Also be wary of paid services. If you are offered paid access to an 'improved' version of a neural network, make sure it is really an official service provided by the developer," Noah Torosyan notes.

In turn, Ksenia Akhrameeva advises Russians to use only neural networks authorized in Russia or products from trusted developers and large companies.

"To work with a neural network, it is better to use a separate browser account so that the network cannot access your browsing history. Read the privacy policy carefully and restrict access to your data as much as possible. Keep queries narrow: one specific task, one specific answer. Do not give the neural network unnecessary information," she says.

And finally, always double-check the information you receive when working with neural networks. Remember that a model does not update its knowledge daily and may return outdated information, concludes Valery Kovalsky, CEO of NDT by red_mad_robot.

Translated by the Yandex Translate service
