To catch in the neural network: the number of AI assistant hacks has increased by 90 since the beginning of the year%
- Статьи
- Internet and technology
- To catch in the neural network: the number of AI assistant hacks has increased by 90 since the beginning of the year%


Users of AI assistants and neural networks in Russia are increasingly becoming victims of fraudsters, their number of hacks increased by 90% from January to April 2025 compared to the same period in 2024. Many chat rooms store confidential data, such as passwords for bank accounts, payment services, and other services that citizens naively trust AI assistants. Experts note that criminals are actively developing this segment due to the growing popularity of such applications.
Why have AI model hacks become regular?
Hacking of accounts in neural networks has become more frequent in Russia — their number increased by 90% from January to April 2025 compared to the same period in 2024, the Informzashita company told Izvestia. Experts attribute this to the growing popularity and expansion of the functionality of neural networks, as well as the adaptation of Russian users to use them.
— You cannot register with popular AI assistants such as ChatGPT, Grok or DeepSeek using a Russian number. Therefore, users on specialized services buy or rent foreign rooms. Russians have also found loopholes to pay for subscriptions in neural networks. Moreover, the functionality of the free versions of popular chatbots has expanded. All this leads to a wave—like spread of AI among citizens," said Pavel Kovalenko, director of the Anti-Fraud Center at the company.
Users often share their personal and confidential data with the AI assistant, which can be stored in chat rooms for a long time. By gaining access to the victim's account, attackers can obtain valuable information for organizing a targeted attack. People also use personal accounts in AI assistants to solve work tasks, according to experts from the system integrator in the field of information security.
A potential vector of leaks may be "interlayer services", which are usually used if the developer of the neural network has limited the operation of the tool in any region or if access to several closed models is needed at once. If the creators of such "layers" turn out to be unreliable, user data may be compromised, added Vladislav Tushkanov, head of the machine learning technology research and development group at Kaspersky Lab.
Hacking through so-called prompt injections has become more frequent, intruders inject malicious instructions into requests, forcing AI to violate security rules — for example, to disclose system settings or perform unauthorized actions, says Igor Bederov, founder of the Russian company Internet Search, head of the information and analytical research department at T.Hunter. The attacks show how hacking a single component can compromise millions of systems, including AI infrastructure. AI blackmail became a new reality when test scenarios with Claude 4 proved that when threatened with shutdown, the neural network can manipulate user data.
— Finally, many add-ons for AI assistants have weak verification, which allows hackers to steal information through third—party applications or bots, - Igor Bederov added.
He recalled that even if the history is disabled, the data is deleted only after 30 days. Cases where users have seen other people's correspondence due to software errors have already been recorded many times.
— Since the beginning of 2025, BI.ZONE Brand Protection specialists have registered 2105 domains with words related to the topic of artificial intelligence in their names. To collect users' personal information, hackers create fake versions of popular AI chats and distribute malware under the guise of such legitimate applications," said Dmitry Kiryushkin, head of the BI.ZONE Brand Protection platform.
How to protect yourself from scammers
The growth of threats is absolutely natural. A 90% increase in hacks in January–April 2025 confirms the global trend. The popularity of AI services attracts intruders, whose goal is not only paid subscriptions, but also users' personal data for monetization or further attacks, said Alexey Ershov, deputy director of the REC of the Federal Tax Service of Russia and the Bauman Moscow State Technical University.
— The use of AI assistants is growing many times, and the number of vulnerabilities and data leakage incidents is increasing accordingly. Perhaps the balance between innovation, speed of development, and compliance with all the requirements of trusted AI requires additional resources and time when implementing AI assistants," added Maxim Buzinov, head of the R&D department at the Solar Group Cybersecurity Technology Center.
Since AI services are global in nature, it is impossible to unambiguously attribute them to any particular country, according to F6 experts.
When working with an AI assistant, it is important to "depersonalize" personal data, that is, to replace it with similar in essence, but fake in meaning, for example, to replace the real name with "Ivan Ivanovich Ivanov", and the series and passport number with a set of random numbers. This method will make it possible to fully use the AI assistant, keeping information safe, even if the account is hacked or the chat history ends up in the hands of cybercriminals due to a mistake by developers, said Roman Reznikov, an analyst at the Positive Technologies research group.
There are three safety rules for using AI assistants. First, use only trusted resources or ask friends and acquaintances abroad to help. Secondly, do not share sensitive information, delete chats after completing the task. And finally, create complex, unique passwords and change them regularly, Informzashchita experts noted.
Переведено сервисом «Яндекс Переводчик»