
Cybersecurity experts have recorded a new stage in the evolution of data-stealing malware: cybercriminals are now targeting the configurations of personal AI assistants. These assistants have become a kind of aggregator through which users work with many popular services and perform a wide range of tasks, so access to an assistant's configuration or environment can hand hackers an extensive set of confidential data. Izvestia looks at what is known about this new trend in cybercrime, why attacks on AI assistants are dangerous, and how to protect yourself from such threats.

What is known about hackers' hunt for AI assistant profiles

That cybercriminals have begun targeting the configurations of personal AI assistants was reported by specialists at the Israeli information security company Hudson Rock. They recorded an incident in which an unidentified infostealer exfiltrated the working environment of the OpenClaw personal AI assistant along with key service files and access tokens.

The attacker used the Vidar malware, which hackers have deployed since 2018. The stealer has no dedicated module for OpenClaw; the attack relied on its standard mass file-search mechanism, which scans the directories and file extensions where sensitive data is usually stored.
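The "mass file search" described above can be illustrated, turned around defensively, as a self-audit: which files matching stealer-style patterns sit on your own machine? The patterns and the `~/.openclaw` directory below are assumptions for illustration, not documented Vidar or OpenClaw internals.

```python
from pathlib import Path

# Name patterns a generic infostealer might sweep for (illustrative only).
SENSITIVE_PATTERNS = ["*.env", "*token*", "*.json"]
# Directories to audit; ~/.openclaw is a hypothetical config location.
SCAN_DIRS = [Path.home() / ".config", Path.home() / ".openclaw"]

def find_exposed_files(dirs=SCAN_DIRS, patterns=SENSITIVE_PATTERNS):
    """Return files under `dirs` whose names match any sensitive pattern."""
    hits = []
    for base in dirs:
        if not base.exists():
            continue
        for pattern in patterns:
            hits.extend(p for p in base.rglob(pattern) if p.is_file())
    return hits

if __name__ == "__main__":
    for path in find_exposed_files():
        print(path)
```

Running such an audit shows how little effort a generic stealer needs: no OpenClaw-specific code, just filename globbing over well-known locations.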

Laptop. Photo: IZVESTIA/Eduard Kornienko

As a result, the hacker obtained three files with critical information, including one describing the AI assistant's basic operating rules, rules of behavior, and limitations. The stolen gateway token, in turn, would let an attacker either connect to the compromised OpenClaw instance from the outside if its port is exposed, or impersonate a legitimate client when accessing the AI gateway. Hudson Rock noted that cybercriminals are not yet analyzing such data in a targeted way, but the growing popularity of AI assistants is shifting hacker priorities.
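Why is a stolen static token so dangerous? A gateway that authenticates clients purely by the token value cannot tell the real client from an attacker replaying a stolen copy, as this minimal sketch shows (the token value and check are hypothetical, not OpenClaw's actual mechanism):

```python
import hmac

# Hypothetical secret the gateway stores and checks on every request.
STORED_TOKEN = "gw-s3cr3t-token"

def is_authorized(presented: str) -> bool:
    """Constant-time comparison of a presented token against the stored one.

    Note what is NOT checked: who is presenting the token. The legitimate
    client and a thief holding a stolen copy present the identical string,
    so the gateway accepts both.
    """
    return hmac.compare_digest(presented, STORED_TOKEN)
```

This is why rotating or revoking the token immediately after a suspected leak is the only real remedy: the credential itself carries the entire identity.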

Why are AI assistant profiles interesting to cybercriminals

According to Hudson Rock experts, malware developers will soon begin adding dedicated modules for decrypting and analyzing AI tool data to their programs, just as they process browser and messenger data today. As Polina Sokol, product manager of the ML technology development group at Solar Group, told Izvestia, the personal profiles of AI assistants have already become "golden keys" to victims' identities for hackers.

"People upload everything to assistants: work projects, documents, plans and details of family life," says the expert. "Stealing such a profile means getting a ready-made dossier for social engineering. The criminal copies the victim's communication style and hits the target precisely: on the victim's behalf, he can write to her relatives or enter the corporate infrastructure, bypassing its defenses."

Fraudster. Photo: IZVESTIA/Yulia Mayorova

However, as Vladislav Tushkanov, head of the machine learning technology research and development group at Kaspersky Lab, notes, theft of AI assistant profiles may pursue other goals as well, purely economic ones among them: free use of paid models, a threat researchers recently dubbed LLMjacking.

Large language models (LLMs) are expensive: subscriptions can cost hundreds of dollars, and an access key linked to a postpaid credit card can rack up thousands. Kaspersky experts earlier reported finding ads selling access to such accounts on hacker resources and the darknet.

"Finally, when using LLM for malicious purposes, attackers can use stolen keys and accounts without fear that their own accounts will be blocked by the service provider," notes Vladislav Tushkanov.

What AI assistant profile theft schemes to expect in 2026

Attacks on AI agent profiles in 2026 will move from the category of "exotic incidents" into standard malware functionality, says Artyom Goltsov, head of the promising areas department at R-Vision. First, developers of stealers (data-theft programs) are automating the search for directories holding the configurations of popular AI assistants. Where passwords stored in browsers used to be the target, malware will now purposefully hunt for files containing the agent's "memory", its history, and its API tokens.

"Second, the focus may shift from simple data theft to hijacking the agent itself," says Izvestia's source. "It will be more profitable for attackers not to steal data but to covertly modify the agent's system instructions. The assistant will keep working for the user while quietly performing the hacker's background tasks, for example substituting payment details in emails or passing confidential information to third parties."

AI. Photo: IZVESTIA/Yulia Grigorieva

In 2026, attackers will switch from stealing correspondence histories to hunting for users' active digital counterparts, Polina Sokol adds. AI agents are already emerging that can book hotels, buy goods, and answer emails on a person's behalf. A stolen profile thus turns not into a mere archive but into a ready-made "robot assistant" that opens the door to the thief. According to a Solar study, the volume of corporate information sent to public AI services grew 30-fold in 2025: people are getting used to trusting assistants with ever more sensitive data.

"Stolen AI assistant profiles will increasingly be used not for one-off theft but for automated espionage: an attacker will exploit the agent as a legitimate user, gradually downloading the data he needs while disguising himself as normal activity," says Nikita Novikov, a cybersecurity expert at Angara Security. "Today AI has firmly become part of the working environment, and therefore part of the attack surface."

How to protect yourself from hacker attacks on AI assistant profiles

The main mechanism of protection against the new threats is to isolate AI assistants in specially designated environments (so-called sandboxes), where they hold only the privileges they need, says Konstantin Gorbunov, a leading expert on network threats and web developer at the Security Code company.

"With this approach, attackers will not be able to reach the agent's configuration from the outside," the Izvestia source explains. "It is also worth remembering that personal data should stay personal: do not upload document or bank card details to an AI, much less delegate managing them to it."
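One small, concrete piece of the isolation idea can be sketched in code: launching the assistant process with a stripped-down environment so inherited secrets (API keys or gateway tokens sitting in environment variables) never reach it. Real sandboxing (containers, separate users, seccomp) goes much further; here `printenv` merely stands in for a hypothetical agent binary on a POSIX system.

```python
import subprocess

def run_with_clean_env(cmd):
    """Run `cmd` with a minimal environment instead of inheriting the parent's.

    Any tokens exported in the parent shell are invisible to the child,
    one of the "only the necessary privileges" principles from the article.
    """
    clean_env = {"PATH": "/usr/bin:/bin"}  # only what the child truly needs
    return subprocess.run(cmd, env=clean_env, capture_output=True, text=True)

result = run_with_clean_env(["printenv"])
```

The child process sees only the variables explicitly granted to it, so even a fully compromised agent cannot read credentials it was never given.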

Passport. Photo: IZVESTIA/Sergey Lantyukhov

In turn, Vladislav Tushkanov advises protecting your devices with security solutions to prevent infection, as well as protecting your accounts with two-factor authentication and using complex passwords (preferably using a password manager). When using agent-based solutions like OpenClaw, it is important to follow the tips on their secure configuration and install security updates on time, the expert notes.
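The "complex passwords" advice has a simple quantitative core: length and alphabet size determine entropy. A minimal sketch of what a password manager does under the hood, using Python's standard `secrets` module (the exact symbol set is an arbitrary choice for illustration):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a cryptographically random password.

    With a 72-symbol alphabet, each character adds about 6.2 bits of
    entropy, so 20 characters give well over 120 bits: far beyond
    practical brute-force range.
    """
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

`secrets` draws from the operating system's CSPRNG, unlike the `random` module, which is predictable and unsuitable for credentials.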

At the same time, Dmitry Sluzhenikin, assistant head of the Gazinformservice Analytical Center and secretary of the Consortium for Security Research of Artificial Intelligence Technologies, expects that protection against new cyber threats will become proactive in the coming months. According to research and consulting company Gartner, 2026 will be a turning point in the transition to "proactive cybersecurity," where AI does not expect attacks, but predicts them.

Hacker. Photo: IZVESTIA/Sergey Lantyukhov

At the same time, the expert says, contributions from every player matter here. For example, the AI company Anthropic, in its updated Responsible Scaling Policy, already describes protection mechanisms against "sabotage" by models, that is, cases where a compromised agent begins to cause harm on its own.

"In Russia, Yandex has already deployed a multi-agent system in its SOC (security operations center): AI agents there work together, cross-checking each other's conclusions, which has automated 39% of manual tasks and cut the time spent on false positives by 86%," says Dmitry Sluzhenikin. "It is a clear example of how the same technologies hackers use can be turned into impenetrable protection."

Translated by the Yandex Translate service
