- Статьи
- Society
- A godsend for a spy: how employees of Russian companies give secrets to neural networks
A godsend for a spy: how employees of Russian companies give secrets to neural networks
Artificial intelligence has invaded corporate life faster than defensive perimeters have been built. Analysts at Solar Group have recorded an alarming trend: the volume of corporate information sent by employees to foreign public AI services has increased 30-fold in 2025. At the same time, about 60% of Russian organizations operate in an information vacuum mode — without any policies regulating the use of neural networks. The combination of massive availability of powerful tools and a complete lack of control has created a phenomenon that security experts call the second coming of the shadow IT infrastructure. Details can be found in the Izvestia article.
How secrets go into chatbots
Analysts of the Solar company, having studied the traffic of 150 clients from government agencies, finance, industry, retail, e-commerce, telecom and the IT sector, recorded dangerous trends. Over the past year, the volume of corporate data transferred to public AI services like ChatGPT and Google Gemini has increased by more than 30 times. Employees send pieces of code, financial reports, legal papers, and customer databases there to automate routine tasks such as data analysis, sammari, or script writing. As a result, they themselves contribute to information leaks, without even suspecting it, the Solar company told Izvestia.
The paradox is that the roots of the problem lie not in malice, but in the banal desire to optimize work. Employees who are used to the speed and convenience of generative AI in personal tasks are transferring the same tools to their work processes, not realizing that they are harming their own companies.
— The thirty—fold increase in corporate data leaks through neural networks is, of course, an alarming signal. But why is this happening? The main reason, in my opinion, is the banal desire of employees to work more efficiently," explains Ekaterina Ionova, Project director of the Lukomorye IT ecosystem (Rostelecom), a representative of the Sirin AI Center. — After all, the company often cannot offer them any alternative. There are no sufficiently powerful models inside the corporate circuit, and well-known domestic solutions are still lagging behind in terms of functionality.
The pressure of deadlines and the growing competition in the labor market force us to look for any optimization methods. When the choice is between delaying a report and making a quick decision via ChatGPT, many people forget about the boundaries of privacy.
"There are three main reasons," says Sultan Salpagarov, Information security architect at Getmobit technology company. — The regularity of using the services has also increased significantly. Following it, the number of leaks increases proportionally. Competition in the labor market is growing, as is the productivity of labor itself, and after it, the demands on employees. You can prepare the report on time with the help of AI, or not do it at all and get it exactly.
The illusion of security is particularly dangerous. Employees often do not perceive code, internal reports, or strategic presentations as something forbidden — unlike personal customer data, which is more often remembered for protection.
How foreign AI uses the received data
Once in public services, confidential information begins to take on a life of its own. The data processing mechanisms are designed in such a way that each request becomes a potential source of income for platform operators.
"The terms of service for AI models assume that all the information provided is stored and used at least for further training of models, and at most for collecting and organizing information about users for research, marketing and other purposes,— Sultan Salpagarov warns.
The danger is not limited to legal use.
The attackers have developed a whole arsenal of techniques for extracting other people's data from trained models, from injections of prompta to the targeted extraction of confidential fragments.
— Everything that gets into the public neural network through prompta or attached files can be used for its further training. It is impossible to delete this information from the model later, and theoretically it can "pop up" in a response to another user, emphasizes Ekaterina Ionova.
According to Solar, 46% of confidential files and promptings are uploaded through ChatGPT, the most popular and accessible model. At the same time, employees of technology companies, who should better understand the risks, often become the main violators.
"Employees of technology companies and IT departments have historically looked at such services and software quite broadly," explains Anastasia Hveschenik, Product Manager of Solar WebProxy at Solar Group. — The restructuring of consciousness towards import substitution is actively taking place at the organizational level, where the assessment of risks and consequences has changed dramatically. However, at the level of a specific person, this process is moving more slowly. Habits and personal preferences turn out to be stronger.
Real-life incidents demonstrate the scale of the threat. Samsung engineers uploaded secret source code to ChatGPT for optimization, and the acting head of the American agency CISA personally made the service contracts publicly available. In both cases, the leak became possible due to a basic misunderstanding: the data sent to the chatbot leaves the corporate perimeter forever.
When prohibition is not the way out, and floodgates are the salvation
The reaction of many organizations to the threat is like kicking down a door when it would be enough to close the window. The complete blocking of AI services deprives businesses of a competitive advantage and provokes employees to take workarounds.
"For now, the FSTEC recommends that government agencies simply block access to neural networks, while companies are starting to think about more flexible policies," Anastasia Khveschenik states. — Blocking, of course, is a cardinal solution to the problem, but, on the other hand, it is possible to understand the business, which now faces a lot of challenges and tasks to reduce costs.
Progressive approaches involve fine-tuning controls rather than a total ban.
—Advanced API interaction monitoring, network segmentation, and isolation of systems that process critical data are becoming mandatory elements of protection,— says Anastasia Hveschenik.
An additional layer of protection is formed through DLP systems that have learned to recognize confidential content directly in requests to neural networks.
DLP systems (Data Loss Prevention) are software and hardware solutions to prevent leaks of confidential information from an organization.
"A modern DLP system can analyze how sensitive content is uploaded to the same ChatGPT or Gemini, and, if necessary, prevent such downloading," explains Andrey Arefyev, Director of Innovation and Product Development at InfoWatch Group.
However, technical means only work in combination with a safety culture. Training employees, demonstrating real vulnerabilities and clear regulations are no less important than firewalls, experts believe.
"It is necessary to regularly raise their awareness of the risks associated with the use of AI tools, as well as other threats, such as social engineering using deepfakes," recommends Dmitry Kryukov, head of the MTS Link machine learning department. — In most cases, digital hygiene becomes the key to the safety of corporate data.
Local AI: the golden mean or an expensive pleasure
Switching to local neural networks deployed within the corporate contour seems like a logical solution for organizations working with sensitive information. However, this path is littered with technical and financial pitfalls, experts point out.
"With proper deployment, the risks are significantly reduced," Sultan Salpagarov confirms. — However, the possibility of using local solutions is very limited. The power required for modern models with comparable quality is huge. As a rule, we are talking about amounts in the tens and hundreds of millions of rubles.
Even major players are forced to find compromises. Hybrid schemes, where databases and communication history are stored locally, and computing power is leased from a provider, become an intermediate solution.
"The hybrid model deployment format is considered more accessible," explains Dmitry Kryukov. — Databases, including the history of communication with the chatbot and the files transferred to it, are stored on the company's side, and the model itself is hosted by the provider. With full local deployment, the company gets the undeniable advantages of control, but acquires new responsibilities.
Alexander Zhukov, Executive Director of Aeroclub IT, added that the local launch of AI solutions provides a number of significant advantages. First, full control over the data is ensured: it does not leave the perimeter of the organization. Secondly, it allows us to comply with data localization requirements, including those contained in Russian Federal Law 152. Thirdly, the company receives a predictable cost model and independence from cloud providers.
Future: regulation instead of prohibition
Government agencies are aware that AI cannot be stopped, it can only be directed. In August 2025, the Ministry of Finance presented a draft concept for regulating artificial intelligence until 2030 with a risk assessment system and liability for harm from AI.
"A section has already been added to the updated requirements for information protection in government information systems, which defines standards for secure architecture for systems that use artificial intelligence," says Maxim Repko, an expert at Security Vision.
Technological sovereignty requires a balance between control and innovation. A complete ban condemns companies to lag, and uncontrolled use leads to leaks.
— In such a situation, the company should follow the path of building system control, — summarizes Ekaterina Ionova. — Implement clear internal policies, implement traffic filtering tools, and constantly train employees.
Izvestia sent requests to the Ministry of Finance.
Переведено сервисом «Яндекс Переводчик»