The expert spoke about the dangers of using neural networks at work

Cybersecurity expert Hveschenik: AI services process data in their own favor

Employees of Russian companies are massively uploading confidential information to foreign artificial intelligence (AI) services such as ChatGPT and Google Gemini, ignoring the risk of leaks. Anastasia Hveschenik, product manager of Solar webProxy at Solar Group, told Izvestia about this on February 5.

According to her, AI use is growing at an avalanche pace: companies use it to automate routine tasks, and when staff are short, employees offload work to AI on their own initiative without thinking about security. This is especially common among IT-sector workers, who are used to treating such services casually.

"AI services process data 'in their own favor': internal agendas, transcripts of conversations, or fragments of code containing authorization credentials can end up in the hands of attackers. This raises the risk of phishing and infrastructure compromise. Even if a service promises not to store data, leaks are already being recorded, and the information becomes available to other users," she stressed.

The expert noted a trend towards switching to Russian neural networks such as GigaChat and YaLM because of sanctions and the personal data law. Even more reliable, in her view, are systems deployed locally within the company.

"Local deployment is safer: the data never leaves for external providers. But security then depends on the company's own cyber defenses," the source said.

In 2025, publicly available AI services such as Google Gemini and ChatGPT received 30 times more sensitive information from employees of Russian companies than in 2024, according to a study by Solar Group analysts.

Read the details in the exclusive Izvestia article:

A godsend for a spy: how employees of Russian companies give secrets to neural networks
