
Expert explains the rules for safe use of AI at work

Expert Shcherbakov: data sent to AI can be included in the training sample

Artificial intelligence, increasingly adopted in business processes to save time on routine tasks, can become a source of confidential data leaks. As MegaFon expert Vitaly Shcherbakov warned in an interview with Izvestia on December 22, data sent to a neural network, including documents and personal information, may end up in its training sample and later surface in responses to other users.

"To avoid the risks of information reaching third parties or competitors, you need to follow a number of security rules. The first step is to configure privacy in the service you use: disable the setting that allows the model to be trained on your requests, for example the option 'Improve the model for everyone,'" he said.

Also, according to the expert, it is important to follow the principle of "not sending the real thing": replace all specific data with placeholders, for example "Employee X" instead of real names and "ID-123" instead of real numbers. Before sending documents to a neural network, it is recommended to "clean" them with anonymization and encryption tools.
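The substitution the expert describes can be sketched in a few lines. This is a minimal illustration, not a production anonymizer: the name map, the placeholder labels, and the regular expressions are assumptions chosen to mirror the "Employee X" / "ID-123" templates from the recommendation.

```python
import re

# Illustrative map of known sensitive names to neutral placeholders,
# in the "Employee X" style suggested by the expert.
NAME_MAP = {"Ivan Petrov": "Employee X"}

def anonymize(text: str) -> str:
    """Replace specifics with placeholders before sending text to an AI service."""
    for real, placeholder in NAME_MAP.items():
        text = text.replace(real, placeholder)
    # E-mail addresses -> EMAIL-X (simplified pattern)
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "EMAIL-X", text)
    # Long numeric identifiers (accounts, badges) -> ID-123
    text = re.sub(r"\b\d{5,}\b", "ID-123", text)
    return text

print(anonymize("Ivan Petrov, account 987654321, mail a@b.com"))
```

A real deployment would rely on dedicated anonymization tooling rather than hand-written patterns, but the principle is the same: nothing specific leaves the perimeter.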

When making requests, the expert advises clearly stating the goal, removing unnecessary information, structuring the task, and limiting the amount of input data to prevent leakage of sensitive context. In addition, every neural network response requires mandatory verification against reputable sources, since AI is still capable of factual and logical errors. A paid subscription to the service often provides additional control over how data is processed.
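The request discipline described above can be sketched as a small helper that states the goal explicitly, structures the task, and caps the input size so stray sensitive context is not sent along. The function name, the field layout, and the character limit are all illustrative assumptions, not part of the expert's advice.

```python
MAX_INPUT_CHARS = 2000  # illustrative cap on attached data

def build_prompt(goal: str, steps: list[str], data: str) -> str:
    """Assemble a request with an explicit goal, numbered task steps,
    and a bounded slice of input data."""
    task = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"Goal: {goal}\n"
        f"Task:\n{task}\n"
        f"Input (first {MAX_INPUT_CHARS} chars only):\n"
        f"{data[:MAX_INPUT_CHARS]}"
    )
```

Truncating the input is a blunt instrument; the point of the sketch is simply that the amount of context sent to the service is a deliberate, bounded choice rather than a full document dump.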

According to Shcherbakov, neural networks can be safely used to generate and edit texts, organize workflows, HR tasks, analyze anonymized data, and create visual content.

"For businesses, additional protection measures may include deploying LLM solutions in internal infrastructure without Internet access and building systems that exclude the processing of confidential data by design. The key security factor remains the user's conscious approach: do not send the neural network information that you would not want to see in the public domain," the specialist concluded.

Earlier, on December 11, Dmitry Titov, head of the quality management and AI implementation department at AKFIX-RUS, told Izvestia how AI helps with choosing New Year gifts. According to him, a clear prompt describing the person, the context, and the budget helps the neural network understand the task and suggest personalized, non-obvious ideas.

All important news is on the Izvestia channel in the MAX messenger.

