The Ministry of Digital Development has acknowledged that Russia will eventually move to mandatory labeling of content created with the help of artificial intelligence, as well as to criminal liability for distributing deepfakes made without the consent of the people they depict. For now, however, these are the next steps, and the ministry is focused on creating a conceptual framework in this area. Izvestia investigated whether labeling of AI content can be introduced and how effective the measure would be.

How the Ministry of Digital Development plans to regulate AI content

At the end of May, the Institute for Socio-Economic Analysis and Development Programs (ISAPR) sent Prime Minister Mikhail Mishustin a proposal to introduce mandatory labeling of content created using artificial intelligence, for both individuals and legal entities. According to the institute's specialists, this would make it possible to combat deepfakes and protect users from false information. For violating these requirements, it was proposed to impose fines similar to the penalties for illegal processing of personal data. Recall that at the end of May a law came into force under which fines for leaks and illegal processing of citizens' personal data increased significantly; in particular, for companies that commit repeated violations the fines became turnover-based, reaching up to 3% of revenue.

The Institute for Socio-Economic Analysis and Development Programs recently received a response from the Ministry of Digital Development, which Izvestia has reviewed.

Photo: IZVESTIA/Anna Selina

The response, signed by the deputy head of the ministry, Alexander Shoitov, says that the priority task now is to enshrine in legislation a "legal regime for the legitimate creation, distribution and use" of AI content. The problem remains the absence of a unified conceptual framework: the terms "deepfake," "artificial intelligence" and "synthetic content" are not yet defined in law.

At the same time, the letter emphasizes, it will also be necessary to study the feasibility of introducing labeling of digital content, as well as a mechanism for monitoring it in order to counter fraud. This will require, among other things, identifying the authorized bodies and the circle of persons who will be obliged to label digital content. In addition, technical measures for identifying and labeling deepfakes will need to be developed, including within the framework of the Unified System of the Autonomous Non-governmental Organization Dialog Regions and the national project "Data Economy and Digital Transformation of the State," the Ministry of Digital Development emphasized.

"Currently, the Russian Ministry of Finance, together with Roskomnadzor and interested federal executive authorities, is developing proposals to establish requirements for labeling content legally produced and distributed through information technology, including those created using artificial intelligence technologies and digital substitution of human photos and videos," Alexander Shoitov said in a response.

Photo: IZVESTIA/Eduard Kornienko

He also points out that the introduction of a criminal law provision may be considered, establishing liability for the creation, distribution or use of "synthetic content" produced with AI technologies without the consent of the person whose image or voice was used. Criminal punishment is envisaged in cases where deepfakes have harmed a person's rights and legitimate interests.

Izvestia sent requests to the Ministry of Digital Development and to Roskomnadzor. Roskomnadzor redirected the questions to the ministry, which told the publication that labeling of legally produced material created using AI, as well as criminal liability for AI content made without the consent of the persons depicted or voiced, is now being actively discussed with the relevant departments.

Is it really possible to keep track of AI scammers

Andrey Shurikov, director of the Institute for Socio-Economic Analysis and Development Programs, told Izvestia that there is as yet no guaranteed way to distinguish labeled AI content from unlabeled content, but mechanisms already exist: a significant share of such content can be identified. Mikhail Kopnin, director of the IT department at DCLogic, adds that there are services that can check videos and images for the use of AI.

However, he emphasizes that it makes no sense to label all content created using neural networks.

"For example, it is not important to most users that the route in a navigation app is plotted by AI algorithms," the Izvestia interlocutor said. "But when it comes to combating the malicious use of AI, labeling is unlikely to be effective: those who create such malicious content will not follow the rules."

Photo: IZVESTIA/Dmitry Korotaev

Alexander Tunik, head of the Runet Rating product line, says the same: if anyone is going to comply with the law, it is certainly not the scammers.

Denis Kucherov, Director of Minerva Result projects at Minervasoft, believes that labeling mechanisms definitely need to be applied in the media space. However, it is important that this is not just labeling of AI materials, but systematic fact-checking.

He added that labeling the use of neural network technologies in text content is also technically feasible.

"Some AI systems leave hidden markings, such as indissoluble spaces," he told Izvestia. — They are not visually visible, but when analyzing the code, it becomes clear that the text was generated by artificial intelligence.

Photo: IZVESTIA/Pavel Volkov

Andrey Shurikov believes that labeling AI texts should be the next step, especially in the media.

Alexey Vaganov, technical architect at 1C PRO Consulting, believes that labeling can even become a "competitive advantage" for the media and bloggers, especially for those who use artificial intelligence less.

Who will be punished for deepfakes

Andrey Shurikov is confident that mandatory labeling legislation will increase not only the traceability of malicious content but also the accountability of people who use AI. It is technically difficult to trace the original source of a deepfake, he admits, but distributors can be identified fairly easily, including through the analysis of digital traces.

"Therefore, first of all, we need to focus on punishing distributors, including those who repost fake content, especially if it is done intentionally," he said. — In addition, it is also important to develop content verification technologies and oblige platforms to promptly remove unmarked deepfakes.

Photo: IZVESTIA/Anna Selina

Denis Kucherov notes that it is very often impossible to promptly determine even the primary source from which a deepfake spread. Therefore, he stresses, the emphasis should be on "objective dissemination," and that is the responsibility of each user.

"It's important not to blindly forward clickbait headlines and news, but to analyze the content: how plausible it is, where it comes from," he said.

Photo: IZVESTIA/Sergey Konkov

Roman Dushkin, chief architect of artificial intelligence systems at the AI Research Center for Transport and Logistics at the National Research Nuclear University MEPhI, warns that the introduction of criminal liability for distributing unlabeled deepfakes should be approached very carefully, with close scrutiny of how the content is used.

"And with labeling, there is a danger that we can make life worse for decent citizens who don't violate anything, and the attackers will continue to do what they did," he said. — Fraudsters who have already entered this field of activity and are taking on all these risks, most likely, will not stop because of the law on criminal liability.

Roman Dushkin believes that the modern legal system already contains all the norms needed to deal with digital fraud, and that new legislative innovations will not stop it. Comprehensive measures are needed, the expert emphasizes.

