
Getting personal: authorities will start blocking fraudulent content created by AI

Fraudulent content created with artificial intelligence technologies, such as deepfake videos sent to victims, will be detected and blocked online. State Duma deputies plan to amend the law "On Information," Izvestia has learned. The amendments will be presented as part of a single package together with an initiative to block destructive content before a court ruling. Experts support the initiative but note that it should come with precise legal definitions, transparent blocking procedures, and the ability to challenge blocking decisions.
How will fraudulent content created by AI be dealt with?
The State Duma is preparing a comprehensive package of documents with proposals to amend the law "On Information," Izvestia has learned. The authorities plan to introduce measures to identify and block fraudulent materials created on the Internet using artificial intelligence. Andrei Svintsov, deputy chairman of the State Duma Committee on Information Policy, Information Technologies and Communications, is discussing the initiative with the Bloggers' Council, along with a proposal to block destructive content before a court ruling.
"A set of draft laws is being developed that will regulate the entire range of issues related to artificial intelligence. At the meeting of the working group, the issue of labeling and possibly blocking destructive content generated by AI was discussed separately," said Andrei Svintsov.
According to him, it is important to implement restrictive measures as soon as possible to prevent the use of generated content for extortion, defamation and other illegal actions. Earlier, the deputy stated that the amendments should be prepared and submitted to the State Duma for consideration this year.
Valeria Rytvina, the founder of the Bloggers' Council, said that in order to identify fraudulent AI materials, in particular deepfake videos, it is planned to involve companies that specialize in developing protection systems against cyber threats and hacker attacks.
"Some of the best IT specialists work in our country, and in order to effectively combat fraud, it is necessary to combine the efforts of the state, the expert community and public initiatives," she told Izvestia.
The number of programs available on the Internet for creating deepfakes has grown rapidly. Today there are more than 50 thousand of them, although two years ago such tools were practically nonexistent. Izvestia reported earlier that, according to experts, by the end of the year every second person may encounter fraud involving such technologies. The editors conducted their own experiment in creating a deepfake and found that no special skills are needed to do so. According to experts, the uncontrolled spread of such technological solutions significantly increases the number of scams and information attacks on users.
How did experts assess the package of measures?
The spread of fake information created with AI technologies is indeed becoming a serious threat: it undermines trust, misleads people and can carry real risks, said Yulia Zagitova, founder of the Breaking Trends communications agency and secretary of the Union of Journalists of Russia.
"We generally support the initiative aimed at combating fake and fraudulent content, especially in the context of the rapid development of generative AI," she told Izvestia.
According to her, it is important that regulation is not limited to prohibitions but includes clear mechanisms: precise legal definitions, transparent blocking procedures and an opportunity to challenge decisions. Without this, there is a risk of arbitrary application of the law and of restrictions on bona fide authors, journalists or experts.
Systematic educational work is also needed so that users can recognize fakes, understand how modern technologies work and know where to turn when they encounter disinformation.
"If the initiative is implemented competently, relying on expertise and respecting user rights, it can be an important step towards a safer and more responsible digital environment," Yulia Zagitova emphasized.
It is still difficult to assess the initiative on its merits, since much will depend on how exactly the concept of "destructive content" is defined. It also remains unclear how monitoring will be organized, said Yaroslav Shitsle, head of the IT & Dispute Resolution department at the Rustam Kurmaev and Partners law firm.
It is important to define criteria for "fraudulent AI content" and to entrust its assessment to a transparent and competent body; otherwise there is a risk that bona fide or even satirical content will come under attack, says Yaroslava Meshalkina, managing partner of the Heads'made digital communications agency.
"The fight against fakes requires not only blocking, but also systemic digital hygiene: educating users, labeling AI content, and developing trusted information channels," the expert said.
According to her, the initiative is a step in the right direction, but it is important to maintain a balance.