Credit organizations have called for postponing the start of AI regulation until March 1, 2028. The Association of Banks of Russia has sent a letter to that effect to the Ministry of Digital Development. If the law takes effect in 2027, as the authorities are discussing, it will create risks, market participants believe. Russia intends to localize the development of neural networks and introduce mandatory labeling of AI-generated content. Banks have invested heavily in these technologies and believe the measures could hamper the development of a new market. What else market participants propose to change, and what hasty, overly strict regulation of the sphere could threaten: details in this Izvestia report.

What are the risks in regulating AI

Credit organizations see risks in the authorities' plans to regulate artificial intelligence in Russia. They advocate postponing the introduction of the new requirements by at least six months, until March 1, 2028, and the most burdensome measures by 18 months (until September 2029). These proposals are set out in a letter from the Association of Banks of Russia (ABR) to the Minister of Digital Development, Maksut Shadaev (Izvestia has seen the document).

At the end of March, the Ministry of Digital Development published a draft law on AI regulation for public discussion. The document is expected to establish rules for the development and use of the technology, from labeling generated content to liability for harm caused by neural networks. It is planned to enter into force in September 2027.

It is expected that a status of trusted models will appear in Russia: models checked for safety by the FSB and the Federal Service for Technical and Export Control (FSTEC), with quality confirmed against industry standards, and entered into a special register. At the same time, the bill will not apply to the use of AI for national defense and security, law enforcement, or emergency prevention; the specifics of application in these areas will be determined either by the president himself or by federal laws.

Photo: IZVESTIA/Anna Selina

Most of the bill's requirements concern the public sector, but the basic rules, including the definitions of artificial intelligence and the duties and liability of market participants, apply to everyone. Moreover, only "trusted" models may be used at critical information infrastructure facilities and in a number of government systems. This means that commercial developers will also have to confirm that their solutions meet the established criteria if their AI is used in such environments.

The ABR believes implementation should proceed in stages: first support measures and pilot projects, then regulation of high-risk areas, and only after that full-fledged oversight and registers, Bulat Anerzhanov, director of the association's strategic development department, told Izvestia. Introducing the law as a single package on September 1, 2027 looks risky if the necessary by-laws are not ready by then. A transition period is a prerequisite that will allow companies, above all those that have invested heavily in AI technologies, to adapt to the changes, he added.

The banks' request to delay the law's entry into force until March 1, 2028 is a request for time for technical adaptation, said Olga Popkova, managing partner of the Goldman and Po communications agency. The timeline proposed by the authorities looks difficult to meet given the scale of internal restructuring required in banking IT systems.

Photo: IZVESTIA/Eduard Kornienko

For the first time, the law introduces mandatory rules in the field of AI and establishes liability for their violation. A reasonable transition period is therefore needed so that companies, especially those that have already invested in the technology, have time to adapt, Bulat Anerzhanov emphasized.

Some systemically important banks believe the document's entry into force should be postponed as far as 2030, because the innovations would overlap in time with the digital transformation of state-owned companies.

The press service of the Ministry of Digital Development told Izvestia that the ABR initiative would be considered in due course once received. The ministry believes the AI law should be a framework, with specific rules prescribed in industry legislation that takes into account the specifics of areas such as transport, medicine and education.

Photo: RIA Novosti/Sergey Bobylev

At the same time, on April 23, it became known that the Presidential Council for the Codification and Improvement of Civil Legislation criticized the bill.

How banks proposed to regulate AI

Credit organizations believe the authorities should relax the requirements on localizing the development and training of AI models, as well as on the citizenship of developers. Under the bill, all stages must be carried out exclusively on Russian territory.

This makes model development more burdensome and reduces the competitiveness of domestic AI solutions, Bulat Anerzhanov believes. He clarified that mandatory localization prevents full use of the global data sets needed to train high-precision models, especially in highly specialized areas. Moreover, the requirements are practically impossible to fulfill: the "Russian origin" of every component of a model cannot be verified, the expert added.

Photo: IZVESTIA/Sergey Konkov

A domestic infrastructure of this kind will require large computing capacity, and the dependence on foreign hardware cannot be eliminated in the near future, says Vladimir Ulyanov, head of the Zecurion analytical center, because the necessary equipment is not produced in the country in sufficient quantities.

In addition, the bill strengthens the responsibility of neural network owners for the misuse of technology and introduces requirements for labeling content created using AI, including cases where a person used a neural network to slightly refine their work.

Photo: IZVESTIA/Pavel Volkov

The banking community considers it advisable to drop the labeling requirement in such cases, an amendment that is important for credit institutions. According to Dmitry Livshin, CEO of Sayber Business Consulting, labeling could extend to messages in application support chats, as well as AI-generated certificates, statements, contract texts, and even advertising banners. Such a practice is highly likely to cause confusion and irritation among customers without improving the user experience.

"Tagging every text is not just technically laborious. It renders the very concept of labeling meaningless, diluting it to the limit. If any text in which an employee uses autocorrect or a stylistic suggestion from a neural network falls under the requirement, the norm ceases to be a consumer-protection tool and turns into a source of legal uncertainty for the entire market," says Olga Popkova of Goldman and Po.

If the final law does not incorporate the proposed changes on labeling and development localization, credit institutions could face serious costs, said Ivan Lyubimenko, managing director of Absolut Bank's digital services and sales directorate. There would also be additional restrictions on the use of LLMs (large language models, AI programs that recognize and generate text), creating obstacles to business applications and development and to the quality and effectiveness of customer interaction.

Photo: IZVESTIA/Natalia Shershakova

Credit organizations have also proposed giving the Central Bank the right to regulate the use of AI in finance. The sector has always attracted criminals because it deals directly with monetary transactions, and any errors or vulnerabilities can lead to losses for banks and their customers, Vladimir Ulyanov emphasized. In his view, AI will become an important element of the industry in the near future. Izvestia has sent a request to the regulator.

Banks most often use AI for risk management and fraud prevention, the Central Bank noted in a November report. Regulation by the relevant supervisory authority looks logical, Olga Popkova added, especially since the regulator already has relevant experience: in July 2025 the Bank of Russia issued a Code of Ethics for the Use of AI in the Financial Market, enshrining the principles of human-centricity, fairness, transparency, security, reliability and efficiency, as well as responsible risk management.
