
Russia plans to enshrine in law the decisive role of humans in AI-assisted decision-making in education, medicine, legal proceedings, and public safety. The government intends to include a provision on the human role in these areas in the upcoming law on the regulation of neural networks. Problems have already emerged: patients encounter errors in "intelligent" medical equipment, parties in court proceedings cite fictitious legal norms, and teachers check students' homework carelessly, market participants say. AI needs to be controlled, but it would be impractical to ban it from decision-making entirely or to require total human verification of its output, experts say.

Why the decisive role of humans over AI should be enshrined in law

The Russian Federation may legislate to preserve the decisive role of humans in the use of artificial intelligence in socially significant areas: healthcare, legal proceedings, education, and public safety. This follows from government materials containing proposals for the draft law on the regulation of artificial intelligence. The Ministry of Finance's document, prepared with these proposals taken into account, should be sent to the government for approval by the end of February, a source close to several relevant agencies said.

Keyboard
Photo: IZVESTIA/Anna Selina

AI is already actively used in healthcare and education, and is also used to monitor law and order, according to the office of Deputy Prime Minister Dmitry Grigorenko, who oversees the development of artificial intelligence. All these are sensitive areas in which the state traditionally plays a major role.

— The cost of a possible AI error in areas such as medicine or legal proceedings is too high: it can directly affect people's lives and health, as well as the protection of their rights. It is therefore important to reduce potential risks by assigning specific areas of responsibility to AI users and developers, and by ensuring the primary and decisive role of humans, rather than technology, in decision-making, the Deputy Prime Minister's office stressed.

— AI is becoming an integral part of cognition and education: it expands their toolset and is capable of radically changing their development, the Ministry of Education and Science told Izvestia. — These changes inevitably raise a number of fundamental ethical issues, such as the problem of authorship and the reliability of information. But despite its enormous potential and capabilities, AI will not be able to fully replace human consciousness in the foreseeable future, nor to determine the ethics of scientific and educational practices.

The editorial board also sent inquiries to the Ministry of Finance, the Ministry of Justice, the Ministry of Health, and the Ministry of Internal Affairs.

AI
Photo: IZVESTIA/Sergey Lantyukhov

The absence of a requirement for a decisive human role in the use of AI carries systemic risks, said Anton Nemkin, a member of the State Duma Committee on Information Policy and federal coordinator of the Digital Russia party project.

— First, there is the threat of discrimination and algorithmic bias if the model is trained on incomplete or skewed data. Second, the blurring of responsibility: when an error occurs, the question of who is answerable arises: the developer, the operator, or the state? Third, there is a decline in citizens' trust in digital services if decisions affecting their fate are perceived as automatic and incontestable, he said.

Documents
Photo: Global Look Press/IMAGO/Zoonar.com/Sirijit Jongcha

A neural network works like a black box: data goes in, a result comes out, and the reasons it arrived at a specific decision usually remain unknown, said Leonid Konik, a partner at ComNews Research. The phenomenon known as AI hallucination is well documented: a large language model or other neural network that was handling complex tasks and giving sound answers yesterday suddenly starts producing nonsense. The causes of such "hallucinations" are also not always explicable, he added.

In jurisprudence, so-called predictive technologies are controversial: a party uploads the materials and facts of a case into a program and asks for a forecast of how the court should resolve the dispute. Such an analysis is then sometimes used to exert psychological pressure on the judge, Yaroslav Shitsle, head of the IT&IP Dispute Resolution practice at Rustam Kurmaev & Partners, told Izvestia.

When a participant in the proceedings resorts to such methods and presents an AI-prepared forecast, this can affect the personal perception of the judge, who forms a decision based on his own experience and knowledge. Representatives of the parties also often use neural networks to prepare procedural positions without verifying the cited legal norms and judicial practice, which sometimes turn out to be fictitious, the lawyer added.

In early February, it emerged that AI used in medical equipment could harm patients. "At least 10 people were injured between the end of 2021 and November 2025. In most cases, the cause was presumably errors that caused the TruDi navigation system to give surgeons incorrect information about the location of instruments during operations on patients' heads," Reuters reported.

— It is obvious that AI cannot be trusted to check assignments in schools and universities: as experience has shown, it makes many mistakes, said Karen Ghazaryan, director of the Institute for Internet Research.

Deepfakes
Photo: IZVESTIA/Anna Selina

Countries take different approaches to regulating artificial intelligence, he noted. For example, the United States has a moratorium on federal regulation of AI, while China regulates specific areas by law, such as the use of data in critical infrastructure, and prohibits deepfakes. The European Union has adopted an AI law, which is already drawing complaints from market participants, the expert said.

The main side effect of AI, according to Ekaterina Lyubimova, advisor to the rector of University 2035, is that the wide availability of knowledge reduces cognitive effort. When answers are always at hand, students' habit of thinking critically and checking information weakens, she noted. And as the volume of generated content grows, the risk of errors and so-called hallucinations grows with it, while users often take the AI's conclusions to be reliable, she stressed.

How much can artificial intelligence be trusted?

Artificial intelligence is reaching more and more public and government institutions, market participants say. An experiment with AI has been launched in the office of the government of the Russian Federation, its representatives told Izvestia. Participants can generate documents from templates, prepare draft presentations, conduct comprehensive analysis of data from various sources, and perform semantic analysis of regulatory legal acts. Going forward, there are plans to test neural networks for classifying incoming documentation and routing it to the responsible staff, preparing briefing materials, context-aware searching of archives, and other tasks, Cabinet representatives noted.

Enshrining the rule on the decisive role of a human is necessary to avoid AI mistakes that can seriously affect people's lives, says Anton Averyanov, CEO of the ST IT Group and a TechNet NTI market expert. He noted that algorithms are not yet able to take into account morality, individual context, and ethics the way a human does.

Folder
Photo: RIA Novosti/Dmitry Dudo

The need to legislate the decisive role of humans is dictated primarily by the technological immaturity of artificial intelligence systems, says Yaroslav Seliverstov, a leading expert in the field of AI at University 2035. Modern algorithms, even the most advanced ones, operate on the basis of correlations rather than understanding cause-and-effect relationships, he said.

"In the field of security, the requirement for human control prevents the erroneous identification of citizens by facial recognition systems in public places, allowing the operator to take into account racial or age specifics that the algorithm could not cope with," he gave an example.

AI should not be given overly broad powers, says Boris Zingerman, director of the National Medical Knowledge Base Association. So far, there are practically no solutions in medicine with a "deciding vote," he noted. Only one such model exists in Russia: it screens out "normal" lung X-rays, freeing doctors from the routine work of reviewing multiple images, the expert explained.

— In most cases, AI systems act as assistants for doctors, and very few decisions are made completely autonomously. I believe it is not necessary to completely prohibit AI with decision-making functions at the legislative level. If such a rule is introduced, the doctor will end up having to double-check and sign off on many decisions that the AI has already made automatically. Everything, of course, depends on the level of risk, Boris Zingerman emphasized.

AI
Photo: IZVESTIA/Yulia Mayorova

Mechanically dividing spheres into those where AI decisions can be trusted and those where they cannot is impractical; it is important to assess the level of risk in each specific area and build gradations of it, Karen Ghazaryan believes.

For example, in hazardous production, control cannot be fully handed over to AI, but by analyzing sensor data the system can identify and shut down faulty or potentially dangerous equipment faster than a human, the expert explained. According to him, it is impossible to foresee all the risks associated with both AI and the human factor. Regulation should therefore account for the different scenarios in which artificial intelligence is used and the level of potential threat in each specific area, he concluded.

