The expert named ways to find those responsible for artificial intelligence errors
As artificial intelligence develops, the question of who is responsible for its mistakes is becoming ever more pressing. Nikolay Turubar, Director of Development at the digital agency Articul and chairman of the RAEC Artificial Intelligence cluster, told Izvestia on June 4 about the difficulty of identifying those responsible when algorithms make erroneous decisions with serious consequences.
The expert cited several high-profile cases in which AI errors led to tragic consequences: from the Watson for Oncology system recommending dangerous cancer treatment regimens to the suicide of a teenager after conversations with a chatbot.
"At first glance, the answer to the question of who answers when AI makes a mistake seems obvious: the machine is not a legal subject and bears no responsibility. It has no conscience, no intentions, no understanding of good and evil. It is an algorithm operating within the instructions it was given. That means responsibility lies with people. But with whom exactly: developers, companies, or users?" Turubar asks.
According to the expert, in some cases the culprit is relatively easy to identify, for example when the error stems from incorrect training of the model. He recalled the lawsuit against Equifax over erroneously low credit scores and Amazon's discriminatory hiring system. However, the very nature of neural networks often makes it impossible to trace the logic behind a decision, which creates a legal vacuum.
"We are creating black boxes, but we are not ready to take responsibility for how they work," the expert says. "How can I challenge a loan denial if a bank employee can only say 'that's what the AI decided'? Who answers if a self-driving car hits a person?"
Turubar believes the solution lies in two dimensions: technological and regulatory. On the one hand, Explainable AI (XAI) needs to be developed; the underlying technologies already exist. On the other hand, strict ethical standards and certification of AI systems are needed, similar to those in aviation or medicine.
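To illustrate what "explainable" means in the loan-denial scenario the expert describes, here is a minimal sketch (all feature names, values, and weights are hypothetical, not from the article): with a simple linear scoring model, the decision can be broken down into per-feature contributions that a bank could show an applicant, instead of "that's what the AI decided".

```python
# Hypothetical applicant features and linear model weights (illustrative only).
features = {"income": 0.8, "debt_ratio": 0.6, "late_payments": 2.0}
weights = {"income": 1.5, "debt_ratio": -2.0, "late_payments": -0.7}
bias = 0.5

# Per-feature contribution to the score: this breakdown is the "explanation"
# that a black-box neural network, by contrast, does not readily provide.
contributions = {name: weights[name] * value for name, value in features.items()}
score = bias + sum(contributions.values())
decision = "approve" if score > 0 else "deny"

# Report the factors that drove the decision, most negative first.
for name, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{name}: {c:+.2f}")
print(f"score={score:.2f} -> {decision}")
```

Real XAI tooling applies the same idea to complex models (for example, attributing a prediction to input features), but the principle is the same: the system must be able to say *why* it decided, not just *what* it decided.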
The expert views positively such initiatives as the European Union's AI Act and Russia's AI Code of Ethics, signed by leading organizations. In his view, such measures can lay the foundation for a responsible approach to developing and deploying artificial intelligence before the technology reaches a level at which the consequences of mistakes become irreversible.
Earlier, on March 6, it was reported that most of the creative professionals surveyed actively use neural networks in their work, with 43% of respondents turning to them regularly, according to a study by the Moscow Creative Industries Agency and MTS AI.
Translated by the Yandex Translate («Яндекс Переводчик») service.