Orders for AI: online services will be prohibited from manipulating people using neural networks
Online services may be prohibited from using artificial intelligence to manipulate people, for example by encouraging unnecessary purchases based on analysis of a user's online activity. This follows from the Ministry of Finance's draft law on AI, which Izvestia has reviewed. The extreme form of such manipulation is driving a person to suicide, experts note: a neural network "pretending" to be a loved one can push someone toward it. At the same time, experts believe that bans in the field of artificial intelligence will be ineffective: it is impossible to foresee every scenario of neural network use and every possible response to a request. Such bans could also harm services that use AI to help users deliberately select the right products or content.
How AI manipulates user behavior
Developers of AI systems will be required to inform their users about the inadmissibility of using neural networks to manipulate human behavior and exploit human vulnerabilities. This follows from the draft law on artificial intelligence, which was developed by the Ministry of Finance and is currently undergoing interdepartmental coordination. Izvestia has reviewed the document.
In the draft law, the exploitation of vulnerabilities means using certain characteristics of a person (age, socio-economic status, psychological state, and others) to "purposefully influence behavior or decision-making, or to gain unauthorized access to information."
The bill does not specify exactly what users will be told about the inadmissibility of using AI for manipulation, give specific examples of such actions, or set penalties for them.
"The ban applies primarily to operators and developers of AI systems, that is, to organizations and companies that implement such systems in services, applications, or products," says Yaroslav Seliverstov, a leading AI expert at University 2035. "In essence, you could 'explain' it to a robot like this: an AI system should not be designed or used in a way that deliberately pushes a person toward decisions they would not have made without hidden pressure or manipulation. Thus, the responsibility lies not with the technology itself, but with those who create and apply it."
According to the expert, AI can influence human behavior by analyzing data about a person and personalizing content, for example by selecting messages, recommendations, or interfaces that increase the likelihood of a desired action. Such technologies can be used in advertising, political communications, recommendation systems, or chatbots to heighten trust, urgency, or emotional response, says Yaroslav Seliverstov. The legislative innovations look relevant, he believes, since modern AI systems really are capable of analyzing users' psychological and behavioral characteristics.
The law may apply, in particular, to services that sell goods and services: with the help of AI, they collect information about user preferences and then, through overt and covert advertising, push products that a person often does not need at all, said Denis Kuskov, CEO of TelecomDaily. The media have also discussed taxi aggregators allegedly manipulating fares: charging inflated prices to passengers whose smartphone battery is running low and who are trying to get home or to the office faster to charge it, the expert said.
"Attackers or unscrupulous companies can use AI for hyper-personalized fraud, when artificial intelligence analyzes data about a person and generates perfectly tailored messages: deepfakes of a relative's voice, or 'individual' offers from a fake bank at the moment when the person is most vulnerable," said Andrey Bezrukov, Chairman of the Board of the Autonomous Non-Profit Organization Center for Unmanned Systems and Technologies (CBST).
Russia is not the only country thinking about making artificial intelligence developers responsible for the behavior of their users. Following the example of other countries, legislators could oblige owners of AI products to implement protocols under which a neural network must not behave in certain ways, or must flag certain extraordinary events, said Yaroslav Shitsle, head of the IT&IP Dispute Resolution practice at Rustam Kurmaev & Partners. For example, in the United States, after a number of incidents in which users were driven to suicide, neural network owners are required to implement parental controls and security protocols, he said.
"The effectiveness of these measures depends on whether the companies concerned have offices in Russia. The legislator can oblige Russian platform owners to implement such protocols under threat of liability, but the world's leading neural network operators are beyond its reach," believes Yaroslav Shitsle.
A report with a preliminary version of the bill is under consideration in the relevant departments; there is no final version of the document yet, and it is too early to talk about the specific initiatives it will include, the press service of the Ministry of Finance told Izvestia.
"The development of promising technologies, including artificial intelligence, is one of the important areas of the Ministry of Digital Economy's activity. At the same time, it should be noted that any technology must be used solely in compliance with citizens' rights and interests," the press service noted.
How to separate useful AI recommendations from harmful ones
An extreme form of manipulating behavior with AI is driving a user to suicide. News reports periodically describe suicides of people who began to perceive an AI assistant as a living person and either fell in love with it or obeyed its destructive advice, said Leonid Konik, a partner at ComNews Research. Such situations should be prevented, but the law has no power over individual psychology: it would be like banning the sale of kitchen knives in hardware stores because one deranged person killed his neighbor with one, the expert believes.
"We support the need to protect citizens from real threats, especially when it comes to harm to life and health. But it is important to note that recommendation services have already received the necessary legal regulation. Further interference in algorithms can lead to the opposite effect: biased results, limited access to information, and subjectivity," said Oraz Durdyev, President of the Association of Digital Platforms. "To avoid this, it is important for business and government to work closely together to maintain a balance between the interests of the state, consumers, and the industry."
A ban on manipulating user behavior with neural networks will be difficult to enforce: there are countless scenarios for AI-driven responses and dialogues with users, the answers vary depending on the model used, and even developers cannot fully control this, Denis Kuskov noted.
The bill contains a provision obliging operators to include in user safety guidelines a statement that actions capable of causing harm are inadmissible. However, this is not a direct legislative prohibition of manipulation as such, Andrei Bezrukov noted.
"For comparison, the European AI Act explicitly prohibits all market participants from using subliminal and manipulative techniques that can cause harm or significantly distort human behavior. Fines reach €35 million or 7% of a company's global turnover," he said.
At the same time, AI used in defense and national security is exempted from regulation under the Russian draft law, which creates the risk that "civilian" developments could be used in manipulative military technologies without proper oversight, Andrei Bezrukov believes.
In effect, the bill rests on a presumption of guilt of AI developers for abstract violations that third parties commit, or may commit, with their help, says Karen Ghazaryan, director of the Institute for Internet Research. Meanwhile, the Criminal Code, the Code of Administrative Offenses, and other legislation already establish liability for fraud, defamation, incitement to suicide, or, for example, improper performance of work or provision of services. These laws assign responsibility to specific individuals for specific offenses, and it is impractical to impose additional encumbrances and responsibilities on developers, the expert believes.
If the bill is passed and applied literally, all personalized online advertising and recommendation algorithms would have to be banned as tools of purposeful influence on decision-making, Leonid Konik warned.
According to Denis Kuskov, the regulation of AI-based recommendations needs to be discussed separately. When artificial intelligence tries to sell an unnecessary product or service, it is certainly unpleasant; but when a person is purposefully looking for a particular item, a neural network that shortlists several offers helps and simplifies the choice, he believes. In online cinemas, for example, such algorithms help viewers find an interesting film that matches their tastes, the expert concludes.
Translated by the Yandex Translate service