Playing by the rules: Russia has developed a code of ethics for working with AI for financial organizations
The Central Bank of Russia has approved basic principles for the use of artificial intelligence in the financial market and has prepared a draft code of ethics for organizations in the sector. The document is designed to increase confidence in companies that use AI among both customers and investors, and to improve the security of the technology in the financial market. Izvestia looks at the prospects for AI in the financial sector, what it will give Russians, and what risks it may carry.
What is known about the code of ethics for working with AI
The announcement that the Central Bank of the Russian Federation has approved the basic principles of AI application in the financial market and has prepared a draft code of ethics for organizations in the sector appeared on the regulator's website on July 9.
"The Code of Ethics for AI will help increase trust in companies that use AI among both customers and investors, have a positive impact on the security of the technology in the financial market, allow the regulator to participate in risk monitoring and analysis, and provide a deeper understanding of the specifics of AI-related risks," the document says.
According to the document, the overwhelming majority of financial market participants supported the initiative to create a separate code of ethics that would take into account the specifics of the use of AI in the field of financial services.
The Central Bank of the Russian Federation suggests using so-called soft regulation tools: recommendations, guidelines and standards that will help companies develop responsible AI practices without creating additional legal barriers. At the same time, the regulator will continue to work on removing barriers to the exchange and turnover of data, including anonymized personal data, which, according to market participants, is one of the key conditions for the development of AI in Russia.
Why did the Central Bank of the Russian Federation decide to develop a code of ethics?
The issue of developing and adopting a code of ethics for AI has long been discussed in the IT and information security markets, Alexander Khonin, director of the Angara Security Consulting Center, told Izvestia. According to the expert, several years ago the largest technology companies, with the support of the Analytical Center under the Government of the Russian Federation and the Ministry of Economic Development, created an alliance in the field of AI and developed a "Code of Ethics in the Field of AI," as well as "Ethical Recommendations on the Use of AI Algorithms in Digital Services."
"Such documents are also being discussed at the legislative level, but the Code of Ethics in the Field of AI has not received the status of a law," the specialist says. "The Bank of Russia, as a leader of Russia's digital transformation, was the first to move from words to deeds and develop an industry code of ethics for working with AI."
According to Alexander Khonin, this step is largely driven by the legal uncertainty surrounding the use of AI technologies in the banking sector: the use of artificial intelligence algorithms is not covered by the regulator's requirements. The new code is intended to close this gap.
The code is needed to create a trusted and secure environment for rapidly spreading AI technologies in finance without hindering innovation, adds Stanislav Yezhov, AI director at Astra Group. It sets common principles: human-centricity, fairness, and transparency, and it facilitates risk oversight with an eye to global regulatory practice.
"From the market's point of view, the appearance of a code from the Central Bank can become a dividing line between products that are AI in name only and mature solutions actually integrated into institutional processes," Evgeny Semenov, architect of national digital identity systems and deputy general director of the Center for Biometric Technologies, the operator of the state Unified Biometric System, tells Izvestia. "This is an opportunity for players not only to comply with regulatory requirements, but also to build more sustainable, transparent, and ethical models of digital interaction with users."
What are the prospects for the use of AI in the financial sector?
According to the Central Bank of the Russian Federation, the use of artificial intelligence opens up significant opportunities for financial organizations to improve the quality of services, develop personalized products and optimize business processes. Among the areas of technology application, market participants name automation of analytics, working with big data, creating intelligent financial assistants and developing generative AI, including large language models.
Today, banks are actively implementing artificial intelligence in their work, notes Alexander Khonin. Digital assistants and AI avatars, big data analysis and processing, automation of routine tasks, risk management, and personalization of services are just a short list of the areas where the latest AI technologies are in active use. AI algorithms also analyze transactions in real time and flag suspicious, fraud-related activity. A number of large banks are actively deploying AI technologies in their cybersecurity systems.
"Artificial intelligence is able to fight back against telephone and cyber fraudsters, who have become more active in recent years, and significantly improve the security of operations," says Alexander Kobozev, director of Data Fusion at the Digital Economy League. "Overall, the introduction of these breakthrough technologies will improve the quality of services in the banking sector and expand the range of financial services."
At the same time, Alexander Khonin notes, under the provisions of the Central Bank's code a financial institution will have to notify consumers when they are interacting with AI and give them the option of speaking with a human operator. Consumers will also be able to have a decision made by artificial intelligence reviewed by the organization's employees. Today, bank customers often complain about unexplained refusals of certain services that cannot be contested.
The code as a whole is also intended to increase public confidence in new technologies, taking into account customer vulnerability factors (age, education, disabilities, and others) and the possible impact of these factors on the provision of services to such consumers, the expert emphasizes.
What are the risks of introducing AI in the financial sector?
Meanwhile, as noted in the Central Bank's report, the widespread introduction of artificial intelligence carries a number of risks, including personal data leaks, cyber threats, possible model "hallucinations," and violations of consumer rights. The report emphasizes that generative AI can be used by fraudsters, for example to create deepfakes and manipulate information.
"The main threats associated with the introduction of AI in the financial sector are data leaks and cyberattacks, algorithm errors and their unexplainable decisions, and the potential abuse of generative technologies," Stanislav Yezhov tells Izvestia. "Without proper safeguards, this can undermine trust, increase financial losses, and amplify discriminatory effects."
Among other threats, Alexander Kobozev highlights the risk of attacks on the personal data used to train AI. In addition, if the relevant systems are not sufficiently refined, neural networks can make errors that result in incorrect or even dangerous recommendations. Kobozev also points to ethical risks: the possibility of biased or discriminatory decisions made by algorithms.
To mitigate these threats, the financial sector should use trusted generative AI models built to secure-development requirements at every stage of the life cycle of both the model and the dataset, notes Alexander Khonin. In addition, the quality of datasets and of the AI models themselves should be checked regularly.
"Risk management in the development and application of artificial intelligence should be treated as part of a financial institution's overall risk management system, and AI systems should be included in threat models covering information security breaches and the operational reliability of the particular bank," the expert concludes.
Translated by the Yandex Translate service