The Central Bank of the Russian Federation has developed a code of ethics for working with AI for financial organizations
The Central Bank of Russia has approved the basic principles for the use of artificial intelligence (AI) in the financial market and has prepared a draft code of ethics for organizations in the sector. This was reported on the regulator's website.
"The AI Code of Ethics will help increase trust in AI companies among both customers and investors, will have a positive impact on the safety of technology applications in the financial market, will allow the regulator to participate in risk monitoring and analysis, and will provide a deeper understanding of the specifics of AI-related risks," the document says.
According to the document, the vast majority of financial market participants supported the initiative to create a separate code of ethics that would take into account the specifics of the use of AI in the field of financial services.
The regulator emphasizes that the use of AI opens up significant opportunities for financial organizations to improve the quality of services, develop personalized products and optimize business processes. Among the areas of technology application, market participants name automation of analytics, working with big data, creating intelligent financial assistants and developing generative AI, including large language models.
However, the report notes that the widespread adoption of AI is associated with a number of risks, including threats of personal data leakage, cyber threats, possible "hallucinations" of models, as well as risks of consumer rights violations. It is emphasized that generative AI can be used by fraudsters, for example, to create deepfakes and manipulate information.
The Central Bank of the Russian Federation suggests using so-called soft regulation tools: recommendations, guidelines and standards that will help companies develop responsible AI practices without creating additional legal barriers. At the same time, the regulator will continue to work on removing barriers to the exchange and turnover of data, including anonymized personal data, which, according to market participants, is one of the key conditions for the development of AI in Russia.
Maxim Smirnov, Deputy CEO of IVA Technologies, said on June 29 that personal data leaks and deepfake fraud are among the main risks of AI. In Russia, 19 major data breaches occurred in the first two months of this year, affecting 24 million people. To protect personal data, he recommends avoiding posting it in open sources and using encryption. For the corporate segment, it is preferable to deploy AI within a company's closed internal network.
Translated by the Yandex Translate service