
Financial schemes using artificial intelligence (AI) now bring attackers 4.5 times more profit than attacks without such tools, Interpol reports. The organization noted that generative models have dramatically increased the quality of deception and simplified the scaling of criminal operations. Izvestia looks at how AI is changing the economics of fraud, how dangerous it is, and how such threats can be countered.

What Interpol said about AI in the service of fraudsters

According to a recent Interpol report, financial schemes using artificial intelligence bring attackers 4.5 times more profit than attacks without such tools. The organization noted that generative models have dramatically increased the quality of deception and simplified the scaling of criminal operations.

Photo: IZVESTIA/Sergey Lantyukhov

At the same time, according to its data, attackers most often use neural networks to refine messages, emails and other lures for victims. Such services remove language errors, smooth out unnatural wording and help accurately copy the style of well-known companies. This processing significantly increases the chances of success in attacks where a fraudster impersonates a brand, a bank employee or another trusted sender.

Global losses from financial fraud in 2025 are estimated at about $442 billion. Preliminary forecasts suggest this figure will only grow over the next 3-5 years, with artificial intelligence potentially one of the main drivers. Interpol Secretary General Valdecy Urquiza stressed that the cost of financial crime is measured not only in money: savings, human dignity and, in the most severe cases, lives are at stake.

How neural networks affect the profits from fraudulent schemes

Today, artificial intelligence can be used at almost every stage of fraud, from finding and profiling victims to writing messages, spoofing voices and sustaining a dialogue, says Yakov Filevsky, an expert in sociotechnical testing at Angara Security, in an interview with Izvestia.

Photo: IZVESTIA/Sergey Konkov

"Neural networks help scammers create 'content', that is, various posts and messages for interacting with the victim, and they are also increasingly used to scan for technical vulnerabilities," the specialist says. "AI is no longer a tool for professionals only: as services have become more accessible and better adapted to the average consumer, semi-professional groups have begun using it as well."

The increased profitability of criminal campaigns using AI comes down to two key factors: the growing scale and geography of the schemes, and the ability to tailor attacks to each individual victim, says Roman Reznikov, an analyst at the Positive Technologies research group. Automated collection of victim information and generation of fake content help cybercriminals scale phishing campaigns and carry out attacks in other languages, thereby increasing the number of potential victims.

Photo: IZVESTIA/Pavel Volkov

The improved quality of generated text, which some studies have found even more convincing than text written by humans, leads to more victims being deceived in fraudulent campaigns, and hence to higher profits, Roman Reznikov points out. Beyond scaling up attacks with AI, cybercriminals can also automatically adjust their deceptive patterns to a specific victim, increasing the likelihood that each individual attack succeeds.

"Deepfakes provide additional opportunities to personalize attacks," the specialist says. "They are used in attacks on both individuals and companies. In both cases, forged voice and video are used to raise the victim's trust in the information being conveyed by impersonating a loved one, friend or colleague."

Which AI techniques are most profitable for scammers

Three applications of neural networks bring fraudsters the greatest returns today, according to Alexey Korobchenko, head of the information security department at the Security Code company. The first is voice synthesis and the creation of deepfakes, which allow fraudsters to impersonate company executives or victims' relatives and friends. Such attacks are difficult to recognize by ear.

"The second direction is the generation of realistic phishing content," the Izvestia interlocutor says. "AI writes letters on behalf of government agencies and banks without a single mistake and, within minutes, creates copycat websites that cannot be distinguished from the originals."

Photo: IZVESTIA/Eduard Kornienko

The third key area is the automated collection and analysis of victim data from open sources, which makes attacks personalized and as convincing as possible, adds Alexey Korobchenko. These outsized profits, the expert says, allow scammers to invest in development and hire top developers, which leads to ever more sophisticated threats.

In addition, secondary damage to the economy is growing, as businesses are forced to build rising protection costs into the price of goods and services. But the main danger is the erosion of trust in digital channels: if people stop believing their own eyes and ears, the very foundation of digital interaction collapses, the expert emphasizes.

"The high margins of AI crime create a vicious circle: the higher the profit, the more is invested in developing new, even more sophisticated attacks, which drives the professional evolution of cybercrime," says Solar WebProxy Product Manager Anastasia Khveschenik. "For society, this can mean growth not only in financial losses but also in social tension, as people stop trusting even visual and audio information."

How to deal with lucrative AI-based fraud schemes

In its March 16 report, Interpol called for close international cooperation between law enforcement agencies and the private sector to counter cybercriminals, as well as the use of emergency payment-stopping mechanisms (for example, I-GRIP), says Maxim Fedosenko, a leading analyst at the Gazinformservice cybersecurity analytical center.

Photo: IZVESTIA/Sergey Konkov

"However, these technologies are still being refined, so in today's reality the first and main line of defense remains digital hygiene and adherence to the 'zero trust' concept," the Izvestia interlocutor emphasizes.

Countering attackers requires an integrated approach, for example introducing tools for detecting AI-generated content and strengthening authentication, adds Konstantin Larin, head of the cyber intelligence department at Bastion. According to the expert, businesses should develop behavior analytics and anomaly monitoring, while the government should tighten regulation. At the same time, user training deserves real attention, since the human factor remains a key entry point for cybercriminals.
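To illustrate the "behavior analytics and anomaly monitoring" the expert mentions, here is a minimal sketch of one common form of it: flagging a transaction that deviates sharply from an account's history using a z-score. The function name and the 3-sigma threshold are illustrative assumptions, not part of any product named in the article.

```python
# Minimal sketch of anomaly monitoring: score how far a new value deviates
# from an account's historical behavior, in standard deviations (z-score).
from statistics import mean, stdev

def anomaly_score(history: list[float], current: float) -> float:
    """Return how many standard deviations `current` lies from the mean of `history`."""
    mu = mean(history)
    sigma = stdev(history) or 1.0  # guard against zero variance
    return abs(current - mu) / sigma

# Example: typical transfer amounts for an account, then a sudden outlier.
past_transfers = [120.0, 95.0, 150.0, 110.0, 130.0]
score = anomaly_score(past_transfers, 5_000.0)
if score > 3.0:  # a common "3-sigma" alerting threshold (illustrative)
    print("flag transaction for review")
```

Real systems combine many such signals (amount, time of day, device, geolocation) with far more robust statistics, but the principle is the same: unusual behavior triggers additional verification rather than an automatic block.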

Photo: IZVESTIA/Yulia Mayorova

In turn, MTS Defender experts remind readers that the main protection against AI-powered fraud schemes is vigilance combined with the protection technologies of telecom operators. Where possible, it is worth minimizing your digital footprint and not making large amounts of personal information publicly available. If you receive a suspicious call or video message, even from a familiar contact, interrupt the conversation and call the person back through a known channel to verify.

Translated by Yandex Translate
