
One person at a time: the number of deepfake creation programs has grown rapidly online

The number of programs available for creating deepfakes has grown rapidly on the Internet. Today there are more than 50,000 of them, although two years ago such tools were practically nonexistent. Earlier, Izvestia reported that, according to experts, every second person may face fraud using such technologies by the end of the year. The editors conducted an experiment in creating a deepfake and found that no special skills are needed to do it. According to experts, the uncontrolled spread of such technological solutions significantly increases the number of scams and information attacks on users. For more information, see the Izvestia article.
Is it difficult to create a deepfake?
Two years ago, deepfake creation technologies were available only to a narrow circle of highly trained specialists with expensive software. Now the situation has changed dramatically: more than 50,000 free tools for generating deepfakes have become publicly available, from mobile apps to complex AI platforms, the press service of MWS AI (part of MTS Web Services, formerly MTS AI) told Izvestia.
Izvestia conducted its own experiment in creating a deepfake and confirmed that it requires minimal effort and experience. All it took to create a digital avatar was two photos: one of the correspondent and one of his friend.
The next steps were simple: the photo was uploaded to one of the popular online services offering a face replacement function, and a pre-prepared photo of another person was chosen as a template. The program automatically placed the journalist's face on the other person's body, without any manual adjustments or additional processing. The process took no more than 10 minutes. Creating a video clip takes a little more effort and money: you need to buy a subscription that unlocks improved algorithms, which costs on average $3-6 per month.
One technology company created a video fake for Izvestia, and it needed only one photo to do so. First, the portrait of the "victim" is loaded into the program and animated with the help of AI; this stage is one of the most difficult. It is important to recreate facial expressions, eye movements, and head turns so that everything looks natural. To make the image "speak", an audio track is overlaid on it and synchronized with the lip movements, after which the resulting video can be downloaded.
During the experiment, the editors confirmed that even basic tools make it easy to create fake photos, and with a paid subscription, deepfakes become almost indistinguishable from reality.
Are there any ways to protect against deepfakes?
The sharp increase in the number of available tools for creating deepfakes is an alarming signal. Today, any user with basic skills can generate a realistic video or audio on behalf of anyone, from a relative to a government official. The main threat lies not only in fake news and reputation attacks, but also in the practical use of these technologies by fraudsters, said State Duma Deputy Anton Nemkin.
"Deepfakes are used to deceive citizens and organizations: fake executives' voices and video messages from 'bank employees', 'officials' or 'relatives' become part of complex social engineering schemes," he told Izvestia.
According to him, the problem is compounded by the fact that deepfakes are becoming increasingly difficult to recognize: generation technologies are developing faster than protection tools. For the layman, the difference between real and synthetic content is almost invisible, which creates fertile ground for mass manipulation, panic, and the erosion of trust in visual and audio evidence in general.
According to MWS AI, the main types of deepfakes used in fraud include video calls and short round video messages ("circles") in messengers with face and voice substitution to imitate real people, and the creation of fake identities on social networks and in dating apps using synthesized photos and videos to build false trust and deceive users. There is also the problem of pornographic deepfakes, where fake explicit images or videos are created for blackmail, the company stressed.
It can be difficult to recognize a deepfake on one's own, since attackers often deliberately add noise to the content: for example, they send fuzzy, blurred deepfake videos as "circles" in messengers, overlay street noise on pre-recorded fake audio messages, and so on, said Dmitry Anikin, head of data research at Kaspersky Lab.
As Izvestia wrote, information security experts expect every second Russian to face a deepfake attack before the end of the year. In 2026, experts predict a rise in deepfake calls, in which an attacker talks to a person in the voice of someone they know.
"There are special solutions on the market for recognizing deepfakes, but they do not provide a 100% guarantee, like any technology," the expert said.
Deepfake detectors are highly effective at detecting all types of fakes, said Alexander Parkin, head of research projects at VisionLabs. Integrating them into social networks and messengers, along with raising users' digital literacy, could help people avoid the threat of blackmail and extortion, he added.
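To give a sense of how such a detector could plug into a messenger or social network screening workflow, below is a minimal sketch in Python. It samples frames from a video with OpenCV and averages a per-frame "synthetic" score. Note the assumptions: score_frame is a hypothetical placeholder for a real trained detector model (it is not VisionLabs' product or API), and the sampling interval and flagging threshold are illustrative, not recommended settings.

# A minimal sketch of frame-level deepfake screening, for illustration only.
# score_frame() is a hypothetical stand-in for a real trained detector model;
# the sampling interval and threshold below are assumptions, not product settings.
import cv2  # pip install opencv-python

def score_frame(frame) -> float:
    # Placeholder: a real detector would return the probability that
    # this frame is synthetic. Returning 0.0 keeps the sketch runnable.
    return 0.0

def screen_video(path: str, sample_every: int = 30, threshold: float = 0.5) -> bool:
    # Sample one frame out of every `sample_every`, score each sampled frame,
    # and flag the video if the average synthetic score exceeds `threshold`.
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            scores.append(score_frame(frame))
        idx += 1
    cap.release()
    return bool(scores) and sum(scores) / len(scores) > threshold

In practice, screening of this kind would run server-side on uploaded media, and per-frame scoring alone misses audio-only fakes, which is one reason the experts quoted above caution that no detector gives a 100% guarantee.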
Anton Nemkin believes that the solution to such deception must be systemic: detection technologies, legal regulation, verification of digital content, and public education are all needed.
It is important for users to follow basic digital security recommendations: double-check whether a person is really who they claim to be, treat any messages on the Internet critically, and use reliable security solutions that block access to phishing and scam pages, warn about calls from potential fraudsters, and prevent malware from being installed on the device, Dmitry Anikin added.