
Roskachestvo named the main difficulties in recognizing deepfakes

Photo: IZVESTIA/Pavel Volkov

Deepfakes can be found on social networks, blogs, and marketplaces — characters generated by artificial intelligence (AI) are almost impossible to distinguish from real people, and the old methods of identifying them no longer work. This was announced on April 24 by the press service of Roskachestvo.

"Deepfake technologies have gone far ahead. Today, a scammer can call you via video link with the face and voice of your relative or boss, and you won't notice the trick. The main weapon against this is not technical tricks, but your digital discipline. Don't let technology fool you," the press service quoted Sergey Kuzmenko, head of the Roskachestvo Center for Digital Expertise, as saying.

One of the main difficulties in recognizing AI-generated characters is that they are assembled using multiple services. Developers generate the face, facial expressions, movements, voice, and intonation in different neural networks, and draw on several photos of similar-looking people whose features are blended together. Modern AI characters are built on the latest paid neural networks, which produce higher-quality output, Roskachestvo emphasized.

These characters look so real that they can "run" blogs and carry advertising under contracts with major brands, while subscribers of AI influencers send them money and gifts. This way of making money on deepfakes is entirely legal, and advertisers often don't care whether the character is generated or real: what matters is audience reach and engagement.

To protect yourself from deepfakes, Roskachestvo advised agreeing on a code word in advance that can be used to verify a loved one's identity during a video call. You can also ask a question that only the real person could answer. In addition, the caller can be tested with an unexpected request, such as showing their surroundings off-camera. In case of any suspicion, haste, or pressure, it is worth trusting your intuition and ending the conversation, Roskachestvo concluded.

AI director and entrepreneur Alexey Rykov said on April 5 that generative neural networks can reproduce not only a person's appearance but also their facial expressions, gestures, and behavioral traits, which turns deepfakes into a tool of manipulation. According to him, when creating realistic images and videos, neural networks rely on real data: photos and videos from which behavioral patterns are extracted.

Translated by Yandex Translate
