
There has been a massive surge in fake AI-generated videos of natural disasters such as earthquakes, fires, and floods on social media. Their number has increased two- to six-fold over the past year, depending on the scenarios they depict, Izvestia has found. The reason is not only the rapid development of AI technologies but also social network algorithms that promote such videos: they evoke emotions and rack up views and reposts, experts say. These technologies have already gone beyond entertaining audiences and become a tool for fraud and for influencing public opinion. For example, there have been cases where deepfakes about the aftermath of earthquakes were used for fake charity fundraisers.

How AI simulates disasters

The number of videos purporting to show natural disasters (floods, earthquakes, and fires) is growing rapidly on social networks. In reality, these are fakes created with generative artificial intelligence. Over the past year, the number of such publications in Russia and worldwide has increased by 100-500%, that is, two to six times, depending on the scenarios played out in the videos, VisionLabs (a company specializing in deepfake recognition technology) told Izvestia.

The reason for the rapid growth lies not only in the increasing availability of AI technologies but also in social network algorithms, which readily promote such videos because they evoke strong emotions and quickly accumulate views and reposts, experts say.

A social network
Photo: IZVESTIA/Mikhail Tereshchenko

— We conducted a survey in which we asked Russians to distinguish real photos from deepfakes. The majority of respondents (62-80%, depending on the image) mistook images created by specialized AI for real ones, the company said.

For example, after the earthquake in Kamchatka, a wave of fakes about the large-scale consequences of the disaster spread actively on social networks and messengers. One of them was a report about alleged damage to a nuclear submarine of the Russian Navy as a result of the earthquake. The Regional Management Center (TsUR), citing the Pacific Fleet and the regional government, promptly denied this information. Other examples include a video in which a helicopter allegedly shoots down a UAV in the sky over Sverdlovsk region, and a deepfake in which drivers listen to a radio message about the danger of an attack on an enterprise in Stavropol.

People on a commuter train with smartphones
Photo: IZVESTIA/Eduard Kornienko

Last week the media also reported on a viral video that spread rapidly on social networks. The footage allegedly shows the collapse of a giant aquarium, with a stream of water sweeping away several people; the incident was claimed to have occurred in California. In fact, nothing of the kind happened.

Scenarios of destruction that previously could only be created in Hollywood studios are now generated in a matter of minutes. Over the past year alone, the number of platforms and applications that can be used to create videos of non-existent disasters has grown from hundreds to several thousand, AI creator Sergey Bigulov told Izvestia. According to him, this amounts to a real revolution in synthetic video content: models such as Runway Gen-4, Google Veo 3, and Seedance 1.0 can generate 1080p (Full HD) video from nothing more than a text description. Moreover, many of these services are available by subscription for a nominal fee, or even free of charge.

A fake photo of a disaster
Photo: Getty Images/kickers

"What used to require studio equipment and weeks of work can now be done on a smartphone in a few minutes," the expert said.

The real leap came with the new Runway Aleph model (an AI tool for editing video with text prompts), introduced on July 25, 2025. Unlike its predecessors, Aleph can not only create fake scenes from scratch but also manipulate existing footage, Sergey Bigulov emphasized. It is enough to take footage of an ordinary city street, and in a few minutes it turns into a disaster zone. The model can also remove unwanted elements from the frame, such as camera operators, equipment, and props.

A young woman with a smartphone
Photo: Getty Images/picture alliance

— We are witnessing the rapid development of technologies that blur the boundary between reality and synthetic content. The most worrying aspect is the democratization of these technologies: what used to require professional skills and expensive equipment is now available to any smartphone user, the expert said.

How to recognize deepfakes on social networks

The main way to protect yourself is to approach such content critically. Do not trust videos that appear suddenly and without a source: any real incident leaves a trace, such as comments from official services or eyewitnesses and reports in local media, said Yaroslav Meshalkin, an expert in digital communications.

— Checking the publication date and the cited sources often helps to identify inconsistencies. You should also pay attention to technical details: AI still makes mistakes in small things, such as distorted faces in a crowd, unrealistic movements, and smeared objects in the background, he said.

Video on monitor screens
Photo: Getty Images/Boston Globe

According to Yaroslav Meshalkin, the task is not only to learn to recognize fakes but also to build a new kind of media literacy. Students were once taught to distinguish the yellow press from quality journalism; now all Internet users should get used to checking sources and being wary of videos that provoke excessive emotion. Changes in the policies of the largest social media platforms, which are learning to identify AI-generated videos and will label them accordingly, will also help.

The press service of MWS AI (a company that develops AI solutions) believes the country already needs deepfake detectors that work not only with images but also with sound, because in a year or two Russians will encounter more advanced videos combining several technologies: a superimposed disaster scene, the image and voice of a real person, and the screams of a crowd.

Watching films online
Photo: TASS/Vedomosti/Andrey Gordeev

The purpose of such videos may be to play on people's emotions in an attempt to gain social media reach through viral content. A high-profile example is a clip about a glass bridge in China that supposedly collapsed, said Vladislav Tushkanov, head of the machine learning technology research and development group at Kaspersky Lab.

— When it comes to online fraud, attackers who use deepfake videos in their schemes may target people's data and money. For example, they can build their cover stories around fake charity fundraisers, as happened, in particular, after the earthquake in Turkey. At the same time, it is important to understand that fake videos in scam schemes are just one element of social engineering; in essence, they are bait, the expert explained.

According to him, protection against such cyber threats comes down to basic digital security rules: double-check whether an organization is actually running a fundraiser or campaign, do not click links under questionable videos, and do not transfer money without verifying who will actually receive it.

Translated by the Yandex Translate service
