Skip to main content
Advertisement
Live broadcast
Main slide
Beginning of the article
Озвучить текст
Select important
On
Off

Chatbots like ChatGPT or Grok are creating new loopholes for committing crimes, and in 2025 the number of such offenses will increase dramatically, experts said. According to them, criminal methods will improve along with the development of AI, they will be used by about 90% of illegal cyber groups, and attacks will affect not only cyberspace, but also other, more familiar aspects of life. Experts called the car bombing in Las Vegas on January 1 the first swallow. The organizer of the explosion used AI to prepare the crime. Details about the trend - in the material "Izvestia".

How AI first became an accomplice in a crime

In 2025, the number of unlawful actions for which attackers will use AI will grow significantly, cybersecurity experts told Izvestia. The first officially recorded case of committing a real crime with the help of technology was the bombing of a Tesla Cybertruck near the Trump International Hotel in Las Vegas on January 1. Its organizer, Matthew Livesberger, used ChatGPT to plan the attack. As noted by Las Vegas Sheriff Kevin McMahill, this is the first time in history that a chatbot has been used on U.S. soil to prepare a crime.

- The annual increase in crimes committed with the help of AI is not recorded by official bodies. It is determined roughly by the number of bots and the content they generate. In 2025, the growth of such crimes may increase up to eight times or even ten times compared to last year," said Igor Bederov, head of the T.Hunter Investigations Department.

тесла

A Tesla Cybertruck after it exploded outside the Trump International Hotel in Las Vegas, United States, January 1, 2025

Photo: REUTERS/ALCIDES ANTUNES

According to the expert, neural network protection measures can be circumvented by so-called prompt engineering. In such situations, an attacker can conditionally place the AI in a fictitious situation - for example, ask it to imagine itself as a writer and create a novel about terrorists, including instructions for making an explosive device.

- The reason for the rise in global crime rates with AI will be automation and new opportunities to personalize attacks. Cybercriminals will start using the technology for more sophisticated attacks, which will require new methods of protection and changes in legislation and software to combat this type of crime," said Viktoria Beresneva, director of the Autonomous Nonprofit Organization "Sports and Methodological Center "Department of Cybersport" and an expert in the field of high technology.

AI is not capable of launching attacks without an operator, but it can easily cope with composing phishing emails. At the same time, it will get a multiple growth in 2025, and tools with artificial intelligence can become ubiquitous for attackers. Especially for those who use social engineering attacks, said Alexei Gorelkin, CEO of Phishman.

- Neural network and chatbot technologies may be adopted by up to 90% of all cyber groups that enrich themselves through social engineering. However, it will be extremely difficult to calculate the total number," the expert added.

хакер
Photo: IZVESTIA/Sergey Lantyukhov

Dmitry Shtanenko, a senior lecturer at Synergy University, agrees. In his opinion, the number of all types of crimes with the help of AI will inevitably grow several times. The main danger is situations when users will be able to "lure" chatbots with instructions on how to create weapons, explosive devices, chemical compounds that can be used for terrorist purposes.

- Along with the development of generative AI and GPT technologies, their accessibility increases. For the user, the threshold of entry and cost of ownership is decreasing, which means that these technologies will be more actively used for fraud. Especially they increase due to easy and plausible human imitation," said Andrei Arefiev, Innovation Director of InfoWatch Group.

Ekaterina Snegireva, senior analyst at Positive Technologies research group, noted that criminals also use AI to generate and modify malicious code. For example, in June 2024, a phishing campaign was detected in which the AsyncRAT virus was widely spread and malicious JavaScript and VBScript scripts were computer-generated. According to its data, the number of data breaches will increase this year due to the widespread use of AI by organizations, as 38% of employees surveyed admitted to using such tools without notifying their employer.

ноутбук
Photo: IZVESTIA/Sergey Lantyukhov

In 2025, F.A.C.C.T. experts predict an increased role for AI in cyberattacks, including the creation of more convincing dipfakes with its help, automation of phishing for mass attacks, and improved methods for finding vulnerabilities in systems and applications. At the same time, experts do not collect statistics on such cases, but study specific schemes using AI and diplfeaks.

Protection against illegal content

Foreign services such as Grok or ChatGPT are inaccessible to Russians, they cannot be legally paid for with Russian bank cards, and they cannot be used by those whose IP address is tied to the Russian Federation, recalled GG Tech IT expert Sergei Pomortsev. He believes that domestic analogs are safer in terms of providing sensitive information, as they are under the control of Russian agencies and are also regulated by the legislation on personal data and other documents.

- It is already necessary to introduce into the legislative base of the Russian Federation more precise and in line with technological development. They should refer to the ban on the use of GPT-mechanisms to create requests related to the production of artisanal weapons and recipes containing such chemicals as hydrogen sulfide," said criminal law specialist Anatoly Litvinov.

телефон
Photo: Izvestia/Pavel Volkov

According to Igor Bederov, stop-word filters are in place in the Russian Federation to protect users from illegal content, and there is a "Code of Ethics in the field of AI" that has been signed by major organizations, including Sber, Skolkovo and others.

- There is also a proposal to empower Roskomnadzor to conduct an expert examination to identify content created by artificial intelligence without labeling," the specialist specified.

As noted in the Alliance in the field of AI, during the development of neural network models, constant testing is carried out to find the maximum possible number of requests that allow unethical content, and to train artificial intelligence not to respond to such requests. At the same time, the activities of all domestic developers are regulated by the declaration "On Responsible Development and Use of Services in the Sphere of Generative AI".

Live broadcast