On the Dark Side: Hackers Have Learned to Use AI for Extortion
Hackers have learned how to use artificial intelligence (AI) for extortion. According to experts, the first neural-network-based ransomware has already been found in the wild, and such viruses may soon become a more common and more dangerous weapon for cybercriminals. Izvestia looks at how hackers use AI for extortion, how dangerous it is, and how such threats can be countered.
How do hackers extort using AI?
Artificial intelligence interests cybercriminals as a tool for extortion for several reasons, says Alexey Korobchenko, head of the information security department at the Security Code company, in an interview with Izvestia.
"Algorithms, on the one hand, make it possible to automate processes, including analyzing and selecting target files, and to make attacks more effective and personalized. On the other hand, they help bypass protection systems by adapting to their defenses, which makes the malware harder to detect," says the specialist.
Unsurprisingly, the first such ransomware program has already appeared and been detected. At the end of August it was reported by researchers from ESET, an international company specializing in antivirus software and computer-security solutions.
The virus, called PromptLock, has not yet been observed in real attacks, but ESET experts believe their discovery demonstrates how publicly available AI tools can strengthen common ransomware and other cyberthreats.
What are the dangers of AI-based ransomware viruses?
The idea of using neural networks to create malware is not new; attackers have made such attempts before, says Tatiana Shishkova, a lead expert at Kaspersky GReAT, in an interview with Izvestia. In particular, at the end of 2024 the FunkSec ransomware appeared; it was used to attack public-sector organizations, as well as organizations in IT, finance, and education across Europe and Asia.
"This ransomware was created using generative AI. Judging by the technical analysis, many code fragments were written automatically rather than by hand," says the specialist.
PromptLock is notable for running an open-source AI model locally, explains Tatiana Shishkova. In other words, the neural network generates malicious code not on the attackers' side but directly on the infected device. At the same time, PromptLock is not yet fully functional, and nothing is known about its victims.
In other words, it is either a virus still under development or a proof of concept.
"PromptLock uses the GPT-OSS-20b model via the Ollama API to create Lua scripts compatible with Windows, Linux, and macOS," adds Stanislav Yezhov, Director of AI at the Astra Group.
The program scans the file system, analyzes its contents, and makes autonomous decisions about encrypting or stealing data based on predefined prompts.
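To illustrate what "local generation" means architecturally, here is a minimal, benign sketch of querying a locally running model through Ollama's REST API. The endpoint URL, JSON fields, and model tag are standard Ollama defaults, not details disclosed about PromptLock.

```python
import json
from urllib import request

# Ollama's default local REST endpoint (a standard default, not from the article)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Assemble the JSON body that Ollama's /api/generate endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate_locally(model: str, prompt: str) -> str:
    """Send a prompt to a locally running model; nothing leaves the machine."""
    req = request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama instance with the model pulled):
# print(generate_locally("gpt-oss:20b", "Write a haiku about backups."))
```

The key point is that the entire request/response loop happens on localhost, which is why such malware leaves no traffic to an external AI service for network defenses to spot.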
"The key difference from previous threats is the ability to dynamically generate code, which makes each attack unique and makes it difficult to create signatures for detection," the expert notes. — Another non—standard solution is to use the SPECK algorithm for data encryption.
What are the prospects for online extortion using AI?
The appearance of such viruses has become possible due to the growing availability of language models, including open-source ones, says Oleg Skulkin, head of BI.ZONE Threat Intelligence. Prompt workarounds have also played a role: they make it possible to obtain malicious code even from AI models with built-in restrictions.
"Thanks to neural networks, the so-called entry threshold is falling: more and more would-be attackers will be able to create ransomware. Attackers also actively practice circumventing AI's built-in limitations by using indirect instructions to mask the real goals of their attacks," Oleg Skulkin notes.
In the future, the use of neural networks in online extortion is likely to become widespread, says Tatiana Butorina, an internet consultant and specialist at the Gazinformservice Cybersecurity Analytical Center. One possible scenario is the debugging of the "experimental" version of PromptLock and its further technical refinement.
"Then this ransomware virus can be marketed as a service, including for technically incompetent individuals, which will expand the circle of cybercriminals,— the specialist warns.
The introduction of AI into malicious code will be a logical step in the evolution of cybercrime, enabling automatic vulnerability discovery, evasion of security systems, and adaptation to victims' infrastructure, adds Nikita Novikov, cybersecurity expert at Angara Security.
"Such ransomware viruses can turn out to be much more dangerous than classical ones and eventually take a prominent place in the shadow market of cyber services," the expert summarizes.
How to protect yourself from extortion using neural networks?
To combat AI ransomware, Alexey Korobchenko recommends an integrated approach: first, update the operating system and applications regularly; second, use antivirus and anti-spyware software.
"In addition, it is important to create backups of significant data on a separate medium or in the cloud in order to recover information without paying a ransom in the event of a cyber attack," advises Izvestia's interlocutor.
Companies need to provide cybersecurity training to employees, including phishing email recognition, and implement multi-factor authentication to protect systems.
Active network monitoring for suspicious activity enables rapid incident response.
As Alexey Korobchenko notes, one of the most effective defenses against AI extortion is so-called granular assignment of rights to local software that works with neural networks: installed software should have minimal privileges at the operating-system level and minimal file access.
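One way to approximate this "minimal file access" idea at the application level is an allow-list check before an AI-enabled tool touches the file system. A minimal sketch, with a hypothetical sandbox directory; real enforcement would use OS-level mechanisms (a restricted user account, containers, AppArmor/SELinux) rather than application code alone.

```python
from pathlib import Path

# Hypothetical sandbox directory the AI-enabled tool is allowed to touch
ALLOWED_ROOT = Path("/opt/ai-tool-workdir")

def is_access_allowed(requested: str) -> bool:
    """Return True only if the requested path stays inside the allowed root.

    resolve() normalizes '..' segments and symlinks, so attempts to
    escape the sandbox via path tricks are rejected.
    """
    target = Path(requested).resolve()
    try:
        target.relative_to(ALLOWED_ROOT.resolve())
        return True
    except ValueError:
        return False
```

For example, `is_access_allowed("/opt/ai-tool-workdir/cache/model.bin")` returns True, while a traversal attempt like `is_access_allowed("/opt/ai-tool-workdir/../etc/shadow")` returns False.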
"It is important to understand that although AI can write malicious code, it will not carry out an attack on its own. For now, AI can be described as an auxiliary tool at different stages of an attack," adds Oleg Skulkin. In the case of PromptLock, the attackers used a local GPT model, and the malware generated code from a predefined prompt, the expert explained.
At the global level, countering AI threats will require joint work by neural-network developers, information-security companies, and regulators, as well as the exchange of information about new types of attacks and tools, Izvestia's interlocutor concludes.
Translated by the Yandex Translate service