Experts have warned about the consequences of the uncontrolled development of AI
Artificial intelligence is no longer just about "smart" assistants and fast computing, experts at Positive Technologies warn. They analyzed three key AI trends (agentic AI, edge AI, and quantum AI) and found that each carries not only business potential but also serious cyber threats.
Agentic AI built on large language models (LLMs) consists of bots that independently analyze data, make decisions, and even communicate with customers. According to Gartner forecasts, by 2028 such systems will be able to automate up to 15% of routine business work, from logistics management to personalized service. This technology, however, has a downside.
Attackers have learned how to deceive agents: tamper with data in their memory, plant false information (for example, fake customer orders), or overload the system with requests. This can trigger a chain of errors: a bot may start ignoring security policies, sending sensitive data to the wrong place, or even launching malicious actions. And if malicious code is injected into the frameworks used to build such agents, every system that depends on them suffers.
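To make the failure mode concrete, here is a deliberately minimal Python sketch. All names in it (the Agent class, its methods, the order format) are hypothetical and not taken from any real agent framework; the point is only that a planted record in an agent's memory is treated exactly like a legitimate one once provenance is lost:

```python
# Toy illustration of agent-memory poisoning. All names here
# (Agent, remember, act) are hypothetical; real agent frameworks
# differ, but the failure mode is the same: untrusted text enters
# the agent's memory and is later treated as fact.

class Agent:
    def __init__(self):
        self.memory = []  # long-term store the agent consults before acting

    def remember(self, note: str):
        # In a real system notes arrive from documents, emails, or tool
        # outputs, i.e. from sources the agent does not control.
        self.memory.append(note)

    def act(self):
        # The agent naively trusts everything in memory: there is no
        # provenance check separating planted notes from real ones.
        for note in self.memory:
            if note.startswith("ORDER:"):
                print(f"Fulfilling {note!r}")

agent = Agent()
agent.remember("ORDER: 500 units to warehouse A")           # legitimate
agent.remember("ORDER: 10000 units to attacker's address")  # planted
agent.act()  # both orders are fulfilled indiscriminately
```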
Edge AI is AI that runs "on site": for example, in smart cameras, medical sensors, or smart home devices. It processes data locally, without needing a stable internet connection, which is critical in medicine (real-time diagnostics) and in smart cities (traffic light monitoring without delays).
Smart devices often carry vulnerabilities such as outdated components and insecure settings. Attackers can hit them over the network (DDoS, IP/MAC address spoofing) or tamper with the input data, as sketched below. In medicine this can lead to misdiagnosis; in industry, to equipment failures.
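As one minimal illustration of a defense against crude input substitution, the following Python sketch rejects sensor readings that fall outside a plausible range or jump implausibly fast. The bounds are assumed for illustration (a heart-rate-like signal) and are not taken from any real device or standard:

```python
# Plausibility filter for an edge sensor feed. The thresholds below
# are assumptions for this example, not values from a real device.

PLAUSIBLE_RANGE = (30.0, 220.0)  # physiologically plausible values, bpm
MAX_JUMP = 25.0                  # max plausible change between samples

def accept_reading(value, previous):
    low, high = PLAUSIBLE_RANGE
    if not low <= value <= high:
        return False  # outside physical range: likely spoofed or faulty
    if previous is not None and abs(value - previous) > MAX_JUMP:
        return False  # implausible jump: quarantine instead of trusting
    return True

stream = [72.0, 74.0, 300.0, 75.0]  # the third sample is injected garbage
previous = None
for value in stream:
    if accept_reading(value, previous):
        previous = value  # only validated samples reach the on-device model
    else:
        print(f"rejected suspicious reading: {value}")
```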
Quantum AI is not yet widely used, but its potential is huge: it solves complex optimization problems (for example, in logistics or pharmaceuticals) and models molecules for new materials. For now, such solutions are accessed through cloud platforms, and this is precisely what creates the risks.
Attackers can break into cloud platforms to steal the architecture of AI models or their training data. If the input data is tampered with, a toxic substance in pharmaceuticals may be classified as safe, or fraudulent transactions in finance may slip through undetected. In addition, quantum AI amplifies traditional attacks: for example, it speeds up brute-force password cracking.
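The standard reference point for quantum-assisted brute force is Grover's algorithm, which finds a matching key in roughly the square root of the number of candidates, effectively halving a key's bit strength. A back-of-the-envelope Python sketch (order-of-magnitude figures only):

```python
# Why Grover's algorithm worries password-security people: it finds a
# matching key in roughly sqrt(N) evaluations instead of N, which
# effectively halves a key's bit strength.

def classical_ops(bits):
    return 2.0 ** bits        # exhaustive search of the whole keyspace

def grover_ops(bits):
    return 2.0 ** (bits / 2)  # quadratic speedup: sqrt(2 ** bits)

for bits in (64, 128, 256):
    print(f"{bits}-bit key: classical ~{classical_ops(bits):.1e} ops, "
          f"Grover ~{grover_ops(bits):.1e} ops (like a {bits // 2}-bit key)")
```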
"It is now easier for attackers to hack into corporate systems using old methods than to attack AI agents or quantum models. But everything may change tomorrow," says Yana Avezova, a leading analyst at Positive Technologies.
Experts advise not to panic but to stay vigilant. Businesses adopting AI should, first, build cybersecurity into their strategy: not only protect the models themselves, but also control how AI interacts with business processes. Second, they should train employees so that staff understand the risks and can recognize suspicious AI behavior. Third, they should protect data: restrict the information AI can access and implement leak-control mechanisms (one such mechanism is sketched below).
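As one concrete example of leak control, the sketch below scans an agent's outgoing text for sensitive patterns and redacts them before anything leaves the perimeter. The regexes and the sample message are illustrative only, not a production DLP rule set:

```python
# Redact sensitive patterns from an AI agent's outgoing text. The
# patterns here are illustrative; real systems also track document
# provenance and context, not just surface patterns.

import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                   # bare 16-digit card-like numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def redact(text):
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

outgoing = "Invoice for client ivanov@example.com, card 4276123456789012"
print(redact(outgoing))  # -> "Invoice for client [REDACTED], card [REDACTED]"
```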
On July 31, it was reported that so-called digital interlocutors are becoming real companions for thousands of Russians. In 2025, the number of users of such services doubled, and average AI assistant traffic quadrupled, according to Yota analytics. Psychological support bots showed particularly explosive growth: their audience grew almost fivefold.