
The Claude neural network has topped the download charts for almost a week. Earlier, the Pentagon called the program's developer, Anthropic, a threat to national security. But neither this nor a wave of DDoS attacks has kept the AI platform from remaining popular with Americans, and the US military continued to consult the model even after the scandalous accusation. How the conflict between the US government and a private company unfolded, and what Claude's prospects are in its battle with Grok and OpenAI — in this Izvestia article.

National security risk

The conflict between the American company Anthropic and the Pentagon has become one of the highest-profile clashes between corporate ethics and state interests in the history of artificial intelligence, says Yegor Zubakin, a political scientist and expert at the New Era Development Center. It all began back in 2024, when Anthropic, through the contractors Palantir and Amazon, signed a $200 million contract with the Pentagon.

The Pentagon
Photo: REUTERS/Dado Ruvic/Illustration

The Claude model was integrated into the classified networks of the US Department of Defense for intelligence analysis and military simulations. The company immediately drew strict "red lines": a complete ban on using the AI for mass surveillance of American citizens and on building fully autonomous weapon systems, in which the machine decides on its own to destroy a target. The turning point came in February 2026. The new administration of Donald Trump and Defense Secretary Pete Hegseth demanded that both restrictions be lifted. "A private company should not dictate the terms of national security," the Pentagon said. The head of Anthropic, Dario Amodei, categorically refused.

"As a result, on February 27, Anthropic was officially designated a 'threat to the national security supply chain,' a status previously applied only to foreign companies such as China's Huawei. Military contractors were banned from any contact with it, and government agencies were given six months to remove Claude from their systems entirely," Zubakin explained.

The paradox is that at that very moment the model was being actively used in real combat operations. According to Western media reports, Claude assisted in the capture of Nicolas Maduro in Venezuela in January 2026 and was used by US Central Command during airstrikes on Iran as part of Operation Epic Fury — even after the official ban. As Igor Baranov, an information technology specialist, noted, the military continued to work with the system because it was already deeply embedded in their infrastructure.

Iran
Photo: REUTERS/Majid Asgaripour/WANA

"Claude is banned, and the military continues to consult with him, shifting some of the responsibility for the protracted conflict to AI," he suggested.

At the same time, experts disagree on the true reasons for the ban. Yegor Zubakin believes the formal claims mask ordinary competition: Anthropic was punished for the very principles that its rival, OpenAI, had promptly written into its own contract.

A hot spot in the Apple charts

The ban, which was supposed to hit Anthropic hard, unexpectedly backfired on its initiators. Five days after Trump's "black mark," the Claude app rose to first place among free app downloads in the United States and other available regions, and Anthropic reported a 60% increase in new users. At the same time, the app has been hit by powerful DDoS attacks, judging by data from the independent website DownDetector, which tracks outages across applications, platforms, and services. The experts interviewed do not rule out that both pro-government hackers and foreign players may be behind them.

Meanwhile, Claude's competitor ChatGPT has been recording mass uninstalls and user outflow since signing its agreement with the Pentagon. According to the analytics firm Sensor Tower, the number of people removing OpenAI's AI product rose 200% above the usual level. The peak came on February 28, when deletions were up 295%.

AI
Photo: IZVESTIA/Sergey Konkov

"In the long run, Claude has become a symbol of ethical AI, which has not been sold to the military," says Darko Todorovsky, a military—political analyst, candidate of political sciences, researcher at the RANEPA under the President of the Russian Federation.

In the consumer segment, the model can hold the lead for another two to three months on the strength of the "protest effect," the expert believes, but in the military sphere the advantage has gone to the more flexible OpenAI and Grok (xAI). The program's popularity will also be shaped by a sustained policy of restrictions, as in the case of China's Huawei and ZTE, which the US Federal Communications Commission officially declared threats to national security back in 2020.

Experts at the Intelligence Express Laboratory (LIEX) point to the deeper risks of military use of AI, whether Claude or ChatGPT. Models are vulnerable to adversarial attacks: a suitably colored piece of cloth or a special pattern is enough for an algorithm to mistake a tank for a school bus. Auditing errors, such as when an AI proposes a civilian object as a target, is also extremely difficult and is practically never done under real combat conditions.

"Algorithms have no legal personality, which means the question 'Who is to blame?' remains unanswered," LIEX stated.

AI
Photo: IZVESTIA/Yulia Mayorova

Military AI is physically hosted in isolated data centers in the United States or in mobile modules that can be transported in standard shipping containers. The model itself occupies 100 to 500 gigabytes, but real-world operation requires thousands of powerful GPUs.

The Pentagon's attempt to quickly swap the "obstinate" Claude for more obedient partners has boomeranged unexpectedly. Claude has not disappeared; it has become more popular than ever, Igor Baranov summed up. And the question of whether the ethics of artificial intelligence can withstand military interests has only sharpened.

Translated by the Yandex Translate service
