Prompta and contra: how Microsoft is influencing the outcome of the trial over the ethics of AI use
Microsoft has asked a court to temporarily block the Pentagon's decision to place Anthropic on a list of companies posing a threat to the United States. The corporation came out in support of the AI platform, which it actively uses itself, and thereby improved the Claude developer's chances in its legal dispute with the US Department of War. Leading neural networks assessed Microsoft's move as one that brings a peaceful settlement of the conflict closer. Meanwhile, the trial itself has already been called the largest court case in history concerning the ethics of using artificial intelligence in the military sphere. For how the American IT sector came out in support of Anthropic and opposed its blacklisting, read this Izvestia article.
How is the conflict between the Pentagon and Anthropic developing?
Microsoft has sent a letter to the court in support of Anthropic, improving the Claude developer's chances of a favorable outcome in its proceedings against the US Department of War. The dispute stems from the Pentagon's decision to place the artificial intelligence developer on a list of companies that pose a threat to national security.
Anthropic, the developer of the Claude model, filed a lawsuit in federal court in San Francisco on March 10, 2026. The company demands that the decision be reversed and its execution suspended, since it could effectively close the defense contract market to the company. The conflict between Anthropic and the Pentagon broke out after the agency declared the developer of the Claude AI platform a "risk to the supply chain." This effectively prohibits contractors from using its technology in defense projects, a formulation that has not previously been applied to American companies. Microsoft and a number of other technology companies have also submitted legal opinions to the court in support of the AI platform, warning of possible disruptions and rising costs if the restrictions take effect.
— The list of companies posing "supply chain risks" is considered in the United States a kind of blacklist on national security grounds. If the decision comes into force, Pentagon contractors will not be able to use Anthropic technologies when carrying out projects for the army, said Igor Baranov, an information technology specialist.
The case is still at an early stage: the court is considering temporarily suspending the decision of the US Department of War until the claim is heard on its merits.
Microsoft's intervention is explained by the fact that the Pentagon's plans directly affect its interests: the corporation actively incorporates Anthropic's developments into its own technological solutions, some of which are supplied to the US military.
The document submitted to the court says that the Pentagon gave itself six months to phase out Anthropic's technology but provided no similar transition period for the department's contractors. As a result, companies already using these solutions may be forced to urgently rebuild their products and infrastructure.
According to Microsoft, such a scenario would entail additional costs and create planning risks for defense projects. The corporation therefore asks the court to suspend the Pentagon's decision so that the parties have time to negotiate and find a compromise.
The document also notes that a temporary injunction would allow the army to maintain access to modern artificial intelligence technologies while ruling out scenarios in which such systems could be used for mass domestic surveillance or for military decision-making without human involvement.
— Microsoft has already become the third major player in the technology industry to publicly support Anthropic. Earlier in the day, 37 researchers and engineers from OpenAI and Google submitted their own "friend of the court" brief in support of their colleagues, said IT expert Sergey Pomortsev.
According to him, such documents are filed by third parties who are not involved in the proceedings but believe the court's decision may affect their interests or the industry as a whole. The judge has yet to decide whether to admit the Microsoft document into the case file, but in practice such petitions are usually granted.
The situation reflects the growing tension between the demands of national security and the needs of the technology industry. The Pentagon points to possible risks in the supply chains of artificial intelligence technologies, while the largest IT companies have already deeply integrated Anthropic's developments into their own products, including solutions for the defense sector, Igor Baranov noted.
"In fact, Microsoft protects not only its partner, but also its own technology ecosystem and contracts with the US military," he added.
How neural networks forecast the outcome
According to the Claude model itself, the probability of a successful resolution of the conflict between the Pentagon and Anthropic is about 45%. There is roughly a 30% chance that the court will temporarily block the military department's decision but then decline to overturn it permanently. The probability of a complete defeat for Anthropic is estimated at 20%. The neural network also notes that Microsoft's support increases the chances of success.
According to ChatGPT, the corporation's intervention increases the likelihood of resolving the conflict without a final ban on Anthropic technologies. It sees three possible scenarios. The first is a compromise agreement between the Pentagon and Anthropic (about 45%), involving additional security requirements, technology audits, or restrictions on the use of the Claude model in individual military projects. This option would let the company maintain cooperation with army contractors.
The second scenario is a partial victory for the Pentagon (about 35%): the "supply chain risk" decision remains in force, but contractors and existing projects are given a transition period, allowing a gradual phase-out of Anthropic technologies without an abrupt termination of contracts. The third scenario is a victory for Anthropic in court (about 20%), in which the military department's decision is ruled unjustified and fully blocked, letting contractors continue using the company's developments without restrictions.
— Hardly any of us likes to double-check Excel cells before a formula is calculated, right? The concern here is that if AI is embedded too deeply, even the person who is supposed to press the final button may eventually stop doing so. That is why Anthropic insisted on a ban on fully autonomous weapons, and it was punished for this, said Yegor Zubakin, a political scientist and expert at the New Era Development Center.
According to the expert, AI algorithms, including Claude, do not have legal personality and cannot be held accountable in court. As systems become faster and more autonomous, the question of their responsibility remains open.
Experts interviewed by Izvestia noted that this trial may become the first major judicial precedent defining the boundaries of the ethical use of artificial intelligence in the military sphere. Its outcome may influence how states and technology companies approach the regulation of AI in defense systems.