Microsoft has joined Anthropic in fighting to keep AI safety principles intact in the face of Pentagon pressure, filing a supporting legal brief in a San Francisco federal court that calls for a temporary restraining order against the Defense Department’s supply-chain risk designation. The brief highlights the potential damage to defense and commercial technology networks that depend on Anthropic’s AI if the designation is allowed to stand. Amazon, Google, Apple, and OpenAI have also backed Anthropic through a coordinated joint filing.
Anthropic’s confrontation with the Pentagon was set in motion when the company refused to sign a $200 million contract that would have deployed its AI on classified military systems without restrictions on its use for mass surveillance or autonomous lethal weapons. Defense Secretary Pete Hegseth labeled the company a supply-chain risk, and the Pentagon’s technology chief later publicly foreclosed the possibility of renegotiation. Anthropic filed two simultaneous lawsuits challenging the designation as unconstitutional and unprecedented.
Microsoft’s filing reflects both its direct commercial relationship with Anthropic and its role as a major Pentagon contractor. The company integrates Anthropic’s technology into military systems, participates in the $9 billion Joint Warfighting Cloud Capability contract, and holds additional federal agreements worth several billion dollars more. Microsoft publicly argued that responsible AI governance and robust national security are not competing goals but shared imperatives requiring cooperation between government and industry.
Anthropic’s court filings argued that the supply-chain risk designation was applied as ideological punishment for the company’s public stance on AI safety, in violation of its First Amendment rights. The company disclosed that it does not currently have confidence in Claude’s reliability in lethal autonomous warfare scenarios, which it said was the genuine basis for the usage restrictions it sought. Anthropic emphasized that no US company had ever before been subjected to this kind of designation.
House Democrats have separately written to the Pentagon seeking information about whether AI was used in a strike in Iran that reportedly killed more than 175 people at a school. Their questions focus on AI’s role in targeting decisions and the degree of human oversight exercised. These congressional inquiries are deepening scrutiny of the Pentagon and adding political urgency to what is already an extraordinary legal confrontation over the future of AI in American national security.
