Justice Department Says Anthropic Can’t Be Trusted With Warfighting Systems

Anthropic Sues Pentagon Over AI Supply Chain Designation

In a dramatic escalation of tensions between the federal government and the AI industry, the Trump administration has fired back at Anthropic in a heated legal battle that could reshape the future of artificial intelligence in national defense.

The Department of Justice filed a robust response on Tuesday, arguing that the government’s designation of Anthropic as a “supply chain risk” does not violate the company’s First Amendment rights—a move that could potentially cost the AI pioneer billions in lost revenue.

Government Claims Anthropic’s First Amendment Argument “Radical”

US Department of Justice attorneys pushed back hard against Anthropic’s constitutional claims, writing in their court filing that “the First Amendment is not a license to unilaterally impose contract terms on the government.” The government’s position suggests that Anthropic’s legal challenge is based on a “radical conclusion” unsupported by precedent.

The filing came in response to Anthropic’s lawsuit in federal court in San Francisco, where the company is challenging the Pentagon’s decision to sanction it with a designation that effectively bars the company from defense contracts over alleged security vulnerabilities.

Billions at Stake as Legal Battle Intensifies

Industry analysts estimate that Anthropic stands to lose $3 billion to $4 billion in expected revenue this year if the designation holds. The company’s flagship Claude AI models are currently deployed across numerous Department of Defense systems, and the Pentagon is actively working to replace them with alternatives from competitors including Google, OpenAI, and Elon Musk’s xAI.

Anthropic has requested that business continue as usual until the litigation concludes. Judge Rita Lin has scheduled a hearing for next Tuesday to decide whether to grant the company a temporary reprieve from the designation.

Government Cites National Security Concerns

In a particularly pointed section of the filing, government attorneys revealed that Defense Secretary Pete Hegseth “reasonably” determined that Anthropic staff might “sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, or operation of a national security system.”

The filing suggests that Anthropic’s insistence on contractual restrictions—including preventing its AI from being used for broad surveillance of Americans or autonomous weapons—led Pentagon officials to view the company as a potential security threat rather than a trusted partner.

“AI Systems Are Acutely Vulnerable to Manipulation”

The government’s filing argues that allowing Anthropic continued access to Defense Department infrastructure would introduce “unacceptable risk” into supply chains. The document warns that Anthropic could attempt to disable its technology or alter its behavior during active military operations if the company believes its corporate “red lines” are being crossed.

“This is not about restricting speech,” the government argues. “It’s about ensuring the integrity of national security systems during a time when high-intensity combat operations are underway.”

Industry Support Mounting for Anthropic

Despite the government’s aggressive stance, Anthropic has garnered significant support from across the tech industry. OpenAI, DeepMind, and Microsoft have filed amicus briefs supporting the company’s position, along with a federal employee labor union and former military leaders.

Legal experts who spoke with WIRED suggest Anthropic has a strong argument that the supply-chain measure amounts to illegal retaliation. However, courts historically defer to national security arguments from the government, creating an uphill battle for the AI company.

The Clock Is Ticking

Anthropic has until Friday to file a counter-response to the government’s arguments. The outcome of this case could establish critical precedents for how AI companies interact with government agencies and what limitations they can impose on the use of their technologies in defense applications.

As the September hearing approaches, Silicon Valley and Washington alike are watching to see whether this clash between innovation and national security ends in a compromise or in a definitive legal precedent that reshapes the AI industry’s relationship with the federal government.


Tags: Anthropic, Pentagon, AI regulation, national security, First Amendment, supply chain risk, Claude AI, Trump administration, Pete Hegseth, Department of Defense, legal battle, tech industry, government contracting, artificial intelligence, constitutional rights
