The U.S. Department of Defense appears to be illegally punishing Anthropic by trying to prevent the military from using its AI tools, U.S. District Judge Rita Lin said during a court hearing on Tuesday.
“It looks like an attempt to cripple Anthropic,” Lin said of the Pentagon’s designation of the company as a supply-chain risk. “It seems like (the department) is punishing Anthropic for trying to bring public scrutiny to this contract dispute, which would obviously be a violation of the First Amendment.”
Anthropic has filed two federal lawsuits alleging that the Trump administration’s decision to designate the company a security risk amounted to unlawful retaliation. The government issued the designation after Anthropic pressed for restrictions on how its AI could be used by the military. Tuesday’s hearing came in the lawsuit filed in San Francisco.
Anthropic is seeking a preliminary injunction to block the designation. That relief, Anthropic hopes, would help persuade some of the company’s skittish customers to stay with it. Lin can suspend the designation only if she decides that Anthropic is likely to win the broader case. Her ruling on the order is expected in the next few days.
The dispute has fueled a broader public debate about how artificial intelligence is increasingly being used by the military, and whether Silicon Valley companies should defer to the government in determining how the technology they create is used.
The Department of Defense, now also called the Department of War (DoW), has argued that it followed proper procedures and reasonably determined that Anthropic’s AI tools could no longer be relied upon to perform as expected at critical moments. It urged Lin not to second-guess its assessment of what it claims is a threat to national security.
“The concern is that Anthropic, instead of just raising concerns and backing down, is going to say it has a problem with what the DoW is doing and is going to change the program … so it’s not working the way the DoW expects and wants,” Trump administration attorney Eric Hamilton said during Tuesday’s hearing.
Lin said it was Defense Secretary Pete Hegseth’s responsibility, not hers, to decide whether Anthropic was the right vendor for the department. But Lin said it was up to her to decide whether Hegseth broke the law by going beyond simply canceling Anthropic’s government contracts. Lin said it was “disturbing” that the security designation and the orders that more broadly restrict government contractors’ use of Anthropic’s AI tool Claude “do not appear to be focused on national security concerns.”
As Anthropic’s conflict with the government escalated last month, Hegseth posted on X that “effective immediately, no contractor, supplier, or partner doing business with the United States military may conduct any business with Anthropic.”
But on Tuesday, Hamilton acknowledged that Hegseth does not have the legal authority to prevent military contractors from using Anthropic for work unrelated to the Department of Defense. When asked by Lin why Hegseth would post that, Hamilton said, “I don’t know.”
Lin further questioned Hamilton about whether the Pentagon considered less punitive measures to wean the department off Anthropic’s tools. She described the supply-chain risk designation as a powerful sanction usually reserved for foreign adversaries, terrorists and other hostile actors.
Michael Mongan, a WilmerHale attorney representing Anthropic, said it was extraordinary for the government to go after a “recalcitrant” negotiating partner with the designation.
The Pentagon has said it is working to replace Anthropic’s technologies in the coming months with alternatives from Google, OpenAI, and xAI. It also said it has put safeguards in place to prevent Anthropic from causing any damage during the transition. Hamilton said he did not know whether it would be possible for Anthropic to update its AI models without the Pentagon’s permission; the company says it cannot.
A decision in the other case, before a federal appeals court in Washington, D.C., is expected soon without a hearing.