Conflicting Rulings Leave Anthropic’s ‘Supply Chain Risk’ Designation in Limbo


Anthropic has not met the demanding standard required to block the supply-chain-risk designation made by the Pentagon, the US appeals court in Washington, DC, decided on Wednesday. The decision contradicts one issued last month by a lower court judge in San Francisco, and it was not immediately clear how the conflicting rulings would be resolved.

The government designated Anthropic under two separate supply-chain laws with similar effects, with the San Francisco and Washington, DC, courts each reviewing only one of them. Anthropic said it is the first US company to be designated under the two laws, which are typically used to punish foreign businesses that threaten national security.

“Granting a stay would force the U.S. military to prolong its engagement with an unwanted vendor of critical AI services amid a major ongoing military conflict,” the three-judge appeals panel wrote on Wednesday in what they described as an unprecedented case. The panel said that while Anthropic could suffer financial harm from the continued designation, the judges did not want to risk “judicializing military operations” or “ignoring” military judgments on national security.

A San Francisco judge found that the Department of Defense likely acted in bad faith against Anthropic, motivated by the AI company’s proposed limits on how its technology can be used and its public criticism of those restrictions. That judge ordered the supply-chain threat label lifted last week, and the Trump administration complied by restoring access to Anthropic AI tools inside the Pentagon and across the federal government.

Anthropic spokeswoman Danielle Cohen said the company is grateful the court in Washington, DC, “recognized these issues need to be resolved quickly” and that it remains confident “eventually the court will agree that this selection of suppliers was unlawful.”

The Defense Department did not immediately respond to a request for comment, but acting attorney general Todd Blanche published a statement on X. “Today’s D.C. Circuit stay allowing the government to designate Anthropic as a supply risk is a major victory for military preparedness,” he wrote. “Our position has been clear from the beginning—our military needs full access to Anthropic models if its technology is to be integrated into our sensitive systems. Military authority and operational control rest with the Commander-in-Chief and the War Department, not the technology company.”

The cases test how much power the executive branch has over the conduct of technology companies. The fight between Anthropic and the Trump administration continues as the Pentagon deploys AI in its conflict with Iran. The company claims it is being penalized unlawfully for insisting that its AI tool Claude lacks the accuracy required for certain sensitive operations, such as conducting deadly drone strikes without human supervision.

Several experts in government contracting and corporate rights told WIRED that Anthropic has a strong case against the government, but that courts are sometimes reluctant to challenge the White House on matters of national security. Some AI researchers have said the Pentagon’s actions punish Anthropic over what amounts to an “academic debate” about the performance of AI systems.

Anthropic has claimed in court that it lost business because of the designation, which federal lawyers argue prevents the Pentagon and its contractors from using the company’s Claude AI as part of military projects. And as long as Trump remains in office, Anthropic may not be able to regain the important role it once had in the federal government.

Final decisions in the two cases may be several months away. The Washington court is scheduled to hear oral arguments on May 19.

The parties have revealed little detail so far about how the Department of Defense has used Claude, or how much progress it has made in transitioning personnel to other AI tools from Google DeepMind, OpenAI, or others. The military, which under President Trump calls itself the War Department, has said it has taken steps to ensure Anthropic cannot sabotage its AI tools during the transition.

Update 4/8/26 7:27 EDT: This story has been updated to include a statement from acting attorney general Todd Blanche.


