Anthropic Denies It Could Destroy AI Tools During War


Anthropic cannot disable its flagship AI model Claude once the US military has deployed it, a company executive wrote in a court filing on Friday. The statement responds to accusations from the Trump administration that the company could tamper with its AI tools during wartime.

“Anthropic has never had the ability to cause Claude to stop working, alter its functionality, block access, or otherwise influence or compromise military operations,” wrote Thiyagu Ramasamy, Anthropic’s head of public sector. “Anthropic does not have the necessary access to disable the technology or change the model’s behavior before or during an ongoing operation.”

The Pentagon has been negotiating with the leading AI lab for months over how its technology can be used for national security, and what the limits of that use should be. This month, defense secretary Pete Hegseth declared Anthropic a supply chain risk, a designation that will bar the Department of Defense from using the company’s products, including through contractors, in the coming months. Other federal agencies are also dropping Claude.

Anthropic has filed two lawsuits challenging the constitutionality of the ban and is seeking an emergency injunction to block it. In the meantime, customers have already begun canceling contracts. A hearing in one of the cases is scheduled for March 24 in federal district court in San Francisco. A judge could rule on temporary relief shortly thereafter.

In a filing earlier this week, attorneys for the government wrote that the Department of Defense “should not tolerate the risk that critical military systems will be compromised at a time critical to national defense and military operations.”

The Pentagon has been using Claude to analyze data, write memos, and help draft war plans, according to WIRED’s reporting. The government argues that Anthropic could disrupt ongoing military operations by cutting off access to Claude or pushing out dangerous updates if the company did not approve certain uses.

Ramasamy rejected that possibility. “Anthropic does not maintain any backdoor or remote ‘kill switch’,” he wrote. “Anthropic personnel cannot, for example, enter DoW systems to modify or disable models during an operation; the technology does not work that way.”

He went on to say that Anthropic can only release updates with approval from both the government and its cloud service provider, in this case Amazon Web Services, though he did not name the company directly. Ramasamy added that Anthropic does not have access to the prompts or other data that military users enter into Claude.

Anthropic executives maintain in court filings that the company does not want veto power over strategic military decisions. Sarah Heck, the company’s head of policy, wrote in a court filing on Friday that Anthropic was willing to offer substantial guarantees in a contract it proposed on March 4. “For the avoidance of doubt, (Anthropic) understands that this license does not grant or convey any right to control or veto the operational decisions of the Department of War,” the proposal said, according to the filing, which used the Pentagon’s alternate name.

The company was also willing to accept language addressing its concerns about Claude being used to help carry out lethal strikes without human oversight, Heck said. But the talks eventually broke down.

Meanwhile, the Department of Defense has said in court filings that it is “taking additional steps to mitigate the supply chain risk” posed by the company by “working with third-party network providers to ensure Anthropic management cannot make unilateral changes” to Claude’s existing systems.


