The week-long dispute between Anthropic and the Department of Defense is entering a new phase. After being designated a supply risk by the DOD last week, a move that effectively bans Pentagon contractors from using its products, the AI company filed a lawsuit against the DOD this morning claiming that the government’s actions are unconstitutional and ideologically motivated. Then, this afternoon, 37 employees of OpenAI and Google DeepMind—including Google’s chief scientist, Jeff Dean—signed an amicus brief supporting Anthropic, in effect lending support to one of their employers’ biggest business competitors (even as OpenAI itself has entered into a controversial new contract with the DOD).
The friction is unprecedented. For the past few weeks, Anthropic has been in intense talks with the Pentagon about how the US military can use the company’s AI systems. Anthropic CEO Dario Amodei rejected terms that would appear to allow the Trump administration to use the company’s AI for mass domestic surveillance or to launch fully autonomous weapons, senior DOD officials said. Those officials went on to accuse Amodei of “putting our national security at risk” and having a “God complex.”
No one knows how this conflict will end. An Anthropic spokesperson told me that the lawsuit “does not change our long-standing commitment to using AI to protect our national security” and that the company “will pursue every avenue toward a resolution, including negotiations with the government.” A DOD spokesperson told me that the department does not comment on allegations.
But such conflict was inevitable, and more is sure to come. The government has nothing close to a legal framework for regulating generative AI or, for that matter, online data collection. There are few legal, externally enforced restrictions on the use of AI in autonomous weapons, and fewer still on how AI can be used to process the vast amounts of information that federal agencies collect on people: location data, credit-card purchases, browsing history, and so on. With no binding rules in place, Anthropic and OpenAI have been able to set their own privacy policies and guidelines for how their AI can and cannot be used, and then change them at will; OpenAI, Meta, and Google, for example, have all walked back previous restrictions on military uses of their AI. But this cuts the other way as well: Anthropic has effectively been branded an enemy of the government for opposing the administration’s desire to use the company’s generative-AI systems in potential autonomous-weapons systems and for surveillance of Americans, so long as the application is technically legal.
Surveillance concerns were a particular issue for the OpenAI and Google DeepMind employees who signed the amicus brief today. They wrote that AI has the potential to dramatically change the way once-isolated data streams can be used to keep tabs on Americans: “From our vantage point inside frontier AI labs, we understand that an AI system used for surveillance can break down those silos, combining facial-recognition data with location histories, activity records, and behavioral patterns of hundreds of millions of people.”
The Pentagon has said that it has no intention of using AI to track Americans en masse, and it made this clear in its new contract with OpenAI, which also enumerates several national-security laws and policies the DOD has agreed to follow. But as I wrote last week, those same policies have already allowed the government to spy on Americans with existing technologies, to say nothing of AI. Meanwhile, Elon Musk’s xAI has reportedly agreed to a Pentagon contract with terms that are still somewhat restrictive. The American public now has little choice but to trust that Defense Secretary Pete Hegseth, Musk, OpenAI CEO Sam Altman, and Amodei will not use AI to spy on them. (OpenAI has a corporate partnership with The Atlantic.)
Anthropic has said that it is not entirely against the use of its technology in fully autonomous weapons, but that today’s AI models are not ready to be entrusted with such weapons. The AI employees who signed today’s brief, along with the nearly 1,000 OpenAI and Google employees who signed a public letter in support of Anthropic last month, agree. Existing DOD policy on the development and use of autonomous weapons is vague and was written for discrete systems with specific geographic targets; some experts have argued that it is a poor fit for widespread, AI-enabled warfare. The policy is also not a law, which means it can be changed and reinterpreted according to the views of any presidential administration.
All of these are complex issues that require real deliberation. Instead, last week, President Trump told Politico: “I went after Anthropic. Anthropic is in trouble because I went after them like dogs, because they shouldn’t have.” Rather than inviting and learning from these debates, the administration is discouraging them.
If you take a step back, the problem of AI outpacing established rules and regulations is ubiquitous. Roughly four years into the ChatGPT era, schools still don’t know what to do about not only widespread cheating but also the upending of certain traditional forms of study. Existing copyright law is being tested by the use of authors’ and artists’ works, without their consent, to train generative-AI models. Even as generative-AI tools stand to reshape many areas of the economy, neither AI companies nor governments nor employers are devoting many resources, beyond writing research reports, to figuring out what to do about the millions of Americans who may be laid off. The energy demands of AI data centers are straining the grid and setting back climate goals around the world.
Instead of crafting well-considered rules by consensus, the Trump administration appears to want absolute control over AI, without being held accountable. Congress, as usual, is slow and hesitant when it comes to powerful emerging technologies. And although AI companies frequently warn about the dangers of their technology, they are racing ahead to develop and sell ever more capable models. When faced with the prospect of greater accountability, they usually demur; for example, when I spoke with Jack Clark, Anthropic’s chief policy officer, last summer about whether the AI industry was moving too fast, he told me: “The world is making this decision, not the company.” Elsewhere, Anthropic has said that it wants to “avoid being too prescriptive.” For his part, Altman likes to say that AI companies must learn “from interacting with reality.” Yet the world—civil society, all of us living in this AI-filled reality—has little say in the technology’s development.
On Friday, in an interview with The Economist, Anthropic’s Amodei more or less said as much himself. “We don’t want to make companies more powerful than the government,” he said. “But we also don’t want to make the government so strong that it can’t be stopped. We have both problems at once.” America faces a future in which no one claims responsibility for AI. Everyone will live with the consequences.