Iran War: Is the US using AI models like Claude and ChatGPT in war?


In the week before President Donald Trump’s war on Iran, the Pentagon was waging a different war: a fight with the AI company Anthropic over its flagship AI model, Claude.

The dispute came to a head on Friday, when Trump said the federal government would immediately stop using Anthropic’s AI tools. Yet according to a report in the Wall Street Journal, the Pentagon used those tools anyway when it launched strikes against Iran on Saturday morning.

Were experts surprised to see Claude on the front lines?

“Absolutely not,” Paul Scharre, executive vice president at the Center for a New American Security and author of Four Battlegrounds: Power in the Age of Artificial Intelligence, told Vox.

According to Scharre: “We’ve seen, for about a decade now, the military using narrow AI systems like image classifiers to identify objects in drones and video feeds. What’s more recent are large language models like ChatGPT and Anthropic’s Claude that the military is reportedly using in operations in Iran.”

Scharre spoke to Today, Explained co-host Sean Rameswaram about how AI and the military are increasingly intertwined, and what that combination could mean for the future of warfare.

The following is part of their conversation, edited for length and clarity. There’s more in the full episode, so listen to Today, Explained wherever you find podcasts, including Apple Podcasts, Pandora, and Spotify.

People want to know how Claude or ChatGPT can be fighting this battle. Do we know?

We don’t know yet. We can make educated guesses based on what the technology can do. AI is very good at processing large amounts of information, and the US military has hit over a thousand targets in Iran.

They need ways to process information about those targets – satellite imagery, for example, of sites they’ve hit – to look for new potential targets, prioritize them, and use AI to do all of that at machine speed rather than human speed.

Do we know more about how the military used AI in, say, Venezuela in the attack that brought Nicolas Maduro to Brooklyn, of all places? Because we have recently discovered that AI was used there as well.

What we do know is that Anthropic’s AI tools are integrated into the US military’s classified networks. They can process classified information to support intelligence analysis and help plan operations.

We’ve had these intriguing hints that these tools were used in the Maduro operation. We don’t know exactly how.

We’ve seen AI technology in a broader sense being used in other conflicts as well – in Ukraine, in Israel’s operations in Gaza – to do a number of different things. One of the ways AI is being used in Ukraine, in a different kind of context, is to put autonomy on the drones themselves.

When I was in Ukraine, one of the things I saw Ukrainian drone operators and engineers demonstrate was a small box, about the size of a pack of cigarettes, that you can put on a small drone. Once a human locks in the target, the drone can carry out the attack on its own. And that has been used in a limited way.

We see AI starting to enter all of these areas of military operations – in intelligence, in planning, in logistics – but also out at the edge, where drones are completing attacks.

What about Israel and Gaza?

There have been reports about how the Israel Defense Forces have used AI in Gaza – not necessarily large language models, but machine learning systems that can synthesize and integrate large amounts of information (geolocation data, cell phone and relationship data, social media data) to process it all very quickly and create targeting packages, especially in the early phases of Israel’s operations.

But it raises thorny questions about human involvement in these decisions. One of the criticisms that arose was that humans were still approving these targets, but the volume of strikes and the amount of information that needed to be reviewed meant that, in some cases, human oversight may have been little more than a rubber stamp.

The question is: Where is this going? Are we heading down a path where, over time, humans are pushed out of the loop, and we see, down the road, fully autonomous weapons that make their own decisions about who to kill on the battlefield?

That is the direction things are heading. No one is fielding an army of killer robots today, but the path leads in that direction.

We saw reports that a school was bombed in Iran, where 175 people were killed – most of them children, little girls. That was probably a human error.

Do we think that autonomous weapons will be able to make such a mistake, or will they be better at war than we are?

This question of “will autonomous weapons be better than humans” is one of the core issues of the debate surrounding this technology. Defenders of autonomous weapons will say people make mistakes all the time, and machines can do better.

Part of that depends on how hard the military using this technology tries to avoid mistakes. If a military doesn’t care about civilian casualties, then AI can allow it to attack targets more quickly, and in some cases even to commit atrocities more quickly, if that’s what it is trying to do.

I think there’s an important capability here to use the technology to be more precise. If you look at the long arc of precision-guided weapons over, say, the last century or so, weapons have become steadily more precisely guided.

If you look at the example of the US attack on Iran right now, it’s worth comparing it to the widespread aerial bombing campaigns against cities that we saw in World War II, for example, where entire cities in Europe and Asia were destroyed because the bombs were wildly inaccurate and air forces had to drop enormous amounts of ordnance just to hit a single factory.

The possibility here is that AI can get better over time at allowing militaries to hit military targets and avoid civilian casualties. Now, if the data is wrong and the wrong target is on the list, they will hit the wrong thing very precisely. And AI isn’t going to fix that.

On the other hand, I saw a report in New Scientist that was scary. The headline was, “AI can’t stop suggesting nuclear attacks in war game simulation.”

They wrote about a study in which models from OpenAI, Anthropic, and Google chose to use nuclear weapons in simulated war games in 95 percent of cases, which I think is slightly more often than we humans tend to use nuclear weapons. Should that surprise us?

It’s a bit concerning. Thankfully, as far as I can tell, no one is hooking up large language models to decisions about using nuclear weapons. But I think it points to some of the weird ways AI systems can fail.

They tend toward sycophancy. They tend to just agree with everything you say. They can do it to the point of absurdity sometimes, where a model will tell you, “That’s brilliant,” “That’s so smart.” And you’re like, “I don’t think so.” And that’s a real problem when you’re talking about intelligence analysis.

Do we think ChatGPT is saying that to Pete Hegseth right now?

I hope not, but his people may be telling him so.

It gets at this whole “yes men” problem with these tools, where not only do they tend to hallucinate, which is a fancy way of saying they sometimes make things up, but models can also be used in ways that reinforce existing human biases, reinforce biases in the data, or invite people to simply trust them too much.

There’s this attitude of, “The AI said this, so it must be the right thing to do.” People put faith in it, and we really shouldn’t. We should be more skeptical.


