
The war pitting the United States and Israel against Iran has lasted more than two weeks, and it is already awash in lies. The New York Times reports that “a series of fake videos and photos generated by artificial intelligence has spread on social media in the first weeks of the war in Iran.” Deepfakes on X, Facebook, and other platforms, especially TikTok, have racked up millions of views. The fake videos include large explosions in Tel Aviv, successful missile attacks against US warships, Israelis mourning their losses, and other images showing Iran inflicting pain on its enemies. Many of the videos have a Hollywood look to them, with loud explosions and bursts of sound. Other videos are more muted, such as one showing girls dancing shortly before a US strike accidentally hit the Shajarah Tayyebeh primary school, killing at least 175 people, most of them children. The strike was real, but the video was fake.
According to a recent report by Cyabra, a company that monitors influence campaigns, Iran is behind the extensive effort. Iran’s fakes are designed to sway audiences at home and abroad, shoring up support among the constituencies Tehran answers to while undermining the legitimacy of US and Israeli operations. The best answer involves a coordinated effort between governments and private companies, working together to detect, debunk, and remove fakes. Even then, however, the fakes can spread far and wide and shape broader perceptions of the war.
Deepfakes should no longer be a surprise when war breaks out. In March 2022, just after Russia began its full-scale invasion, elaborate fakes circulated showing Ukrainian President Volodymyr Zelensky calling on his troops to lay down their weapons. At the end of 2023, during the war between Israel and Hamas, detailed fake videos emerged of babies crying in a dirt field, as well as footage of supposed Israeli military operations. In a November 2025 report to Congress, the US-China Economic and Security Review Commission alleged that China used the India-Pakistan war in early May to spread fake photos of downed French-made Rafale jets in order to promote its own J-35 fighters. The ongoing conflict with Iran is only the latest example of how war and deepfakes go hand in hand.
Indeed, Iran is no stranger to cyber-enabled influence operations. The Handala hacker group, sometimes known as Void Manticore, is reportedly part of Iran’s Ministry of Intelligence and Security. It and its sister groups within the ministry are reported to have quietly installed or discovered backdoors in networks affiliated with the governments of Israel, the United States, and partner nations. In addition to conducting traditional cyberattacks, Handala is believed to be responsible for releasing deepfakes of Israeli Prime Minister Benjamin Netanyahu and former Prime Minister Naftali Bennett.
Tehran’s current disinformation campaign may serve several audiences. Iran wants to create the impression that it is winning, or at least enduring pain while inflicting it on its enemies, in the hope that this perception will help it prevail over the United States and Israel.
First, Iran wants to strengthen morale at home. Before the war began, the government faced a crisis of legitimacy: Iran’s economy was in shambles, and the government had gunned down thousands of peaceful protesters in the streets. A military humiliation at the hands of Iran’s two arch-enemies would deepen that crisis, but images of Tehran retaliating as its enemies tremble in fear help counter the sense of defeat. Playing up tragedies like the US strike on a primary school also highlights the perceived brutality of the attacks and Iran’s need to resist.
Second, Tehran wants to win over audiences around the world as part of a broader strategy to increase pressure on the United States and Israel. If audiences in Europe, Asia, and the wider Middle East perceive Iran as resisting effectively and believe that US strikes are unjust and destabilizing (a view that is already widespread), their governments will be less likely to support the war and may press Washington to end it.
Third, Iran wants to weaken the morale of the United States and Israel. Although the United States, along with Israel, is inflicting serious damage on Iran, many Americans already oppose the war. Images showing death and destruction among US forces, and the human cost of US mistakes, could make the war less popular and increase pressure on President Donald Trump to end it.
The governments of the United States and Israel need to respond to elaborate fakes in near real time, debunking them and otherwise trying to minimize their impact. Detecting and countering deepfakes, however, may require a partnership that goes beyond government, especially as the number of deepfake generation tools grows every day. Today, just one major model-sharing platform, Hugging Face, hosts over 93,000 text-to-image models, 1,000 text-to-video models, and 4,000 text-to-speech generators. Such models let users type a text prompt as input, and the model generates an image, video, or audio file that matches the prompt (for example, a 30-second video showing three drones leaving the deck of a ship and heading toward a city beach lined with skyscrapers). GitHub is another major code-sharing platform. It is easy to use such models to generate realistic fakes showing a country under attack.
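To make the low barrier to entry concrete, here is a minimal sketch of what generating such an image can involve. It assumes the open-source diffusers library and one publicly hosted Stable Diffusion checkpoint (runwayml/stable-diffusion-v1-5); any of the thousands of models mentioned above could be swapped in, and it is an illustration, not a recipe for any particular fake that circulated.

```python
# Minimal sketch: turning a text prompt into a synthetic image.
# Assumes the open-source `diffusers` library and a publicly hosted
# Stable Diffusion checkpoint; countless similar models would work.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # one of many hosted checkpoints
    torch_dtype=torch.float16,
).to("cuda")  # a single consumer GPU suffices

# The prompt alone steers the output; no special skill is required.
prompt = ("three drones leaving the deck of a warship, heading toward "
          "a city beach lined with skyscrapers, photorealistic")
image = pipe(prompt).images[0]
image.save("synthetic_scene.png")
```

Text-to-video and text-to-speech models work the same way, differing only in the kind of file they synthesize.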
While US government agencies such as the Cybersecurity and Infrastructure Security Agency, the National Security Agency, and the Department of Defense have invested in deepfake detection, they have been unable to stem the current wave of hoaxes, and the threat grows every day. The resources invested are far too small. In addition, the government often lacks the advanced technical expertise found in the private sector. Perhaps most important, social media companies, not governments, own the infrastructure that hosts the fakes. Government efforts to engage social media companies have also slowed, with critics voicing concerns about “jawboning”: state pressure on private actors without formal legal authority that can have the effect of coercing them and stifling freedom of expression.
Tech companies don’t want disinformation like the Iranian fakes on their platforms, but they are often criticized for failing to respond. Sometimes, their platform designs and algorithmic recommendations make the problem worse. Some technology companies and social platforms have invested heavily in detecting deepfakes, while others have done little to find and counter them. Most trust-and-safety teams focus more on regulatory compliance than on keeping their platforms free of fake news and deepfakes.
The recent war against Iran shows why all this needs to change. Deepfakes mislead and confuse the public, and perhaps even government officials, poisoning democratic debate. Democratic governments should increase the staff who focus on this problem and press tech companies to do the same. Information sharing matters: tech companies may learn how their systems are being manipulated in ways governments need to know about, while intelligence and security agencies may learn about sophisticated spoofing and other manipulation campaigns and tip off tech companies. Academics can also play an important role in advancing detection; a number of universities around the world are developing new detectors and helping journalists and others identify and debunk deepfakes for free. Iran, of course, is only a minor information power. A conflict with China, with its vast resources and technical know-how, would demand a far greater effort.
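As a taste of what even the simplest detection tooling looks like, the sketch below flags a suspect image that is a near-duplicate of known footage, one common sign of recycled or lightly edited material. It is a deliberately basic illustration, assuming the open-source Pillow and imagehash libraries and hypothetical file paths; production deepfake detectors are far more sophisticated than this.

```python
# Simple sketch: flag a suspect image that nearly duplicates known footage,
# a common tell for recycled or lightly edited material. Assumes the
# open-source Pillow and imagehash libraries; file paths are hypothetical.
from PIL import Image
import imagehash

def matches_known_footage(suspect_path, reference_paths, threshold=8):
    """Return True if the suspect image is visually near-identical
    to any known reference image."""
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    for ref_path in reference_paths:
        # Subtracting two perceptual hashes gives a Hamming distance;
        # small distances mean the images are near-duplicates.
        if suspect_hash - imagehash.phash(Image.open(ref_path)) <= threshold:
            return True
    return False

# Hypothetical usage: compare a viral frame against archived war footage.
print(matches_known_footage("viral_frame.jpg",
                            ["archive/strike_2019.jpg", "archive/parade.jpg"]))
```

Perceptual hashing only catches reuse of existing imagery; spotting wholly synthetic media requires the more advanced classifiers that governments, platforms, and universities are developing.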
Even as anti-deepfake defenses improve, decision-makers must also accept that widespread falsehoods will shape current and future conflicts. One challenge will be responding to information before knowing what, exactly, is true and what is false. A deepfake can travel halfway around the world while the detectors are still putting on their shoes, to paraphrase an old adage about lies. At times, officials will need to make decisions before images can be properly assessed to determine whether they are fake. In addition, adversaries will foster misperceptions among audiences at home and abroad, muddying the message and otherwise making it difficult to show that the United States and Israel are winning, or to convey the complexities of the situation to a skeptical public.
The disinformation war running alongside the US-Israeli military campaign shows how cheap, widely available tools allow states like Iran to shape battlefield perceptions as events unfold, targeting domestic audiences, international opinion, and enemy morale simultaneously. In future conflicts, the struggle to control the narrative, and to distinguish truth from persuasive fabrication, will be almost as consequential as the fighting itself.