In the wars of this new era, it is not only missiles that circulate: images, rumors, fabricated content, and narratives designed to sow fear, outrage, or confusion also spread rapidly. That is precisely what one of our most recent investigations into disinformation surrounding the conflict involving Iran reveals.
During the first days of the information escalation, social media platforms were once again flooded with old videos presented as current events, artificial intelligence (AI)-generated images shared as though they were evidence, and real scenes taken out of context to fit false narratives. The pattern is clear. In times of crisis, visual falsehoods travel quickly and are often presented with the appearance of proof (Wardle & Derakhshan, 2017).
Monitoring and Verifying Disseminated Information
In our research exercise, we analyzed a sample of content circulating between February 28 and March 6, 2026, and identified three dominant mechanisms of disinformation.
The first was the recycling of old videos and photographs, republished as if they documented recent events. The second was the circulation of synthetic AI-generated content: alleged bombings, destroyed urban scenes, images of corpses, and videos supposedly “captured on the ground” that were not authentic recordings. The third was geographic relabeling, a simple yet highly effective tactic: taking footage of an explosion that occurred in another country or at another time and presenting it as though it had happened at the very center of the conflict (Wardle & Derakhshan, 2017).
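For the first of these mechanisms, fact-checkers commonly rely on reverse image searches or on comparing perceptual hashes of suspect frames against archives of previously verified footage. The following is a minimal sketch of that comparison, assuming the Python "imagehash" and "Pillow" packages; the archive entries, file paths, and distance threshold are illustrative placeholders, not a description of our actual tooling.

```python
# Minimal sketch: flagging possibly recycled imagery via perceptual hashing.
# Assumes the "imagehash" and "Pillow" packages; all paths are hypothetical.
from PIL import Image
import imagehash

def load_hash(path):
    """Compute a perceptual hash that survives re-encoding and resizing."""
    return imagehash.phash(Image.open(path))

# Hypothetical reference archive of frames from previously verified events.
archive = {
    "port_sudan_2025-05.jpg": load_hash("archive/port_sudan_2025-05.jpg"),
}

def find_recycled(suspect_path, max_distance=8):
    """Return archive entries within max_distance bits of the suspect frame."""
    suspect = load_hash(suspect_path)
    return [
        name for name, ref in archive.items()
        if suspect - ref <= max_distance  # Hamming distance between hashes
    ]

print(find_recycled("suspect_frame.jpg"))
```

A small Hamming distance between two hashes indicates that a "new" clip is almost certainly a re-encoded copy of older footage, even after cropping or compression.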
One of the most troubling aspects revealed by our research is that disinformation does not rely solely on misleading text, but also on manipulated images. When an image appears real, many people interpret it as conclusive proof. That is where much of the problem lies. Reuters documented, for example, an image supposedly showing the body of Ali Khamenei trapped beneath rubble; the analysis concluded that it was AI-generated content, identified with very high confidence by detection tools associated with SynthID (Reuters Fact Check, 2026).
AFP verified another case: a video allegedly showing U.S. troops already inside Iran. The footage displayed visual inconsistencies and was classified as synthetic production; furthermore, there were no official reports confirming, at that time, the presence of U.S. ground troops in Iranian territory (AFP Fact Check, 2026a).
Synthetic Manipulation Through AI
These are not isolated cases. In our analysis, we found examples of videos claiming to show devastating Iranian attacks on Tel Aviv, but which actually displayed signs of artificial generation, such as distorted flags, inconsistent vehicles, or repetitive patterns within the frame.
Maldita.es identified several such materials and warned that, although they might appear authentic at first glance, they exhibited characteristics typical of synthetic content (Maldita.es, 2026a, 2026b). FactNameh, meanwhile, documented the spread of fake videos supposedly showing the destruction of Israeli cities, when in reality some were AI-generated and others were recycled footage from disasters in other countries (FactNameh, 2026a, 2026b).
Disinformation in wartime situations “works” (for one of the parties involved) because it combines speed, emotion, and plausibility. A video of an explosion does not need to be real to go viral; it only needs to appear believable and arrive at the exact moment when audiences are urgently seeking explanations.
Our monitoring found that the first days of the conflict concentrated the highest volume of recycled material and misleading content, which aligns with the logic of information saturation typical of crisis scenarios (Silverman, 2014). Information disorder does not emerge afterward — it appears from the very beginning, almost simultaneously with the events themselves.
Against this backdrop, institutional media organizations need to strengthen their monitoring and verification capabilities if they truly aim to become reliable sources in contexts of high uncertainty.
In situations such as international conflicts, trust is not earned merely by publishing information, but by demonstrating a rigorous method for detecting, cross-checking, and explaining information quickly. Hence the importance of building bilingual monitoring systems, in Spanish and Persian, capable of detecting, translating, classifying, and verifying false narratives before they take hold. In this particular case, official sources, professional fact-checkers, Persian-language media, and platforms such as X, Telegram, TikTok, Instagram, Facebook, and YouTube must all be considered. It is not enough to observe what circulates; it is necessary to trace the path of each piece of content, identifying who amplifies it, how it changes across languages, and what narrative variations it adopts as it propagates (Wardle & Derakhshan, 2017).
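To make that tracing concrete, here is a minimal sketch of what a single monitoring record in such a system might look like; the field names, category labels, and example values are assumptions introduced for illustration, not a specification of an existing platform.

```python
# Minimal sketch of a bilingual monitoring record, assuming the workflow
# described above; fields and labels are illustrative, not a standard schema.
from dataclasses import dataclass, field
from enum import Enum

class Mechanism(Enum):
    RECYCLED = "recycled footage"
    SYNTHETIC = "AI-generated content"
    RELABELED = "geographic relabeling"

@dataclass
class MonitoredItem:
    url: str
    platform: str            # e.g. "X", "Telegram", "TikTok"
    language: str            # "es" or "fa" in this bilingual setup
    claim: str               # the narrative as it circulates
    translation: str = ""    # working translation for cross-language tracing
    amplifiers: list[str] = field(default_factory=list)  # accounts boosting it
    mechanism: Mechanism | None = None
    verdict: str = "unverified"

# Hypothetical example of how one item would move through the pipeline.
item = MonitoredItem(
    url="https://example.com/post/123",  # placeholder, not a real case
    platform="TikTok",
    language="fa",
    claim="Alleged strike on a military base",
)
item.mechanism = Mechanism.RELABELED
item.verdict = "false: footage predates the event"
```

Keeping the original claim, its translation, and its amplifiers in one record is what makes it possible to follow how a narrative mutates as it crosses languages and platforms.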
Can They Alter Public Perception?
The findings from our own research exercise show why this task is indispensable. For example, we identified a viral TikTok video claiming that Iran had attacked a U.S. military base in Djibouti, when in fact it corresponded to an explosion in Port Sudan in May 2025. We found that the clip had been shared more than 1,700 times.
Another circulating video showed a supposed “massive explosion at the U.S. embassy in Riyadh,” amplified by highly visible accounts, although geolocation analysis revealed that it actually depicted a highway located about 25 kilometers from the embassy and that the footage had already been circulating for weeks beforehand.
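Geolocation checks of this kind often reduce to simple distance arithmetic once a scene has been matched to a real place. The sketch below shows the standard haversine calculation used for that step; the coordinates are placeholders chosen only to reproduce a separation of roughly 25 kilometers, not the verified locations from this case.

```python
# Minimal sketch of the distance check behind a geolocation verification.
# The coordinates below are placeholders, not the actual verified locations.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # Earth's mean radius, ~6371 km

claimed_site = (24.690, 46.620)  # placeholder "embassy" coordinates
matched_site = (24.850, 46.780)  # placeholder matched-highway coordinates
print(f"{haversine_km(*claimed_site, *matched_site):.1f} km apart")
```

If the matched location sits tens of kilometers from the claimed one, the caption, not the footage, is what fails verification.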
Cases like these demonstrate that disinformation does not merely confuse people; it can also alter public perception of a conflict in real time.
A fake video showing troops on the ground can create the impression that a war has escalated into a direct invasion; a synthetic image of a leader’s corpse can trigger fear, outrage, or desires for retaliation; and a misattributed clip involving an embassy or military base can intensify diplomatic tensions and fuel the perception of an imminent attack.
Disinformation, therefore, does not merely distort facts; it also drives emotions, decisions, and public narratives during especially sensitive moments. That is why, when institutional media organizations verify information with method, transparency, and consistency, they are not only correcting falsehoods — they are also providing certainty, context, and social containment.
Our analysis leads to an uncomfortable but necessary conclusion: today, wars are also fought on the battlefield of perception. Artificial intelligence, recycled archival footage, and the transnational circulation of misleading content have made it far more difficult to distinguish between testimony and simulation. More than reacting case by case, institutional media must build permanent capacities for monitoring, media literacy, and multilingual verification. Public trust increasingly depends on that critical infrastructure.
In the end, the war of screens is not merely a metaphor; it is the name of a scenario in which the struggle for truth unfolds simultaneously with events themselves: on the phone screen, in the forwarded video, and in the image that looks like proof even when it is not (Wardle & Derakhshan, 2017).
Author
Fernando Ignacio Gutiérrez Cortés is Director of the Division of the School of Humanities and Education at the Tecnológico de Monterrey, Mexico City Region. He is a Level 1 member of Mexico’s National System of Researchers. His work focuses on the new media ecology in the digital era. He holds a Ph.D. in Design with a specialization in Information Visualization from the Universidad Autónoma Metropolitana.
This article was originally published in the Digital Media Observatory of the School of Humanities and Education at Tecnológico de Monterrey. To read the full study, its references, and the documented disinformation cases analyzed in this report, visit the original link.