Recent strikes by Israel on Iran have triggered a wave of misleading information online, largely powered by AI-generated videos and images. These materials have led to widespread misinformation about military actions and public sentiment, complicating the narrative surrounding the conflict.
Disinformation Surge in Israel-Iran Conflict Fueled by AI Technology

The ongoing conflict between Israel and Iran has prompted a remarkable increase in AI-generated disinformation across social media platforms, raising concerns about the authenticity of shared content.
The recent escalation in the Israel-Iran conflict has unleashed a torrent of disinformation, significantly amplified by artificial intelligence (AI). Following Israeli military strikes on Iran, online platforms have seen an influx of misleading content aimed at distorting perceptions of the situation. Analysis by BBC Verify has identified dozens of AI-generated videos touting Iran's military power and depicting fictitious aftermaths of alleged strikes on Israeli installations.
The three most prominent fake videos identified have accumulated over 100 million views across social media platforms. Notably, pro-Israeli accounts have also disseminated misleading content, circulating outdated footage in an attempt to misrepresent public sentiment in Iran as increasingly supportive of Israel's actions. The strikes began on June 13, 2025, igniting a sequence of Iranian missile and drone reprisals against Israel.
Geoconfirmed, an organization focused on open-source imagery analysis, described the scale of disinformation as "astonishing," attributing the growing phenomenon to "engagement farmers" seeking profit through misleading content designed to generate attention. Their observations include a mix of irrelevant footage from other regions, as well as recycled clips from past conflicts, all of which have garnered millions of views.
Accounts branded as "super-spreaders" of misinformation have seen their follower counts grow sharply. One such account, Daily Iran Military, jumped from approximately 700,000 to 1.4 million followers in less than a week. These accounts often adopt official-sounding names, sowing confusion among users who mistakenly assume they are legitimate.
This marks the first time generative AI has been deployed at this scale during an active conflict, according to Emmanuelle Saliba, Chief Investigative Officer at Get Real. AI-generated imagery exaggerating Iran's military responses has proliferated, with one widely circulated image, purporting to show massive missile strikes on Tel Aviv, reaching 27 million views.
Another misleading video falsely claimed to depict a nighttime missile attack on Israeli infrastructure, a claim that is particularly difficult to verify because of the darkness. Disinformation about the destruction of advanced US-made F-35 fighter jets has also gained traction; however, analysts from Alethea report that no verifiable footage has emerged to substantiate such claims.
Amidst this chaos, some disinformation has origins linked to Russian influence operations—particularly an intent to undermine Western military technology, with focus shifting from Ukraine to influencing narratives about American weaponry.
As the conflict unfolds, both established and lesser-known accounts have capitalized on the war to monetize engagement. While some narratives portray growing public dissent within Iran against its government, other posts, alarmingly, exploit the rising tension between the US and Iran over Iranian nuclear sites.
The viral spread of this disinformation is further fueled by social media platforms whose algorithms promote content that aligns with users' political preferences. This accelerates dissemination within like-minded communities, particularly when emotions run high amid conflict and political discourse.
As platforms like X (formerly Twitter) grapple with misinformation, users frequently turn to AI chatbots to check the veracity of posts. But even when some content is flagged as misleading, inconsistent responses leave users pervasively confused.
In response to these challenges, platforms such as TikTok say they remain committed to accurate content through community guidelines and independent fact-checking. Nevertheless, misleading narratives continue to overshadow authentic communication as warfare spills into the digital realm.