Tue Jul 01 2025

A pro-Russia disinformation campaign uses free artificial intelligence tools to trigger a 'content explosion.'

Consumer-level artificial intelligence tools have amplified Russia-aligned disinformation, facilitating the spread of images, videos, QR codes, and fake websites.

A pro-Russia disinformation campaign is using consumer artificial intelligence tools to provoke an "explosion of content" aimed at amplifying existing tensions around global elections, the war in Ukraine, and immigration, among other contentious topics, according to recent research. The campaign, known as Operation Overload or Matryoshka (and linked by other researchers to Storm-1679), has been active since 2023 and is reportedly backed by the Russian government, according to several groups, including Microsoft and the Institute for Strategic Dialogue. It spreads false narratives by impersonating media outlets, seeking to sow division in democratic countries. While its target audience spans various regions of the world, its primary focus has been Ukraine, where it has produced hundreds of AI-manipulated videos promoting pro-Russian narratives.

The report reveals that between September 2024 and May 2025, the campaign's content production increased exponentially, reaching millions of views globally. Between July 2023 and June 2024, researchers identified 230 unique pieces of promoted content, including images, videos, QR codes, and fake websites. In the most recent eight-month period alone, however, Operation Overload generated 587 unique pieces of content, most of them created with artificial intelligence tools.

The increase in content production is attributed to accessible, consumer-level artificial intelligence tools available for free online. This has facilitated a "content amalgamation" tactic, through which campaign operatives have created multiple pieces supporting the same narrative. Researchers from Reset Tech and Check First noted that this represents a shift towards more scalable, multilingual, and sophisticated propaganda tactics.

The variety of tools and content types the campaign employs surprised researchers. Aleksandra Atanasova, lead open-source intelligence researcher at Reset Tech, noted that the campaign has diversified its approach to capture different angles and nuances in its stories. Rather than building customized AI tools, its operators relied on publicly accessible voice and image generators.

One tool highlighted in the analysis was Flux AI, a text-to-image generator developed by Black Forest Labs. Researchers found that many of the false images shared by the campaign were highly likely to have been created with this generator. The manipulated images included sensitive content that perpetuated negative stereotypes.

Operation Overload has also used voice-cloning technology to alter videos, making prominent figures appear to say things they never actually said. The number of videos the campaign produced rose from 150 between June 2023 and July 2024 to 367 between September 2024 and May 2025, many of them relying on artificial intelligence to deceive viewers.

One alarming example of this manipulation occurred in February, when a video posted on X showed Isabelle Bourdon, a university researcher, apparently inciting mass riots in Germany and endorsing the far-right party Alternative for Germany (AfD). The video was fabricated by taking snippets from a legitimate talk and stripping them of their original context.

The AI-generated content has been shared across more than 600 Telegram channels and by bot accounts on social media platforms such as X and Bluesky. It has recently begun appearing on TikTok as well, where 13 accounts reached 3 million views before the platform took action.

The campaign's playbook includes an unusual tactic: emailing hundreds of media and fact-checking organizations around the world, asking them to investigate the veracity of its own false content. The goal is to get the material featured in legitimate media outlets, even if it is labeled "FAKE."

Since September 2024, up to 170,000 of these emails have been sent to more than 240 recipients. While the attempts themselves often attract little traffic, promotion on social media can generate considerable attention. Previous research has documented that Russian disinformation networks produce at least 3 million AI-generated articles annually, complicating the distinction between real and fake content.

With the growing difficulty in discerning authentic content from that generated by artificial intelligence, the use of these tools by disinformation operatives is expected to continue to rise. “They already have the recipe that works,” concluded Atanasova.