By EUvsDisinfo

In the digital age, disinformation campaigns have evolved beyond social media and ‘fake news’, becoming a full-fledged form of information warfare – an area in which Russia excels.

The Kremlin’s foreign information manipulation and interference (FIMI) campaigns have remained largely consistent since the Cold War. But the emergence of the Internet and other communication technologies has allowed for more flexibility and greater impact with fewer resources. Just as Web 2.0 reshaped information warfare some two decades ago, the rise of artificial intelligence (AI) has transformed the Kremlin’s strategy. Instead of just pushing tailor-made narratives to readers, Moscow now also targets machines – a strategy all the more important given that many users are now replacing Google Search with AI tools such as ChatGPT.

Instead of targeting audiences directly via social media, Russia’s disinformation apparatus has shifted its strategy to flooding the Internet with millions of misleading, low-quality articles and pieces of content designed to be scraped by AI-driven instruments and applications. The Kremlin is engaging in what experts call ‘LLM grooming’: training large language models (LLMs) to reproduce manipulative narratives and disinformation.

How does ‘LLM grooming’ work?

‘LLM grooming’ is the deliberate manipulation of large language models which seeks not only to spread disinformation, but also to corrupt AI infrastructure more broadly by injecting disinformation – for example, about Russia’s war in Ukraine – into responses produced by AI chatbots such as ChatGPT.

In February 2024, the French governmental agency Viginum – responsible for analysing and protecting France against foreign digital interference – published a report exposing the so-called ‘Portal Kombat’ operation, also known as the ‘Pravda network’. This Russian disinformation network consists of websites in various languages producing low-quality content that repackages false and misleading claims from Russian state media, pro-Kremlin influencers, and other sources of disinformation. Regions targeted by this network include Ukraine, the United States, France, Germany, Poland, the UK and other European countries, as well as some African countries. The sheer volume of the produced content ensures that AI models take these Russian disinformation narratives into account while generating their responses. In other words, the Kremlin actively interferes in the information space in order to shape the answers you receive from your AI assistant of choice.

For instance, a report by NewsGuard Reality Check, a rating system for news and information websites, found that the Pravda network falsely claimed that President Zelenskyy had banned Donald Trump’s Truth Social platform. Six out of ten tested chatbots repeated the claim, citing the Pravda network. The share of false and misleading information in 10 leading chatbots nearly doubled in a year, rising from 18% in 2024 to 35% in 2025. FIMI narratives in these models were also linked to ‘Storm-1516’, another pro-Kremlin disinformation campaign and an offshoot of the former Internet Research Agency – a Russian organisation known for orchestrating large-scale online influence operations, including interference in the 2016 US elections. The connection was identified by a team of media forensics researchers at Clemson University in autumn 2023.

The Kremlin’s drive to pollute the information ecosystem

Russia’s efforts to inject disinformation into a rapidly growing AI information ecosystem represent a major global security threat, as they can distort public opinion, erode trust in the integrity of digital information, and spread seemingly legitimate narratives at an unprecedented scale. A report by Halyna Padalko for the Digital Policy Hub at the Centre for International Governance Innovation notes that Russia has moved beyond traditional propaganda methods towards exploiting LLMs. Through LLM grooming, Moscow normalises false information, making it appear fact-based. Even relatively trusted platforms such as Wikipedia have amplified Kremlin disinformation by quoting sources in the Pravda network.

As AI chatbots play a larger role in fact-checking and information search, these efforts to pollute the information ecosystem pose a serious challenge. The automation and scale of these campaigns make them harder to detect and counter, undermining democratic resilience. Back in 2017, long before ChatGPT, Putin said the leader in AI would ‘rule the world.’ Don’t be deceived: seven years later, Russia’s bid for that throne runs mostly on American and Chinese models – apparently empire-building now comes with imported software.