A recent report by NewsGuard highlights a significant challenge that AI chatbots face: Russian disinformation. The study, shared exclusively with Axios, uncovers the tactics used by the Moscow-based “Pravda” network to manipulate AI systems. This disinformation campaign continues to shape the results of leading AI chatbots by filling the web with pro-Kremlin propaganda.
How Disinformation Affects AI Chatbots
Rather than targeting individual readers directly, the Pravda network aims to influence AI chatbots. With more than 3.6 million articles published in 2023 alone, the network is distorting the results that search engines and web crawlers feed to major AI systems, including chatbots from tech giants like Microsoft, Google, OpenAI, You.com, xAI, Anthropic, Meta, Mistral, and Perplexity. The result? A third of the time, these AI platforms recycled disinformation from the Pravda network.
The Scope of the Pravda Network’s Influence
Launched in April 2022, shortly after Russia’s invasion of Ukraine, the Pravda network has since expanded to 49 countries and operates in multiple languages. Across its roughly 150 sites, including several Russian-language domains, Pravda aggregates content from Russian state media and pro-Kremlin influencers rather than producing original material of its own.
The Pravda network has spread over 200 provably false claims, particularly about Ukraine. Its primary aim is not to convince individuals directly but to manipulate how AI chatbots process and present information.
The Risks of AI Manipulation
The findings from NewsGuard are consistent with concerns raised in earlier reports about the risks of disinformation in the age of generative AI. The long-term implications are significant, spanning political, social, and technological challenges. As AI continues to shape how we interact with the digital world, understanding and addressing these manipulation efforts is crucial to maintaining trust in the technology.
Conclusion
As AI continues to evolve, the spread of disinformation presents a significant challenge. The Pravda network’s influence on chatbots underscores the need for greater scrutiny of the content that shapes AI responses. The risks extend beyond any single false claim, touching on issues that could affect global politics and societal trust.