Russia is actively embedding disinformation into chatbots and AI services worldwide, turning large language models (LLMs) into a new battlefield for its information warfare. In the digital era, disinformation campaigns have expanded beyond social media and "fake news" to become a comprehensive tool of influence, as reported by EUvsDisinfo.
The Kremlin is seeding the web with content designed to train LLMs to replicate its manipulative narratives and disinformation, including false claims about the war in Ukraine, which then surface in responses from chatbots such as ChatGPT. Experts refer to this as "LLM grooming."
In 2024, the French agency Viginum identified the "Pravda" network, which mass-produces low-quality content rephrasing false statements from Russian media and pro-Kremlin sources. Its primary targets are Ukraine, the USA, European countries, and some African states. The sheer scale of this generated content ensures that AI models take Russian disinformation narratives into account when forming responses.
NewsGuard's Reality Check audit found that six of the ten chatbots tested repeated false claims from the "Pravda" network, and the share of disinformation in leading chatbots' responses rose from 18% in 2024 to 35% in 2025. These narratives are linked to the pro-Kremlin "Storm-1516" campaign, a continuation of the former "Internet Research Agency," which is known for interfering in the 2016 US elections.
Russia's attempts to introduce disinformation into AI pose a significant threat to global security...