Chatbot Chaos: AI’s Russian Roulette with War Narratives
Chatbots powered by large language models are inadvertently citing Russian state-linked sources when asked about the war in Ukraine, according to a study by the Institute for Strategic Dialogue (ISD). This so-called "LLM grooming" — seeding the web with propaganda so that AI models ingest and repeat it — risks letting chatbots undermine sanctions on Moscow-backed media. ChatGPT, for example, cited Russian sources almost three times more often for malicious queries than for neutral ones.

Hot Take:
Is your chatbot channeling its inner Kremlin spokesperson? Fear not, because we’ve got the latest scoop on how AI chatbots are privy to the secrets of Eastern European geopolitics – whether you asked for it or not! Who knew that a simple query could summon the ghosts of Russian state media to your screen? Next time, maybe just watch cat videos instead.
Key Points:
- Chatbots cited Russian state-attributed sources in up to 25% of answers about Ukraine.
- ISD’s study focused on responses from ChatGPT, Gemini, Grok, and DeepSeek in multiple languages.
- State media content surfaced 11% of the time for neutral queries, 18% for biased, and 24% for malicious ones.
- Google’s Gemini chatbot cited the least state-attributed media of the four.
- ISD suggests increased scrutiny for AI models to prevent disinformation spread.
