Chatbot Chaos: AI’s Russian Roulette with War Narratives

Chatbots powered by large language models are inadvertently citing Russian state-linked sources when asked about the war in Ukraine, according to a study by the Institute for Strategic Dialogue (ISD). This so-called "LLM grooming" raises the risk that AI chatbots could undermine sanctions on Moscow-backed media. ChatGPT, for example, provided Russian sources almost three times more often in response to malicious queries.

Hot Take:

Is your chatbot channeling its inner Kremlin spokesperson? Fear not, because we’ve got the latest scoop on how AI chatbots are serving up the secrets of Eastern European geopolitics – whether you asked for them or not! Who knew that a simple query could summon the ghosts of Russian state media to your screen? Next time, maybe just watch cat videos instead.

Key Points:

  • Chatbots cited Russian state-attributed sources in up to 25% of answers about Ukraine.
  • ISD’s study focused on responses from ChatGPT, Gemini, Grok, and DeepSeek in multiple languages.
  • State media content surfaced 11% of the time for neutral queries, 18% for biased, and 24% for malicious ones.
  • Google’s Gemini chatbot cited the fewest state-attributed sources.
  • ISD suggests increased scrutiny for AI models to prevent disinformation spread.
