British analysts have uncovered Russian manipulation of AI chatbots: almost 20% of responses from popular chatbots cited Russian propaganda outlets, many of them under EU sanctions.
A new Russian disinformation threat has emerged: the manipulation of artificial intelligence. That is the conclusion of analysts at the Institute for Strategic Dialogue (ISD).
The institute, an independent London-based think tank, conducted research on the extent to which AI chatbots like ChatGPT and Grok filter out media sources that are supposed to be banned by sanctions.
British experts analyzed the responses of four popular chatbots (ChatGPT, Gemini, Grok, and DeepSeek) to a series of questions in five languages (English, Spanish, French, German, and Italian) on topics related to Russia's war against Ukraine.
The team put 300 questions to the four bots and found that, overall, 18% of the responses cited information sources from restricted channels.
“Almost a fifth of the responses cited Russian state sources, many of which are under EU sanctions. Questions biased in favor of Russia more often included these sources in the responses, as did queries related to the military conscription of civilians in Ukraine and the perception of NATO. Some chatbots found it difficult to identify state-related content, especially when it was disseminated by third-party media or websites,” the ISD concluded.
The questions focused on five topic areas: stance toward NATO, Ukraine-Russia peace talks, Ukraine’s recruitment of civilians for the military, Ukrainian refugees, and Russian war crimes in Ukraine.
“These included citations of Russian state media, sites tied to Russian intelligence agencies, and sites known to be involved in Russian information operations that were surfaced during prior research into chatbot responses,” the institute reported.
Neutral queries returned Russian state-aligned sources in 11% of responses, compared with 18% for biased questions and 24% for malicious prompts.
The Institute for Strategic Dialogue stated that these findings are aligned with previous research that has shown AI tools to “display confirmation bias.”
British analysts emphasize that the research raises serious concerns about chatbots' ability to restrict propaganda media sanctioned in the EU.
“This is not a new challenge for companies like Google, whose platforms have long been scrutinized for potential bias in the results displayed to users when searching for complex topics. This close analysis intensified during Russia’s full-scale invasion of Ukraine, when Google was asked to restrict results from state media in response to EU sanctions,” the British experts summarize.
Of the four AI tools examined, the most susceptible to this "grooming" was ChatGPT, followed by Grok and then DeepSeek, while Google's Gemini surfaced the least Russian propaganda.
The institute highlighted that, in response to malicious prompts, Gemini sometimes refused to answer, telling researchers it was unable to help with requests that may be "inappropriate or unsafe."
In terms of language, questions entered in Spanish and Italian jointly topped the list of Russia-biased responses, followed by English, with French and German in joint last place.