OpenAI disclosed use of its AI technology by Russia, China, and others in influence campaigns

OpenAI disclosed that it has identified and disrupted five global influence campaigns leveraging its generative AI technologies. The actors involved include state entities and private firms from Russia, China, Iran, and Israel.

The company says these actors used its tools to manipulate public opinion and exert geopolitical influence.

AI-generated content in geopolitical manipulation

The campaigns employed AI to generate social media content, translate and edit articles, write headlines, and debug programs. These efforts aimed to bolster political campaigns or sway public sentiment in ongoing geopolitical conflicts, the New York Times reported.

This is the first time a major AI company has disclosed the use of its tools for online deception, underscoring growing concerns about AI’s role in spreading disinformation, particularly in this election-heavy year: the disclosure comes less than two weeks before the European Parliament elections.

Use of AI to generate content for social media campaigns

Despite the sophisticated technology, OpenAI reported that these campaigns had limited success: they struggled to gain significant traction or expand their reach.

Ben Nimmo, a principal investigator at OpenAI, highlighted that while the campaigns used AI for political content, their overall impact remained minimal. These operations frequently resulted in poorly received and grammatically incorrect posts.

OpenAI offers online chatbots and AI-powered content generation tools that can write social media posts and generate realistic images. Analysts had tracked influence campaigns such as the Russian Doppelganger and the Chinese Spamouflage for years before OpenAI’s tools appeared in them.

Russia’s AI-powered information campaigns to support its war against Ukraine

Russia’s Doppelganger campaign, first exposed in France, generated anti-Ukraine comments and translated pro-Russia articles into various languages. It targeted audiences on platforms such as X and Facebook but achieved minimal engagement.

OpenAI’s tools were also used in a Russian campaign that targeted audiences in Ukraine, Moldova, the Baltic States, and the United States, mostly via the Telegram messaging service, the company reported. The campaign used AI tools to generate comments in Russian and English about Russia’s war in Ukraine, as well as the political crisis in Moldova and US politics.

A previously unreported operation from Russia, which we dubbed Bad Grammar, operating mainly on Telegram and targeting Ukraine, Moldova, the Baltic States and the United States. The people behind Bad Grammar used our models to debug code for running a Telegram bot and to create short, political comments in Russian and English that were then posted on Telegram.

OpenAI statement

Experts like Graham Brookie from the Atlantic Council’s Digital Forensic Research Labs caution that as generative AI evolves, its potential for creating more convincing disinformation may grow. OpenAI’s new flagship AI model, in development, promises enhanced capabilities, which could significantly alter the online disinformation landscape.

China’s Spamouflage campaign used OpenAI’s tools to debug code and generate posts attacking critics of the Chinese government; it, too, had limited success.

An Iranian campaign linked to the International Union of Virtual Media produced and translated pro-Iranian, anti-Israeli, and anti-U.S. content.

Future implications of AI-generated content for political campaigns

In Israel’s Zero Zeno campaign, an Israeli firm used OpenAI’s technology to create fictional personas and biographies that posted anti-Islamic messages on social media platforms in Israel, Canada, and the United States.

While current generative AI tools have not yet unleashed the feared deluge of credible disinformation, vigilance and ongoing assessment are crucial.

OpenAI’s transparency in reporting these campaigns sets a precedent for other AI firms to follow, aiming to mitigate the misuse of advanced technologies in online influence operations.

Mike

Media analyst and journalist. Fully committed to insightful, analytical, investigative journalism and debunking disinformation. My goal is to produce analytical articles on Ukraine and Europe, based on trustworthy sources.
