OpenAI disclosed use of its AI technology by Russia and China in their influence campaigns

How OpenAI sees AI content generation by Russia and China

OpenAI disclosed that it has identified and disrupted five global influence campaigns leveraging its generative AI technologies. The actors involved include state entities and private firms from Russia, China, Iran, and Israel.

The company says these actors used its tools to manipulate public opinion and pursue geopolitical influence.

AI-generated content in geopolitical manipulation

The campaigns employed AI to generate social media content, translate and edit articles, write headlines, and debug programs. These efforts aimed to bolster political campaigns or sway public sentiment in ongoing geopolitical conflicts, the New York Times reported.

This is the first time a major AI company has disclosed that its tools were used for online deception, highlighting growing concerns about AI's role in spreading disinformation, particularly in this election-heavy year. The disclosure came less than two weeks before the European Parliament elections.

Use of AI to generate content for social media campaigns

Despite the sophisticated technology, OpenAI reported that the campaigns had limited success: they struggled to gain significant traction or expand their reach.

Ben Nimmo, a principal investigator at OpenAI, highlighted that while the campaigns used AI for political content, their overall impact remained minimal. These operations frequently resulted in poorly received and grammatically incorrect posts.

OpenAI offers online chatbots and content generation tools powered by AI technology that can write social media posts and generate realistic images. Analysts have tracked influence campaigns such as Russia's Doppelganger and China's Spamouflage for years; OpenAI has now identified its tools being used within them.

Russia’s AI-powered information campaigns to support its war against Ukraine

Russia's Doppelganger campaign, first exposed in France, generated anti-Ukraine comments and translated pro-Russia articles into various languages. It targeted audiences on platforms like X and Facebook but achieved minimal engagement.

OpenAI’s tools were also used in a Russian campaign that targeted audiences in Ukraine, Moldova, the Baltic States, and the United States, mostly via the Telegram messaging service, the company reported. The campaign used AI tools to generate comments in Russian and English about Russia’s war in Ukraine, as well as the political crisis in Moldova and US politics.

"A previously unreported operation from Russia, which we dubbed Bad Grammar, operating mainly on Telegram and targeting Ukraine, Moldova, the Baltic States and the United States. The people behind Bad Grammar used our models to debug code for running a Telegram bot and to create short, political comments in Russian and English that were then posted on Telegram," OpenAI said in its statement.

Experts like Graham Brookie from the Atlantic Council’s Digital Forensic Research Labs caution that as generative AI evolves, its potential for creating more convincing disinformation may grow. OpenAI’s new flagship AI model, in development, promises enhanced capabilities, which could significantly alter the online disinformation landscape.

China's Spamouflage campaign used OpenAI's tools to debug code and generate posts attacking critics of the Chinese government; this campaign also had limited success.

An Iranian campaign linked to the International Union of Virtual Media produced and translated pro-Iranian, anti-Israeli, and anti-U.S. content.

Future implications of AI-generated content for political campaigns

In Israel's Zero Zeno campaign, an Israeli firm used OpenAI's technology to create fictional personas and biographies that posted anti-Islamic messages on social media platforms in Israel, Canada, and the United States.

While current generative AI tools have not yet unleashed the feared deluge of credible disinformation, vigilance and ongoing assessment remain crucial.

OpenAI’s transparency in reporting these campaigns sets a precedent for other AI firms to follow, aiming to mitigate the misuse of advanced technologies in online influence operations.
