
OpenAI disclosed use of its AI technology by Russia, China, Iran, and Israel in influence campaigns

OpenAI disclosed that it has identified and disrupted five global influence campaigns leveraging its generative AI technologies. The actors involved include state entities and private firms from Russia, China, Iran, and Israel.

The company says these actors used its tools to manipulate public opinion and exert geopolitical influence.

AI-generated content in geopolitical manipulation

The campaigns employed AI to generate social media content, translate and edit articles, write headlines, and debug programs. These efforts aimed to bolster political campaigns or sway public sentiment in ongoing geopolitical conflicts, the New York Times reported.

This is the first time a major AI company has disclosed how its own tools were used for online deception, highlighting growing concerns about AI's role in spreading disinformation, particularly in this election-heavy year. The disclosure came less than two weeks before the European Parliament elections.

Use of AI to generate content for social media campaigns

Despite the sophisticated technology, OpenAI reported that these campaigns had limited success: they struggled to gain significant traction or expand their reach.

Ben Nimmo, a principal investigator at OpenAI, highlighted that while the campaigns used AI for political content, their overall impact remained minimal. These operations frequently resulted in poorly received and grammatically incorrect posts.

OpenAI offers online chatbots and content generation tools powered by AI technology that can write social media posts and generate realistic images. Analysts had tracked OpenAI’s tools in influence campaigns for years, including the Russian campaign Doppelganger and the Chinese campaign Spamouflage.

Russia’s AI-powered information campaigns to support its war against Ukraine

Russia’s Doppelganger campaign, previously exposed in France, generated anti-Ukraine comments and translated pro-Russia articles into various languages. It targeted audiences on platforms like X and Facebook but achieved minimal engagement.

OpenAI’s tools were also used in a Russian campaign that targeted audiences in Ukraine, Moldova, the Baltic States, and the United States, mostly via the Telegram messaging service, the company reported. The campaign used AI tools to generate comments in Russian and English about Russia’s war in Ukraine, as well as the political crisis in Moldova and US politics.

A previously unreported operation from Russia, which we dubbed Bad Grammar, operating mainly on Telegram and targeting Ukraine, Moldova, the Baltic States and the United States. The people behind Bad Grammar used our models to debug code for running a Telegram bot and to create short, political comments in Russian and English that were then posted on Telegram.

OpenAI statement

Experts like Graham Brookie from the Atlantic Council’s Digital Forensic Research Labs caution that as generative AI evolves, its potential for creating more convincing disinformation may grow. OpenAI’s new flagship AI model, in development, promises enhanced capabilities, which could significantly alter the online disinformation landscape.

China’s Spamouflage campaign used the tools to debug code and generate posts attacking critics of the Chinese government; this campaign also had limited success.

An Iranian campaign linked to the International Union of Virtual Media produced and translated pro-Iranian, anti-Israeli, and anti-U.S. content.

Future implications of AI-generated content for political campaigns

In the Zero Zeno campaign, an Israeli firm used OpenAI’s technology to create fictional personas and biographies to post anti-Islamic messages on social media platforms in Israel, Canada, and the United States.

While current generative AI tools have not yet unleashed the feared deluge of credible disinformation, vigilance and ongoing assessment are crucial.

OpenAI’s transparency in reporting these campaigns sets a precedent for other AI firms to follow, aiming to mitigate the misuse of advanced technologies in online influence operations.

Mike

Media analyst and journalist. Fully committed to insightful, analytical, investigative journalism and debunking disinformation. My goal is to produce analytical articles on Ukraine and Europe, based on trustworthy sources.
