How AI is Fueling Disinformation and Endangering Democracy

Artificial intelligence (AI) has transformed many fields, boosting productivity and efficiency. But its abuse has also enabled sophisticated strategies for spreading false information, endangering democratic institutions.

Using AI, both state and non-state actors are fabricating misleading narratives, manipulating public opinion, and eroding confidence in democratic processes.

Focusing on recent events in Europe, especially Russian influence operations around the European Parliament elections in June 2024 and the German federal elections in February 2025, this article explores the specific strategies and methods used in AI-driven disinformation campaigns.

AI-Driven Disinformation Tactics

The integration of AI into disinformation strategies has led to the development of advanced and deceptive methods. Key tactics include:

1. Cloned Websites

Malicious actors build fake websites that closely mimic reputable news outlets. These cloned sites publish false narratives to deceive users into believing they are reading credible reporting.

In March 2024, European intelligence services revealed a pro-Russian influence operation centered on the news website “Voice of Europe.” Posing as an independent news outlet, the site distributed material meant to undermine support for Ukraine and weaken the European Union. Investigations revealed that several European politicians received payments to promote pro-Russian narratives through the site. (europarl.europa.eu)

2. AI-Generated Manipulated Graphics and Memes

AI can generate realistic images and memes tailored to specific false narratives, typically portraying political figures negatively.

Before the February 2025 German federal elections, researchers found more than 100 Russia-linked websites spreading misleading information. Using AI-generated content, these sites targeted pro-NATO and pro-Ukraine German politicians, especially from the Green Party, portraying them negatively to shift public opinion. (reuters.com)

3. AI-Generated Deepfake Videos

Deepfakes are AI-generated videos that convincingly show people saying or doing things they never said or did. Such videos can be used to discredit political figures or spread false information.

Deepfake videos featuring European politicians making provocative remarks emerged in the run-up to the June 2024 European Parliament elections. These videos, later debunked, were part of a broader strategy to sow discord and mistrust among voters.

One such deepfake showed Maia Sandu, the pro-Western president of Moldova, appearing to endorse a pro-Russian political party. The fabrication aimed to sway voters and disrupt the electoral process. (en.wikipedia.org)

4. AI-Driven Social Media Bots

Bots, or automated social media accounts, can be programmed to disseminate false information rapidly, creating the illusion of broad consensus or amplifying polarizing content.

A sophisticated, Russian-backed social media campaign enlisted more than 100 influencers with a combined following of 8 million. The operation sought to sway public sentiment toward candidates aligned with Russian interests and thereby disrupt the electoral process. (ft.com)

Research indicates that Russian-linked bots spread fake warnings of terrorist attacks in Germany ahead of the February 2025 elections. These efforts sought to cause panic, suppress voter turnout, and affect the election’s outcome. (reuters.com)

5. AI-Powered Hyper-Personalization

AI can analyze user data to create personalized disinformation, making false narratives more convincing by aligning them with individual beliefs and biases.

During the European Parliament elections, AI was used to deliver personalized political advertisements containing misleading information to specific voter demographics, exploiting their existing biases to sway their votes.

Russian Interference in European Elections

Russia has been implicated in several disinformation campaigns targeting European elections, utilizing AI-driven tactics to influence vote outcomes.

European Parliament Elections (June 2024)

In the run-up to the elections held from June 6–9, 2024, European governments accused Russia of spreading disinformation to discredit European institutions and destabilize the EU.

The “Voice of Europe” platform played a central role in this effort, with reports indicating that certain Members of the European Parliament (MEPs) received payments from Russian sources to propagate pro-Russian narratives. (europarl.europa.eu)

German Federal Elections (February 2025)

Prior to the elections on February 23, 2025, Germany experienced a surge in disinformation activities attributed to Russian-linked actors. Tactics included the dissemination of AI-generated fake news and the use of bots to spread fear-inducing falsehoods, such as fabricated terrorist threats, aiming to manipulate voter behavior and undermine democratic processes. (reuters.com)

In the lead-up to these elections, Russia employed various AI-driven disinformation tactics to influence voter perceptions and outcomes.

Researchers identified approximately 100 fake websites mimicking German news outlets. These sites published misleading articles to discredit certain political figures and parties, thereby manipulating public opinion. (correctiv.org)

The “Children of War” campaign circulated images of children allegedly killed in the Russia-Ukraine conflict. Although its organizers publicly denied any government affiliation, investigations revealed connections to the Russian state. The campaign aimed to infiltrate EU protest movements and weaken support for Ukraine. (reuters.com)

Social Media Manipulation

Russian-linked bots spread fake warnings about imminent terror attacks, intending to create fear and suppress voter turnout. This strategy sought to destabilize the democratic process and favor parties sympathetic to Russian interests. (reuters.com)

Conclusion

The use of artificial intelligence in disinformation campaigns poses a serious threat to democratic countries. The incidents of Russian meddling in the European Parliament and German federal elections highlight the sophisticated strategies used to manipulate public opinion and disrupt democratic processes.

Addressing this problem requires a multifaceted approach: public awareness campaigns to build societal resilience against these threats, robust regulation, and technological tools to detect and counter AI-generated disinformation.

FAQs

1. How does AI contribute to disinformation?

AI enables the creation of highly realistic fake content, including cloned websites, deepfake videos, and personalized fake news, making it challenging for individuals to distinguish between genuine and false information.

2. What are cloned websites in the context of disinformation?

Cloned websites are counterfeit sites designed to closely mimic legitimate news outlets. They publish fabricated stories to mislead readers into believing they are accessing credible information.

3. Was disinformation used in European elections?

Yes. In recent European elections, Russia has employed AI-driven tactics such as cloned websites, AI-generated graphics, and social media bots to spread false narratives, manipulate public opinion, and interfere in electoral processes.

4. How can AI-driven disinformation be combated?

Combating AI-driven disinformation requires technological solutions for detection, regulatory frameworks to hold perpetrators accountable, and public education to enhance media literacy and critical thinking skills.
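One of the simplest technological detection measures targets the cloned-website tactic described above: flagging domains that closely imitate trusted news outlets. The following is a minimal, illustrative sketch (not a production detector); the `TRUSTED` list and the example domains are assumptions for demonstration, and real systems would also handle Unicode lookalike characters and subdomain tricks.

```python
# Minimal sketch: flag lookalike ("cloned") news domains by edit distance.
# Domains below are illustrative assumptions, not documented findings.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

# A small allowlist of legitimate outlets (assumed for this example).
TRUSTED = ["reuters.com", "correctiv.org"]

def flag_lookalikes(domain: str, max_distance: int = 2) -> list[str]:
    """Return trusted domains that `domain` closely imitates but does not match."""
    return [t for t in TRUSTED
            if t != domain and edit_distance(domain, t) <= max_distance]

print(flag_lookalikes("reuters.co"))   # near-clone: one character dropped
print(flag_lookalikes("example.org"))  # unrelated domain: no flags
```

A small edit distance to a trusted domain is a cheap, explainable signal; in practice it would be combined with content comparison and checks for homoglyph substitutions.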

Read all articles by Insight News Media on Google News, subscribe and follow.