Combating AI-Driven Misinformation/Disinformation in Electioneering and Elections

2024 may go down in history as one of the most significant election years: nearly half of the world’s population, across 64 countries, will vote this year. This historic wave of voting will affect millions of people across various nations, underscoring the weight of voter decisions and the heightened risk of misinformation. False information, whether deliberate or otherwise, and particularly when AI-driven, poses a growing global threat. AI-driven misinformation is especially harmful because it exploits emotional triggers and social biases while often appearing legitimate and trustworthy. This manipulation can distort voter perceptions of candidates, policies and even election integrity itself, compromising the democratic process. The 2024 Global Risks Report published by the World Economic Forum ranks misinformation and disinformation among the leading and most severe global risks, warning that they may radically disrupt electoral processes in several economies over the next few years.

AI is now a major tool for spreading misinformation. The unique ability of AI models to generate, tailor and spread information has mixed implications. While it allows for swift dissemination of important electoral information, it also risks flooding voters with misleading or even false narratives designed to influence or confuse. These models make it easier than ever to create falsified content, from realistic voice cloning to deepfakes, shallow fakes, counterfeit news sites and the use of GenAI to produce seemingly factual articles or posts designed to mislead. AI models can also be used to spread these false materials strategically across social media platforms, targeting specific groups of voters with precision. Misinformation and disinformation can destabilize societies by discrediting the legitimacy of governments and political figures and eroding public trust in institutions.

With just weeks until Ghana’s general elections, political campaigns are in high gear, targeting voters of all ages. A key emerging trend is the heavy use of social media to push political narratives. Platforms like X (formerly Twitter), WhatsApp, TikTok and Facebook, accessible to over 24 million Ghanaian internet users, have become powerful tools for political campaigns to shape public opinion and drive political agendas.

This reliance on social media and the internet as primary news channels has left Ghanaians vulnerable to false information. Political actors may exploit that reliance, using AI to amplify misleading content and false narratives in an effort to manipulate voter decisions in their favour.

Common Methods of AI-Driven Misinformation/Disinformation

  • Deepfakes: One of the most common methods of AI-driven misinformation/disinformation, deepfakes use advanced AI tools to mimic faces, voices and actions. They can produce realistic videos in which an individual’s face and voice are superimposed onto another person’s body. For instance, the Pakistan elections on 8 February 2024 were marked by controversy over the use of AI-generated content. The BBC reported on 10 February 2024 that Imran Khan, the former Prime Minister who is currently in jail, had posted an AI-generated video on his X account declaring election victory for his party before the official results were declared. The post amassed 6.1 million views and 54,000 reposts, showing how far false information can spread. Similarly, several media outlets, including The New York Times, reported that Mr. Khan announced his victory using a synthesised voice: although he had been incarcerated and disqualified from participating in the election, his party used artificial intelligence to generate a voice that could convey his message to his supporters. In another instance, CNN reported that a candidate in the Slovakian elections was the victim of AI-generated audio in which he appeared to state his intention to rig the upcoming elections. The clip went viral and, according to the candidate, was responsible for his eventual loss in the elections.
  • Shallow fakes: Political actors may also use shallow fakes, which rely on simpler, more accessible editing tools such as Photoshop to manipulate existing media, for example by speeding up or slowing down videos or rearranging content to push a specific narrative. A well-documented example, cited in a report by MEA Integrity, is a shallow-fake video that made US politician Nancy Pelosi appear intoxicated in a press interview. Although debunked, the video spread quickly, showing how even minor edits can shape public perception.
  • AI-generated fake news: Rogue actors may use AI tools to create realistic but false news articles or social media posts intended to mislead. AI has made it easy for nearly anyone, irrespective of expertise, to produce fake news articles that are at times difficult to distinguish from real news. According to NewsGuard, an organisation that tracks misinformation, over 1,100 unreliable AI-generated news websites had been identified as of 15 October 2024. These outlets operate with minimal human oversight, producing articles in 16 languages. For instance, in May 2023, Sky News reported on a viral fake story about a bombing at the Pentagon, accompanied by an AI-generated image showing a large cloud of smoke. The story caused a public uproar and even resulted in a dip in the stock market. In another instance reported by The New York Times, Republican presidential candidate Ron DeSantis used fake images of Donald Trump hugging Anthony Fauci as part of his political campaign.
  • Bot amplification: Another rising trend is the use of bot accounts to amplify misleading content across social media platforms. In 2022, X estimated that 16.5 million accounts, about 5% of the platform, were bots, many set up solely to increase user engagement with false and misleading information.

What Can You Do as an Individual

In the face of widespread misinformation and disinformation, individuals can take several steps to protect themselves and others:

  • Verify and validate information: Verifying information means checking the source of a news item or post before you trust it. A good way to do this is by comparing the information with what trusted news outlets are reporting; if reputable sources have not reported the same story, it could be false. For instance, if you see a viral post about claims made by a candidate, check whether trusted news sources are also covering the story. Fake stories are often spread to damage reputations, and cross-checking with reliable outlets can help prevent the spread of false information. So always remember to verify before resharing.
  • Think critically and objectively: Individuals must think critically and be careful with content that ignites strong emotions like anger, fear or excitement. Misinformation often aims to trigger these feelings so that people share quickly without checking the facts. It is important to pause and assess the information logically. For example, if you see a viral claim that a candidate made a shocking statement that has caused public outrage, take a step back and ask: Could this be a misrepresentation? Could it be AI-generated or AI-driven? Has it been reported by credible news sources? By analysing the situation logically, you can avoid falling for misinformation that appeals to your emotions.
  • Be vigilant: Individuals must also be on the lookout for signs that videos, images or audio have been digitally altered to deceive. Deepfakes are produced using AI to create highly realistic but fake content, often making it hard to tell whether something is genuine. However, clues such as unnatural facial movements, mismatched audio or lip-syncing, and strange lighting may indicate that content has been manipulated. Individuals should also look for the watermarks that some AI tools embed in generated content; spotting them is a simple way to identify such material.

The Way Forward

The intersection of AI and misinformation raises tough ethical questions, particularly regarding free speech. AI-manipulated content blurs the lines between creative expression and malicious intent. At what point does edited content cease to be art or satire and become a dangerous lie?

As we look ahead to the polls, combating AI-driven misinformation and disinformation requires a proactive approach that leverages the strengths of each stakeholder. Only through coordinated efforts can we protect the integrity of democratic elections in the digital age. Now is the time for all of us, from governments and companies to civil society and citizens, to take responsibility for the ethical use of AI, ensuring it serves as a tool for truth and democratic empowerment rather than a weapon of deception.

