Blog Post
By the end of December this year, more than 50 countries will have held democratic elections, with some 1.5 billion voters taking part (King’s College London, 2024). With these elections comes the risk of both misinformation and disinformation spreading - affecting their outcomes and disrupting social harmony. Most worrying of all is the role that artificial intelligence (AI) will play.
AI has improved drastically since the last US election, and will continue to do so. The result is a blurring of fact and fiction - what is real, and what is computer generated? As will be discussed later in this article, there are examples of AI deepfakes affecting elections. And because AI has improved so much in so little time, the problems that democratic countries faced in previous elections, with regards to misinformation/disinformation and AI, will pale in comparison to those posed by the technology now possessed by enemies of democracy.
Before setting the course for this research article, some definitions are required to establish a common platform. According to the Oxford English Dictionary, AI can be defined as “software used to perform tasks or produce output previously thought to require human intelligence, esp. by using machine learning to extrapolate from large collections of data”. This article will define misinformation as "misleading information created or disseminated without manipulative or malicious intent" (Dame Adjin-Tettey, 2022, p. 2), noting that "[m]isinformation is, by definition, false or misleading information" (de Ridder, 2021, p. 2). With regards to disinformation, this article takes the definition to be "[t]he dissemination of deliberately false information, [especially] when supplied by a government or its agent to a foreign power or to the media, with the intention of influencing the policies or opinions of those who receive it" (Oxford English Dictionary, 2017, as cited in Cheyfitz, 2017).
The question that this article sets out to answer is the following: to what extent will AI affect misinformation and disinformation in future democratic elections? To answer this, the article will first analyse the history of misinformation, disinformation, and AI - and their impact on elections. The following section will look into the ways in which governments and political organisations from around the world have been combating misinformation, disinformation, and AI misuse. The article will then examine public perceptions of both misinformation/disinformation and AI, before offering a list of policy recommendations. It will conclude with a few brief thoughts on what has been learnt.
This article draws on secondary sources - academic articles, research articles, and news articles. In addition, Dame Wendy Hall and Professor Matt Ryan were interviewed to offer their expert opinions on the subject. Dame Wendy Hall is a computer scientist at the University of Southampton and sits on the United Nations High-Level Advisory Body on AI. Professor Matt Ryan works at the University of Southampton, specialising in democracy and web science. I would like to take this moment to thank them both for their time and contributions.
This section will discuss how misinformation/disinformation spreads, followed by a look into how much is domestic and how much is foreign. The section will then investigate past and future elections and the role of AI, before finally analysing what constitutes legitimate and illegitimate interference.
Misinformation has been an issue for centuries, if not longer; the promulgation of fake news predates the internet age. But the internet exacerbates the problem, with a plethora of harmful lies already having been spread, infecting the minds of millions. Take the events that occurred in Southport in July 2024 as an example: a lack of verified information resulted in widespread misinformation on social media. This information may have come from anywhere, but it spread across social media like wildfire. Consequently, divisive figures like Tommy Robinson and Andrew Tate latched on to this false information and shared it with their millions of followers (Cheshire, 2024).
To get technical, one of the ways in which misinformation spreads is through micro-targeting. Labelled “especially effective” by Bontridder and Poullet (2021, p. e32-4), who go on to write that “[t]he design of algorithms based on micro-targeting can also directly amplify the spread of both mis- and disinformation. For example, on YouTube, more than a billion hours of video are viewed every day, 70% of which by automated systems in order to provide recommendations on what video to watch next for human users” (p. e32-5). It is therefore apparent that with this micro-targeting, and the algorithmic responses it produces, people are hearing what they want to hear - regardless of whether it is true. Furthermore, as stated by DiResta (2023), “there [is] a proliferation of the creation of alternative social media platforms that catered to the interests of right-leaning users, you now see the same thing happening on the left”. With micro-targeting shaping algorithms, and a population hearing what it wants to hear, people are now living in their own independent echo chambers - and the situation continues to deteriorate. In the past, on YouTube and Twitter, for example, the population was united on the same platforms, where people could, should they desire, find alternative points of view. Today, despite these platforms remaining highly popular, there is, as DiResta says, a rise in independent alternative platforms. Consequently, people are now on platforms where there is but one view - an example being Truth Social.
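To illustrate the feedback loop described above, here is a deliberately minimal sketch - toy categories and invented click probabilities, not any platform’s actual recommender - of an engagement-maximising recommendation policy. Because the system keeps serving whatever a user has engaged with most, a small initial preference compounds into a feed dominated by one kind of content:

```python
import random
from collections import Counter

CATEGORIES = ["left_politics", "right_politics", "sport", "science"]

def recommend(history: Counter) -> str:
    """Serve the category with the most past engagement (exploit),
    with a small chance of showing something else (explore)."""
    if not history or random.random() < 0.1:
        return random.choice(CATEGORIES)
    return history.most_common(1)[0][0]

def simulate(impressions: int = 1000) -> Counter:
    """A user who is merely somewhat more likely to click one category
    ends up with a feed dominated by it."""
    history = Counter()
    for _ in range(impressions):
        shown = recommend(history)
        # Hypothetical user: 80% click-through on right_politics, 30% otherwise.
        p_click = 0.8 if shown == "right_politics" else 0.3
        if random.random() < p_click:
            history[shown] += 1  # engagement feeds back into future recommendations
    return history

if __name__ == "__main__":
    print(simulate())  # typically shows right_politics dwarfing every other category
```

The point of the sketch is not the numbers, which are made up, but the dynamic: optimising purely for engagement turns a mild preference into an echo chamber without anyone designing it to do so.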
Another way in which misinformation or disinformation can spread is through bots. “The deployment of social bots by malicious stakeholders also contributes to the effective dissemination of disinformation. These bots (an abbreviation for software robots) are fully or semi automated user accounts operating on social media platforms, which are designed for communication and the imitation of human online-behavior” (Bontridder and Poullet, 2021, p. e32-5). Bots have been widely discussed in the field of politics over the last decade, with many articles examining Russian bots and their attempted influence on western democracies (O’Sullivan, 2018). A specific example of Russian bots interfering in a democratic vote in a western country is the 2016 EU Referendum in the United Kingdom, where “it was found that political bots played a small but strategic role in shaping Twitter conversations” (Narayanan et al, 2017, p. 1). Moreover, Narayanan et al (2017) further detail that “media reports claim that they have discovered 150,000 bot accounts linked to Russia that have been active during Brexit” (p. 1). This suggests that the Russian-backed bots achieved at least part of their goal - shifting the conversation and sowing havoc and division in a western democratic state.
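As a rough illustration of why such accounts are identifiable in principle, consider the kinds of signals researchers point to: posting rate, account age, and repetitiveness. The sketch below is a hypothetical scoring heuristic with invented thresholds and fields - not any real platform’s detection system:

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    account_age_days: int
    distinct_posts_ratio: float  # unique posts / total posts

def looks_like_bot(acct: Account) -> bool:
    """Crude, illustrative heuristic: young, hyperactive, repetitive
    accounts score as suspicious. Real systems combine far more signals."""
    score = 0
    if acct.posts_per_day > 50:          # few humans sustain this rate
        score += 1
    if acct.account_age_days < 30:       # burner accounts are often new
        score += 1
    if acct.distinct_posts_ratio < 0.2:  # mostly copy-pasted content
        score += 1
    return score >= 2

# A hyperactive week-old account posting near-identical content flags as a bot.
print(looks_like_bot(Account(posts_per_day=120, account_age_days=7,
                             distinct_posts_ratio=0.05)))  # True
```

The difficulty, of course, is that sophisticated operators tune their bots to sit just under whatever thresholds platforms use - which is why detection remains an arms race rather than a solved problem.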
This leads on to the next question: how much interference is foreign, and how much is domestic?
Regarding foreign interference, many powerful actors on the global stage have tried repeatedly to interfere in western democratic practices. The Russians, as previously exemplified, the Chinese, and the Iranians have all been caught meddling in elections. For example, during the Taiwanese election of 2024, there was a Chinese disinformation campaign. “Disinformation experts say China has a hand in spreading this message, and may even be creating it. Their evidence also points to Taiwanese close to Beijing… It's not always conspiracy theories - most of the time it's a highlighting of news that shows the US in a bad light, or points to it as an untrustworthy superpower” (Wong, 2024a).
The results of this election indicate, though, that the campaign did not have a strong effect, as the pro-sovereignty Democratic Progressive Party (DPP) “comfortably” beat the Beijing-friendly Kuomintang (KMT) in January of this year (Wong, 2024b). But what is worth noting about the Chinese disinformation campaign is its methods. “In the 2024 presidential election, the first instance of AI video and audio forgery broke out when an audio file supposedly ‘leaked’ out of an internal meeting on August 17, 2023. At 8 p.m. on the 16th, some media received an email titled ‘Audio File: Ko Reveals the Inside Story of Vice President Lai's Visit to the United States’, with a 58-second audio file, in which Taiwan People's Party Chairman Ko Wen-je criticised Vice President Lai Ching-te, for deliberately delaying the signing of a bilateral trade agreement during his visit to the United States and accused him of embezzlement” (Chang, 2023, as cited in Lin et al, 2024, p. 3-4). The fact that these attempts to tarnish the opposition failed is perhaps a sign that the technology is not yet strong enough to be completely believable.
This may go against some of the data shown in the Public Perceptions section, as reported in Hamilton (2024), but I would argue that the technology, though already remarkable, clearly still has room to improve - and that, in future, should the correct measures not be taken, situations like this will be common. As explored, the Russians have been known to use bots to spread disinformation, while the Chinese opted, earlier this year, for a couple of techniques - most notably deepfakes.
In the U.S. election of 2020, it was reported that both President Biden’s and President Trump’s campaigns were targeted by Iranian-backed hackers (Corera, 2024; U.S. Department of Justice, 2021). “The U.S. previously accused Iran of orchestrating a campaign to sow doubts about the integrity of the 2020 presidential election, including obtaining voter records from one state and sending threatening emails to some voters” (Collier, 2023). All this information, specifically in Corera (2024), highlights the likelihood of attempts at foreign interference in the upcoming U.S. election. Despite the United States spending heavily on defence, as shown in the Combating Misinformation, Disinformation, And AI Misuse section, it is likely that the US will still be targeted by hostile states. Given that the United States is the global face of democracy, it would be highly unlikely for her enemies to sit this election out and wait until 2028. Indeed, Iranian threats against America’s democracy have already surfaced, with news in September 2024 of assassination threats from Iran against President Trump (Wright, 2024).
Concerning domestic interference, it is possible that unknown civilians, treading the same streets as their compatriots, can distribute misinformation or disinformation through AI. “Artificial intelligence now makes such deceit easier for the non-technical to commit” (Omand, 2024). This is reinforced by Professor Ryan: “[p]otentially, anybody could be contributing to the problems [we now face regarding] democracy” (Ryan, interview with author). This would imply that anyone, provided they are technologically competent and have malicious intent, can share harmful misinformation or disinformation.
However, we must note that the reach of most individuals is limited, as pointed out by Dame Wendy. “[T]here will be some people sitting in their rooms doing this kind of stuff, but the general population are not going to be able to do that. I think it’s more like organised crime. If you’re going to use AI to generate a lot of batches of messages to send out in bulk to many people targeted, then that takes some organisation to do that” (Hall, interview with author). This would imply that we should treat the issue as organised crime - that is to say, groups of domestic extremists perpetuating lies, using AI as a means to increase their capacity. This can already be seen with the far right in France and across the rest of Europe, where AI is playing a role in creating harmful and divisive content: “[f]ar-right parties have used artificial intelligence (AI)-generated images to support political messaging” (Desmarais, 2024). With the recent victories and improving numbers for the far right in Europe, it is possible that AI is aiding their attempts to gain political control - that is to say, the technology is making it easier to disseminate misinformation and disinformation, or to upload other harmful and divisive content.
It would appear, therefore, that there is disagreement in this field as to who can spread misinformation. The position I take sits somewhere between the two perspectives - that is to say, misinformation/disinformation campaigns can only be conducted effectively by organised groups, as demonstrated by the aforementioned foreign interference and the far right’s use of AI. However, as the previously mentioned Southport incident showed, misinformation can also be spread by individual X users who have either rushed to a conclusion or been misinformed themselves. The user shares the fake news, which is then seized upon by bigger fish in the sea of X - social influencers or political figures and activists.
On the topic of future elections and the influence that AI will have, there are two possibilities that should be cause for concern. The first is that AI will continue to improve; the second is the idea of AI relationships.
Regarding AI’s continual growth and development, we will witness ever more realistic and convincing deepfakes. Hamilton (2024) reports that the Centre for Countering Digital Hate (CCDH) conducted a study on deepfakes, showing that “[i]n 193 of the 240 test runs, or 80 per cent, they created convincing voice clones [of prominent UK politicians, Presidents Trump and Biden, Vice-President Harris, President Macron, and others]”. The key point here is that the voices were convincing - meaning that many people, particularly the technologically unsavvy, will have difficulty distinguishing fact from fiction. But as mentioned in the previous section, convincing as these clones are, the quality of deepfakes may not yet be at the level required to deceive en masse - though that day is fast approaching.
What is most worrying, however, are the messages that these voices deliver. “The voices of Starmer, the Labour leader, and Sunak were cloned to produce statements that warned there had been ‘multiple bomb threats’ so voters should not go to the polls” (Hamilton, 2024). This would most certainly have an effect on our democracies here in the West, given the possibility that voters may not show up at polling booths out of fear for their lives. In fact, something similar has already happened. According to Harris (2023a), during the Slovakian election of 2023, “there was a deep fake audio recording of a political leader seemingly plotting corruption two days before a very tight election”. As a result, his pro-Russian opponent, Robert Fico, went on to win (Conradi, 2023). It is quite possible that in future elections, enemy states will follow this example - tactically releasing AI-generated disinformation very close to election day to influence voters’ choices.
This method would see the false information go viral in the target state, leaving the target’s defences reeling in their attempts to quash the disinformation. Most concerning here is that fighting back and dispelling the fake news will be extremely difficult - many people will already have been affected by the disinformation, and may be deterred from voting for the targeted candidate.
Another way in which AI could influence future elections is through one-on-one relationships. “I don't have any evidence this is being used, but I would be astonished if this isn't being explored is not trying to send messages to a very large group, but instead trying to influence a target audience by establishing a great many direct one-to-one relationships with that audience” (Miller, 2023). With the normalisation of chatbots in our daily lives - ChatGPT and Snapchat’s My AI being examples - this does seem a strong possibility. Perhaps the worst-case scenario would be the development of a maliciously programmed chatbot by a hostile state or an extremist organisation, with the aim of radicalising and poisoning minds. The target would most likely be the younger generation, who would be comparatively easy to influence given that the adolescent population is becoming ever more lonely (Twenge et al, 2021). Moreover, the younger generations have been brought up alongside technology, and it plays a tremendous role in their lives. We have seen similar instances in the West, where lonely young Muslims have been radicalised by the teachings of Islamic fundamentalism, and other young people by the far right (Abbas, 2023; Miller, 2018). I would, therefore, agree with Harris’ remark that “loneliness might be our biggest national security threat” (2023a).
One of the ideas put forth by Professor Ryan was the question of what constitutes legitimate and illegitimate interference. “There’s a debate as to what is legitimate and what is illegitimate interference” (Ryan, interview with author). Given that this article has just discussed the ways in which illegitimate interference occurs, the following paragraphs will discuss legitimate interference - what foreign heads of state can do, and where the boundaries lie.
There are some who will argue that any interference is illegitimate - that is to say, foreign politicians should steer clear of meddling in the affairs of another country. But in politics, there are subtle ways of interfering in an election. For example, should the US President and the UK Prime Minister enjoy a good working relationship and personal friendship, they can assist one another - something that has become more and more common in recent decades (Giacomello, Ferrari, and Amadori, 2009).
As a result, friends will assist one another in achieving their goals - for their countries may be aligned in desires and, more often than not, in values. An example of this would be the decision by the then UK Prime Minister, David Cameron, to invite President Obama over to help the Remain side in the EU Referendum of 2016. Where President Obama acted inappropriately, however, was in stating that the “UK [was] going to be in the back of the queue” should Leave win (as quoted in BBC News, 2016). In saying this, President Obama was trying to scare undecided voters out of voting for Leave, thereby interfering in the outcome of the vote. Given that the Leave campaign won, the attempt failed to sway voters. It should therefore be noted that whilst President Obama did not illegitimately interfere in this scenario, his behaviour was inappropriate.
This section will look into the measures taken by governments and political organisations from around the world to combat misinformation, disinformation, and AI misuse, followed by a look into the challenges that governments still face with regards to these issues, and then an analysis of the measures taken by social media companies.
Governments and political organisations from around the world have deployed a number of measures to combat misinformation/disinformation and the abuse of AI technologies.
One of the ways in which disinformation, AI misuse, and direct attacks upon electoral processes have been countered is through increased funding. “Since the 2016 election and the federal government’s decision to add the nation’s voting systems to its list of critical infrastructure, Congress has sent $995 million to states for election administration and security needs” (Cassidy, 2024). Although, according to a report by The Select Committee On Intelligence (2020), “[d]espite the focus on this issue since 2016, some of these vulnerabilities remain” (p. 4). This is further supported by Professor Ryan, who said that “the issue is investing and sustaining the democratic infrastructure, it’s not that new technology is bursting out of control” (Ryan, interview with author).
Whilst I would contest the latter part of this statement - for new technologies will accentuate existing faults and gaps within the infrastructure - what he says supports both Cassidy (2024) and The Select Committee On Intelligence (2020): that is to say, governments must increase their funding for defence against disinformation distributors and against those seeking to sway elections with AI or any other technological means.
Regulation has also been implemented by governments and organisations in an effort to counter both AI misuse and misinformation/disinformation. With regards to AI legislation, the US and the EU have both taken steps to regulate AI - although the EU has gone one step further, introducing, in early 2024, an AI Act classifying “products according to risk and adjusting scrutiny accordingly” (McCallum, McMahon, and Singleton, 2024). This is the first “comprehensive” piece of AI legislation (McCallum, McMahon, and Singleton, 2024), although the US has been implementing measures too - albeit more slowly and less stringently. In America, “the US Federal Election Commission announced that it had launched a rule-making process on AI deepfakes for political purposes. Election law already prohibits the fraudulent misrepresentation of candidates and campaigns” (Hawes, Hall and Ryan, 2023, p. 4). Moreover, “[i]n October 2023, US President Joe Biden announced an executive order requiring AI developers to share data with the government” (McCallum, McMahon, and Singleton, 2024). Despite these pieces of regulation - minor in comparison to the EU’s - Engler (2023) reports that the US has taken the stance of investing in infrastructure that “[mitigates] AI risks” over major regulation.
However, it must be noted that whilst the United States has not been as hands-on with AI as the EU, the State of California has taken some action. “Earlier today, the Governor announced signing legislation to protect the digital likeness of actors and performers by ensuring that AI is not used to replicate their voice or likeness without their consent” (Newsom, 2024). A related safety bill would “make open-source AI more difficult to the extent that a jury would find releasing models with dangerous capabilities without restriction to not exercise reasonable care”, due to the fact that safeguards, at present, can be removed with relative ease (Turner and Turner Lee, 2024). This idea was echoed by Imran Ahmed, who said earlier this year that “AI companies could fix [the deepfake problem]... with tools that block voice clones that resemble particular politicians” (as quoted in Hamilton, 2024). As a result of such policies, visual and audio deepfakes should become much harder to produce, which, in turn, would minimise the existing threat that AI poses in this sphere.
With regards to misinformation and disinformation, the UK has introduced some legislation in an attempt to counter them. “In the context of misinformation, section 71 of the 2023 Act requires category 1 services… to ensure that they have adhered to their own terms and conditions. For example, removing misinformation or disinformation content that meets the thresholds set out in their own policies. Ofcom has enforcement powers including issuing fines of up to £18 million or 10% of a company’s worldwide revenue (whichever was higher), as well as business disruption measures. It also empowers Ofcom to require the largest service providers to publish annual transparency reports. Ofcom would be able to specify the information service providers included in these” (Tyler-Todd and Woodhouse, 2024). But the new Labour Government may go further in combating misinformation, with ministers “looking [to introduce] a duty on social media companies to restrict “legal but harmful” content” (Gibbons, 2024).
Gibbons (2024) goes on to say that “[a] ‘legal but harmful’ clause, requiring firms to take down or restrict the visibility of content deemed to be dangerous but not against the law, was included in the original Bill brought forward by the Tories in 2022”, though this was removed over concerns for free speech. Therefore, the question that I put to the Labour government, and which it must answer, is: what constitutes “harmful”? For a step towards prosecuting speech is a step towards an oppressive and totalitarian government. This, in my eyes, would be catastrophic - for we must remember, before tackling an issue, what the West stands for.
It should also be noted that the UN has followed the proposal put forth by Whitfield (2020), which recommended “[t]he establishment by the UN Secretary General of an Advisory Group on AI co-operation” (p. 63). The report published by the Advisory Body on Artificial Intelligence (2023) states that the body “was formed to analyse and advance recommendations for the international governance of AI” (p. 4). This, I believe, is an important step in the right direction - despite it taking three years for such a body to be established. The UN must now, however, start alerting member states to the potential hazards that lie ahead.
Regarding the difficulties that governments still face, one is the fact that AI is constantly improving. “Part of the challenge is achieving clear, consistent and impactful application of terms and conditions in relation to novel content. Part is about resources: keeping up with the quantity of misleading material, which tends to peak before elections” (Hawes, Hall and Ryan, 2023, p. 5). This implies that in the months leading up to elections in the UK, the US, and other major democracies across the globe, the amount of misinformation/disinformation will increase, driven by desires to promote or tear down a politician or political party, whether from domestic or foreign actors. Moreover, according to DiResta (2023), “generative AI was available, but in a very limited sense in 2020. It was not as sophisticated as it is now, and I think far fewer people were aware of its potential in 2020”. With the last presidential election for Americans being in 2020, this year will mark a great leap forward in AI technologies, and one cannot begin to imagine the levels it will reach come the next election in 2028. Given this, one might feel sympathetic towards governments confronting such rapid advances. But according to Simon, McBride, and Altay (2024), “[t]he motivations to blame technology are plenty and not necessarily irrational. For some politicians, it can be easier to point fingers at AI than to face scrutiny or commit to improving democratic institutions that could hold them accountable”.
Therefore, we must start to hold lethargic governments accountable - those unwilling to put in the hard work necessary to regulate AI and improve democratic infrastructure. But we must also acknowledge that this will require funding and cooperation between governments, and with big tech companies as well.
On the subject of social media, it must be noted that some measures have been taken to combat misinformation/disinformation. If we take X’s Community Notes as an example, we see that although its intentions may be good and it does correct falsehoods, its effectiveness is questionable. “Most people within the industry are sceptical as to the efficiency of measures taken by social media companies… Community notes have an effect - but we don’t know who it has an effect on exactly nor the strength of its effects” (Ryan, interview with author). There is, therefore, some disagreement over the efficacy of the measures that have been implemented.
On the one hand, according to Wirtschafter and Majumder (2023), “users do not approach accounts that receive Community Notes on their tweets more skeptically in subsequent interactions” (p. 6). This finding - that users are no more sceptical of certain accounts even after seeing Community Notes corrections - highlights the divisive times we live in, where speaking the truth is irrelevant and citizens instead seek sources that support their worldview, even ones proven to be false. DiResta (2023) has also questioned the efficacy of Community Notes, saying that a “rumour is going to go viral before the correction appears, even if it's not a correction from a journalist or a fact checking organisation, which are inherently distrusted by half the American population, even if it is provided as a correction through Community Notes”. It would appear, judging from this point, that social media companies must find a way of either speeding up the correction process or preventing the misinformation in advance. The question raised by the latter suggestion is: how can we prevent misinformation without infringing upon civil liberties - that is to say, without taking away people’s voices? For this would almost certainly require banning individuals from a platform.
On the other hand, there is some evidence that Community Notes are somewhat effective. According to a study by Drolsbach, Solovev, and Pröllochs (2024), with a politically balanced sample of 1,810 participants, over 60% of Americans trust Community Notes (68% of Biden supporters, 58% of Trump supporters). This would indicate that a majority of Americans have trust in the Community Notes measure. Moreover, it must be remembered, as pointed out in the same article, that “people trust information sources more if they perceive the source as similar to themselves” (Drolsbach, Solovev, and Pröllochs, 2024, p. 2). Given that this study was only published very recently, and that it contradicts the previous point, further studies should be conducted to better understand the issue. But it would appear that Community Notes do have an effect; the only issue is that we do not know the extent of their efficacy.
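The cross-partisan trust figures above connect to how notes are scored in the first place. X’s published Community Notes ranking is based on matrix factorisation: a note only counts as “helpful” once the helpfulness explained by rater viewpoint has been stripped out, meaning it must be endorsed by raters who usually disagree. The Python sketch below is a heavily simplified illustration of that bridging idea - toy data, a single latent viewpoint dimension, invented parameters - not X’s production system:

```python
import numpy as np

# Toy ratings: (user, note, rating), where 1 = "helpful", 0 = "not helpful".
# Users 0-1 and users 2-3 form two opposed camps; note 0 is rated helpful by
# everyone, note 1 only by one camp.
ratings = [(0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),
           (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0)]
n_users, n_notes = 4, 2

rng = np.random.default_rng(0)
mu = 0.0
user_bias, note_bias = np.zeros(n_users), np.zeros(n_notes)
user_fac = rng.normal(0, 0.1, n_users)  # one latent "viewpoint" dimension
note_fac = rng.normal(0, 0.1, n_notes)

lr, reg = 0.05, 0.02  # learning rate and regularisation (arbitrary toy values)
for _ in range(2000):
    for u, n, r in ratings:
        pred = mu + user_bias[u] + note_bias[n] + user_fac[u] * note_fac[n]
        err = r - pred
        mu += lr * err
        user_bias[u] += lr * (err - reg * user_bias[u])
        note_bias[n] += lr * (err - reg * note_bias[n])
        uf, nf = user_fac[u], note_fac[n]
        user_fac[u] += lr * (err * nf - reg * uf)
        note_fac[n] += lr * (err * uf - reg * nf)

# A note's score is its intercept: the helpfulness left over once viewpoint
# alignment is explained away by the factors.
print(note_bias)  # note 0 (cross-camp agreement) scores higher than note 1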
This section will discuss public perceptions on the topics of AI and on misinformation/disinformation.
With regards to AI, public opinion leans negative. According to global averages, 74% believe that “[a]rtificial intelligence is making it easier to generate very realistic fake news stories and images” (Dudding, 2023, p. 5), while only 15% disagree with the idea that “[a]rtificial intelligence will make misinformation and disinformation worse” (Dudding, 2023, p. 6). Furthermore, in the United States, “73% of Americans believe it is “very” or “somewhat” likely AI will be used to manipulate social media to influence the outcome of the presidential election – for example, by generating information from fake accounts or bots or distorting people’s impressions of the campaign” (Rainie and Husser, 2024).
The consequences of these opinions are potentially monumental. “[T]he perception, partially co-created by media coverage, that AI has significant effects could itself be enough to diminish trust in democratic processes and sources of reliable information, and weaken the acceptance of election results” (Simon, McBride, and Altay, 2024). With much talk of foreign interference, as detailed earlier, and its utilisation of technologies including AI, there appears to be a lot of substance in this argument. But according to Dame Wendy, by talking extensively about the potential issues that may arise with ever-developing AI technologies, problems are discussed in depth and raised well in advance. “There haven’t been a lot of AI labelled catastrophes. I think that’s partly like the Y2K [issue], people had talked about it, the media were aware of it, and I think [the issue] is all part and parcel of preparing ourselves for future elections” (Hall, interview with author). I would agree with this comparison, but would add that over-discussion by media companies would bring a problem of its own - an overly worried population would result in social disharmony.
On the matter of misinformation and disinformation, many people are unable to tell fact from fiction online. “There was often a gap between people’s confidence in being able to recognise advertising, identify a scam message or judge the veracity of online content, and their ability to do this when shown examples” (Ofcom, 2022, p. 1). Moreover, “[a] third of internet users were unaware of the potential for inaccurate or biased information online; 6% of internet users believed that all the information they find online is truthful and 30% of internet users don’t know – or don’t think about – whether the information they find is truthful or not” (Ofcom, 2022, p. 2). And “[a]lthough seven in ten (69%) adult internet users said they were confident in judging whether online content was true or false, most were actually unable to correctly evaluate the reasons that indicate whether a social media post is genuine” (Ofcom, 2022, p. 13).
Continuing on the topic of misinformation, but with an emphasis on elections, high proportions of voters are worried. According to a study conducted by Ipsos Mori, “[87% of people are] worried about the impact of disinformation on the upcoming elections in their country” (Quétier-Parent, Lamotte, and Gallard, 2023, p. 7). Moreover, “85% express concern about the impact and influence of disinformation on their fellow citizens” (Quétier-Parent, Lamotte, and Gallard, 2023, p. 7). Both this paragraph and the previous one highlight that people are ill-equipped to distinguish fact from fiction, yet are, at the same time, fearful of their fellow citizens’ capacity to tell what is real and what is fake. This was perhaps best put by Tristan Harris (2023b), who said that “[people are becoming] simultaneously less informed while the world's issues are getting more complex”. That is to say, people are reading misinformation, taking it to be true, and subsequently becoming less informed on issues - even as those issues grow ever more complex.
With regards to social media and their tackling of misinformation, the general public are not impressed. In the United Kingdom, “[s]even in ten Britons say social media networks did a bad job at tackling misinformation during the 2024 riots” (Smith, 2024). Seven in ten also believe that social media companies are not regulated tightly enough (Smith, 2024).
Given these recent findings, it is fair to say that the general public want to see improvements in this field - most likely because of the division being sown, the consequences of which include violent riots, as exemplified in the aftermath of the Southport attacks. Whilst I believe that some regulation may be required to combat this, I do not believe it is the sole answer to the crisis.
This section will propose some suggestions as to how we might minimise the effects of AI on misinformation and disinformation, and so improve our election processes.
1. I would recommend that we take note of Governor Newsom’s AI safety legislation, debate and improve upon it, and then implement a similar policy. This would significantly decrease deepfakes of politicians, which have the potential to influence and sway voters. Should any deepfakes of politicians be made without their consent, legal penalties would be advised.
2. In addition to the previous point, I would recommend further studies into the efficacy of social media’s measures for combating AI-driven misinformation and disinformation. Pressure must also be applied to social media platforms over their role in the dissemination of misinformation and disinformation. Though, I would add, we must be careful as to what is deemed misinformation and what is not - so that we still allow debate, and judge issues on facts, not emotions. If the measures prove insufficient, and if social media companies are reluctant to change their algorithms or clamp down on extremist figures, stricter regulation may be required. I would add that any changes to the algorithms must not, in any way, be politicised - that is to say, they must not promote a conservative, liberal, or leftist bias. They must remain neutral.
3. I would also recommend that we educate the electorate on how to spot misinformation and disinformation. As shown in the Public Perceptions section, the population is neither confident in their compatriots’ ability to distinguish fact from fiction, nor capable of doing so themselves. Education would also introduce a healthy scepticism towards what they read, see, and hear - limiting the efficacy of a deepfake, or of an email posing as official when it actually comes from a hostile source.
4. Finally, I would recommend an idea put forward by Dame Wendy: introducing an AI Licence and an AI Code (interview with author). That is to say, we install a system in which dangerous misuse of AI, much like drink-driving, is deemed illegal and punishable by legal action. Moreover, a code teaching people the ways in which AI should be used ought to be established - a common body of knowledge, much like the Highway Code. And finally, makers of AI systems and AI content creators would be required to hold a licence - to ensure that AI is used both legitimately and safely.
This article set out to analyse the extent to which AI will affect misinformation and disinformation in future democratic elections. To do so, the article first analysed the history of misinformation, disinformation, and AI, and their impact on elections. This was followed by an examination of the ways in which governments and political organisations from around the world have been combating misinformation, disinformation, and AI misuse. Public perceptions were then analysed, followed by a list of policy recommendations.
This article has highlighted many important issues with regards to misinformation, disinformation, AI, and elections. Most notably, it has pointed out the worst time for a disinformation attack to happen - right before an election. Furthermore, it has analysed existing and potential threats regarding AI and misinformation/disinformation - such as the possibility of one-to-one relationships between AI chatbots and individuals. And finally, this article has explored the efficacy of social media countermeasures against misinformation, namely X’s Community Notes.
As listed in the Policy Recommendations section, I put forth some ideas to counter the ever-growing threat of AI to misinformation, disinformation, and democratic elections. First, we acknowledge, study, and further debate the AI safety bill put forth by the State of California in September 2024, before implementing something similar at a national level. Second, we study the efficacy of the measures put in place by social media companies, while holding them accountable for the content on their platforms - changing the algorithms and ceasing to promote extremist figures. As stated, this must be done with neutrality - everyone, regardless of their politics, held to the same account. Third, we educate the electorate - teaching them how to spot misinformation/disinformation, which in turn introduces a healthy scepticism towards what they read, listen to, and see. And finally, we introduce an AI Licence and Code - similar to a driver’s licence and the Highway Code. This would allow the general population to reap the benefits of AI responsibly, and give them some accountability as well.
In the last analysis, governments from around the world must act on AI now. Its capacity to dispense misinformation and disinformation will be revolutionary. I suggest these policies in an attempt to cover all fields - accountability for governments, for social media companies, and for the people. AI has the capacity to greatly improve the lives of the citizenry; it is therefore necessary to maximise its potential and diminish its hazards. To paraphrase President John F. Kennedy, mankind must take control of AI, or AI will take control of mankind.
Abbas, T, (2023), Conceptualising the waves of Islamist radicalisation in the UK, Journal of Contemporary European Studies
AI Advisory Body, (2023), Governing AI for Humanity, United Nations
Bontridder, N, and Poullet, Y, (2021), The role of artificial intelligence in disinformation, Data & Policy, 3, e32
Cassidy, C.A, (2024), Election officials in the US face daunting challenges in 2024. And Congress isn’t coming to help, Available at: https://apnews.com/article/election-2024-congress-funding-security-ballots-331bcd11694e702efbee6f31bee03faf, (Accessed: 03.09.24)
Cheshire, T, (2024), Southport attack misinformation fuels far-right discourse on social media, Available at: https://news.sky.com/story/southport-attack-misinformation-fuels-far-right-discourse-on-social-media-13188274, (Accessed: 04.09.24)
Cheyfitz, E, (2017), The Disinformation Age: The Collapse of Liberal Democracy in the United States, Routledge
Collier, K, (2023), Iran-linked hackers broke into election results website in 2020, general says, Available at: https://www.nbcnews.com/tech/security/iran-linked-hackers-broke-election-results-website-2020-general-says-rcna81304, (Accessed: 19.08.24)
Conradi, P, (2023), Was Slovakia election the first swung by deepfakes?, Available at: https://www.thetimes.com/world/russia-ukraine-war/article/was-slovakia-election-the-first-swung-by-deepfakes-7t8dbfl9b, (Accessed: 02.09.24)
Corera, G, (2024), What to know about US election hacking, Iran and other countries, Available at: https://www.bbc.co.uk/news/articles/c86ljyv38vxo, (Accessed: 26.08.24)
Dame Adjin-Tettey, T, (2022), Combating fake news, disinformation, and misinformation: Experimental evidence for media literacy education, Cogent Arts & Humanities, 9:1, 2037229
De Ridder, J, (2021), What's so bad about misinformation?, Inquiry
Desmarais, A, (2024), Far-right, including France’s National Rally, use AI to support political messaging, reports say, Available at: https://www.euronews.com/next/2024/07/04/far-right-including-frances-national-rally-use-ai-to-support-political-messaging-reports-c, (Accessed: 26.08.24)
DiResta, R, (2023), How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller, Interviewed by Tristan Harris, Center For Humane Technology, 21 December
Drolsbach, CP, Solovev, K, and Pröllochs, N, (2024), Community notes increase trust in fact-checking on social media, PNAS Nexus, Vol. 3, No. 7
Dudding, A, (2023), Global Views on A.I. and Disinformation, Available at: https://www.ipsos.com/en-nz/global-views-ai-and-disinformation, (Accessed: 04.09.24)
Engler, A, (2023), The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment, Available at: https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/, (Accessed: 04.09.24)
Giacomello, G, Ferrari, F, and Amadori, A, (2009), With friends like these: foreign policy as personal relationship, Contemporary Politics, 15(2), pp. 247–264
Gibbons, A, (2024), Tech giants will be forced to ban fake news under Labour plans, Available at: https://www.telegraph.co.uk/politics/2024/08/09/tech-giants-forced-ban-fake-news-labour/, (Accessed: 03.09.24)
Hall, Interview with Dame Wendy Hall on 05 September 2024
Hamilton, F, (2024), AI clones of Keir Starmer and PM raise fears of election interference, Available at: https://www.thetimes.com/uk/politics/article/ai-clones-of-keir-starmer-and-pm-raise-fears-of-election-interference-7s2rvz952, (Accessed: 19.08.24)
Harris, T, (2023a), How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller, Interview conducted by Tristan Harris and Aza Raskin, Center For Humane Technology, 21 December
Hawes, B, Hall, W, and Ryan, M, (2023), Can artificial intelligence be used to undermine elections?, Web Science Trust
King’s College London, (2024), A guide to who is voting and when in this historic year for democracy, Available at: https://www.kcl.ac.uk/a-guide-to-who-is-voting-and-when-in-this-historic-year-for-democracy, (Accessed: 06.09.24)
Lin, H.C, et al, (2024), AI Disinformation Attacks and Taiwan's Responses during the 2024 Presidential Election, Thomson Foundation
McCallum, S, McMahon, L, and Singleton, T, (2024), MEPs approve world's first comprehensive AI law, Available at: https://www.bbc.co.uk/news/technology-68546450, (Accessed: 06.09.24)
Miller, C, (2023), How Will AI Affect the 2024 Elections? with Renee DiResta and Carl Miller, Interviewed by Tristan Harris, Center For Humane Technology, 21 December
Narayanan, V, Howard, P.N, Kollanyi, B, and Elswah, M, (2017) Russian involvement and junk news during Brexit, The computational propaganda project, Algorithms, automation and digital politics
Newsom, G, (2024), Governor Newsom signs bills to combat deepfake election content, Available at: https://www.gov.ca.gov/2024/09/17/governor-newsom-signs-bills-to-combat-deepfake-election-content/, (Accessed: 28.09.24)
Obama, B, (2016), as quoted in BBC News, (2016), Barack Obama says Brexit would leave UK at the 'back of the queue' on trade, Available at: https://www.bbc.co.uk/news/uk-36115138, (Accessed: 03.09.24)
Ofcom, (2022), Adults’ Media Use and Attitudes report, Ofcom
Omand, D, (2024), How can information-related threats be addressed in the lead up to the UK General Election? in How is 'fake news' affecting the UK General Election and can anything be done about it?, Available at: https://www.kcl.ac.uk/how-is-fake-news-affecting-the-uk-general-election-and-can-anything-be-done-about-it, (Accessed: 29.08.24)
O’Sullivan, D, (2018), Russian bots retweeted Trump nearly 500,000 times in final weeks of 2016 campaign, Available at: https://money.cnn.com/2018/01/27/technology/business/russian-twitter-bots-election-2016/index.html, (Accessed: 04.09.24)
Oxford English Dictionary, (2024), Artificial Intelligence, Available at: https://www.oed.com/dictionary/artificial-intelligence_n?tab=meaning_and_use#38531565 (Accessed: 15.08.24)
Quétier-Parent, S, Lamotte, D, and Gallard, M, (2023), Elections & social media: the battle against disinformation and trust issues, Available at: https://www.ipsos.com/en/elections-social-media-battle-against-disinformation-and-trust-issues, (Accessed: 04.09.24)
Rainie, L, and Husser, J, (2024), New survey finds most Americans expect AI abuses will affect 2024 election, Available at: https://www.elon.edu/u/news/2024/05/15/ai-and-politics-survey/, (Accessed: 04.09.24)
Ryan, Interview with Professor Matt Ryan on 20 August 2024
Simon, F.M, McBride, K, and Altay, S, (2024), AI’s impact on elections is being overblown, Available at: https://www.technologyreview.com/2024/09/03/1103464/ai-impact-elections-overblown/, (Accessed: 04.09.24)
Smith, M, (2024), Two thirds of Britons say social media companies should be held responsible for posts inciting riots, Available at: https://yougov.co.uk/politics/articles/50288-two-thirds-of-britons-say-social-media-companies-should-be-held-responsible-for-posts-inciting-riots, (Accessed: 04.09.24)
The Select Committee On Intelligence, (2020), Report of the Select Committee on Intelligence, United States Senate, on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Volume 1: Russian Efforts Against Election Infrastructure, with Additional Views, The Select Committee On Intelligence
Turner, J, and Turner Lee, N, (2024), Misrepresentations of California’s AI safety bill, Available at: https://www.brookings.edu/articles/misrepresentations-of-californias-ai-safety-bill/, (Accessed: 29.09.24)
Twenge, J.M, et al, (2021), Worldwide increases in adolescent loneliness, Journal of adolescence, 93, pp. 257-269.
Tyler-Todd, J, and Woodhouse, J, (2024), Preventing misinformation and disinformation in online filter bubbles, Available at: https://commonslibrary.parliament.uk/research-briefings/cdp-2024-0003/, (Accessed: 03.09.24)
U.S. Department of Justice, (2021), Two Iranian Nationals Charged for Cyber-Enabled Disinformation and Threat Campaign Designed to Influence the 2020 U.S. Presidential Election, Available at: https://www.justice.gov/opa/pr/two-iranian-nationals-charged-cyber-enabled-disinformation-and-threat-campaign-designed, (Accessed: 19.08.24)
Whitfield, R, (2020), Effective, Timely and Global: the urgent need for good Global Governance of AI, One World Trust
Wirtschafter, V, and Majumder, S, (2023), Future Challenges for Online, Crowdsourced Content Moderation: Evidence from Twitter’s Community Notes, Journal of Online Trust and Safety, 2(1)
Wong, T, (2024a), Taiwan election: China sows doubt about US with disinformation, Available at: https://www.bbc.co.uk/news/world-asia-67891869, (Accessed: 03.09.24)
Wong, T, (2024b), Taiwan elects William Lai president in historic election, angering China, Available at: https://www.bbc.co.uk/news/world-asia-67920532, (Accessed: 03.09.24)
Wright, G, (2024), Trump warned by US intelligence of Iran assassination threats - campaign, Available at: https://www.bbc.com/news/articles/cd0z2394ey2o, (Accessed: 29.09.24)