The new information chaos: How AI is transforming online disinformation

A N3XTCODER series

Implementing AI for Social Innovation

Welcome to the N3XTCODER series on Implementing AI for Social Innovation.

In this series we are looking at ways in which Artificial Intelligence can be used to benefit society and our planet - in particular the practical use of AI for Social Innovation projects.

Disinformation online, and particularly on social media, has become a major global issue in recent years. Many experts on disinformation, as well as experts in AI, are now warning that AI could take us into a new era of information chaos. Many of these warnings present AI as a future disinformation threat, but AI-generated disinformation is spreading on the internet now, and it’s already having a major impact on our information landscape.

This article will explore:

  • What online disinformation is, how it spreads and why it’s a threat.
  • How AI is already being used to create new types of fake content at scale and entirely automated influence networks.
  • How you can spot AI-generated disinformation and what you can do about it.

What is disinformation?

Before we look into AI, it may help if we explain what we mean by disinformation.

Disinformation is the intentional spreading of false information. Of course this has happened for thousands of years, but for the purposes of this article we’ll focus on disinformation that is shared online. And when we discuss it as a wider global issue, our main concern is with organised disinformation, i.e. disinformation that is spread in a coordinated way, either to reach a specific group or as widely as possible.

That doesn’t mean that everyone who spreads disinformation is spreading it intentionally, far from it. But it does mean those people sharing the disinformation - whether knowingly or not - are acting in the interests of a wider coordinated project. It’s also important to note that we’re not only concerned with people spreading disinformation; a vast amount of online and social media activity is either created or amplified by bots and algorithms.

Disinformation has become a major cause of concern for three reasons:

  1. There have been very high-profile cases where it appears to have had a significant impact on elections and other political events worldwide.
  2. Online culture and discourse has become increasingly toxic for many people, particularly those in vulnerable groups, and many blame disinformation for this.
  3. Social media platforms, and the programmatic advertising platforms that generate their revenue, appear to allow disinformation to be produced and shared at an unimaginable scale.

Two types of disinformation

Disinformation is produced for two main reasons: for profit and for political influence. 

Commercial disinformation is created to generate “engagement bait”: content that creates just enough interest for people to click on it or, even better, share it with their networks. By creating lots of very similar websites or Facebook pages with very similar content, “content farms” can build up traffic that generates revenue through Facebook’s and Google’s advertising platforms. Alongside the content, these disinformation networks also build up networks of inauthentic accounts that like and share each other's posts. This creates further amplification, helping content shared on these networks go viral.

Disinformation is also created for political influence: false political messages, conspiracy theories and fake photos are shared among specific political networks to generate a response and engagement, and either to advance a political agenda or to create confusion and disrupt an opposing one. As with commercial disinformation, posts that provoke a strong response - anger or disgust, for example - do especially well, because they generate more engagement. And as with commercial disinformation, networks of inauthentic profiles are grown and managed to massively amplify the reach and engagement of disinformation content.

These networks are set up by state organisations, political networks and criminal networks in many different places around the world. If you want to find out more, this in-depth investigation by MIT Technology Review shows how the disinformation networks operate.

The important point to note here is that there is virtually no difference between commercial and political disinformation networks; in fact, they are more or less interchangeable. A commercial network can be set up to generate traffic in a particular country with an inauthentic network of accounts, and its sheer size can attract interest from millions of genuine people. If there is a big political event - an election or a scandal, for example - that network can be instantly repurposed to spread political disinformation, if someone is willing to pay for it.

This disinformation industry - and it is an industry - has been developing for many years now. It is very profitable and/or politically useful for some people, so how will AI affect it?

There are a number of ways in which we can see AI already having a big impact - some of which you can probably see in your own social media feeds.

The internet is being flooded with AI-generated wooden dog sculptures

A report by 404 Media found that in recent weeks Facebook has seen a vast number of fake wooden dog pictures, many of which go viral. All are very similar and clearly based on the work of one artist in Scotland.

Why do content farms do this? These images are clearly used to generate traffic. Other seemingly banal images are being propagated by the same methods in the hope of going viral - and therefore generating advertising revenue. AI allows these content farms to produce huge numbers of images that evade plagiarism filters, as well as the platform algorithms that suppress excessive repetition of the same image.
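Platforms commonly catch re-uploads by comparing compact “perceptual hashes” of images rather than their exact bytes. The sketch below - a minimal illustration assuming the open-source Python imagehash library and hypothetical file names, not any platform’s actual system - shows why a light re-encode of the same picture gets caught while a freshly generated AI variant reads as a brand-new image:

```python
# Minimal sketch of perceptual-hash duplicate detection.
# Assumes: pip install pillow imagehash. File names are hypothetical.
from PIL import Image
import imagehash

# Perceptual hashes summarise what an image looks like, not its exact bytes.
hash_original = imagehash.phash(Image.open("wooden_dog_original.jpg"))
hash_candidate = imagehash.phash(Image.open("wooden_dog_candidate.jpg"))

# Subtracting two hashes gives their Hamming distance: a re-encoded or
# lightly cropped copy of the SAME image stays close to the original,
# while a different AI generation of the same subject does not.
distance = hash_original - hash_candidate
print(f"Perceptual hash distance: {distance}")

if distance <= 10:  # threshold chosen for illustration only
    print("Near-duplicate: a repetition filter would likely suppress this.")
else:
    print("Reads as a new image: an AI variant would likely slip through.")
```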

Many will see these images as harmless, and of course there is far more harmful content out there, as we shall see. But there are still reasons to be concerned about this:

  • Firstly, vast quantities of banal content are degrading our information environment.
  • Secondly, the networks and followers built up with this seemingly innocuous content could be instantly refocused to serve other agendas with far more harmful consequences.

AI for voiceovers

AI is increasingly being used to generate life-like voiceovers for social media content. Politicians have used it too: for example, New York Mayor Eric Adams’ administration used AI for thousands of robocalls to New York citizens that sounded like Adams’ voice, but in languages he doesn’t speak, such as Hindi, Cantonese and Spanish.

Other politicians have also had their voices manipulated with AI, for different reasons. In India, a clip of a hit Bollywood song seemingly sung by Prime Minister Modi got 3.5 million views. Although there’s no indication that Modi or his party was involved, it certainly did not do Modi’s profile any harm. In Pakistan, jailed former Prime Minister Imran Khan’s voice was used to generate a speech that was played to his supporters at a rally; it should be noted that the accompanying video labelled the voice as AI-generated.

AI voiceovers have also recently been used in a wave of bizarre conspiracy theory TikTok videos, such as one that claims comedienne Joan Rivers was murdered after revealing that then-President Barack Obama was having a love affair. This is of course complete nonsense, and it’s hard to imagine anyone believing it, but these videos gain huge numbers of views. The organisation that identified them, NewsGuard, traced the generated voices back to ElevenLabs, a company where anyone can sign up and generate voiceovers for free.

AI-generated videos in political campaigns

Deep fakes have been a major issue for some time, and there have recently been many examples of political campaigns using them, albeit mostly transparently - for example, this Extinction Rebellion video of Belgian Prime Minister Sophie Wilmès linking Covid-19 to climate change. Extinction Rebellion was very quick to reveal that it had created the video with deep fake technology. In 2023, the Republican National Committee released a video speculating on what might happen if President Biden were re-elected. Again, this video was labelled as AI-generated.

It should be noted that deep fakes and other AI video technologies are among the few areas of digital media where the computing power and tools required are not yet quick, easy and cheap to use. However, that is changing fast.

In Bangladesh, where an election is scheduled for January 2024, fake videos are being widely used by politicians’ opponents to target and undermine them. A Financial Times investigation found that many of these videos were made with tools such as HeyGen, which can create unlimited AI-generated news clips for as little as $24 per month.

The wider problem with deep fake videos and other inauthentic content is not just the fake content itself, but how it affects all content. It is now commonplace for politicians and other public figures to denounce any damaging content as fake, and it is increasingly hard to prove them wrong.

This confusion can have huge consequences. In Gabon in 2018, a video of ailing President Ali Bongo was denounced by many as a fake designed to reassure the public, and was quickly followed by a failed coup. Ali Bongo was restored to power (although deposed again in 2023). To this day, no one has established conclusively whether the video was genuine or not.

AI influence networks

While AI-generated content is a major concern, AI-generated influence networks add another dimension to the threat. An Australian investigation in December 2023 found over 4,500 AI-generated videos across 30 YouTube channels with over 730,000 subscribers in total. In this campaign, however, it wasn’t just the content that was AI-generated: the network itself was being managed by AI algorithms to exploit YouTube’s own AI-driven amplification algorithms.

With this network, as with many, it was unclear whether it was a state-backed political campaign or a commercial operation. It could equally have been somewhere between the two: a commercial campaign set up with some links to state agencies.

This, then, seems like an early warning of some disinformation researchers’ nightmare scenario: an AI-managed inauthentic network exploiting the AI-driven systems that platforms have in place for managing disinformation. The risk is an escalating arms race between competing AI systems, leading to exponential amplification, with potentially dire consequences for content integrity on social media platforms.

AI solutions to disinformation

Not surprisingly, given the escalating threat of AI disinformation, many companies and organisations are developing AI tools to combat it. Logically, for example, offers a tool trained on known disinformation to rate how likely suspect content is to be disinformation. There are many other tools, such as Adverif.ai, which offers an AI-driven “FakeRank” rating.
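To make the general idea concrete, here is a minimal toy sketch of the supervised approach such tools are broadly built on: train a classifier on labelled examples, then score new content for its likelihood of being disinformation. This is illustrative only - not the actual method behind Logically or Adverif.ai’s FakeRank - and the training examples below are invented:

```python
# Toy disinformation-likelihood scorer: TF-IDF features + logistic regression.
# Assumes: pip install scikit-learn. Training data is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Scientists publish peer-reviewed study on vaccine safety.",
    "SHOCKING: doctors HIDE the one cure THEY don't want you to see!",
    "Central bank raises interest rates by 25 basis points.",
    "Leaked memo PROVES the election was stolen - share before it's deleted!",
]
train_labels = [0, 1, 0, 1]  # 0 = credible, 1 = disinformation-like

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

claim = "EXPOSED: the secret plot the media refuses to cover!"
score = model.predict_proba([claim])[0][1]  # probability of class 1
print(f"Disinformation likelihood: {score:.2f}")
```

Real products presumably differ mainly in scale and inputs - far larger labelled datasets, modern language models, network signals and human fact-checkers in the loop - but the basic pattern of scoring new content against labelled training data is the same.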

So far, most AI tools are either in development or beta, so it’s unclear how effective they are. 

The unresolved question for all of these products is: can you fight AI with AI? Or will more dependence on AI simply escalate the problem further?

How to spot AI disinformation

As AI disinformation gets better, it becomes harder to identify, but there are resources to help you stay informed.
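As one small illustration of what automated detection looks like, the sketch below scores a text snippet for the likelihood that it was machine-generated. It assumes the open-source transformers library and a publicly available GPT-2 output detector model on the Hugging Face Hub; detectors of this kind are known to be unreliable against newer models, so treat the score as one weak signal, never as proof:

```python
# Minimal sketch: scoring text with an open-source AI-text detector.
# Assumes: pip install transformers torch. The snippet is invented.
# Note: this detector targets GPT-2-era text and is a weak signal at best.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

snippet = "Breaking: insiders confirm the shocking story they tried to bury..."
result = detector(snippet)[0]  # e.g. {'label': 'Fake', 'score': 0.97}
print(f"label={result['label']}, score={result['score']:.2f}")
```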

Join the AI for Impact community to share your tools and strategies for fighting disinformation.

Conclusion

AI disinformation is often assumed to be a future threat, but there is ample evidence that it’s already here. Although the patterns we are seeing in AI disinformation are at an early stage of development, we can be sure they will become more widespread very soon.

It remains to be seen how much of a threat it will be to us. Most examples so far have been quite obvious, but we can expect AI-generated information to become ever more ubiquitous.

If politicians and others use AI-generated or AI-augmented content, is it disinformation at all? Certainly many would argue that because they label it in some way, they are not disinforming anyone. However, the line between genuine and inauthentic content is becoming increasingly hard to draw as AI-augmented content becomes more and more common.

That presents some severe and perhaps unexpected risks that we may find hard to manage:

  • Firstly, the line between authentic and inauthentic content is becoming increasingly blurred - voices are enhanced, video interviews are translated, composite images are made - potentially making all content suspect.
  • Secondly, if AI-generated content spreads more widely, we may come to assume that all the information and news we see has a high probability of being inauthentic - and that could further erode public trust and our collective capacity to manage political challenges.

2024 is billed as the world's biggest election year ever with national polls in dozens of countries, including the UK, India and of course the USA. Over 2024, we’ll get a far better picture of how prepared we really are for AI-generated disinformation, and hopefully also how we can build our resilience against it.


Join us in the conversation on our social channels. We discuss the latest developments in technology as they happen!

This article has been realised with the help of
Bundesministerium für Wirtschaft und Klimaschutz
NextGenerationEU