With the body of research growing rapidly, we know a lot more about the mechanics and impact of disinformation than we did a few years ago. We also know more about what we still don’t know. What did we learn this year?
If we zoom in on the term disinformation – just one of the many keywords related to this field – we can see a steady upward trend. According to Google Scholar – the largest database of academic writing – in 2011, disinformation was mentioned in 2 610 academic texts. By 2016 this number had risen to 3 850 and in 2020 it jumped to new heights, peaking at 17 100. This year we’ve seen 16 000 pieces so far, but it’s far too early to tell whether we’ve actually reached peak disinformation or not. Most likely not.
That said, there is still a lot we don’t know and many aspects of the phenomenon remain elusive. At times it feels like nailing jelly to the wall, but we have no choice but to keep trying. Many of the questions that remain unanswered in sufficient detail concern the effects of disinformation on our society, as well as the effectiveness of our responses to it.
We’re bringing you a highlight reel of disinfo-related research published this year. This is neither a comprehensive nor necessarily a representative overview of all the research conducted, but rather our selection of some of the more interesting trends.
Can’t do without COVID-19 (yet)
As expected, a mountain of research papers was written on COVID-19-related disinformation, covering it from all possible angles. It’s quite safe to assume that the huge increase in disinfo-focused research we saw in 2020 was due to the outbreak of the pandemic and the concurrent infodemic, as the World Health Organization dubbed it.
One of the better reading collections on this is a special issue of “Big Data & Society” issued in May that focused on studying the COVID-19 infodemic at scale. It contains articles on the intersection of infodemiology, big data and COVID-related mis- and disinformation.
The collection starts with a useful primer by Kacper T. Gradoń, Janusz A. Hołyst and Wesley R. Moy on the spread of mis- and disinformation as a public health challenge. The authors provide an overview of selected opportunities for applying technology to study and combat disinformation. Machine learning, data- and text-mining, and sentiment analysis being just some of them.
Another interesting piece by Kai-Cheng Yang, Francesco Pierri and Pik-Mai Hui compared the spread of low-credibility content on Facebook and Twitter. What both platforms have in common is the presence of “superspreaders” – a minority of influential users who generate the majority of content (including toxic content). On both platforms there is evidence of coordinated, inauthentic behaviour. Last but not least, the article highlights how inconsistent data-access policies impose limits on researchers’ abilities to study harmful manipulations of information ecosystems.
It came as little surprise that the pandemic also kicked off a tsunami of conspiracy theories that has swept over the globe. The urge to make sense of what’s happening around us is very much human, but not all of us are satisfied with the mundane and technical explanations that underlie most ongoing events. Once down the rabbit hole, it’s hard to get out, as belief in one conspiracy theory often leads to belief in another, and so it goes. Earlier this year, the Swedish Civil Contingencies Agency (MSB) commissioned an insightful report on COVID-19 conspiracy theories, covering the full spectrum from theory and examples to ways of responding. Although it focuses on Sweden, the lessons are applicable internationally.
Coming to possible solutions to disinformation more broadly, Melisa Basol, Jon Roozenbeek and Manon Berriche had a go at unpacking the concept of psychological herd immunity. They assess the efficacy of two prebunking interventions aimed at improving people’s ability to spot manipulation techniques commonly used in COVID-19 disinformation. The authors find that Go Viral!, an online game available in a dozen languages, improves people’s confidence in their ability to spot mis- and disinformation and reduces their willingness to share it with others.
Psychological side of disinformation
2021 was a year when more attention was paid to the behavioural aspects of information manipulation. Diving more deeply into the psychological aspects of disinformation, Gordon Pennycook and David G. Rand – two heavyweights in this field – presented a timely overview of the latest developments in the field. They provide evidence contradicting the common narrative that partisanship and politically motivated reasoning explain why people fall for mis- and disinformation. Rather, they link poor truth discernment to a lack of reasoning and relevant knowledge. In addition, there is a large disconnect between what people believe and what they share on social media, an effect largely driven by inattention.
While much research has been conducted on believing false information, less is known about mistaking true information for false – an equally important aspect. Cornelia Sindermann et al. offer us a glimpse into the latter. They point to using a variety of news sources, keeping informed about recent events, expanding one’s culture-specific knowledge and (re-)gaining trust as ways to help people correctly classify true information.
Psychology doesn’t just help us understand the spread of mis- and disinformation, it’s also one of the keys to curbing it. Sander van der Linden, Jon Roozenbeek et al offer us a number of possible interventions, focusing primarily on corrective (debunking) and pre-emptive (prebunking) approaches. As a bonus, they offer a research agenda of open questions within the field of psychological science that relate to how and why mis- and disinformation spreads and how best to counter it.
To debunk or not to debunk?
It’s quite ironic that some narratives surrounding the effectiveness of fact-checking need fact-checking themselves. Many of you have probably heard the statement that fact-checking can backfire and make people dig deeper into their beliefs, and that it should therefore be avoided. Most stories that make this claim and refer to a source at all point to one of two studies conducted about a decade ago. However, almost none of the later studies have managed to replicate those results.
Nadia M. Brashier et al tested whether the longer-term impact of fact-checks depends on when people receive them. Spoiler alert: it does. The authors found that providing fact-checks after headlines (i.e. debunking) improved subsequent truth discernment more than providing the same information during (i.e. labelling) or before (i.e. prebunking) exposure.
While fact-checking has received a fair share of attention as a potential tool to reduce the spread and negative effects of mis- and disinformation, whether and how fact-checking lessens people’s intentions to share mis- and disinformation on social media is less known. Myojung Chung and Nuri Kim ran two experiments to explore this. They found that exposure to mis- and disinformation along with fact-checking information increased the belief that others are more influenced by the news than oneself. This, in turn, led to weaker intentions to share mis- and disinformation on social media.
Another aspect of fact-checking that is yet to receive due attention is related to the world of advertising. This summer, Jessica Fong, Tong Guo and Anita Rao looked at whether debunking can reduce the impact of mis- and disinformation on consumers’ purchasing behaviour. The results of their study show that it can. This finding is important to both private companies running ads and policy-makers.
Disinformation comes in many shapes and sizes
Research on mis- and disinformation tends to focus disproportionately on its textual forms. At the same time, more and more people prefer information in other shapes and sizes, be it audio, video or image. Earlier this year, Viorela Dan, Britt Paris, Joan Donovan et al. published an extensive overview of the state of play of visual mis- and disinformation, covering topics ranging from the effects of visual mis- and disinformation to the infamous deepfakes.
Regarding deepfakes, Michael Yankoski, Walter Scheirer and Tim Weninger advise us to focus on popular, not perfect, fakes in their research piece on meme warfare. Although sophisticated faked content is a possible threat, they argue that it isn’t the most pressing problem. Instead, the challenge lies in detecting and understanding much more crudely produced and widely available content: memes.
Wrapping it up
Even though tackling the information disorder is a mountain of a challenge, we’re feeling optimistic, as the picture is getting clearer year by year.
Just as with curbing the spread of COVID-19, it’s important that we don’t sit and wait for the final truth to crystallise, but act now, taking into account the best knowledge we have at the moment. We at EUvsDisinfo aim to do just that and keep bringing you the latest on foreign information manipulation and interference. See you in 2022!