Social media being used by governments to spread fake news and manipulate public opinion, finds first comparative study of automated political propaganda worldwide

Social media platforms are being used to support campaigns of political misinformation on a global scale, according to evidence released today by researchers from the University of Oxford’s “Computational Propaganda” project, which studies the effect of new forms of digital propaganda on democracy.

Detailed evidence of manipulation of public opinion is presented in the project’s nine-country case-study report released today, which investigates the use of computational propaganda to sway public opinion and spread disinformation in the United States, Russia, China, Germany, Brazil, Ukraine, Taiwan, Poland and Canada.

In the first systematic attempt to collect and analyze evidence of computational propaganda worldwide, the project team has found significant evidence that many governments are using software ‘bots’ to artificially shape public life, influence voters and defame critics through the dissemination of “fake news”, coordinated disinformation campaigns, and troll mobs that attack human rights activists, civil society groups, and journalists.

Top-line findings from the report include:

  • Automated social media profiles had a measurable influence on information sharing over Twitter during the 2016 United States election. In the key battleground state of Michigan, fake news was shared as widely as professional news in the days leading up to the election.

  • Interviews with US political party operatives, campaign staff, and digital strategists reveal that social media bots have been used to manipulate online discussion around US political campaigns for almost a decade.

  • Sixty percent of Twitter activity in Russia is managed by highly automated accounts, and Russian-directed campaigns have targeted political actors in the United States, Poland, and Ukraine.

  • Disinformation campaigns have been waged against citizens in Ukraine across VK, Facebook, and Twitter. The industry that drives these manipulation efforts has been active in Ukraine since the early 2000s.

  • A significant portion of the conversation about politics in Poland over Twitter is produced by a handful of alt-right accounts.

  • Chinese-directed campaigns have targeted political actors in Taiwan, using a combination of algorithms and human curation. Chinese mainland propaganda over social media is not fully automated but is heavily coordinated.

  • Government responses vary greatly from country to country. In Taiwan, the government has responded with an aggressive media literacy campaign and public fact-checking bots. In Ukraine, the government response has been minimal, but a growing number of private firms are making a business of fact checking and protecting social media users.

The project’s principal investigator, Professor Philip Howard, said: “Social media are a significant platform for political engagement and sharing political news and information, but are increasingly being used by many governments around the world to spread disinformation in order to strengthen social control. In our research, we found significant evidence that political bots are being used during political events like elections to silence opponents and push official state messaging over platforms like Twitter and Facebook. The growing use of computational propaganda as a powerful tool to disseminate fake news and coordinate hate and disinformation campaigns is a worrying trend — by confusing and poisoning online political debate it threatens our democracies, while strengthening the hand of authoritarian states.”

Contact

Samuel Woolley, Director of Research, Oxford Internet Institute, University of Oxford, [email protected]

Prof. Philip Howard, Professor of Internet Studies, Oxford Internet Institute, University of Oxford, [email protected]

Notes for editors

  • The Computational Propaganda Research Project is a European Research Council (ERC)-funded project at the Oxford Internet Institute, University of Oxford, which studies the effect of computational propaganda on democracy. See: http://comprop.oii.ox.ac.uk

  • “Computational propaganda” describes the use of algorithms, automation, and human curation to distribute misleading information over social media networks. ‘Bots’ are software agents that are able to rapidly deploy messages, interact with users’ content, and affect trending algorithms on social media, while passing as human users. Malicious uses of bots include spamming and harassment.

  • The project has undertaken case studies on the state of digital disinformation and political bot usage in the United States, Russia, China, Germany, Brazil, Ukraine, Taiwan, Poland and Canada. The research drew on social media analysis; interviews with victims of attacks and with creators of political bots and propaganda; process tracing; participant observation; social network analysis; and content analysis of media articles.

  • The research team interviewed 65 experts and analyzed tens of millions of social media posts during a number of elections and political crises between 2015 and 2017. (A purely illustrative sketch of one way ‘highly automated’ accounts can be flagged follows these notes.)

  • The research team has previously released evidence on the effect of computational propaganda in the UK’s Brexit referendum.
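
Several of the findings above refer to ‘highly automated accounts’. As a purely illustrative aid, the sketch below shows one simple way such accounts can be flagged: by average daily posting rate. The threshold, function names and data format here are assumptions chosen for this sketch; they are not the project’s published methodology or code.

    from collections import Counter

    # Illustrative threshold: treat accounts averaging more than 50 posts
    # per day as highly automated. This figure is an assumption for the
    # sketch, not the project's published criterion.
    POSTS_PER_DAY_THRESHOLD = 50

    def flag_highly_automated(posts):
        """Return the set of account ids whose average daily posting
        rate exceeds POSTS_PER_DAY_THRESHOLD.

        `posts` is an iterable of (account_id, timestamp) pairs, with
        each timestamp given as a datetime.datetime.
        """
        counts = Counter()
        first_seen = {}
        last_seen = {}
        for account, ts in posts:
            counts[account] += 1
            first_seen[account] = min(first_seen.get(account, ts), ts)
            last_seen[account] = max(last_seen.get(account, ts), ts)

        flagged = set()
        for account, n in counts.items():
            # Use a window of at least one day to avoid inflating the
            # rate for accounts seen only briefly.
            days = max((last_seen[account] - first_seen[account]).days, 1)
            if n / days > POSTS_PER_DAY_THRESHOLD:
                flagged.add(account)
        return flagged

Under this assumed threshold, for example, an account posting 600 times over four days averages 150 posts per day and would be flagged, while a typical human user would not.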

As part of our new country case study series, project members Mariia Zhdanova and Dariya Orlova investigated the use of bots and other false amplifiers in Ukraine.

Abstract:

This working paper examines the state of computational propaganda in Ukraine, focusing on two major dimensions: Ukraine’s response to the challenges of external information attacks, and the use of computational propaganda in internal political communication. Based on interviews with Ukrainian media experts, academics, industry insiders and bot developers, the working paper explores the scale of the issue and identifies the most common tactics, instruments and approaches for the deployment of political bots online. The cases described illustrate the misconceptions about fake accounts, paid online commentators and automated scripts, as well as the threats of malicious online activities. First, we explain how bots operate in the internal political and media environment of the country and provide examples of typical campaigns. Second, we analyse the case of the MH17 tragedy as an illustrative example of Russia’s purposeful disinformation campaign against Ukraine, which has a distinctive social media component. Finally, responses to computational propaganda are scrutinized, including alleged governmental attacks on Ukrainian journalists, which reveal that civil society and grassroots movements have great potential to stand up to the perils of computational propaganda.

Citation: Mariia Zhdanova & Dariya Orlova, “Computational Propaganda in Ukraine: Caught between external threats and internal challenges.” Samuel Woolley and Philip N. Howard, Eds. Working Paper 2017.9. Oxford, UK: Project on Computational Propaganda. http://comprop.oii.ox.ac.uk/. 25 pp.

Read the full report here.

By Oxford Internet Institute, University of Oxford