The record store Grooves advertises their presence on Facebook in their shop window in Somerville, Massachusetts, July 25, 2017. REUTERS/Brian Snyder

By Klára Votavová and Jakub Janda, for Atlantic Council

Online platforms have become the world’s most influential editors-in-chief. According to the Reuters Institute’s 2016 Digital News Report, 51 percent of people use social media as a source of news, allowing these platforms to curate users’ news intake through personalized algorithms.

These platforms have simultaneously gained significant economic leverage: experts estimate that in 2016, the two most influential online platforms, Google and Facebook, controlled around three-fourths of the US online advertising market and were responsible for up to 90 percent of its growth. Traditional media companies, meanwhile, are struggling to define a new business model amid lower advertising revenues and declining readership. This has negative consequences for the quality and independence of the media, and can accelerate the spread of low-quality news, disinformation, and hate speech. Given the challenge of shoring up quality journalism, Western governments must now create a regulatory framework that forces online platforms to take greater responsibility for their content.

Attempts to do this have already started. Owing to the United States’ strong freedom of speech protections, the European Commission is the world’s strongest regulator of online platforms’ conduct. So far, it has proposed two initiatives that strengthen platforms’ content responsibility: the Audiovisual Media Services Directive reform and the Code of Conduct on Fighting Illegal Hate Speech Online. These two initiatives aim to enforce the quick recognition and removal of illegal hate speech by online platforms. The Commission also wants to create a level playing field between newspaper publishers and platforms in negotiations over royalties by adding a dedicated right for publishers to European copyright law. The German government has adopted an even stricter position, proposing a law that allows platforms to be heavily fined if they do not remove illegal content quickly enough.

There are several shortcomings to these regulatory approaches. First, any regulation of online platforms threatens to sow uncertainty in the digital business environment and complicate the position of small media companies and startups. Second, such regulation may also decrease the public’s access to information and possibly suppress freedom of speech. This is especially worrying since the current EU definition of hate speech is broad, and current policies encouraging the quick removal of illegal content do not prioritize free speech. Existing media laws are already being used by people in power to silence their opponents, and the term “fake news” has been quickly taken up by politicians aiming to discredit the media. It is too easy to imagine a government using the blurred fake news/hate speech discourse to censor inconvenient views.

Using precise definitions of hate speech in legislation is the only way to avoid this situation. Furthermore, delegating the responsibility to recognize and remove hate speech to the platforms themselves threatens to entrench “privatized governance,” whereby platforms govern the conduct of billions of users through their terms of service and non-transparent editorial practices. Obligations to take down illegal content should therefore be subject to proper judicial oversight, or at minimum to transparency and reporting requirements.

Responding to pressure from governments and civil society that emerged largely after the election of US President Donald Trump, Facebook and Google have proposed measures targeting the malicious effects of online news distribution: partnering with fact-checking organizations, clamping down on advertisements for untrustworthy websites, modifying their algorithms, and launching their own media support projects.

These initiatives have generally been praised by media experts, but remain controversial. Employing human editors or fact-checking organizations to flag authoritative or “low quality” stories will inevitably stir accusations of political bias, since the algorithms used by these firms remain trade secrets and the platforms have defined their value commitments to society only vaguely. Moreover, by enhancing their filtering and editorial features, Facebook and Google may further assimilate themselves into the media environment and become de facto monopolistic media companies on the global market. Finally, online platforms can hardly do enough when their business model is built around serving users content they like; it does not incentivize the production of quality news.

The regulation of online platforms is an extremely complex issue. Nevertheless, given the challenges journalism is facing, it is impossible for Western societies to avoid this discussion. Below are the principles that governments, platforms, private companies, and civil society should follow when trying to mandate online platforms’ responsible behavior in providing news.

Governments:

  1. Examine the use of algorithms by online platforms in order to reveal potential errors and biases; understand to what extent the algorithms are a conscious editorial choice and how this should affect platforms’ liability.
  2. Provide guidelines on the editorial and take-down practices of online platforms. Make sure they are transparent and in line with freedom of speech principles and human rights law. Install dedicated bodies to oversee and report on their conduct.
  3. Properly apply existing legislation on platforms, notably from the realms of copyright, audiovisual, and competition law.
  4. When proposing legislation about hate speech or fake news, develop definitions for these terms that are as specific as possible.
  5. Ensure that platforms install appropriate redress mechanisms that allow users to complain if their content has been unjustly removed.

Platforms:

  1. Be transparent about editorial practices and report them, especially when it comes to taking down content.
  2. Continue partnering with journalists and fact checkers.
  3. Graphically differentiate news content from other types of posts.
  4. Publicly proclaim your intention to support media literacy and your trust in high-quality journalism.
  5. Fund media literacy classes, particularly in those parts of the world that have recently democratized and whose media markets lack an established tradition (e.g., Central and Eastern Europe).

Civil society and the private sector:

  1. Push online platforms toward being transparent about their editorial practices.
  2. Promote a discourse that views fake news and hate speech as “not cool,” like eating unhealthy food.

Klára Votavová is a member of the Kremlin Watch Program at the European Values Think-Tank. She tweets @klaravot. Jakub Janda is head of the Kremlin Watch Program at the European Values Think-Tank in Prague. He tweets @_JakubJanda. This essay has been taken from “Making Online Platforms Responsible for News Content,” a report of the European Values Think-Tank.