Taisa Sganzerla, Afef Abrougui, Ivan Sigal and Filip Stojanovski. Photo by Pernille Bærendtsen

By Ethan Zuckerman, for Global Voices

Taisa Sganzerla, Lusophone editor for Global Voices, leads a panel titled “Fake News is Old News” with the story of a Brazilian cartoonist from a century ago who was repeatedly accused of defamation and blasphemy for lampooning the monarchy, the church and other institutions of his time. She wonders whether there’s a line from blasphemy to the current furore over fake news, and whether accusations of fake news will be used to silence voices, much as earlier accusations of blasphemy and defamation were used to silence speech.

Afef Abrougui, Advox editor for Global Voices, is Tunisian and explains that as someone who grew up in Ben Ali’s Tunisia, she tends to see “fake news” as “government news”. When the protests that ousted Ben Ali began, the government produced news describing them (incorrectly) as violent. As a result, false news feels like an uncomfortable frame to her.

A satirical news website in Tunisia led to its creator being arrested and prosecuted for misinformation, when the project should have been understood as challenging existing media narratives. When we ask governments or commercial social media platforms to handle false news, we run the risk of silencing dissenting and creative voices.

Filip Stojanovski edits Central and Eastern European news for Global Voices, and he’s from Macedonia, the “home” of fake news. Filip notes that journalism has always struggled with truth. The oldest writing we have is state propaganda – Babylonian inscriptions proclaiming the infallibility of political leaders. It has always been a challenge to find and report the truth, especially when untruths are spread by powerful people, even in democratic societies. He tells us that some European languages have a more sophisticated vocabulary for fake news, including the “journalistic duck”, a term that explicitly describes a fabricated news story.

In the former Soviet Union, people grew up under decades of news divorced from morality and the public interest. The cottage industry of fake news in Macedonia around the 2016 US elections came primarily from one small city, where small groups of people built these sites as clickbait to earn ad money from Facebook. Filip sees this as an extension both of state media telling people what’s true and of the advertising industry’s use of fake profiles and accounts to draw people to online advertising.

Taisa asks whether the Macedonian fake news purveyors had any coordination with US or Russian powers – Filip tells us that, thus far, no one has found such ties. There was already a cottage industry of content creation in Macedonia around topics like wellness and American football – it’s likely this was simply a market mechanism: creating particularly outrageous and lively articles to attract clicks.

Ivan Sigal, executive director of Global Voices, has the unenviable task of explaining Russia’s role in fake news. Rather than detailing the structure of troll farms, Ivan wants to explore the psychology of the troll, and specifically of the agent provocateur, based on his eight years in Russia and the post-Soviet states.

Ivan asks us to imagine we’re at a press conference. A man working for a state-affiliated news organization asks a question that is insulting and accusatory toward the panelists. Everyone in the room knows the questioner is untouchable. The message is that nothing said on the panel matters, because he has power and can bully or intimidate anyone speaking truth. My ability to create fear, he’s implicitly saying, is more important than your ability to speak truth.

Now imagine thousands of trolls and agents provocateurs. We know that what they’re saying is not true, but their presence changes the dynamics of the dialogue and destabilizes what we know to be true.

Taisa asks whether we’re late to the game in talking about fake news, given its recent prominence. Filip suggests that silence on issues like this stops progress. Underneath fake news is an assertion of power through the creation of irrelevant or unbelievable content, rather than through censoring and silencing voices. Filip’s group, the Metamorphosis Foundation, has been offering a fact-checking service (factchecking.mk) for political statements. He’s realized that they need to check journalism ethics as well: the service now looks at questions like whether journalists are quoting relevant sources, using multiple sources, avoiding conflicts of interest, and so on.

Afef asks whether we should be asking different questions about the US or the UK – should we be as concerned about fake news as we are about democratic dysfunction or the rise of the extreme right? Is fake news a distraction in this space? Ivan suggests that discussion of fake news directs attention towards outsiders – the Russians or the Macedonians – and away from problems within our own societies. The technological and financial incentives of mass media contribute to this situation as well.

Referencing Ivan’s explanation of troll psychology, Taisa asks whether this psychology is different in a social media age, or just amplified. Ivan explains that the agent provocateur is someone in the room, known to you, which makes the attack very personal. With trolls, you don’t know who you’re dealing with, or whether the attack comes from a person or a bot, so the silencing and dampening effect is at least somewhat different. The agent provocateur seeks to silence a media outlet – the troll farm seeks to distract society, the larger social conversation.

Taisa notes that Facebook is sometimes seen as pro-government media in Turkey because it takes down a great deal of content – LGBT content, content that insults the nation’s founder – at the request of the government. If platforms like Facebook are already doing a poor job of regulating speech, why would we ask them to tackle “fake news” for us?

Filip notes that younger generations are often unaware that seeking truth is the objective of journalism. There’s so much use of media to persuade and make political arguments that it’s possible to think of news primarily as a space of contention between actors.

Taisa notes that this used to be a battle between two parties – news organizations and governments. Now that tech platforms have jumped into the fray, it’s a much more complicated equation. Ivan notes that social media companies already regulate speech through their terms of service. Once social media companies are offered a new category of speech to regulate, we find ourselves in an Orwellian situation where words can disappear from discourse. When Facebook began trying to regulate fake news, Pakistan told Facebook that it saw blasphemy as fake news and asked the company to regulate that category. “Our fundamental rights include the right to be wrong. If we can’t be wrong, we can’t progress.”

An audience member asks whether the problem is that platform companies aren’t held to the same standards as journalism organizations. Afef argues that there are types of content – bullying and rape threats, for example – where we need more action from social media companies. But these systems make a lot of errors: YouTube took down channels documenting the conflict in Syria over concerns about extremist content. There need to be better mechanisms both to manage content and to understand what’s taken down and why.

Filip notes that existing laws apply online as well as offline; the laws that protect us against crime cover both spaces. Many of the problems we face today could be solved not by inventing new laws or regulations, but by applying the laws we already have. Ivan counters that law isn’t necessarily justice or equity. Germany is using regulation and fines to control certain types of expression online – a German minister argues that “freedom of expression ends where criminal behavior begins.” Fair enough, given Germany’s history. But if the Chinese government decides to apply the same logic, we are likely to see a wave of imprisonments based on Chinese definitions of acceptable speech.

Nathan Matias of Princeton asks what qualities of human behavior make us susceptible to these manipulations. At the same time, he wonders about the specific contexts of these conversations about fake news: how much progress can be made globally, and how much needs to happen not at the platform level, but in a specific local context? Afef notes that local contexts are often more restrictive than the rules of these global-scale platforms. Filip notes that the platforms are often highly responsive to local laws, responding not just to governments but to powerful individuals. Issues like copyright are very likely to force content offline, as platform companies care deeply about not losing money. Filip believes we may need to solve these problems through a form of civic education, helping people understand how technosocial systems control speech.

Ivan notes that earlier today, we heard many ideas about how we might design social networks for discourse: privacy first, the ability to choose multiple identities online. Too often, these ideas are at odds with the business model of social media platforms. Because these incompatibilities run so deep, there’s more of a tendency to respond to legal threats than to redesign the values behind these systems. Yahoo has been dealing with these issues since 2005… but there are limits to what changes given the existing business models.

hvale asks about intermediary liability and about Facebook’s experiment with removing media from news feeds in a set of countries. What international organizations might be willing to start a different platform based on social justice principles? Filip observes that reach through social media has been dropping for a long time, and has now dropped much further in Serbia, where Facebook is experimenting with a separate Pages feed. This is a catastrophic change for independent media – solvable only if you pay for ads.

Ivan notes he’s heard Facebook answer this question: the company says it’s there to allow individuals to connect with other individuals – journalism is a tiny fraction of the information shared. How do we have a conversation with Facebook and others about what sorts of platforms we need and how we interact with each other online?

A Global Voices contributor from Myanmar asks about removal of dangerous content. What happens when content urging people to kill each other is allowed to remain online? The question goes unanswered due to time, but remains a worthy challenge as we think about the relationship between citizens and platforms.
