By Claire Wardle, for Nieman Lab

In 2017, I certainly wouldn’t have predicted that the term “fake news” would become so stretched and contorted as to be rendered utterly meaningless, much less weaponized by world leaders. I wouldn’t have predicted that so little concrete action would be taken to mitigate information pollution globally. Nor would I have predicted the scale of coordinated media and platform manipulation.

So, with these caveats, here are my predictions for 2018, each paired with a remedy I wish I could predict would happen.

The term “f*** news” will continue to be peppered into news articles by editors who claim SEO leaves them no choice, and into academic papers by researchers riding the trend in hopes of more grant money. It will appear in government inquiries that want to seem relevant, and it will continue to be weaponized by politicians seeking to undermine the media and, ultimately, free speech.

I wish I could predict that in 2018 most people would use more nuanced terms to describe different types of mis- and disinformation.

Visual disinformation will become much more prevalent, partly because agents of disinformation recognize its power: it fires up emotions instantly, slips past the brain’s critical engagement, and can be consumed directly from the News Feed. Visuals are also much harder to monitor and analyze computationally.

Technology companies are working on solutions to these challenges, and I wish I could predict that this subject would become a global research and technological priority so we might have a comprehensive solution to visual disinformation in the next twelve months.

Computational techniques that allow realistic audio, still images, and video to be automatically manipulated or created are still in their infancy, but reporting on these technologies will begin to have a significant impact on people’s trust in audio and visual evidence. Politicians will claim that negative clips of them were manipulated or fabricated. We won’t see a major successful hoax using this technology in 2018, but we will still spend a lot of time writing about it, stoking fear and potentially jeopardizing people’s trust in audio and visual materials.

I wish I could predict fewer of these types of stories.

Techniques to manipulate platforms and the media will become much more sophisticated. There will not be enough engineers at the technology companies, nor enough reporters at news organizations, assigned to monitor these techniques. Most senior staff will continue to lack a serious understanding of how these systematic disinformation campaigns are damaging their respective industries.

I wish I could predict that technology companies and news organizations would begin to share “intelligence,” becoming much more aware of the consequences of publishing, linking to, or in any way amplifying mis- and disinformation.

Though media companies may not effectively combat disinformation, they will continue to report on it, reaching for headline terms like bots, Russia, cybersecurity, hacking, and fake news to generate traffic. The news industry will keep using these terms without explaining them responsibly, and senior editors will not consider how they might affect the public’s trust in democratic systems and in the media itself. The race for clicks may have unintended consequences at the ballot box.

I wish I could predict more nuanced reporting on disinformation that has considered the potential unintended consequences of these types of stories.

Governments around the world will continue to hold “fake news” inquiries, and some will pass knee-jerk, ill-informed regulation that will accomplish little or, worse, suppress free speech. If a European government passes a well-intentioned law, a regime far away will cite the precedent to pass similar legislation aimed at stifling whatever it decides is “fake news.”

I wish I could predict a truly global regulatory conversation that recognizes the cultural, legal, and ethical complexities of dealing with mis- and disinformation.

Most governments will continue to work independently on information literacy programs, despite the fact that a truly global response to this problem is required. Programs will not fully incorporate materials on the impact of big data, algorithmic power, ethical considerations of publishing or sharing information, or emotional skepticism. The programs will not be future-proofed, as they will not adequately focus on making sense of information in augmented and virtual reality environments.

I wish I could predict a global coalition bringing together the smartest minds, along with the best content creators at companies from Netflix to Snapchat, to create information “literacy” content of global relevance.

Philanthropic organizations will continue to give relatively small grants to independent projects, meaning the scale and global nature of this problem will not be adequately addressed.

I wish I could predict the creation of a significant global fund, supported by donations from governments, technology companies, and philanthropists, and managed by a coalition of organizations and advisors.

Closed messaging apps will become even more prevalent than they are today (their use is already significant in a number of countries in Latin America and the Asia-Pacific region). We will continue to be blind to what is being shared on them, and therefore ill-equipped to debunk the rumors and fabricated content spreading on these platforms.

I wish I could predict that there would be a significant, new focus on studying these apps, and testing experimental methods for effectively slowing down the sharing of mis- and disinformation on them.

Anger at technology companies will continue to rise. Consequently, platforms will be less likely to collaborate.

I wish I could predict greater moves toward transparency, involving more data sharing, independent auditing, and collaborations with trusted academic partners.

I am unapologetic about the depressing nature of these predictions. We’re in a terrifying moment in which our global information streams are polluted with a dizzying array of mis- and disinformation. Politicians are attacking the professional media while building direct connections with citizens through social media. Journalists and platforms are being targeted and manipulated by agents of disinformation who crave, and depend on, the credibility that mainstream exposure confers. Political polarization is creating dangerous schisms in societies worldwide, and the speed of technological advancement is making manipulation increasingly difficult to detect. These are all reasons to be depressed.

It doesn’t have to be this dire. If everything I wish I could predict actually came to pass, we might have a fighting chance. I would love to be proved wrong.

Claire Wardle is strategy and research director of First Draft News and a research fellow at the Shorenstein Center on Media, Politics and Public Policy at Harvard Kennedy School.