By Mike Caulfield, for NiemanLab

Mike Caulfield is head of the Digital Polarization Initiative of the American Democracy Project.

Among the many differences between older social software and post-Facebook social software is the peculiar flatness of the newer platforms. Older tools — recognizing that the user of social software is the group, not the individual — empowered those invested in the health of communities with tools to help keep those communities healthy. Effective social software was oriented not toward the average member of a community, but toward the community’s stewards. That’s why, for example, Wikipedia foregrounds an array of information useful for making quick judgments about editors, edits, and claims on an article’s History tab. It’s why the bread and butter of community blogging systems was different levels of trusted-user status, and why BBS tools showcased moderation features over user capabilities.

Platforms split community management from community activity, and we’re still feeling the effects of that. Wikipedia has a half-dozen different access levels and at least a dozen specialized roles. Twitter has one role: user. But even though formal specialized roles don’t exist, different patterns of influence do, and those patterns have been woefully underutilized in the fight against misinformation.

That’s why my prediction for the coming year is that at least one platform will engage with its most influential users, giving them access to special tools and training to identify and contextualize sources and claims in their feeds. This will allow platforms to split the difference between a clutter-free onboarding for Aunt Jane and a full-featured verification and sourcing interface for users whose every retweet goes out to hundreds of thousands of people, or whose page or group serves as an information hub for users and activists. These tools and training will also eventually be released to the general public, though there they will default to off.

Until recently, most online communities put resources into making sure that those with influence had tools to exercise that influence responsibly, built right into the main interface. It’s time for platforms to follow suit.

And here’s a bonus prediction, this one for online information literacy. Over the past few years, much of the focus in infolit has been on trustworthiness, truth, and bias. While the truth is sometimes clear-cut, and the intentions of those working in media literacy are good, putting these things at the core of any large public initiative can be problematic. Trustworthiness, for example, is often seen through an explicit news agenda, where journalistic processes are treated as a Platonic ideal to which other types of information should aspire. Bias, if anything, ends up being too powerful a tool, allowing students to filter out almost any publication as unworthy of their attention.

For the past several years, we’ve been taking a different tack. We’ve been asking students a simple question: What context should you have before engaging with a particular piece of content? And if you share this content, what context should you provide to those with whom you share it?

While we’ve been doing this for its pedagogical benefits, a recent public project has made me realize that it is an approach uniquely sensitive to community values, and, as such, may provide a starting point for broad educational initiatives. Truth is a battleground, trustworthiness a minefield. Yet even in these divided times, most people agree that one should know the relevant context of what one reads and shares. It’s as close to a universal value as we have these days.

Because these issues will become more salient as these initiatives pursue broader adoption, I predict that online information literacy efforts will begin to pivot from trust as an organizing principle to the reconstruction of missing context.
