By Christine Schmidt, for Nieman Lab

Sad non-surprise: Young people are not great — actually are kind of terrible — at evaluating digital sources on the internet, according to new research from Stanford. But remember, this isn’t just young people: some of the same Stanford researchers previously found that highly educated people are pretty bad at it too.

Setting aside any and all “ok boomer” stereotypes that could come through this analysis, young people like the ones studied in this research could be a huge part of the next electorate as they come of age for the 2020 election (that’ll be mostly anyone born on or before November 3, 2002). Stanford’s Joel Breakstone, Mark Smith, and Sam Wineburg and a team from Gibson Consulting assessed the digital literacy of 3,446 high school students between June 2018 and May 2019 and found:

  1. 52 percent of students believed a video allegedly showing ballot-stuffing in the 2016 Democratic primaries, when it was actually footage from Russia. Only three of the 3,000+ students surveyed went looking for the source of the video.
  2. Two-thirds of students didn’t see the difference between sponsored content (even when it was labeled as such) and news stories, using Slate’s homepage as an example.
  3. 96 percent of students didn’t think about how a relationship between a climate change website and a fossil fuel company could impact the website’s credibility.

“Nearly all students floundered. Ninety percent received no credit on four of six tasks,” the researchers wrote. “The purpose of this study was to explore whether the intense concern about information literacy since 2016 has had an effect on students’ digital abilities. Are young people today, over three years after our original study, prepared to make choices based on the digital information they consume?” Um…

The students they sampled comprised a representative demographic profile of American high school students (the full report includes an exhaustive explanation of how they reached that sample). The study involved a few assessments; here’s one example followed by results across all of them:

The Evaluating Evidence task gauged students’ ability to evaluate the trustworthiness of a social media post. The post presented students with a video on Facebook from a user named “I on Flicks” and asked students if this video was “strong evidence” of voter fraud during the 2016 Democratic primaries. The video includes four clips of poll workers surreptitiously stuffing ballots into bins. The video is silent, but the captions tell viewers that the clips depict Democratic 2016 primary elections in Illinois, Pennsylvania, and Arizona. The post accompanying the video reads: “Have you ever noticed that the ONLY people caught committing voter fraud are Democrats?” None of this information is true. The clips actually show voter fraud in Russia, not the United States.

Over half of the students in our sample (52 percent) were fooled by the video. These students concluded that the video provided “strong evidence” of voter fraud in Democratic primary elections. A student from a rural district in Ohio took the video at face value: “Yes, it shows video evidence of fraud in multiple different states at multiple different times.” Another student from an urban district in Pennsylvania wrote, “Yes, because the video showed people entering fake votes into boxes. I’m assuming that they were Hillary votes.”

Of the more than 3,000 responses, only three students actually tracked down the source of the video. Unprompted, a student from suburban California skillfully engaged in lateral reading: “Doing some research on the internet, I found the exact same video as shown above. However, the video which I found, which was presented by BBC news, showed that the cam footage had nothing to do with the 2016 Democratic primary elections. Instead, BBC presented the cam footage as that of the Russian 2016 election. According to the announcer of BBC news, the video showed that the people in Russia—not America—were committing voter fraud.”

The researchers suggest some ways to address this failure: “We desperately need research-based approaches to digital literacy instruction.”

And one thing that they think you should stop using right away? “The CRAAP Test (Meriam Library, California State University, Chico, 2010), a ubiquitous tool that appears on scores of college and university websites, is the most widely available example of digital curriculum. This approach instructs students to answer a variety of questions about a site’s currency, relevance, authority, accuracy and purpose (hence, CRAAP). By focusing students’ attention on a single site, rather than teaching them how to consult the broader web to establish a site’s trustworthiness, the CRAAP test is inconsistent with how expert evaluators reached sound judgments.” So yes, the CRAAP test is basically crap.

The full report (and Stanford’s writeup) is available here.