Soon after the chemical weapons attack on residents of Ghouta, Syria, in August 2013, analysts, journalists and policymakers scrambled to understand who was responsible. The prospect of American intervention hinged in part on what they discovered, Adam Rawnsley writes in his article for medium.com.

U.K.-based Eliot Higgins had made a name for himself among reporters and conflict analysts for his ability to track weapons in the Syrian civil war using open sources—that is, news reports, social media postings and amateur videos that anyone can access.

Higgins was also looking for answers in the Ghouta attack. It seemed likely the regime of Syrian president Bashar Al Assad was to blame. But then one of Higgins’ correspondents forwarded him some compelling videos. They appeared to implicate Syria’s rebels as the perpetrators of the gas attack.

The videos depicted men with gas masks and flags bearing the logo of Liwa Al Islam, an Islamist militant group in Syria. The men made a point of showing off a specific kind of rocket reportedly used in the chemical attack.

“I’m giving you a heads up, these videos appear to be new and you will have to deal with them,” the correspondent wrote in an e-mail. “I will not speak to the authenticity of this video, but what it shows is obvious.”

But the videos were a hoax—one meant to deflect blame away from the Syrian regime. And that kind of disinformation is becoming more common, and more dangerous, as powerful entities increasingly hijack open-source information.

As Higgins documented, the weapons, timing and publication didn’t line up with known facts about the chemical attacks or Liwa Al Islam’s media outlets. Someone, in other words, was trying to trick Syria-watchers into absolving the Al Assad regime.

The proliferation of cell phones and the Internet has made the work of analysts like Higgins a lot easier. It’s put cameras in the hands of people in virtually every country in the world and given them the means to distribute their videos and pictures.

It’s created a wealth of freely-available data about events across the globe, making conflict analysis at a distance possible like never before.

But it’s not just reporters and analysts who have noticed the power of open sources to shape the public’s understanding of war. More and more, participants in those conflicts are aware of what open sources like social media reveal about them.

Of course, the proliferation of forgery is hardly a recent development. Back during the Cold War, spy agencies such as the KGB used to drop fake letters in friendly dead-tree newspapers in order to get the ball rolling on a disinformation campaign. Maybe you’ve heard about a few.

But today’s media environment has added a new dimension to the fakery game. Using faked pictures and videos on social media, sometimes laced with malicious software, some are trying to piggyback on the popularity of open-source analysis in order to muddy the waters.

Case in point—the July shoot-down of Malaysian jetliner MH17. In the wake of the disaster, eagle-eyed open-source analysts, Higgins among them, managed to trace the Buk missile system responsible for destroying the jet from Russia, into the hands of Russian-backed rebels, to the site of its fateful launch and its quiet slink back into Russia.

The sleuthing became a much-celebrated symbol of the power of open sources, but the reaction from the Kremlin was telling. Faced with evidence that its proxies in Ukraine had committed a terrible crime, Russian officials tried to turn the tables with a social media case of their own.

In what looked like a nod to their open-source nemeses, Russia’s general staff trotted out a video, released by the Ukrainian government on YouTube, that showed the allegedly responsible Buk missile system moving past a billboard for a car dealership.

https://twitter.com/AricToler/status/491308413557538816/photo/1

Russia pushed back on the claims, arguing that a blow-up of the billboard showed an address in Krasnoarmeysk, a town under Ukrainian government control — the implication being that the Buk was Ukrainian, not Russian.

The blow-up was, in all likelihood, a forgery. Aric Toler, a Russia analyst and columnist for Global Voices, dug deeper and discovered the car ad in the billboard was a generic one, not tailored with an address in Krasnoarmeysk.

In fact, Toler noted, Krasnoarmeysk hadn’t even participated in the dealership promotion. Lugansk, where the Ukrainian government argued the film was taken, had.

Sometimes, though, social media fabrications aren’t as amusing. At Christmas 2010, Iran abruptly announced it would execute Habibollah Latifi, a Kurdish activist and political prisoner, prompting protests inside Iran and around the world.

James Miller, managing editor of The Interpreter magazine, was working with a handful of bloggers who noticed images circulating on the Internet featuring Iran’s notorious Evin prison, home to political prisoners and the site of protests against Latifi’s execution.

Contrary to reports from the scene, the images showed Evin on fire. Curious, Miller dug further.

“A reverse image search led to a Website hosted by a server with connections to Hezbollah in Lebanon—and installed some particularly nasty spyware for anyone who followed the link,” Miller tells War Is Boring.

“A security specialist who removed the spyware for me told me that the malware was specifically attempting to gain access to passwords for e-mail, Twitter and Skype, and some security analysts believed it had been created by the Iranian Revolutionary Guard Corps.”

“A hacked email address, Twitter account or Skype channel could easily net someone the real identity of activists on the ground in countries where activism is a death sentence,” Miller says. “So sometimes the pictures aren’t just fake—they’re deadly.”

As worrying as these examples may be, targeted, sophisticated fakes tend to be the exception rather than the rule. The problem for the would-be disinformation artist is that while making a fake picture or video is easy, making it genuinely convincing is much harder.

As a result, most of the fakes about the wars in Syria and Ukraine circulating on social media wouldn’t survive a two-second reverse-image search on Google.
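That two-second check is easy to script. Below is a minimal Python sketch that opens a Google reverse-image search for a suspect photo’s URL. Note the assumptions: the searchbyimage endpoint is the long-standing public URL pattern rather than a documented API and may change, and the image URL in the example is a placeholder.

```python
# A minimal sketch of the "two-second" check: open a Google reverse-image
# search for a suspect photo. The searchbyimage endpoint is the long-standing
# public URL pattern, not a documented API, and the image URL below is a
# placeholder, not a real photo from the conflict.
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    """Open a browser tab with Google's reverse-image results for image_url."""
    webbrowser.open(
        "https://www.google.com/searchbyimage?image_url=" + quote(image_url, safe="")
    )

reverse_image_search("https://example.com/suspect-photo.jpg")
```

If the same picture turns up attached to an older story or a different war, the “new” image is recycled.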

Diligence, however, is for the skeptical, so the impact of fakery registers most strongly among entrenched, sympathetic audiences already primed to believe the worst about their opponents.

Examples abound.

A still from the film Fortress of War

Movies have proved a handy source of fiction for conflict media-fakers. Toler says one image of a weeping child is particularly popular among those sympathetic to Ukrainian rebels.

The image is taken from Fortress of War, a World War II film about the Red Army’s attempt to hold off a Wehrmacht assault on a Belorussian fortress. “Almost every time I look at photos of the weekly pro-separatist rallies in Moscow, I see this image somewhere—either from signs in the crowd or at the fundraising booths,” Toler says.

Russian social media users also circulated photos of a man chewing on what looks like a disembodied arm, claiming Ukrainian troops were engaging in cannibalism with the remains of dead rebels.

The image offered as proof is, in reality, a jovial behind-the-scenes shot of a prop artist on the set of We Are From the Future, a 2008 Russian sci-fi film.

Even legitimate news organizations can get suckered. CNN briefly fell for footage on social media purporting to show a Ukrainian military helicopter shot down in the country’s civil strife.

Screenshot from Russian news portal Rusvesna purporting to show Ukrainian military cannibalism

The video was actually a year old and filmed in Syria, but its airing by such a prominent media organization quickly turned it into fuel for the prejudices of both sides.

“A ton of people on both sides of the Ukraine-Russia conflict were posting the video that CNN put out as proof of either ‘savage separatists’ or ‘weak Ukrainians’ before they put it up,” Toler says.

Imagery from other conflicts, taken out of context and given the right backstory, provides suitably graphic visual fodder for fakers to stoke outrage. Pro-Ukrainian-rebel news and social media outlets have circulated pictures of an Israeli mother and daughter sheltering from rockets, of dead Syrian children and of a morgue in Ciudad Juarez, the cartel violence-plagued Mexican city.

Who’s behind the misinformation? Even simple fakes can be hard to trace back to patient zero. Much of the lower-quality social media fakery starts circulating among faithful online partisans of one side in a conflict.

From there, the images take on a new life as state-friendly or ideological news outlets pick them up and circulate them to a wider audience. The motivation isn’t always clear, and the line between a poster’s credulous enthusiasm for a photo and cynical manipulation of it is hard for observers to discern.

It is clear, however, that governments are looking to play a covert role in social media. “Defectors and leaks from Iran, Syria, China and Russia all say that those governments are actually paying people to troll or muddy the waters,” Miller says—and reports back up his claims.

Earlier this summer, Buzzfeed’s Max Seddon documented a trolling campaign subsidized by the Russian government, which paid bloggers to create sock-puppet accounts on social media and troll the comment sections of Western articles unflattering to Russia.

Some news organizations are trying to stem the tide of fakery about conflicts on social media—or at least make the public aware of them. Ukrainian singer and activist Margo Gontar co-founded StopFake.org, a Website dedicated to identifying and debunking fake imagery and stories about the war in Ukraine.

StopFake has pushed back against the prevalence of forgeries in Russian and rebel media, from MH17 casualties passed off as mass graves of rebels to a fictional pro-Ukrainian toddler militia in Lviv.

Gontar says the site is having an impact. “I’ve seen at least some results. There are fewer boldfaced lies,” she explains. “In the beginning we had 150 e-mails per day” warning the group about fake news items spotted in the wild, she says. “And now we have five.”

But trying to run down every fishy picture floating around on social media can be exhausting. Fortunately, a number of companies make software platforms that can assist in detecting online deception.

“Analyzing the content of communications and messages is really productive but it’s also really time-consuming,” says Matt Kodama, vice president of products at Recorded Future. Recorded Future’s software, funded in part by the CIA’s venture capital arm In-Q-Tel, allows users to discover patterns in large data sets.

“[In] a lot of these hoax networks, all the accounts will pop up all at the same time and they’ll all be following each other,” Kodama says. “But if you really map out how the accounts relate to each other, it’s pretty clear that it’s all phony.”

It’s the metadata, the information about the accounts and the messages, that’s often most revealing, Kodama adds. “That’s often the first filter, to look at the pattern of instantiation of their social media personas and the patterns of relationships and communication between them to see if there’s any basis for giving the benefit of the doubt that these personas actually have some real identity as opposed to a hoax.”
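As a rough illustration of that first filter, here is a minimal Python sketch. Everything in it is invented for illustration—the account names, creation timestamps, follow lists and thresholds are toy data, not any Recorded Future product or interface. It flags a group of accounts that appeared within a narrow window and mostly follow one another.

```python
# A minimal sketch of the metadata heuristic Kodama describes: flag a group of
# accounts that "popped up all at the same time" and densely inter-follow.
# The names, timestamps and follow lists below are invented toy data.
from datetime import datetime, timedelta
from itertools import combinations

# When each account was created, and whom each account follows.
created = {
    "acct_a": datetime(2014, 7, 18, 9, 0),
    "acct_b": datetime(2014, 7, 18, 9, 4),
    "acct_c": datetime(2014, 7, 18, 9, 7),
    "acct_d": datetime(2013, 2, 1, 12, 0),  # an older, organic-looking account
}
follows = {
    "acct_a": {"acct_b", "acct_c"},
    "acct_b": {"acct_a", "acct_c"},
    "acct_c": {"acct_a", "acct_b"},
    "acct_d": {"acct_a"},
}

WINDOW = timedelta(hours=1)  # how tightly the accounts' creation times cluster
MIN_MUTUAL = 0.8             # fraction of pairs that follow each other both ways

def looks_like_hoax_network(accounts):
    """True if the accounts were created close together and densely inter-follow."""
    times = [created[a] for a in accounts]
    if max(times) - min(times) > WINDOW:
        return False
    pairs = list(combinations(accounts, 2))
    mutual = sum(b in follows[a] and a in follows[b] for a, b in pairs)
    return mutual / len(pairs) >= MIN_MUTUAL

print(looks_like_hoax_network(["acct_a", "acct_b", "acct_c"]))  # True: all phony
print(looks_like_hoax_network(["acct_a", "acct_b", "acct_d"]))  # False: organic
```

In practice an analyst would pull these fields from a platform’s API and tune the window and density thresholds, but the shape of the heuristic—creation-time clustering plus mutual-follow density—is the pattern Kodama describes.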

But for everyday reporters, observers and analysts who can’t afford expensive software suites to sort fact from fiction, there’s just one way to safely use open-source information. Be skeptical.

By Adam Rawnsley, medium.com.