It’s Nothing But Hot Air – But How Can We Fight Fake News?

Expert.ai Team - 9 January 2017

The Internet community, politicians, and the media must act together – and technology needs to play its part, too.

“Post-truth” was named word of the year 2016 by Oxford Dictionaries. The choice epitomizes a trend in which objective facts have lost their authority as the reference point for assessing a situation or question. Instead, more and more people seem to rely as much or more on emotions and gut feelings when making complex decisions.

The same can be observed everywhere, from discussions on the pros and cons of childhood vaccinations to far-reaching debates such as election and referendum campaigns: If an expert supports one side of the argument with convincing facts, public opinion won’t necessarily be swayed in that direction – quite the contrary. Many people are suspicious of any position backed by “the establishment,” “elites,” or “experts,” which increases the likelihood that they will reject it.

This growing wariness of experts and their opinions is supported and strengthened by digital media trends. A spectacular fake news story can be invented and published online in minutes, and propagated even faster through social media. Even if the story is later retracted, the retraction is often read by only a tiny percentage of the original audience. In web search results on complex topics, the carefully worded statement of a renowned scientific institution appears in the same size and font as the rant of a vociferous troll who “shouts” from his attic room with all the power the Internet gives him. This is his right, of course, and the importance of the web as a platform where everyone can express and share their opinions cannot be overstated. It is becoming increasingly clear, however, that free speech exercised with no regard for the facts carries real risks – and we don’t yet know how to tackle them.

In a thoughtful article, Tim O’Reilly explores whether digital algorithms could be designed to help us find our way in a sea of competing views. Very pragmatically, he dismisses artificial intelligence (AI) as a potential solution, since AI technology with the required level of sophistication for this task does not yet exist. Instead, he looks for ways to check an expressed view against a few simple criteria: Is a source given for each quote or statement? Does that source actually contain the quote or statement? What kind of source is it? Do other sources exist? If so, are they independent of one another? Skeptical readers could answer many of these questions themselves, but it would take some effort. O’Reilly’s criteria can, however, be tested surprisingly easily with methods already used in bibliometrics, for example. The resulting algorithms would still not be able to evaluate the content of a news post, but computers excel at following links and scoring sources against a given set of metrics, as the sketch below illustrates.
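
To make this concrete, here is a minimal sketch in Python of how such link-following checks might be combined into a score. It is an illustration only – not O’Reilly’s proposal, not an existing product – and the data structure, the weights, and the whitelist of established domains are all assumptions made for the example.

```python
# Illustrative sketch only: scoring a post against O'Reilly-style
# sourcing criteria. All names, weights, and the domain whitelist
# are hypothetical assumptions, not an existing system.

from dataclasses import dataclass
from urllib.parse import urlparse

# Hypothetical whitelist standing in for "what kind of source is it?"
ESTABLISHED_DOMAINS = {"www.nature.com", "www.reuters.com"}

@dataclass
class Post:
    text: str
    cited_urls: list  # links extracted from the post body

def plausibility_score(post: Post) -> float:
    """Return a rough score between 0.0 (no verifiable sourcing) and 1.0."""
    score = 0.0
    if post.cited_urls:                       # Is a source given at all?
        score += 0.4
        domains = {urlparse(url).netloc for url in post.cited_urls}
        if len(domains) > 1:                  # Are the sources independent?
            score += 0.3
        if domains & ESTABLISHED_DOMAINS:     # What kind of source is it?
            score += 0.3
    return score

post = Post(
    text="Researchers report ...",
    cited_urls=["https://www.nature.com/articles/example",
                "https://www.reuters.com/article/example"],
)
print(plausibility_score(post))  # 1.0 - such a post would not be flagged
```

A real system would additionally have to fetch each cited source and verify that it actually contains the quoted statement – the one step in O’Reilly’s list that still requires careful engineering.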

O’Reilly suggests that the resulting score might be used to label posts that appear suspect according to predefined criteria. It is not yet clear, however, where this labelling would occur: Dubious news sites trying to attract visitors with overblown or fake news reports are unlikely to be interested. Platforms like Facebook or Google, which see themselves as content brokers rather than content generators, might also be hesitant. Who wants to antagonize Internet surfers living in their own “social bubbles” with warning messages such as “Caution: possibly not verifiable!”? Trying to counter the flood of fake news with such means would be as futile as challenging all the drunken drivel in every bar in the country, argues Jeff Jarvis in ZEIT magazine.

People will have to become more media-savvy: Just as the majority of Internet users now know not to click on a link in an unsolicited email promising the top prize in a lottery, we will hopefully all learn in time not to pass on suspect news we read on social media. After all, who wants to be seen as the gullible fool who was duped by yet another fake? People might welcome algorithm-based tools such as the plausibility test suggested by Tim O’Reilly, which would save them from painstakingly fact-checking every story themselves.

At the moment, those who use technology to spread fake news in order to gain votes or advertising income still have the upper hand over credulous users, inert Internet providers, and passive politicians. But there is hope that, in the near future, lapses like those in the 2016 US election campaigns will be seen as the high-water mark of an erroneous trend, after which the Internet community turned its back on “post-truth” rumor mongering. It is also to be hoped that the Internet community will be supported by appropriate software tools, and by media companies and politicians taking a stand against the slide of our society into a delirium of truth-twisting and resentment.

So what should happen now? Is it conceivable that all those with an interest in fact-based public debate (the media, the government, and other groups in society) will support a label system like the one proposed by Tim O’Reilly – a kind of plausibility seal for news articles and posts that fulfill certain criteria? Automated, software-based validation by an independent body could become a sign of quality, the kind of certification already in place in other areas of the economy, like child-labor-free clothing or foods containing no artificial colors or flavors. Plugins for browsers or social media apps could – with the user’s permission – scan the Internet in the background to check the credibility of displayed content; a toy sketch of such labelling follows below. Media-savvy users could then stop the spread of tall tales and lies that fooled their more gullible friends: “Yes, I saw that, too, but it comes from that notorious rumor mill XYZ. Three of the people mentioned in the report have already denied it, and anyway, there isn’t a public pool with that name in Boulder…”
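
Continuing the purely hypothetical sketch from above, the core of such a plugin might look like this. The thresholds and labels are assumptions for illustration, not the behavior of any real product.

```python
# Hypothetical plugin logic: label displayed stories by plausibility.
# Builds on the plausibility_score() sketch above; the thresholds and
# label texts are illustrative assumptions, not a real product.

def label_for(score: float) -> str:
    """Map a plausibility score to a user-facing label."""
    if score >= 0.7:
        return "no issues found"
    if score >= 0.4:
        return "sourcing is thin - read critically"
    return "Caution: possibly not verifiable!"

# A feed of (headline, score) pairs as the plugin might assemble them
# after scoring each story's sources in the background.
feed = [
    ("Mayor opens new public pool in Boulder", 0.1),
    ("Study links diet to heart health", 0.9),
]

for headline, score in feed:
    print(f"{headline} -> {label_for(score)}")
```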

Many of the forces that spread discord and confusion for various distasteful reasons have become masters at exploiting Internet technology. It is high time for civil society to oppose them with adequate technical, community-based, and legal means.

 

Stefan Geißler
General Manager Germany
Stefan is Expert System’s General Manager for Germany. As co-founder and managing director of TEMIS GmbH, Stefan managed customer projects in Germany and provided high-level consulting to leading TEMIS customers worldwide. Previously, he led IBM Germany’s text mining activities, where he was responsible for solution design, development, and deployment for corporate customers. His work also included several large research projects in natural language processing, such as KANT at Carnegie Mellon University in Pittsburgh and Verbmobil at the Scientific Center of IBM Germany in Heidelberg. Stefan holds a Master of Arts in Linguistics and Computer Science from the University of Erlangen-Nürnberg (Germany).