Tech has a role to play in fighting today’s increasing barrage of misinformation. However, it’s important that we recognize tech’s limitations, or we risk wasting talent and resources by working under misguided assumptions. On top of that, once we understand these limitations, some promising strategies come to light.
To paraphrase Brandolini's “bullshit asymmetry principle”, the work it takes to refute misinformation is much greater than the work to produce it. It takes me no effort to state something like, “The Senator from Nebraska voted against more than 20 worker safety measures in 2019.”
Is it true? I don’t know. I just made that up, and it took me no effort.
Now how much effort would it take someone to refute a claim like that? Quite a lot. They’d have to carefully read through the bills she voted on in 2019, find the measures related to worker safety, and show that her votes don’t line up with the claim.
Tech vs Man-made Misinformation
In the face of this misinformation asymmetry, it’s natural to consider automated solutions. Even though it takes more work to refute misinformation than to produce it, software can do certain work much faster and much cheaper than human beings. Could some technology — possibly some artificial intelligence — detect and flag misinformation in less time and with less cost than the people who generate it?
I don’t know of any theoretical reason why not. There is no shortage of research and startup initiatives taking this exact approach.
However, as tempting as it is, this approach hits a major limitation.
Tech vs Automated Misinformation
What happens when software is used to detect and flag automated misinformation?
The misinformation asymmetry principle still applies. In fact, it tells us that the work it would take software to refute misinformation is much greater than the work it would take software to produce it. It stands to reason that for any software that detects and flags misinformation, it’s possible to build other software that produces misinformation and is cheaper to build and cheaper to run. [1] [2]
Automated misinformation technology is not limited to tech whose main goal is producing falsehoods; it also includes anything that produces falsehoods or inaccuracies as a side effect. For example, consider software that helps writers churn out content at an ever faster pace, optimized for clicks, engagement, and reactions. Such tools will generally be easier to build and cheaper to run than tech that evaluates factual accuracy.
You could argue that society might expend enough resources on technology that detects falsehoods to outpace the technology that produces them. However, we need only look at today’s information ecosystem and compare the amount of sensationalism with the amount of rigorous journalism to see why that’s unlikely.
This limitation doesn’t mean that tech to automatically detect misinformation will have no role to play. It just means that even if such technology gets very advanced, it likely won’t be enough to keep pace with misinformation.
Journalism’s Proof of Work
So what can we do? If we can’t keep up with and flag the majority of misinformation out there, how can we separate the factual from everything else?
There is one thing that factual evidence-based journalism has behind it which misinformation simply doesn’t — the work that goes into it. What would happen if we made it easy for journalists to share that work with readers?
Imagine reading a piece of journalism and being able to click into the research behind every claim in that piece. You would see the original primary source evidence backing that claim. That evidence would include annotations to guide you through the sections, passages, and definitions used to back the given claim.
Primary source evidence is often long and complicated, which is why a bare link to it isn’t very accessible to you as a reader. With these annotations, it becomes much easier to review that evidence for yourself.
A journalist making a claim like the one I made about the Nebraska Senator’s voting record would be expected not only to reference the original bills hosted on Congress’s website (not just a copy hosted on an unofficial source), but also to annotate the relevant sections.
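To make the idea concrete, here is a minimal sketch of how an annotated claim and its evidence might be represented in a publishing tool. The shape and field names below are purely illustrative assumptions on my part, not SourcedPress’s actual data format.

```typescript
// Hypothetical data model for a claim backed by annotated primary sources.
// All names and fields are illustrative assumptions, not an actual format.

interface SourceAnnotation {
  sourceUrl: string; // link to the primary source itself (e.g. the bill on congress.gov)
  passage: string;   // the specific section or passage the claim relies on
  note: string;      // the journalist's note guiding the reader through that passage
}

interface AnnotatedClaim {
  claimText: string;            // the sentence in the article making the claim
  evidence: SourceAnnotation[]; // one or more annotated pieces of primary-source evidence
}

// Example: the made-up claim about the Nebraska Senator's 2019 votes.
const claim: AnnotatedClaim = {
  claimText:
    "The Senator from Nebraska voted against more than 20 worker safety measures in 2019.",
  evidence: [
    {
      sourceUrl: "https://www.congress.gov/", // placeholder; a real piece would link the specific bill
      passage: "Roll call votes on the relevant worker safety amendments",
      note: "Walks the reader through the provisions at issue and how the recorded votes relate to the claim.",
    },
  ],
};

// A reader-facing view could then render each claim with its evidence inline.
console.log(`${claim.claimText} (${claim.evidence.length} annotated source(s))`);
```

The point of a structure like this is that the evidence travels with the claim: a reader never has to hunt through a long primary document on their own, because each annotation points them to the exact passage the journalist relied on.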
If we made it easier for journalists to share their work, and if we made that work as accessible to readers as possible, then more and more readers would come to expect proof of work, with annotated primary source evidence, in the journalism they consume. [3]
The Grocery Store Without Nutrition Labels
This kind of societal expectation is not without precedent.
Imagine walking into a grocery store and learning that, from now on, there will be no more nutrition labels and no more lists of ingredients. Don’t worry, though: if you really want to know what’s in your food, you can personally take it to a lab and have it chemically analyzed.
That would be crazy. No one in their right mind would expect that kind of model to scale. Even if we started building automated chemical analysis stations, it would be a backwards approach.
Fortunately we’ve come to expect that the food we consume comes with a list of ingredients and some nutritional breakdown. It’s not crazy to think that one day, a significant number of consumers will expect any piece of journalism to come with accessible and annotated evidence before they consider it credible. [4] [5]
Just as not everyone reads the ingredient lists on the food they buy, not everyone will review the evidence behind the journalism they consume. The point is that it’s out there in the open, so those who want to review it can do so at any time.
Bringing Reviewable Journalism to the Mainstream
It’s with that exact aim — empowering journalists to make their annotated primary source evidence accessible to readers — that I built SourcedPress. The long-term goal is to enable anyone, writing anywhere on the web, to use this style of reviewable evidence in their writing. In the meantime, to show the world what this approach can look like, we’ve started publishing pieces of our own. Our aim is to cover topics such as civil liberties, new legislation, and climate change initiatives.