Every time I hear people talk about how artificial intelligence will save us humans from fake news, I get uneasy.
It’s an appealing idea. People blame social media and Google (on computers) for the unchecked spread of fake information (via computers) and demand the tech companies (computer experts) do something about the problem on social media and Google (those darn computers!).
So tech companies announce they’ll use artificial intelligence (AI, or computers that learn) to protect us from all the bad fake news filling our social media feeds (on the little computers we keep in our pockets).
Here’s a recent pitch by a technology company to protect us from the ills of technology: fake-news-detecting technology developed by DarwinAI
The Waterloo, Canada, company wants to do good by making AI do good for us dumb humans. What’s not clear to me is how much the new AI will be a partner with humans in identifying fake news. Or is it simply going to decide what’s truth for us?
My fear is that many people worried about fake news will grasp at new AI technology and let computers solve our problems. Except that the people creating and profiting from fake news are the same ones encouraging us to let computers solve all our problems. And they’re using AI to create more fake news, too. Awkward.
Fake news was around long before computers and artificial intelligence. People have long twisted the truth, stirred negative emotions and counted on entrenched group think to spread their version of “truth.” Think rumor and innuendo. Malicious gossip on the schoolyard. Political and corporate propaganda. None of that will magically disappear when mercenary AI-powered truth detectors start scrubbing junk news from our computer screens.
The only way I know to find truth is to question everything I see and hear. What’s the source? What emotion does the “news” I am told stir inside me? What aren’t I being told? Who benefits from the story I’m being told?
If a news story stirs negative emotion in me and offers a simple answer to complex problems, I go on Fake News Alert.
In 30 years as a newspaper journalist, I learned questions are wonderful things. So is a healthy skepticism of everything I see and hear. Neuroscientist Daniel J. Levitin writes about how easy it is to trick our brains if we aren’t careful. I’m a fan of his book “A Field Guide to Lies” (also published as “Weaponized Lies”) and encourage everyone to read it. Here’s a video of Levitin speaking at Google (and I appreciate that irony):
I’m all for fighting fake news with all the tools smart and inventive people are able to develop. What I am not yet ready to do is blindly trust technology to protect us from fake news. That’s the same complacency purveyors of fake news count on to enable their strategies.
Sifting truth from fiction is work. Sometimes hard work. And it’s worth it.
I have no plans to outsource the job and blithely trust someone else — or something else — to interpret what’s truth or lies.
What do you think about fighting fake news with artificial intelligence?