It’s an appealing idea. People blame social media and Google (on computers) for the unchecked spread of fake information (via computers) and demand the tech companies (computer experts) do something about the problem on social media and Google (those darn computers!).
So tech companies announce they’ll use artificial intelligence (AI, or computers that learn) to protect us from all the false news filling our social media feeds (on the little computers we keep in our pockets).
Here’s a recent pitch by a technology company to protect us from the ills of technology: Fake news detecting technology developed by DarwinAI
The Waterloo, Canada company wants to do good by making AI do good for us dumb humans. What’s not clear to me is how much the new AI will be a partner with humans in identifying fake news. Or is it simply going to decide what’s truth for us?
Fake news solution?
My fear is that many people worried about fake news will grasp at new AI technology and let computers solve our problems. Except that the people creating and profiting from fake news count on exactly that: our willingness to let computers solve all our problems. And they’re using AI to create more fake news, too. Awkward.
“Alternative facts” and deceptions dressed up as news have been around since long before computers and artificial intelligence. People have long twisted the truth, stirred negative emotions and counted on entrenched groupthink to spread their version of “truth.” Human brains react to threats in a heartbeat. It’s a hard-wired negativity bias toward the world around us.
That emotional self-protection circuitry is in control before the logical part of our brain processes more details for a nuanced response to a range of inputs. Think about how you respond to rumour and innuendo in the workplace. What does it feel like when you’re the target of gossip on the schoolyard? How effectively do you uncouple your emotions when talking politics over the dinner table? None of that will magically disappear whenever mercenary AI-powered truth detectors start scrubbing junk news from our computer screens. Learning and practicing measured responses to what I hear, see and read is the best way I know to manage a world of differing opinions. It’s also how I approach the concept of AI-sanitized storytelling in my social media feeds.
The only way I know to find truth is to habitually question everything I see and hear. What’s the source? What emotion does the “news” I am told stir inside me? What details am I missing? Who benefits from the story I’m being told?
If a news story stirs negative emotion in me and offers a simple answer to complex problems, I go on Fake News Alert.
In 30 years as a newspaper journalist, I learned questions are wonderful things. So is a healthy skepticism of everything I see and hear. Trust, but verify.
Neuroscientist Daniel J. Levitin writes about how easy it is to trick our brains if we aren’t careful. I’m a fan of his book “A Field Guide to Lies” (also published as “Weaponized Lies”) and encourage everyone to read it. Here’s a video of Levitin speaking at Google (and I appreciate that irony):
I’m all for fighting fabricated news with all the tools smart and inventive people are able to develop. What I am not yet ready to do is blindly trust technology to protect us from fake news. That’s the same complacency purveyors of fake news count on to enable their strategies.
Sifting truth from fiction is work. It’s often hard work. And it’s worth it.
I have no plans to outsource the job and blithely trust someone else — or something else — to interpret what’s truth or lies.
What do you think about fighting fake news with artificial intelligence?