Our Lazy Brains Weren't Made for the 2020s
Updated: Jun 1
The awful combination of lazy brains and an increasingly hostile information environment powered by artificial intelligence could destroy us all. Reading this article is your first step in saving us!
Recently, about 100 people responded when I asked my network to subject themselves to a short test of their critical thinking skills. Take the quiz yourself before you read the rest of the article to see how you do. It is 5 questions long and takes about 5 minutes. I’ll wait.
I am fortunate to have a network of, quite literally, the smartest people I know. Still, this wasn't a walk in the park. Why?
The same way hackers, criminals, terrorists, and spies do every day, the quiz took advantage of some of our brain’s deeply entrenched biases and its natural tendency towards conserving its cognitive energy.
So, what were the "right" answers?
The first question was a short scenario about a CEO and his lax cybersecurity practices. Quiz-takers had to choose which of 3 predictions was the most likely…a phishing attack against the corporation, a spearphishing attack against the CEO, or an Asian criminal group getting the CEO’s password to use in multiple attacks.
The only wrong answer here is the one about the Asian criminal group, yet a full third of quiz-takers chose this incorrect option. This response is attractive to our brains because of the combined effects of the conjunction fallacy and availability bias.
In terms of which is most likely, it is impossible for an option containing multiple variables to be more likely than an option that only contains one of them.
It cannot be more likely that an Asian criminal group is going to conduct a social engineering/spearphishing attack against the CEO than it is that any group of any demographic would conduct that attack.
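The rule at work here is the conjunction inequality: P(A and B) can never exceed P(A). A minimal sketch, with probabilities made up purely for illustration:

```python
# All probabilities here are hypothetical, chosen only to illustrate the rule.
p_spearphish = 0.30        # P(a spearphishing attack targets the CEO)
p_specific_group = 0.10    # P(the attacker is one particular group)

# A conjunction can never be more probable than either of its parts:
# P(A and B) <= min(P(A), P(B)), whatever the true numbers are.
p_both = p_spearphish * p_specific_group  # assumes independence, for simplicity
assert p_both <= min(p_spearphish, p_specific_group)
```

However vivid the detailed story feels, the math only ever moves in one direction: adding conditions makes an outcome less likely, never more.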
The increased level of detail in that answer, however, is attractive to our brains. With sufficient detail, our brains begin to see the scenario playing out in our minds as if we were reading a story. These internalized images make the scenario more plausible, and therefore, easily available in our mind.
This matters in real life when people who are trying to influence your thinking prime your mind with images beneficial to their cause. Take a look at your Facebook feed. What you saw in the last 10 posts influences what you think about the next one. This was a tactic Russian trolls used in the 2016 US Presidential election.
The second question was one of pure probability. If your coworkers are buying 20 tickets in a raffle, how many tickets do you need to buy to have at least a 50% chance of winning?
You need to buy 20 tickets yourself. That makes 40 tickets in total, and your 20 give you exactly a 50% chance of winning.
One-third of quiz-takers missed this one, with answers ranging from 1 to 100. Some people are more numerically inclined than others, and some may have misread the question...still, if a question that reduces to a simple 50-50 proposition trips up a third of people, there's a lot of room for criminals or unscrupulous snake oil vendors to exploit.
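The arithmetic is easy to verify directly. A minimal sketch (the `win_probability` helper is just for illustration):

```python
def win_probability(my_tickets: int, coworker_tickets: int = 20) -> float:
    """Chance of winning a single-winner raffle: your share of all tickets sold."""
    return my_tickets / (my_tickets + coworker_tickets)

# Matching your coworkers' 20 tickets gives exactly even odds; one fewer does not.
assert win_probability(20) == 0.5
assert win_probability(19) < 0.5
```

Your odds are simply your tickets divided by all tickets in the drum, so matching the other 20 is the only way to reach 50%.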
The third question was about Cheryl’s she-shed. This question used dialog from the State Farm commercial where a woman stands watching her exquisitely appointed shed burning while she is on the phone with her insurance agent.
She contends “someone burned down” the shed while her husband stands, trickling water hose in hand, insisting lightning struck it.
When asked which was more likely (arson, lightning, or not enough information), 65% of the quiz-takers said not enough information was available.
This is the only correct answer. The real purpose of this question was to find out what type of information people would want to know in order to make a decision.
I wasn't expecting this: several people believed the shed was struck by lightning simply because the husband was the one who said so.
One respondent offered, "Husband had a solid cause while wife only had a ‘somebody’.”
As we learned in the question about the CEO and hackers, more detail does not make an answer more likely—just more attractive.
Most people wanted to see an official report from the insurance company or fire department investigation. Others wanted to know if there were any storms in the area the night before or if there had been a rash of arson lately.
A key skill in an uncertain information environment is understanding what sources of information to trust. Those who wanted information on the recent presence of storms or arson are on the most reliable path to knowledge—understanding the base rate.
When we take time to understand what is “normal” first, we can guard against deceptively detailed stories about what might have happened. If we know there were no storms but 3 other she-sheds were burned down in the neighborhood, the scales of judgment tip towards arson.
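One way to make that tipping of the scales concrete is an odds-form Bayesian update. Every number below is invented purely for illustration; the point is the mechanics, not the values:

```python
# Toy odds-form Bayes update. All numbers are hypothetical illustrations.
prior_odds = 1.0        # before any evidence, treat arson vs. lightning as even
lr_no_storms = 10.0     # "no storms that night" is (say) 10x likelier under arson
lr_nearby_fires = 5.0   # three nearby shed fires are (say) 5x likelier under arson

# Each piece of base-rate evidence multiplies the running odds.
posterior_odds = prior_odds * lr_no_storms * lr_nearby_fires
assert posterior_odds == 50.0   # the scales tip strongly toward arson
```

In odds form, each independent piece of evidence simply multiplies the odds, which is why establishing what is "normal" first pays off so handsomely.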
This step alone, finding the base rate, is more powerful in the long run than reading an investigative report. After all, the author of the report is subject to the same biases that led 35% of the quiz-takers to jump to a conclusion with no evidence.
As you will see in the 5th question below, once an incorrect answer moves into our brains, it is tough to evict.
The fourth question asked quiz-takers to make a life-or-death decision. If they took no action, 60 people would die. They had 2 potential solutions: in the first, a certain number of people would live; in the second, there was a 1/3 chance that all 60 would live and a 2/3 chance that all 60 would still die.
Which did you choose?
I changed the wording describing the first solution after a few dozen people had taken the quiz. For the first quiz-takers, Solution A was framed as one that “will save 20 people.”
When I used these words, 70% of quiz-takers thought this was the best option.
But, for the later quiz-takers, I framed the choice as one that “will fail to save 40 people.” Either way, 20 people live and 40 people die.
Yet, this simple change of phrase caused the choice to flip. Now 65% of quiz-takers thought the second solution was the better one.
When our lazy brains think about the positives (saving 20 people), we are not willing to risk the 2/3rds chance all will die. But, when our lazy brains are already facing the loss of 40 lives, the risk seems much more attractive.
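Both options carry the same expected number of survivors, which is exactly what makes the flip a pure framing effect. A quick check using exact fractions:

```python
from fractions import Fraction

# Solution A: 20 of the 60 live with certainty.
expected_saved_sure = 20

# Solution B: a 1/3 chance all 60 live, a 2/3 chance all 60 die.
expected_saved_gamble = Fraction(1, 3) * 60 + Fraction(2, 3) * 0

# Identical expected outcome; only the wording differs.
assert expected_saved_sure == expected_saved_gamble == 20
```

Since the math is identical, any preference for one option over the other is driven entirely by how the choice is phrased.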
Finally, the last question was labeled as a “control question”. This label had no meaning for the test but was designed to encourage lazy brains to slip on their comfy pants.
This is a simple math question. One item costs $300 more than another item. Together the two items cost $400. How much does the first item cost?
I was surprised how many people missed this one—56%!
The math is easy….the only solution is for the first item to cost $350 and the second to cost $50.
So, why did so many people miss it?
Our lazy brains jump to an (incorrect) conclusion and hold onto it like grim death.
If we initially believe the right answer is $300, it takes a ton of mental effort to walk ourselves back from that choice.
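Writing the constraint out makes the walk-back easy. A minimal check:

```python
# Let cheap be the cheaper item's price; the other item costs cheap + 300.
# cheap + (cheap + 300) = 400  ->  2 * cheap = 100  ->  cheap = 50
cheap = (400 - 300) / 2
expensive = cheap + 300
assert expensive + cheap == 400 and expensive - cheap == 300  # $350 and $50

# The intuitive split of $300 and $100 fails the "costs $300 more" condition:
assert (300 + 100 == 400) and (300 - 100 != 300)  # the difference is only $200
```

The lazy answer satisfies the total but not the difference; only $350 and $50 satisfy both conditions at once.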
This is the same effect Nobel Prize-winning psychologist Daniel Kahneman described in his book Thinking, Fast and Slow. An astute quiz-taker pointed me towards Kahneman’s book as a great reference.
Clearly this person read the book and understood its lessons, yet they still got the question wrong and answered $300.
Here’s what it means for the 2020s:
Generations of evolution tuned our brains into optimized answer-generators.
Unfortunately, they are optimized to grab onto easily accessible answers rather than complicated ones requiring knowledge and insight.
We dislike the effort of difficult thinking and indecision more than we crave the comfort of a conclusion, even a poor one.
In the age of #AI, this isn’t just an interesting quirk of humanity—it has deadly consequences for all of us.
Previous industrial revolutions made it easier or faster for humans to "do" something. Whether a textile mill, a steel mill, or a spreadsheet, humans were the "doers" aided by machines.
But, the fourth industrial revolution is different. Autonomous machines become the “doers” and our worst cognitive tendencies drive humans to a deceptively dangerous passive role.
AI powered by Big Data can decide what needs to be done, and how to do it, faster and often more effectively than lazy-brained humans. In some cases, even the best-thinking humans can't think fast enough to respond to changing conditions: think cyber intrusions and hypersonic missile attacks.
Already, self-driving cars can take over the thousands of small decisions about how to maneuver, and they are so good at it that you can find videos of people who trust them enough to take naps behind the wheel.
Even worse, according to a report from the International Consortium of Investigative Journalists, Chinese security forces are preemptively arresting people based on AI predictions about which segments of society are most likely to commit crimes.
“The program collects and interprets data without regard to privacy, and flags ordinary people for investigation based on seemingly innocuous criteria, such as daily prayer, travel abroad, or frequently using the back door of their home,” according to the report.
Thinking beyond the boxes and biases we create for ourselves will be the difference between being literally asleep at the wheel and ensuring technology is on the right path.
How well we choose our future depends on our ability to force our lazy brains to think, collaborate, and communicate.
These distinctly human skills…thinking both critically and creatively, collaborating, and communicating…are the keys to success and survival in this latest revolution.
When we understand our biases, weaknesses, and mental shortcuts, we have the opportunity to mitigate them.
When we learn how to problem solve with others, we create cognitive networks of human intelligence reducing our collective vulnerability.
When we learn to speak and write clearly and precisely, we spread solutions and knowledge to others.
Making a good decision, whether about Cheryl’s she-shed or life-altering technology, is more a question of willingness than skill. If we are willing to engage in critical thinking despite our biases, blindspots, and lazy cognitive tendencies, we unlock tremendous opportunities, both individually and collectively.
But, if we allow ourselves to be asleep at the wheel, we will surely find ourselves on a road we didn’t realize we were choosing.
I am the Lead Instructor at #HumanIntelligence. Blending a career of military problem solving experience with the latest academic research, we help businesses, non-profits, and governments solve their toughest problems. #HumanIntelligence is the newest curriculum designed to teach practical thinking, collaboration, and communication skills to today's and tomorrow's workforce.
What are you doing to retool your team's brains for the 2020s? Let's talk.