Without a Critical Thinker, AI Doesn't Work
I recently heard a smart, successful AI entrepreneur say: “To make AI work, you need 4 things: data, computing power, software, and algorithms or models.”
Rarely do I pause a podcast and rewind to hear something again, but I did this time.
This AI expert made a mistake.
As a frequent public speaker, I know it’s easy to have even well-rehearsed words come out wrong. It happens.
But this error is one that many people who build AI and purchase AI systems make, and it extends beyond just a poor choice of words.
To make an AI prediction, you need those 4 things…data, computing power, software, and algorithms…but AI doesn’t “work” without a decision maker practicing solid critical thinking.
Today’s world is full of examples of AI systems with the 4 required ingredients generating predictions. If there is an industry with data and money to be made, you can bet someone has produced an AI system to try to optimize the process.
The evidence, and AI, is already everywhere. AI systems predict the price of gasoline. AI systems forecast the price of cryptocurrencies. AI systems predict the value of homes for real estate investors.
But recently, each of these systems would have made predictions that critical-thinking decision makers would have been wise to dismiss as unreliable.
The home I sold in Alexandria, Virginia at the end of 2019 was recently listed for sale by its new owners. With no significant improvements to the house, an AI-aided prediction of the home’s 2021 value would have been a data-driven, conservative, historical 3-5% growth…which is the price the new owners set for it. But, due to a red-hot seller’s market fueled by pandemic-driven low interest rates, the home sold for 10% above asking price…a remarkable 15% appreciation in 18 months, and a price point outside the dataset used to train the AI.
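The appreciation arithmetic above is easy to sanity-check. A quick sketch, using illustrative figures rather than the actual sale prices:

```python
# Back-of-the-envelope check of the home appreciation described above.
# These growth rates are illustrative assumptions, not the real listing numbers.
list_price_growth = 1.04   # new owners list at ~3-5% historical growth (midpoint)
over_asking = 1.10         # home then sells for 10% above asking

total_appreciation = list_price_growth * over_asking - 1
print(f"{total_appreciation:.1%}")  # prints 14.4%
```

Compounding a conservative ~4% listing bump with a 10% over-ask sale lands at roughly 14-15% total appreciation in 18 months, consistent with the figure cited.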
Cryptocurrency investors use AI systems to predict the future value of Bitcoin, Ethereum, and Dogecoin. But no AI system would have predicted the impact of Elon Musk’s SNL skit in which he called Dogecoin a “hustle.” The cryptocurrency lost 30% due to a weak, poorly delivered joke that was not part of the deep learning dataset.
Finally, AI systems predict fuel prices and fuel demand. Refineries, oil companies, service stations, and trucking firms make decisions based on the output of these systems. Yet none of these systems would have predicted a ransomware attack on a pipeline company and the panic gas buying that followed…a consumer phenomenon not unlike the great Toilet Paper Panic of March 2020.
Here’s the problem…no amount of data, computing power, software, and algorithms can make accurate predictions about humans 100% of the time. There will be times when a system’s predictions are unreliable because the data it was built on doesn’t accurately reflect the current context.
Only critically thinking human decision makers can recognize when reality differs from the model world the AI knows.
Even more, it takes a human of great constitution to consider the output of an AI model, funded and trusted by the bureaucracy, and reject it as unreliable.
These are the humans we are looking for: those who understand enough about AI, the training data, AND the current context of the problem the system was designed to solve. Rarely will this also be a person coding in Python to create their own models. If the CEO of an oil company or the 4-star general running a war is drinking Mountain Dew at midnight while debating whether to use a sigmoid or tanh activation function, we are doing it wrong.
Forgetting to include the decision maker in an AI system that “works” is forgetting the most important requirement for harnessing the speed and scale AI promises.
A good AI system automates the science of decision making, so humans have the time and cognitive bandwidth to do the art.
Without the critical thinker, we just have an expensive Ouija board.
Interested in learning more about the intersection of Critical Thinking and AI? Check out one of our free workshops.