Thinking Better About Trusting
My wife and I are fortunate to own a building in the small town near our home. Our modest office building on Main Street gives us an opportunity both to grow our businesses and to establish roots in the community. The building started out as a home 120 years ago, and although it has been renewed and renovated over the past century, its builders in the year 1900 didn’t have the foresight to design it as a combination Zoom-studio/coffee-roasting kitchen (which would have been huge during the 1918 pandemic). So, we needed to hire a contractor to help us with renovations.
After exhausting our personal network of contacts for people who did commercial contracting work, we went to Google. We found a contractor who specialized in commercial kitchen work and set a time to meet with him. He and an architect came out to look at the building and make a plan. He was very knowledgeable, gave good references for his work, and noted places where we might be able to save money. He promised to send an initial estimate after a week or so. About a month later, we finally received an estimate, but it didn’t include all of the work we requested.
He mentioned he had been busy and hadn’t been able to put as much time into it as he expected. The contractor’s price was high, but only slightly above the rates for someone with his experience. He promised to follow up in a week, but after a month, still nothing. After several phone calls squeezed in around his other jobs, we finally decided that although he had great experience, we couldn’t rely on him to fit our job into his busy schedule.
We tried again with another contractor. He showed up when he said he would. He delivered his estimate for the work on time, and it included everything we asked him to include. His price was fair, and we had no indication he might be trying to take advantage of our inexperience in this area. This contractor was young, though; he didn’t have any experience with commercial kitchens and seemed stumped by the additional permitting processes for kitchens in our county. In the end, even though we believed he was fair and reliable, we didn’t have full confidence in his capability to get the job done.
The third contractor we tried showed up on time as well and had an impressive list of similar jobs. He took a look around the building with us and asked if “we would be taking care of the permits.” I knew enough to know this was code for “Do you want this to be a legal, permitted job or one done off the books?” I told him we were trying to establish a legitimate business, and we would want to follow the laws and regulations. We quickly ended the meeting, and I told him there was no need for anything further.
Even though he had demonstrated capability and seemed to be reliable, his willingness to intentionally skirt the law was troubling. If he was willing to mislead the inspectors, what would prevent him from misleading us?
Here was the problem: we needed a contractor whose intentions were fair, who had the capability to do the job, and who was reliable enough to fit our work into his existing schedule.
In a word, we didn’t trust any of the three we met.
Unfortunately, in the English language, we have a poor vocabulary for the concept of trust. We use one word to mean many concepts. Depending on the speaker and the circumstances, the word trust can mean:
- A belief someone has good intentions and will act with fairness
- A belief that someone has the capabilities (knowledge, skills, and experience) to do what is required
- A belief that someone has the self-management skills to be reliable
To further complicate matters, we use the word to refer to inanimate machines or structures. What word would you use to describe your belief that a bridge is not going to collapse when you drive across it on the way to work? Many people would say they “trust” the bridge.
The proliferation of Artificial Intelligence systems in our lives has been accompanied by an increasingly voluminous discussion of “Trust in AI,” the idea of how to get humans to believe what the machine is telling us.
Now, I am not the Word Police. Having been raised in the Deep South, I know it is a fruitless endeavor to try to get people to change the way they speak. Language is alive. It evolves. It flows and twists meaning according to context and speaker. Language is beautiful.
But, as a person who teaches critical thinking, creativity, and collaboration, I understand language drives thinking, and thinking drives action (or inaction). The words we use influence how, and whether, we solve problems. Our limited vocabulary of trust and our limited thinking about trust drives us toward actions and solutions that not only aren’t helpful, but contribute to the very problems we are trying to solve.
Imagine each of the three contractors I worked with read this article. When they get to the summary that says “I didn’t trust them,” what should, or could, they do to improve their chances for business next time?
The Dodgy Contractor, who tried to evade the county, could make a moral decision and take action to ensure his intentions align with my values, so I am not worried about being taken advantage of.
The Young Contractor, who seemed fair and reliable but light on skills, could gain more capability by subcontracting on more advanced commercial jobs, hiring a more experienced crew, or partnering with another contractor for the parts of my job requiring special expertise.
Finally, the Too-Busy Contractor could hire an assistant, keep a better to-do list, or take advantage of a productivity app to stay on top of his commitments…or make fewer commitments.
Yet, none of these solutions would work for increasing my trust in a shoddy-looking bridge or a new AI system that my doctor wants to use to diagnose whether I have a rare form of cancer.
Our lack of specificity around the word trust costs us money, time, and relationships.
By thinking better about the concept and vocabulary of trust, we begin to shatter our cognitive illusions around the word and open the opportunity for better solutions to our “trust” problems.
What is Trust?
At a neurological level, trust is a behavior adaptation resulting from the absorption of the hormone oxytocin in the brain. A hit of oxytocin causes a variety of responses. It causes nursing mothers to begin lactating. It causes mother mice to respond to the cries of their pups. It makes dogs (and humans) better able to read human non-verbal cues. In high doses, it can also cause malevolent loyalty, the in-group bias that discriminates against outsiders.
Most often, however, oxytocin is known as the “trust hormone.” In particular, it increases our tolerance for social risk taking. Thomas Baumgartner from the University of Zurich’s Center for the Study of Social and Neural Systems found that oxytocin caused people to continue trusting other players in a game despite knowing those players had betrayed their trust in earlier iterations of the game. Participants who took a placebo, however, stopped trusting the players who weren’t playing fair.
When Baumgartner manipulated the game so the player’s winnings were determined by random chance rather than by another human, neither those who took the oxytocin nor those who took the placebo changed their risk tolerance in the game. Oxytocin increases social risk tolerance, but not general risk tolerance.
Neuroeconomist Paul Zak and his colleagues found that trust is not only a result of the presence of oxytocin; being on the receiving end of trust also causes a person to release the hormone…and extend more trust in return.
As much as I hate admitting it, humans are dependent on other humans for survival.
Compared to other animals, it takes a crazy-long time to raise a human to anything resembling self-sufficiency. Even in adulthood, few of us grow our own food or access our own water sources. Most humans live in communities and rely on others, or structures built by others, for protection from threats posed by carnivorous predators or other humans.
Because we are so dependent on other humans, we need a mechanism to help us know who we can count on…and give us an incentive to be counted upon. That mechanism, deeply wired into our brains, is trust.
At #HumanIntelligence, we use this definition of trust:
Trust is believing, when given the chance, you will not do something damaging to me.
There are 3 important elements to this definition:
1. Trust is an action verb…believing. It isn’t a feeling. It isn’t a reward that is earned. It isn’t even a noun built over time. It is a choice we make. Of all the things powerful people can do, no one can force the choice to trust.
2. In the absence of vulnerability, there is no trust. When we do not make the choice to trust, we act to increase control. If I have perfect control, there is no opportunity for vulnerability, and trust is not in question.
3. Trusting is a behavior modification that results in giving up control. Whether someone chooses to exploit the vulnerability you exposed is completely their choice.
So now, let’s revisit the Dodgy Contractor, the Young Contractor, and the Too-Busy Contractor. I didn’t hire any of them because I didn’t trust them. I didn’t make the choice to make myself vulnerable to them by engaging them on a costly job I needed done well.
I didn’t choose to make myself vulnerable to the intentions of the Dodgy Contractor.
I didn’t choose to make myself vulnerable to the capabilities of the Young Contractor.
I didn’t choose to make myself vulnerable to the reliability of the Too-Busy Contractor.
I didn’t choose to make myself vulnerable because I feared that when given the chance (the job) they might do something damaging to our building, or more likely, our bank account and deadlines.
My choice to withhold trust from them, however, is not universal. If I were to find myself stranded on the side of the road in my broken-down car, I’d accept a ride from any of them. I’d make the choice to believe they would not take advantage of my vulnerability to do something damaging to me.
If I were to find myself behind them in the grocery store line as they discovered they had left their wallet at home, I’d offer to loan them the money for their groceries until they could pay me back the next day. I’d trust they would not take advantage of the monetary vulnerability created by offering an unsecured loan at the Winn-Dixie.
If I were to find myself sitting next to them in a group therapy session or a 12-step meeting, I’d be willing to share the most personal elements of my lived experience with them…knowing they could do damage to me by minimizing it, dismissing it, laughing at it, or betraying my trust in them.
Trust is not categorical. I may choose to expose a particular vulnerability and fiercely protect another. I may withhold trust from an acquaintance I see in the office every few days, but then, upon unexpectedly running into them on vacation, strike up a much more vulnerable conversation. But this reality belies our default beliefs about trust. When we talk about “trust,” we often allow our limited vocabulary to drive a belief that trust is binary: either I trust someone, or I don’t.
We are quick to apply this attitude to AI as well. Developers want users to “trust” the algorithms. Users don’t want to “trust” a system they don’t understand. But this approach leads both the developers and the users away from the solutions they ultimately need.
When we talk about trust in terms of intentions, capabilities, and reliability, we can have a more productive conversation leading to better outcomes.
In my workshop, AI for Decision Makers, I encourage attendees to think about AI more like our alarm clocks. I wouldn’t say I “trust” my alarm clock. My alarm clock cannot make a choice to do something damaging to me. Trust is only a question among things that can choose. My alarm clock doesn’t have choice, and neither does AI.
So, trust in intentions is not in play.
But I am concerned that my alarm clock be good at doing the job it is intended to do. I would expect it to have the capability to keep time effectively. I would not want a clock that lost 15 minutes every hour.
I am also concerned that my alarm clock be reliable. Does it have enough battery life to last through the night? When I set it correctly, does it sound an alarm at the appointed time, at a volume loud enough to wake me from my blissful slumber?
These are the types of questions we should be asking about people and AI systems as well. For co-workers, I want to know if they have the knowledge, skills, and experience to do the job I am considering trusting them to do. For AI, I want to know if the model has the capability to make relevant predictions. In other words, was it trained with a sufficient dataset to perform in the way the developers intended and expected?
But I also need to know under what circumstances my coworkers and the model will be reliable. For people, I may ask questions about competing priorities. For the AI model, I would want to know the limits of what the developers intended and expected, and whether I am operating inside those limits.
The AI that unlocks my iPhone was great, until the pandemic made us hide half of our faces behind a piece of fabric. The unlock algorithm became unreliable because the problem fundamentally changed.
Asking a question about reliability, rather than trust, leads us to a non-binary answer. Do I trust this person or this AI system? Maybe, maybe not.
Is this AI system or expert reliable? “In 95% of the cases in the field under real-world conditions, we received accurate predictions. The 5% of times when the predictions were inaccurate occurred at night.”
Now, I have the resolution to say when the system is most reliable and least reliable. This is a more valuable assessment than, “I don’t trust AI” or “I don’t trust Predictor Bob.”
Whether we are solving a problem with technology or with old-fashioned human ingenuity, the concept of trust is inescapable. Here’s the bottom line:
- Trust matters to problem-solving leaders because you need other humans to trust you with not only their good ideas, but their bad ones as well. When people prune their own thinking, you lose too many potential fruits.
- When we treat our machines the way we treat people, we incorporate assumptions, expectations, and conclusions that don’t apply. In the end, we suffer the consequences, and our machines…well, trust me, they don’t notice.
Want to learn more about how to build trust on your team to get great ideas, or to think better about the interactions between tech and team? Let's talk. The chat is right there...or email Dan at email@example.com.