Ask the proverbial man on the street if he trusts AI, and you’re likely to get an earful. Many of us share that wariness intuitively. But how does it translate into the business world?
Well, it turns out that trust is deeply important when using AI for business. That’s partly because business processes are run by teams of people, and those people bring the same concerns anyone else has about artificial intelligence and how it is used.
Over at Deloitte, researchers are looking in a scientific way at how trust impacts AI in business.
“AI is deeply misunderstood,” said Ashley Reichheld in a talk at an Imagination in Action event in April.
She described a study of around 300 brands and thousands of workers, which found that in certain cases deploying AI led to as much as a 149% decrease in trust.
On the other hand, she pointed out that workers who do trust AI tools are about two and a half times more likely to adopt them; for consumers, the figure is roughly double.
But, as she acknowledged, “trust is kind of soft.”
So what about specific ideas on how to measure trust and its value?
Four Pillars of Trust
Reichheld laid out some of the criteria for actually measuring human trust in AI.
She separated this into four categories – humanity, transparency, reliability, and capability.
Capability is whether the system actually works; reliability is whether it works consistently.
She called these two the “table stakes” of AI trust, but stressed that the other two matter just as much.
Humanity, she said, means demonstrating empathy in how the AI system is deployed: showing people why it’s in their best interest to adopt the tools.
Transparency doesn’t necessarily mean explaining the technology in depth to everyone, but it does mean making sure people understand the basics of how it works.
Is This Thing Broken?
Imagine a person in the analog age pounding on the side of a television set. The picture was wavering or scrolling, or dissolving into static, and they wanted to make sure the set was working properly.
Reichheld pointed to certain cases where misunderstandings about AI lead people to think that AI systems are broken.
One example was getting different responses from a model in real time. As she pointed out, if you ask ChatGPT something and then ask it again later, its answers are likely to change. That variability is expected behavior, but if a user reads it as a system failure, the real problem is a gap in understanding how the AI works.
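To make that concrete, here’s a minimal sketch in Python (using made-up word probabilities, not anything from Reichheld’s study or any real model) of why a language model can answer the same question differently each time: the output is sampled from a probability distribution, so identical prompts can produce different responses.

```python
import random

# Hypothetical next-word probabilities for the same prompt.
# Real models work over huge vocabularies, but the principle is the same:
# the output is *sampled*, not looked up, so repeat runs can differ.
next_word_probs = {
    "reliable": 0.45,
    "dependable": 0.30,
    "trustworthy": 0.25,
}

def sample_response(probs):
    """Pick one word at random, weighted by its probability."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Ask the "model" the same question three times.
for i in range(3):
    print(f"Run {i + 1}: the system is {sample_response(next_word_probs)}")
```

Run it a few times and the printed word will often change, even though nothing is broken. That’s exactly the kind of behavior Reichheld says users need to have explained to them.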
The Double Gap
Reichheld also described a project where researchers looked for some of the bottlenecks in adoption and addressed them directly.
She pointed out that the study is open source: the data, including the questionnaires used, is publicly available for people to examine.
She described the two gaps the researchers found, and how those findings inform the ongoing work.
One was around transparency, where people wondered: Is the data properly protected? What are the privacy issues? Does it do harm?
The second, she pointed out, was around reliability: not necessarily whether the system functions, but whether people understand its use cases.
Here’s some of the application for business: she said the group held events called “prompt-a-thons” to show people how to use the AI, teach them how to download data in secure ways, and address their questions and concerns.
Daily usage, she said, went up 65%, and the numbers of new and repeat users grew as well.
The moral of the story?
Don’t wait for adoption.
“If you want people to use (AI), you have to make sure it has humanity, in the sense it’s good for (people) using it, it’s transparent, and of course, that it works reliably. Build it into the solution from the beginning, and that way, by the time you get to deploying and designing, you will be much more likely to drive the kinds of outcomes that you want, and get people to use the AI.”
Starting with Trust
In other words, businesses should build AI-native, and they should build trust-native, too. These trust principles should be baked into the process before any prototype is introduced. Businesses should have clear plans for how they will train staff, and beyond training, how they will philosophically integrate AI into their workforce. One key idea I took from Reichheld’s presentation is that it’s not reasonable to think you can simply order workers to use AI, especially when they may see it as a threat to their jobs. Leadership has to address those concerns for this to work well. That might be part of the disconnect we have in business: in the old days, you could maintain a hierarchy and push toward whatever outcome you wanted. It’s different now.
So there’s a cultural side to this, too. But understanding how the technology works is critically important. Many experts point out that a new tool can either help or hinder productivity, depending on the fit: how it’s introduced, how it’s explained, and how people are incentivized to use it.
So there are real takeaways for business here. Anytime you’re putting AI into your company’s workflows, make sure the trust is there.