Published Date:
June 19, 2025

Why Trust Is the Real Secret Ingredient for AI Adoption (+How)

Trust is the overlooked foundation of effective AI adoption. Discover why culture, transparency, and psychological safety are the real game-changers for AI success inside your organization.
By Daan van Rossum
Founder & CEO, FlexOS

AI adoption is low. Lower than most of us “in the bubble” could imagine.

New Gallup data from this week show that only 8% of US employees use AI daily.

In my last post, “The AI Implementation Sandwich”, I shared that scalable, sustainable AI adoption depends on three layers: clear executive vision, empowered team-level experimentation, and connective “AI Lab” tissue in the middle.

Well, a clear AI vision from the top is lacking in almost 80% of companies.

No vision, no adoption. Where is it going wrong?

During the 9th Lead with AI Executive Bootcamp last week, I hosted several candid, sometimes vulnerable conversations with leaders at the forefront of this transformation. The conclusion? One ingredient underpins AI success: trust.

Usually, we keep these conversations behind closed doors, but with the participants' approval, I’m excited to share some of the key insights.

AI Adoption Is a Cultural Issue

While AI platforms, tools, and models get the headlines, culture—and specifically trust—dominates the day-to-day reality of AI transformation.

As Stacy Proctor, a seasoned CHRO, put it:

“Trust is the foundation of everything. Do we trust employees? Do employees trust leaders? Does anyone trust AI? Trust is always the foundation of everything. And so when you’re talking with your CPOs, your CHROs—as one myself—I think it’s important that we always have that as part of the conversation. What are we doing to build trust in our organizations?”

AI makes trust more essential than ever.

Especially with layoffs (not due to AI, but happening in the context of an AI-enabled future of work) looming or already happening. As Shlomit Gruman-Navot shared from her HR practice:

“There is a lack of trust by employees, because they’re saying, ‘Oh, you’re just gonna use it in order to reduce workforce. Let’s be real. This is all about reducing headcount.’ It’s a valid point, because we are transforming the work. And if AI reveals that some tasks are no longer needed, that’s okay, but it doesn’t mean that you can’t do it also in the most human-centric way possible.”

But this is not new. As Dean Stanberry reminded us:

“I go back to the 1980s, when we started shifting away from companies that had lifetime employment and humans became disposable. Do we have trust today in corporations? If you believe that they can dismiss you at a moment’s notice for no particular reason, and disrupt your entire life—basically, how do you trust an organization in the current environment?”

AI Adoption and the Culture of Psychological Safety

True AI-first organizations foster psychological safety: the ability to experiment, ask “stupid” questions, and even fail without fear of judgment or reprisal. This is where many companies stumble.

Alison Curtis, a leadership trainer, sees this play out daily:

“AI creates psychological safety for us as humans to experiment with our thinking. And as humans, we haven’t quite got that right. So one of the biggest hindrances, I think, to workplace efficiency is fear, and the fact that people don’t come forward with their best thinking for fear of judgment or not being accepted.”

I see this even in many AI workshops and client projects: people use AI “in secret,” fearing that if they admit it, they’ll “get more work, or be seen as someone who’s slacking or taking shortcuts.”

The culture has to change to one where ChatGPT is a compliment, not an insult. People who use it should be celebrated and feel excited to share their successes and challenges.

Managers, Not Models, Shape Adoption

Organizational trust isn’t built by software, but by people, especially managers. As Stacy observed:

“If people at every level of the organization aren’t coming in—and saying, ‘How am I going to be trustworthy today?’—then we’re missing the opportunity to build trust, because trust has to be built over time and continually.”

This means trust-building is everyone’s responsibility, but it’s especially important for managers to model open-mindedness, transparency, and a willingness to learn with their teams.

Recent Gallup data show that leaders are twice as likely (33%) as individual contributors (16%) to use AI a few times a week or more, underscoring the need for them to create a culture that fosters further adoption.

Trust as the Key to Rethinking Work

If AI is the “crowbar” that’s opening up a long-overdue conversation about the nature of work, trust is the glue that will keep organizations together as they rebuild.

As Shlomit explained, trust doesn’t mean guaranteeing job security; it means being honest about change, encouraging lifelong learning, and “leading with empathy, transparency, and clarity.”

And as I summarized in our session:

“A lot of these themes … don’t have anything to do with AI, but it is exposing them. It’s all the same human opportunities and challenges, and all the troubles that we have in organizations. But it is definitely exposing it by a lot.”

From Technology to Trust: Practical Next Steps

So, how do you put trust at the heart of your AI adoption journey? Here’s what I’m seeing work:

  • Talk about AI openly: Address fears about automation and headcount directly. Don’t let rumors fester.
  • Model vulnerability: Leaders and managers should admit what they don’t know, experiment publicly, and share what’s working (and what isn’t).
  • Celebrate experimentation: Recognize the “internal influencers” who try new workflows or tools, even if not every experiment pans out.
  • Emphasize human value: Remind teams that AI is there to augment, not replace, their best work.

In the end, becoming an AI-first organization is much more about culture than it is about code.

The companies that get this right won’t just have the best AI adoption rates; they’ll be the places where people do their most meaningful work.

Exclusive Event: Why AI Rollouts Fail (And How to Fix Them)

Even great tech can fall flat without the right change strategy.

Join workplace strategist Phil Kirschner and AI coach Daan van Rossum as they reveal the overlooked reasons AI tools fail, despite being well-designed and well-intended.

You’ll learn how to spot silent resistance, apply behavioral frameworks like the Forces Diagram from the JTBD method, and build real momentum inside your organization.

📅 Date: July 8, 2025

🕙 Time: 8:00 AM MT | 9:00 AM CT | 10:00 AM ET | 3:00 PM BST | 4:00 PM CEST