Published Date:
June 12, 2025

The AI Implementation Sandwich: Three Layers of Successful AI Transformation

Most AI efforts stall between executive vision and frontline pilots. Learn how the AI Implementation Sandwich—strategic clarity, team experimentation, and connective infrastructure—enables true transformation at scale.
By Daan van Rossum
Founder & CEO, FlexOS

In the 18 months since I started teaching AI to business leaders, the landscape has transformed dramatically: AI has moved from tech hype to a strategic imperative that is top of mind for most executives.

According to McKinsey, 75% of companies now use generative AI in at least one function, more than double last year's share.

In “​No AI, No Job​,” Danielle Abril of The Washington Post highlights a new wave of "AI-first" companies like Duolingo, Shopify, and Box.

Even Klarna, despite retreating slightly by rehiring human customer service reps, now generates an impressive $1 million per employee, largely thanks to strategic AI integration.

But for every Klarna or Duolingo, ​99% of organizations​ remain stuck, still experimenting rather than truly scaling.

Why is full adoption so hard to come by? And how do companies move from AI-inspired to AI-first?

Successful AI transformation requires alignment between two critical forces, top-down strategic clarity and bottom-up practical experimentation, joined by a thriving connective layer that most companies overlook:

Top-Down: Strategic Clarity with Realism

While it’s the people actually using AI who ultimately create value, a clear strategic direction from leadership is critical.

Executives must define exactly how AI supports their business strategy. Many companies start by committing to one of the major platforms (ChatGPT, Copilot, Gemini).

McKinsey distinguishes between takers, shapers, and makers, but advises that most companies are best off customizing and integrating existing platforms for scalability, security, and strategic fit.

But, as BCG rightly warns, generative AI isn't just about upgrading technology:

“Companies have treated GenAI like a typical technology upgrade or a collection of pilots, with tech teams leading the way. While this is fine for the technology side of the equation, it fails to achieve real bottom-line impact,” ​writes​ BCG in a September 2024 report.

Take Microsoft Copilot as an example: it might seem a safe choice when your data already sits within Microsoft's ecosystem. But what about those team members who feel that ChatGPT or Claude vastly improves their capacity, capability, and quality of work?

Do you block those tools, or reconsider your platform strategy? Or do you, like one of the companies we’re working with on their AI transformation, broaden your approach:

“Different tools bring different strengths, and using ChatGPT alongside Copilot allows us to match the right platform to the task, and even to cross-pollinate thinking between them. It’s less about choosing one over the other, and more about building a collaborative ecosystem of AI agents.” – Andrew Currie, CEO, ​OUT-2 Design Group.

Winning leaders actively shape expectations, because AI initiatives need iteration: less than half of users adopt their company’s AI platform. That’s especially true for Microsoft Copilot (our live sessions sometimes feel like therapy for its users, and I should probably dedicate an article to its challenges).

Visible, practical support from executives is equally crucial, as ​Debbie Lovich from BCG emphasized​ to me: “Don’t ask your employees to do anything that you wouldn’t do yourself.” And indeed, her research shows that teams with AI-engaged managers are 4x more likely to adopt AI tools.

Bottom-Up: Empowered Team Experiments

As OpenAI ​notes​, much of AI’s value is realized in the day-to-day tools that teams use, which is why evaluation and experimentation must happen at the department or team level.

This makes sense, as AI professor Ethan Mollick ​writes​, because “People with a strong understanding of their job can easily assess when an AI is useful for their work through trial and error, in the way that outsiders (and even AI-savvy junior workers) cannot.”

To empower your teams effectively:

  • GED-RT: Using our GED-RT model, any individual or team can assess which tasks are better suited for AI, so they can focus on what matters. This turns forced AI adoption into an opportunity to rethink entire roles, as BCG’s Debbie Lovich recommends.
  • Fitness for Purpose: Roblox CTO ​Arvind KC​ ​highlighted​ the importance of selecting AI tools that naturally integrate into existing workflows, like GitHub Copilot for engineering or chatbots for customer service.
  • Pain Points: Start with pain points identified by each department, and then research AI tools that address those needs. Identify the use cases that are broadly available in the industry and evaluate which ones are high-value and applicable to your context.
  • Structured Pilots: Anthony Onesto suggests clearly defining success metrics aligned with real business outcomes, and rigorously gathering feedback directly from end users.
  • Training: Matt Kropp from BCG X applies a "10/20/70" rule: only 10% of the effort is algorithm coding, 20% technology integration, and 70% people and process adaptation. Teams that receive proper training see significantly higher adoption rates.

New research from Asana’s Rebecca Hinds shows there are a few types of ‘internal influencers’ companies should pay special attention to for this ‘process rethink’:

  • Bridgers: Employees who span roles and departments and see friction points clearly. The workflows they build are 96 percent more likely to be adopted.
  • Domain Experts: Front-line experts who design workflows from practical experience, avoiding unnecessary technical complexity.
  • Operations Specialists: People skilled in scaling solutions organization-wide, who zoom out to rewire systems instead of fixing isolated issues.

And, as PwC People Tech lead Marlene de Koning told me, enabling these influencers is key: hold office hours where influencers can answer questions, publicly celebrate them, and engage them at every step of the AI adoption journey.

The Middle Ground: AI Labs and Sandboxes

Between high-level vision and front-line experimentation lies a messy middle.

It’s the part that’s often neglected, leading to fragmented, stalled initiatives. Bain research shows that up to 25% of AI pilots fail because of insufficient coordination here.

Plenty of solutions have been proposed, including Oracle’s ​secure sandbox​ and Deloitte's ​proposal for a cross-functional AI committee​.

But more interesting are Mollick’s proposed “AI Labs,” consisting of subject matter experts, both technologists and non-technologists, mostly sourced from the employee base, with a strong focus on building rather than analysis or abstract strategy.

These labs can distribute employees’ prompts and solutions widely and quickly, build AI benchmarks for the organization’s workflows, and create provocations that bring the many people who haven’t truly engaged with AI’s potential on board through demos and visceral experiences.

When led by someone who truly understands Human + AI, these labs stand out as a model that works for all sides, especially when Lab leaders are connected to other practitioners (for example, in our PRO community).

The Bottom Line: The AI Implementation Sandwich

Ninety-nine percent of companies today are stuck between executive-level AI ambitions and frontline experimentation, with little connection between the two.

The solution is a layered approach:

  • Top-down: Clearly articulated, strategically aligned AI vision.
  • Bottom-up: Empowered, trained, experimental teams discovering practical solutions that fit their workflows.
  • Middle layer: The connective “AI Lab” tissue that unifies governance and experimentation.

This structured “AI Sandwich” lets organizations approach AI cohesively while scaling innovation and reducing redundant pilots.

Most importantly, it’s the way to create a future of work where AI helps people do their best work, rather than creating more headaches.