In Friday’s Executive AI Boot Camp, one leader captured the paradox:
“If I'm taking notes actively, I'm more engaged, and I retain what's going on. So while I understand the power of AI, I'm reluctant to let it take notes for me, because then I feel like I can tune out.”
The tension between what’s possible and what’s a smart choice sits at the core of how AI is reshaping work.
The real debate is not whether AI enters our work (it already has), but how we decide where humans stop, where machines begin, and how we keep meaning in between.
With AI capabilities increasing week over week, the need for us as leaders to decide what humans must keep has never been more urgent.
So how?
Between Acceleration and Agency
It helps to know the landscape.
On one end are the accelerationists, who believe that AI should be pushed forward as fast as possible. They argue that AI progress brings productivity, scientific breakthroughs, and higher well-being.
Slowing down is seen as either pointless (since global competition means someone else will do it anyway) or harmful (since delays might stall potential cures, solutions, or prosperity).
On the other end are the decelerationists. They see rapid AI progress as reckless, with risks ranging from mass unemployment and social destabilization to catastrophic outcomes if powerful systems escape our control.
In their view, it’s better to slow down, regulate, and align AI before it races ahead of our ability to govern it. To them, speed is not neutral: it compounds unknowns, amplifies inequalities, and risks irreversible harm.

But most of us live in neither extreme.
I believe AI development cannot be slowed: companies will push forward, and governments can’t or won’t block their momentum. And I’m thankful for those working to guide AI at the highest levels.
But my focus is always on the choices WE can, and need to, make for ourselves: for every individual to understand the significance of the changes underway and to have a sense of agency (and urgency) to take control.
Because AI’s capabilities are already beyond our comprehension.
The question is not whether we stop it. The question is whether we keep our hands on the wheel as it accelerates.
Seven Principles for Adaptive Agency
If you’re with me, then here are some of the principles I apply to decide where and how AI integrates into my work:
1. Focus on Desirability, Not Just Capability
AI dazzles because of what it can do. But that’s not the real question. The real question is: should it do it?
In last Thursday’s Lead with AI PRO session on Identity in the Age of AI, “Happiness at Work” author Tracy Brower grounded the debate in sociology: “AI is all about technology, but I think it’s all about how we work and how it affects our work.”
She recounted a conversation with a CHRO who had engineers coming to her, protesting AI:
“They were bringing AI in really, really significantly. And she had these engineers come to her and say, Stop! We went to school for engineering. We love doing engineering. We don't just want to hit a button. Let's bring AI in, but let’s not have it take over all of our favorite work.”
Efficiency is seductive, and I truly get the focus on productivity, but meaning isn’t fungible. AI should be here to significantly increase our Impact Per Hour, not to hollow out the work we care about.
Offloading the wrong tasks erodes that sense of identity and can even shrink joy, as BCG’s Debbie Lovich also told me in our conversation on AI and Joy last year.
Do: Before delegating, ask yourself: Does this task give me meaning, growth, or connection? If yes, hold on to it.
Discuss: With your team, make a list of tasks AI could do. Then ask: Which ones give us pride, energy, or a sense of craft? Keep those human by design.