January 23, 2026

"People Copy People": AI Adoption Scales Socially

Citi scaled AI by embedding 4,000 peer champions, proving adoption spreads through people, not platforms.
By Daan van Rossum, Founder & CEO



When people talk about AI adoption at scale, the conversation usually starts with tools. New platforms. New copilots. New roadmaps. At Citi, the story starts somewhere quieter and far more effective: people.

Over the past two years, Citi has built an internal network of more than 4,000 AI Accelerators, supported by a smaller group of 25–30 AI Champions, reaching over 70% adoption of firm-approved AI tools across a workforce of 182,000 employees in 84 countries. The numbers were first reported by Business Insider, and they matter because they cut against a familiar enterprise pattern. Instead of central pilots stalling at the edges, Citi embedded AI into everyday work through peers.

This is not a story about experimentation. It is a story about how AI spreads inside large organizations. The most interesting part is not the tools themselves, but the structure behind them.

I have seen this pattern before in early AI Champions programs we helped design long before the current hype cycle. What works consistently is a three-layer implementation sandwich.

Strategic direction and guardrails at the top. Real experimentation inside teams at the bottom. And in between, AI Champions as connective tissue. In practice, Champions succeed because they act as workflow translators, helping teams understand where AI fits, where it adds value, and how it reshapes decisions, handoffs, and accountability.

That is why our AI fluency matrix focuses on upgrading Champions into operators, coaches, and internal reference points, rather than leaving them at the level of early adopters.

What Citi’s AI Champions Actually Do (And How Much Time It Takes)

One of the most overlooked details in Citi’s AI adoption story is that AI Champions are not full-time roles. They are embedded operators, contributing alongside their day jobs.

In practice, Champions typically spend 30–60 minutes per week on the role.

That time is not spent teaching classes or pushing tools. It is spent inside real work.

AI Champions at Citi focus on a small set of repeatable behaviors:

  • Showing AI in action inside live workflows, such as summarizing documents, drafting internal updates, analyzing datasets, or supporting development tasks
  • Helping peers get unstuck, answering questions in context rather than delivering generic training
  • Sharing concrete examples of what worked, why it worked, and how others can adapt it
  • Surfacing friction and risks back to central teams, helping refine guardrails and approved use cases

This matters because Champions are not acting as instructors. They are acting as credible peers, translating AI from possibility into practice.

Their influence compounds because it travels socially. A single example shared in a team meeting or chat often leads to multiple reuses, adaptations, and follow-on improvements, without any central coordination.

The result is a system where learning stays lightweight, adoption stays voluntary, and usage stays grounded in real work. That is exactly why Citi reached over 70% employee adoption without mandating AI use or turning it into a performance requirement.


Adoption Scales Socially, Not Technically

Citi’s program works because it recognizes a simple truth. People copy people. AI Champions and Accelerators are embedded in teams across functions, acting as local guides rather than centralized trainers. They show colleagues how AI supports real tasks such as summarizing documents, drafting internal notes, analyzing datasets, or supporting development work.

This peer-led transmission lowers psychological friction. AI feels practical, accessible, and relevant. The outcome is measurable. More than 70% of employees now use approved AI tools, a level many enterprises struggle to reach even with significant investment.

This approach also distributes usable capability, allowing thousands of employees to support others, rather than concentrating AI knowledge in a small expert group.

Structure Prevents Shadow AI And Fear-Based Adoption

Citi paired bottom-up enthusiasm with top-down guardrails. Employees use firm-approved tools. Data boundaries are explicit. Outputs are governed. In regulated environments, this structure creates the confidence required for access to expand.

Many organizations slow adoption by hesitating on governance. Citi made a clear choice: because safety and clarity came first, managers felt comfortable extending AI usage across teams. That confidence enabled adoption to expand across 11 countries, up from 8 the year before, without widespread backlash or quiet avoidance.

This structure also reduced unsanctioned AI use, giving employees a clear, supported path to learn, share practices, and improve work quality openly.

Champions Succeed When They Redesign Work

The most important design choice is what AI Champions are trained to do. In strong programs, Champions are evaluated on whether work actually changes.

Effective Champions help teams rethink workflows end-to-end. They identify friction points. They test where AI saves time and where it introduces risk. They help decide what remains human and what can be automated. This places Champions squarely in the middle layer of the implementation sandwich, translating strategy into practice and surfacing real constraints back to leadership.

This aligns with guidance from the OpenAI Academy, which frames the AI Champion role as the driver of behavioral change inside teams, where credibility, context, and consistent in-work examples matter more than tool access. Champions influence adoption by making new ways of working visible and usable, surfacing repeatable patterns, removing friction, and aligning AI usage with team priorities.

This mirrors guidance from GitHub’s internal playbook on activating AI advocates, which treats AI adoption as a change management challenge, not a technology one, and positions grassroots Champions as the human bridge between strategy and day-to-day execution. In GitHub’s model, peer-led advocates accelerate adoption by grounding AI in real workflows, sustaining learning over time, and feeding practical insight back into the organization.

Citi’s scale shows the impact of taking this role seriously. Thousands of employees became capable of using AI responsibly and consistently, without needing to become experts.

Recognition supported this outcome. Internal badges and visibility created credibility without turning AI usage into competition, reinforcing learning and shared progress.

Why This Matters Now

Across industries, most AI programs stall between executive ambition and frontline reality. Tools are deployed, but behavior remains unchanged. Citi’s approach demonstrates a different path. Scale comes from enabling people who understand the work, then supporting them with structure, recognition, and guardrails.

This is about sustainable adoption, not acceleration for its own sake.

The Bottom Line: Build AI Champions To Make AI Stick

  1. Design AI Champions as a formal role, with clear expectations tied to workflow change.
  2. Train Champions to translate work, ensuring AI improves decisions, quality, and outcomes.
  3. Embed Champions inside teams, where trust and context already exist.
  4. Support learning with guardrails, so adoption feels safe and repeatable.
  5. Measure success through changed workflows, not pilot counts or tool licenses.

Citi’s 4,000-person AI Champions network offers a clear lesson for leaders chasing scale. AI adoption falters when organizations focus on tools alone. It succeeds when they build human systems that help technology spread.

🚨 NEARLY SOLD OUT: 5 AI ASSISTANTS IN TWO WEEKS  

We have never sold out a cohort as quickly as the February 6 cohort of the new flagship AI Leader Advanced program.

If you want to:

  • Deploy 5+ AI assistants in 2 weeks (built for your real workflows)
  • Get a personalized plan that adapts to your role, goals, and industry
  • Study alongside a curated group of global senior leaders from companies including PepsiCo, Harvard, Google, and more

Then now is the time.

In just 2 weeks, you’ll design, build, and deploy at least five AI assistants tailored to your role, workflows, and industry. No code. No theory overload.

Want to reach 30,000+ business leaders applying AI in their work, teams, and organizations?
Advertise with us.

AI Leaders, Pay Attention to This 📍

  • When Agents Become Org Infrastructure: The next AI productivity leap comes from context consolidation, not faster prompting. As Notion CEO Ivan Zhao’s “10× to 30–40× engineer” example shows, leverage now comes from orchestrating and supervising multiple agents, a leadership shift inside individual contributor roles. Agents win where work is verifiable and context is shared, pushing organizations to invest in workflow instrumentation and evaluation, not more prompts. The breakout moment arrives when companies redesign operating rhythms around agents, treating them as a continuous workflow layer rather than add-ons.
  • Davos Reveals the Real AI Divide: At Davos 2026, leaders agreed AI will reshape work and power, but split on acceleration versus restraint. Jensen Huang framed AI as industrial infrastructure tied to energy and manufacturing scale, while Dario Amodei and Yuval Harari warned of control and stability risks. Work narratives diverged sharply, from job creation in trades to losses in entry-level white-collar roles. As Satya Nadella emphasized, AI impact will hinge less on models and more on states, systems, and infrastructure.
  • Why Claude Is Trained for Judgment: Anthropic’s Constitutional AI reframes safety as judgment-building rather than rule enforcement, arguing that rigid controls break down in novel situations. Claude is trained to weigh safety, ethics, and helpfulness holistically, while preserving human oversight and corrigibility even when confident. “Genuine helpfulness” is treated as high-stakes, with overcautious AI seen as potentially harmful. The implication for enterprises is clear: governance should focus on how agents make tradeoffs and deliver value, not compliance theater.
  • Exclusive for PRO Members: Key Principles to Master Vibe-Coding

    In this Masterclass, Wyatt Barnett, Vice President of Technology Enablement at NCTA, will demystify what’s really happening behind the scenes when you “just ask AI to build it,” and give you the practical mental models you need to vibe code with confidence — even if you’ve never considered yourself technical.

    This session is designed for leaders, builders, and curious professionals who want to move faster with AI-powered development without getting lost in error messages, broken environments, or black-box magic.

    You’ll walk away with:

    • A clear mental model of how modern software development actually works
    • Practical command-line techniques that make AI coding far less intimidating
    • A simple, usable understanding of Git without the jargon
    • Better instincts for diagnosing problems when AI-generated code breaks
    • The confidence to vibe code without flying blind

    The session takes place on 📍January 29th at 8 AM PT/11 AM ET/4 PM GMT/5 PM CET.

    Members, the invite is on your calendar.

    Not a member yet and want to join this exclusive Masterclass?
