When a global financial services firm sought Sam’s guidance, the problem seemed familiar. The firm had deployed AI tools across its business. Adoption was uneven, and the gap between teams was growing.
In some corners of the organization, people were already using AI to draft client materials, summarize research, and speed up analysis. In others, they avoided it entirely: unsure what was permitted, worried about quality, or skeptical that leadership really meant it. Managers were fielding questions they weren’t equipped to answer: If my team uses AI, what changes in our standards? What happens to accountability?
The leadership team quickly realized the problem wasn’t the technology. It was the people around it. The evidence is clear. BCG’s 2024 research finds top AI-performing companies invest 70% of their transformation resources in people and processes, not technology. Mercer’s Global Talent Trends 2026 finds that employee concern about AI-driven job loss has surged from 28% to 40% in two years—anxiety that impedes value creation unless leaders address it directly. The World Economic Forum’s Future of Jobs Report 2025 projects 39% of core workforce skills will change by 2030. AI has not made human development less important. It has made it the primary lever for competitive advantage.
Based on our work with senior executives—Jenny as an executive coach and leadership development expert, Sam as a global transformation leader who helps organizations redesign how they develop and deploy talent—we have identified four strategies for building the learning culture that makes AI investments work.
1. Make It Safe to Try
The first capability is cultural, not technical. Mercer’s research finds that for innovation to succeed, employees must feel safe to experiment, ideate, and face potential failure. McKinsey’s research on psychological safety finds that a positive team climate is the single most critical driver of willingness to experiment. Yet the same research finds that fewer than half of employees report working in one. That gap is where most AI adoption efforts quietly die.
“Michael,” a senior marketing and sales leader at a global consumer packaged goods company whom Jenny coached, worked with his team to define what good experimentation looked like, named the behaviors that signaled progress, and made clear that early mistakes were expected, not penalized. Within six months, voluntary AI tool usage across his team had increased by more than 40 percent, and managers who had previously avoided AI began openly sharing what they were testing in team meetings—modeling the curiosity the culture needed. “We can buy the best AI on the market,” he told Jenny. “But if our managers don’t know how to lead differently, the tools are just expensive noise.”
To build that safety, leaders should:
- Provide access to tools, focused training, and human–AI coaching at every level
- Model the right behaviors from the top: leaders who use AI openly and share what didn’t work give others permission to do the same
- Make AI fluency visible in promotion and talent decisions
- Treat adoption as a change management effort, not an IT rollout
Pro tip: Run a “psychological safety audit” before your AI rollout. Ask managers: Do your team members feel safe admitting they don’t know how to use a new tool? If the honest answer is no, address the culture first. No training or tooling will overcome a team that’s afraid to try.
