Everyone is asking the wrong question about AI agents.
"Which tool should we use?" "Should we build or buy?" "What's the ROI model?"
These are fine questions. They're not the right ones.
The right question is: do your people trust you enough to go through a messy learning curve with you?
If the answer is no, it doesn't matter which AI agent you pick. It won't stick.
The Number Nobody Talks About
The 2026 State of AI Agents Report asked enterprise leaders about their biggest scaling challenges. The headline results:
- 46% cite integration with existing systems
- 42% point to data access and quality
- 39% cite change management needs
Technology gets the top two spots, and everybody nods along. Makes sense. Integration is hard. Data is messy.
But 39%, the change management figure, deserves more attention than it gets.
Integration and data problems are solvable with engineers and budget. Change management is a leadership problem. You don't fix it with a vendor contract.
And it understates the real scale of the issue. Employee resistance doesn't show up cleanly in survey data about "challenges." It shows up as AI licenses nobody opens, tools deployed once and abandoned, pilots ending before they reach production.
Your Employees Are Using AI While Fearing It
According to HBR's February 2026 analysis, 86% of employees believe AI will improve their work. Yet 80% carry what the researchers call "AI angst": significant personal concerns about what AI means for them.
People are using the tools and fearing them simultaneously.
What does this produce? Compliance, not engagement. Employees going through the motions because a manager mandated it, not because they see genuine value. Compliance-driven adoption delivers none of the ROI you're projecting.
The same research found 88% of companies report regular AI use but struggle to convert it into measurable ROI. The tools are deployed. The results aren't there.
The gap between deployment and results? Leadership owns it.
Why People Resist
Kyndryl's research found 45% of CEOs say their employees are reluctant or outright hostile toward AI adoption. When researchers dug into why people resist organizational change, the top reason wasn't fear of technology:
- 41% said lack of trust in leadership
- 39% said they didn't understand why the change was happening
- 38% said fear of the unknown
- 27% worried about changes to their job roles
Notice what's not on the list: "the technology was too complicated."
People follow leaders they trust into uncomfortable territory. They don't follow mandates. If your team doesn't know why you're introducing AI agents, or doesn't believe you're being straight with them about what it means for their roles, no amount of tooling investment will move the needle.

What Bad AI Leadership Looks Like
I've watched this pattern play out in technology teams for years. The failure modes are predictable.
Mandate without explanation. "We're moving to AI agents by Q2." No why. No explanation of what changes. No space for questions. The team nods and goes back to the old way whenever nobody's watching.
Measure adoption by licenses. 200 Copilot seats purchased. 40% active users. Leadership declares success. The active 40% are using autocomplete on emails. The vision of transformed workflows is nowhere in sight.
Don't use the tools yourself. If you're asking your team to rethink how they work with AI, and you haven't done it yourself, they know. You lose credibility the moment someone asks you a specific question about your own workflow and you deflect.
Treat mistakes as failures. AI agents fail. They hallucinate. They miss context. They produce outputs that need heavy editing. If the first time someone on your team uses an agent it goes wrong and they get embarrassed in a meeting, they won't experiment again. Experimentation shuts down. The learning curve never gets climbed.
What Good AI Leadership Looks Like
The leaders getting real value from AI agents right now are doing something different.
They're building trust before deploying tools. Being honest about what AI adoption means for roles and workload. Admitting they don't have all the answers. Creating enough psychological safety for people to say "this didn't work, here's what I tried."
They're explaining the why. Not "because AI is the future"; that's not a reason. A reason sounds like: "Our support queue takes 4 hours to triage manually. An agent does it in 20 minutes, and I want our team working on the hard cases that need real human judgment."
They're modeling the behavior. The best AI adopters I know spend time each week going through what went wrong with their agents. They share failures openly. They treat it like a skill under construction, not a switch they've flipped.

They're not measuring seat licenses. They're measuring workflows changed, hours reclaimed, problems solved. And they're measuring the feedback loop: are people learning? Are they getting better?
The Feedback Loop Problem
Here's what I keep coming back to, from my work on Step It Up HR and the BAT framework: AI agent adoption is a team performance problem. And team performance lives or dies on feedback culture.
A team with strong feedback habits (where people speak up when something isn't working, where managers listen and adjust, where failure is treated as data) adapts to new tools faster. They iterate. They share what works. They build collective capability instead of isolated pockets of competence.
A team where feedback is dangerous, or where managers take criticism personally, freezes when change happens. Problems with AI tools don't get surfaced. Adoption stalls.
My research found that 99.5% of survey respondents have experienced one or more types of bad bosses. The behaviors that define a bad boss (avoiding hard conversations, managing by fear, taking credit and assigning blame) are precisely the behaviors that make AI adoption fail.
The Technology Is Ready
To be clear: the technology works. The 2026 State of AI Agents Report notes 80% of organizations already report measurable economic impact from AI agents. 88% expect ROI to continue growing. The tools have moved beyond proof-of-concept. The era of "let's run a pilot" is ending.
The question isn't whether AI agents deliver value. They do. The question is whether your organization is set up to capture it, and it's a leadership and culture question, not a technology one.

The companies pulling ahead on AI adoption right now aren't the ones with the best tooling. They're the ones with the strongest feedback cultures. The ones where people feel safe saying "I tried this and it failed, here's what I learned."
Cisco's CHRO Kelly Jones put it directly: "Soft Skills are the New Hard Skills." Emotional intelligence, clear communication, and the ability to build trust aren't nice-to-haves when scaling AI. They're the foundation everything else rests on.
The Question Worth Asking
Before you decide which AI agent platform to standardize on, ask yourself this:
Does my team trust me enough to look stupid in front of me while they learn something new?
If yes, almost any tool will work. If no, the tool doesn't matter.
AI adoption isn't a technology problem. It's a leadership problem wearing technology's clothes. The sooner your organization figures it out, the sooner you start getting real results.
What does your team's feedback culture look like? Is it ready for the pace AI requires?