Here's a number worth sitting with: 97% of enterprises have deployed AI agents. Only 23% report meaningful ROI.
Not a technology problem.
If 97% of companies installed the same piece of software and 77% got nothing from it, we'd call it bad software. But AI agents work fine at companies with the right foundation. The gap isn't in the code. It's in the corner office.

The Strategy Built for the Press Release
I've watched this play out dozens of times. A company announces an AI transformation initiative. Press release goes out. All-hands meeting happens. Slides get shared. And then... not much.
Writer's 2026 Enterprise AI Adoption Survey captured responses from 2,400 global leaders and found 75% of executives admit their AI strategy is "more for show than actual guidance." Seventy-five percent.
Not a niche problem. The norm.
The same report found 79% of organizations face significant AI adoption challenges and 48% of executives describe their adoption efforts as a "massive disappointment." They spent the money. Made the announcement. Never did the leadership work.
The leadership work includes redesigning processes, not layering AI on top of broken ones. Redefining roles before people become threatened by the technology. Building genuine trust before asking people to change how they work. Measuring outcomes instead of activity metrics.
Instead, most companies pick a vendor, run a pilot, declare success, and push for adoption. Then they're surprised when nothing sticks.
Your Employees Are Pushing Back
The most uncomfortable finding from the same report: 29% of employees admit to actively sabotaging their company's AI strategy. Among Gen Z workers, 44%.
Nearly a third of the workforce working against tools their employer paid for. Not from laziness. Not from ignorance. Because they don't trust the leadership behind the rollout.

I've spent years working with leaders on this exact pattern. Leadership decides AI is the answer. They don't involve the people doing the work. They roll it out top-down. They measure tool usage. They wonder why adoption numbers are low.
There's another dynamic making this worse. The same survey found 92% of C-suite leaders are deliberately cultivating "AI elite" employees who get prioritized for raises, promotions, and opportunities at three times the rate of their peers, according to the data.
So on one side, you have an elite group getting rewarded for AI adoption. On the other, 60% of executives plan layoffs for non-adopters. This isn't a transformation strategy. It's a pressure campaign. And pressure campaigns produce sabotage.
Gallup data shows only 21% of employees are currently engaged at work. When you launch a major change initiative on top of an already-disengaged workforce, you're not starting from zero. You're starting from significantly negative.
People don't resist AI. They resist feeling replaced, ignored, or manipulated. It's a leadership problem dressed up as a technology problem.
What Agentic AI Requires
MIT Sloan's research on the emerging agentic enterprise puts it plainly: "Agentic AI is spreading across enterprises faster than leaders redesign processes, assign decision rights, or rethink workforce models."
Organizations succeeding with AI agents share a common factor: they made the organizational decisions before scaling the technology. Not after.
It starts with answering hard questions first:
- Who has authority to approve what an AI agent does?
- How do we handle errors when an autonomous system makes them?
- Which roles change, and how do we support the people in those roles?
- What does good look like, and how will we know when we get there?
None of those are technology questions. They're leadership and culture questions. Most companies skip them because they're harder than buying a license and less satisfying than demoing the tool to the board.
The research has good news too. At companies getting this right, 95% of employees report AI positively impacts their job satisfaction. The technology isn't the problem. The sequence is.

The Leadership Work You're Skipping
I'm not going to tell you which platform to buy. Here's what to do before the procurement order:
Diagnose your trust baseline first. If your team doesn't trust leadership now, an AI rollout won't fix it. It will expose it. Get honest feedback before you start. The 360-degree feedback tool at Step It Up HR is one starting point, but any honest self-assessment beats none.
Define the problem, not the solution. "We need AI agents" is a solution. "Our support team spends 60% of their time on tier-1 requests requiring no human judgment" is a problem. Solve the problem. Let the solution follow from there.
Involve the people doing the work before rollout, not after. They know the edge cases, the exceptions, the places things break. Skip their input and you're engineering for a process that exists only in PowerPoint.
Measure outcomes, not adoption rates. Tool usage is not a success metric. What problem were you solving? Did you solve it? By how much? Those are the only questions worth asking.
Plan for errors before they happen. AI agents make mistakes. What's the escalation path? Who reviews edge cases? Who's accountable when the agent does something unexpected? If you haven't answered these before go-live, you're setting your team up to clean up after a system they never trusted in the first place.
The Real Question
The AI is ready. It's been ready for a while. The limiting factor isn't model quality or agent capabilities or integration complexity. It's whether leaders are willing to do the unglamorous organizational work required for technology adoption to stick.
Seventy-five percent aren't, by their own admission. It's why 48% call their efforts a massive disappointment.
Companies cracking this aren't the ones with the most sophisticated AI stack. They're the ones where leaders sat with their teams, worked through the hard questions, and built the kind of trust where change is possible.
The work doesn't start with a vendor. It starts with a conversation.
If you're not sure where your team's trust stands right now, start there. Figure out where you are before deciding where you're going. Everything else comes after.