[Image: A corporate executive staring blankly at an AI interface, looking passive and detached]

Earlier this year, economists started noticing something odd about the formula behind Trump's "Liberation Day" tariffs. The numbers had a distinct pattern. Someone ran the same prompt through ChatGPT, then through Gemini, then through Grok. According to TechRepublic, every major AI model produced the same formula the White House used to calculate billions of dollars in trade policy.
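
For context, and as widely reported at the time, the pattern was simple enough to fit on one line. Each country's rate came out to roughly:

tariff rate = max(10%, (US goods trade deficit with that country ÷ US imports from that country) ÷ 2)

Which is, give or take, the answer the chatbots returned when asked for a quick way to set tariffs that would wipe out a trade deficit.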

Nobody confirmed AI wrote the tariffs. The White House denied it. But the story spread because it was believable. Because everyone who works with executives has seen a version of it: the slide deck built from AI bullet points, the strategy document that feels oddly generic, the email no human seems to have written.

This story is a leadership warning. And the warning has nothing to do with politics.

The Machine Is Designed to Pull You Back

My friend Ben Morton spent years as a military officer before becoming a leadership coach. When I talked to him about AI, he made a point worth sitting with: AI tools are built on the same psychological architecture as social media. Variable reward schedules. Intermittent reinforcement. The same mechanics that keep you scrolling at midnight keep you asking the chatbot one more question.

Social media hijacks the brain's dopamine pathways by delivering unpredictable rewards. Sometimes the post gets 200 likes, sometimes three. The unpredictability is the feature, not a bug. AI does something similar: sometimes the answer is brilliant, sometimes mediocre, but you keep going back because it's fast, frictionless, and good enough.

Ben's warning wasn't "don't use AI." It was: understand what you're dealing with. You're not using a neutral tool. You're engaging with a system designed to make you return. Built on the same psychology as the slot machine.

What Happens to Your Brain

[Image: A brain split in half: one side vibrant with neural connections, the other dim and replaced by circuit boards]

Here's where it gets uncomfortable.

A 2024 study published in the journal Societies found that frequent reliance on AI tools weakens critical thinking skills. The mechanism is cognitive offloading: when you delegate thinking to an external system, your brain stops building the mental muscles that do the thinking. PsyPost reported that people who used AI tools more frequently showed lower performance on structured critical thinking tasks.

This is not science fiction. This is the same pattern we saw with GPS navigation. People who stopped using physical maps showed measurably worse spatial memory and wayfinding. Your brain prunes pathways it doesn't use. When you stop using a muscle, your body stops investing in it.

And in March 2026, Harvard Business Review published something worth pinning to every executive's wall. A BCG study of 1,488 US workers found that overuse of AI causes what researchers now call "AI brain fry": mental fatigue from excessive use or monitoring of AI tools beyond your cognitive capacity. Workers described a "buzzing" feeling, difficulty focusing, and slower decision-making.

You thought AI would free up your mental bandwidth. For many leaders, it's consuming it.

I've Felt This Myself

I'll be honest. There have been days recently when I noticed myself reaching for the chatbot before spending two minutes thinking about the problem. Not out of laziness. Because the tool is frictionless and my brain is looking for shortcuts.

This is the trap. Not evil intent. Not stupidity. The steady, quiet erosion of the habit of thinking for yourself.

I've caught myself asking AI to draft an email I've written a thousand times. Asking it to summarise a document I should read myself. Asking it to generate options for a decision I've been paid to make. And each time I do it, I get the output I need in the moment, but I've practised a little less, wrestled a little less, owned a little less.

The leaders I worry about aren't the ones who refuse to use AI. They're the ones who've stopped noticing the difference between "AI helped me think through this" and "AI thought through this for me."

The Patterns That Show Up

There are some specific patterns I've started noticing in leaders who've drifted too far down the dependency curve.

They struggle to think out loud. Put them in a room without a screen and ask them to reason through a problem verbally. It's harder than it used to be. The ideas feel foggier. The confidence is thinner.

Their writing sounds like everyone else's. Because it was written by the same tool everyone else uses. The voice is gone. The opinions are muted. Leadership communications start feeling like press releases.

They defer on their own domain. A CTO who's been writing software for twenty years starts second-guessing their own instincts because the AI said something different. A senior HR leader runs every policy decision through a chatbot before trusting their own experience. This is a red flag. Your twenty years of experience is worth something. Use it.

Their reasoning is invisible. Ask them why they made a particular call and they're vague. Because the AI made it and they rubber-stamped it. They didn't internalise the reasoning, so the explanation isn't there.

The Tariff Problem at Scale

[Image: A confident business leader working through a decision matrix independently at a whiteboard]

Go back to the tariff story. Whether AI was used or not is almost beside the point. The story spread because it was believable. And when leaders outsource their judgment, they don't necessarily get worse answers. They lose accountability.

If the AI made the call, who owns the outcome?

Forbes covered this tension in April 2026, noting that AI requires "human-led, AI-powered strategies" where leaders maintain agency over ethical decisions. The framing matters. AI-powered means AI is the tool. AI-led means you've handed over the wheel.

What You're Losing

Let me be direct about what cognitive offloading costs you as a leader.

Your judgment. Good leadership judgment comes from wrestling with hard problems over time. Pattern recognition, risk assessment, reading people: these are built through repetition. When you outsource the wrestling to AI, the muscle atrophies.

Your accountability. If you're unable to articulate why you made a decision in your own words, you don't own it. You're executing someone else's output.

Your credibility. Your team knows. They can tell the difference between a leader who's thought something through and one reciting a briefing from a chatbot. Especially the people who've been in your organisation for years.

Your instincts. This is the long-term risk nobody mentions. Instinct isn't mystical. It's compressed pattern-matching from thousands of past decisions. When you stop making decisions, wrestling with them, owning them, you stop feeding the system. The instinct atrophies too.

This Isn't About Not Using AI

I use AI every day. It's useful for first drafts, for surfacing research, for challenging my thinking when I'm stuck. Ben Morton doesn't say avoid AI. He says AI should inform your decisions, not make them. Deloitte's 2026 research on AI-powered decision-making makes the same point: the goal is "quality decisions anchored in human agency."

The distinction matters. AI as a thinking partner is fine. AI as a replacement for thinking is the problem.

One test I use: before asking AI anything important, I spend five minutes writing down my own position. Then I ask. Then I see how much the AI answer shifts my view. If I'm changing my mind based on AI output without being able to articulate why, something's off. I'm not learning. I'm outsourcing.

Another test: would you be comfortable explaining your reasoning to your team without mentioning AI at all? If the answer is no... if the decision only makes sense because the AI said so... go back and do the thinking yourself.

The Question Worth Sitting With

What's the last important decision you made... about a person, a strategy, a risk... where you did the thinking yourself? No AI summary. No chatbot draft. You, the problem, and some time.

If you're struggling to remember, you might already be further down the dependency curve than you thought.

The risk isn't AI replacing your job. The risk is letting AI replace your judgment, and then wondering why your career has stalled.

Think first. Then ask the machine.