Nobody reads the handbook. We've always known this.

HR teams spent months writing those documents. Benefits summaries. PTO policies. Performance review procedures. All of it sitting on an intranet that loads slowly, looks like 2009, and requires three clicks before you realize you're looking at the wrong version.

Employees figured out long ago it's faster to ask a colleague. Or a manager. Or... increasingly... ChatGPT.

The Shift Nobody Is Talking About

Here's what's happening in your organization right now.

Someone wants to know how parental leave works. They don't open the intranet. They open ChatGPT, type "how does parental leave usually work," and get an answer in 10 seconds.

Is it your answer? No. Is it accurate for your company? Rarely. Does the employee know the difference? They don't care.

According to SHRM's 2026 State of AI in HR report, 39% of organizations have officially deployed AI in HR functions. The unofficial number is far higher: every employee with an internet connection is already using AI for workplace questions, and they're not waiting for HR to approve it.

I write a lot about the gap between what leaders think is happening and what's happening at the desk level. This is a big one.

The Questions HR Never Sees

The frustrating part isn't employees using AI. It's HR having no idea what questions are being asked, what answers are being received, or whether those answers align with company culture.

If 50 people asked ChatGPT about your parental leave policy last month, you'd never know. No ticket in your system. No email trail. No Slack message. The employee got an answer from a general-purpose AI trained on millions of documents... knowing nothing about your specific company, your specific values, or how your specific manager interprets the policy.


For simple factual questions? Fine. But what about the employee who asked ChatGPT whether to report their manager for inappropriate behavior? What answer did they get? Was it aligned with your actual process, your psychological safety culture, your specific HR team's approach... or was it a generic response based on average corporate practice?

You don't know. Not knowing is the problem.

The Same Shift Happened to Google

Think about what happened to web search.

For 20 years, you needed to rank on Google to be found. Companies invested in SEO. They built content, acquired backlinks, optimized metadata. Page one was the goal.

Then large language models arrived. Millions of people now get their answers directly from AI... and never visit the underlying source. Your carefully crafted web pages? Bypassed entirely.

The same shift is happening inside your organization. Your intranet is Google. Your employees abandoned it for AI. The question is: what is AI telling them?


Uber engineers reduced time spent searching internal documentation by over 40% after implementing AI-powered internal knowledge systems. Internal AI, trained on internal data. The answers those employees get are about Uber's processes, Uber's culture, Uber's way of doing things.

Your employees are getting similar efficiency... from a tool trained on the general internet. The answers are generic. Your culture, your policies, your values are nowhere in them.

The Culture Risk Nobody Measures

Here's what worries me about the current state.

When an employee asks ChatGPT "how should I handle a conflict with my manager," they're getting the internet's average answer. An average answer based on millions of workplaces, thousands of HR policies, and no knowledge of your specific culture.

Your culture is not average. Or it shouldn't be.

If you've spent years building psychological safety, you want employees to understand your specific process for raising concerns. If you've built a feedback culture, you want employees to know what "good feedback" looks like in your organization specifically.

Generic AI answers flatten all of this. "Here are five steps to handle conflict at work" is not your five steps. It's everyone's five steps. Some of those steps might actively contradict the culture you've built.

This is especially true for organizations with strong cultures... the ones with unusual onboarding, unconventional management structures, or values that run against industry defaults. Those cultures are the hardest to communicate, and the most damaged by generic AI answers.

The Organizations Getting This Right

Some organizations figured this out. They're not fighting the AI wave. They're feeding it.

Microsoft Copilot, Glean, Guru AI... these platforms let you build an AI layer that answers employee questions using YOUR data. Your policies. Your culture documents. Your values statements. When an employee asks "how do I request time off," they get YOUR answer, not a generic one.

87% of HR professionals using AI report meaningful efficiency gains, per SHRM's 2026 research. The organizations seeing those gains aren't the ones blocking AI. They're the ones integrating it deliberately, on their own terms.

The organizations doing this well are treating internal knowledge as a product. They ask "what questions are our employees asking?" and make sure AI has the right answers before employees go looking elsewhere.

This isn't complicated in principle. It is complicated in execution, because it requires HR and tech teams to sit in the same room and genuinely agree on what the source of truth is. Most organizations skip this conversation entirely.


What Bad Leaders Don't Want You to Know

Here's something worth saying directly.

Transparency through AI is uncomfortable for bad leaders. If an employee asks ChatGPT "is my manager required to give me regular feedback," they'll get a yes. If they ask "what are my rights if my manager is creating a hostile environment," they'll get a clear answer.

Bad leaders benefit from information asymmetry. They rely on employees not knowing their rights, not knowing the process, not knowing what normal looks like. When employees don't know what good leadership is, bad leadership goes unchallenged.

AI is dismantling this asymmetry. Employees are better informed than ever. I don't see this as a problem to manage. I see it as a feature of a healthy workplace.

At Step It Up HR, the data we see consistently shows bad manager behavior thrives in environments where employees don't know what good looks like. My research found 99.5% of survey respondents had experienced one or more types of bad boss. For most of them, the barrier wasn't courage... it was information. They didn't know what they were experiencing wasn't normal.

AI is changing this. Every employee now has access to a baseline of "what should this look like?" Your job as a leader is to make sure your organization shows up in their AI answers... not as an afterthought, but as the authoritative source.

What You Should Do

A few things, in order of importance.

Audit the gap first. What questions are your employees most likely to ask AI about? Start with the obvious ones: benefits, leave, performance, pay, disciplinary procedures. Then check whether your internal documentation answers them clearly. Most organizations have documentation that is technically accurate but practically useless. Bureaucratic language. PDFs from 2018. Links that go to pages that no longer exist. Plain-English answers to real employee questions are scarce.

Feed the machine deliberately. If you're not yet investing in an AI layer over your internal knowledge base, start scoping it now. Microsoft Copilot for enterprise, Glean, Notion AI... there are options at every budget level. The goal is simple: when your employee asks, they get your answer, not a generic one. This requires good source material. If your documentation is weak, no AI tool will fix it.
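Under the hood, the "AI layer" these platforms provide boils down to retrieval plus generation: find the relevant internal document first, then answer from it. As a rough illustration of the retrieval half (and of why unanswered questions are worth logging), here is a toy sketch in Python. The policy snippets, threshold, and keyword-overlap scoring are all hypothetical stand-ins for a real search index.

```python
import re

# Hypothetical internal policy snippets -- in a real system these would
# come from your documented, owned, regularly reviewed knowledge base.
POLICY_DOCS = {
    "parental_leave": "Parental leave: 16 weeks paid, request via the HR portal 30 days ahead.",
    "time_off": "PTO: 25 days per year, approved by your direct manager in the HR portal.",
    "conflict": "Conflicts with a manager can be raised confidentially with any HR team member.",
}

def tokens(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def answer(question, threshold=2):
    """Return (doc_id, text) for the best-matching policy, or (None, None)
    when nothing matches well enough -- the gap HR should log and fill."""
    q = tokens(question)
    best_id, best_score = None, 0
    for doc_id, text in POLICY_DOCS.items():
        score = len(q & tokens(text))  # naive keyword overlap, not semantic search
        if score > best_score:
            best_id, best_score = doc_id, score
    if best_score < threshold:
        return None, None  # the unanswered question is the signal to capture
    return best_id, POLICY_DOCS[best_id]
```

The (None, None) branch is the point: a generic chatbot swallows the question silently, while your own layer can record it, so "what are employees asking that we can't answer?" becomes a report instead of a mystery.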

Treat documentation as a product, not a chore. Your internal knowledge base needs the same attention a good product gets: regular updates, user testing, someone who owns it. Most intranets have an owner on paper and nobody in practice. Change this.

Stop treating the handbook as finished. Your internal documentation is a living thing. Employees are asking questions you haven't thought to answer. The only way to know which ones is to ask. Run a quarterly review. Ask your managers what questions they hear most often. Then answer those questions in plain English and make sure your AI system knows about them.

Use the discomfort. If AI transparency is uncomfortable in your organization... find out why. The answers employees are getting from ChatGPT are often more honest than the answers they're getting from their managers. Name why this feels threatening. Then fix it.


The intranet isn't coming back. Employees found something faster, cleaner, and always available.

The question for every HR leader and tech leader reading this is simple:

Are you the source of truth for your employees... or is ChatGPT?