AI is not just making content. It is making a mess.
- index

- Feb 10
- 5 min read

I saw a post doing the rounds quoting Ted Chiang. The gist is sharp: generative AI is brilliantly good at lowering our expectations of what we read and what we write. It makes “good enough” feel normal. It reduces intention. Less craft, less care, less meaning.
It is a strong critique and it is worth sitting with. But I think the real damage is not only cultural or artistic. The more immediate, expensive damage is operational.
AI turns every organisation into a content factory, and most organisations were already struggling to keep their knowledge bases accurate before this wave hit. Now we are pouring petrol on that fire.
The uncomfortable truth: AI makes it easy to create, not easy to know
If you want a scary sentence for leaders, it is this:
AI will massively increase the volume of knowledge your business produces, while making it harder to tell what is true, current, approved, and safe.
That is the bit people miss while they chase adoption metrics.
Because the story we tell ourselves is comforting. AI saves time. AI reduces workload. AI clears backlogs. AI helps everyone communicate better.
Then reality shows up:
- Every “quick draft” becomes a document that might get shared, pasted into a wiki, or sent to a customer.
- Every team spins up its own versions of policies, processes, templates, FAQs, guidance notes.
- Every new AI assistant encourages more content creation, because it always needs “just one more page” to ground itself.
- People stop hesitating. The blank-page fear disappears. Content spills into lunch breaks, evenings, “just one more tweak”.
The result is not a smarter organisation. It is a louder one.
Why the Chiang critique is right, but incomplete
Where I agree with the spirit of it: AI can flatten standards. It can make mediocre writing feel acceptable because it is fluent and fast. It can make us less deliberate because there is always another instant answer.
Where I would push back: the technology is not inherently “dehumanising” in some mystical way. The problem is incentives and friction.
When it becomes near-zero effort to generate plausible text, we will generate a lot of plausible text. And then we will store it. And then we will search it. And then we will ask AI to summarise it. And then we will act on it.
That is not a philosophy problem. That is a knowledge operations problem.
This is how knowledge bases get polluted
Most enterprise knowledge bases were never designed for an infinite supply of new content. They were designed for humans writing slowly, with some level of review, with natural limits.
AI removes those limits.
So you get predictable failure modes:
- Duplication explodes: ten versions of “the process” across Confluence, SharePoint, PDFs, ticket comments, and someone’s Teams chat paste. (A sketch of catching these appears at the end of this section.)
- Contradictions multiply: one page says do X, another says do Y, and both sound confident. Now add AI rewriting both of them and spreading the contradiction faster.
- Ownership gets fuzzier: if something was AI-assisted, who owns it? Who approves it? Who is accountable when it is wrong?
- Recency starts masquerading as correctness: the newest doc wins because it looks current. Except it might be the newest wrong doc.
- The verification tax goes up forever: every reader now has to ask “is this right?”, and the fatigue builds. People stop checking. Mistakes become normal.
And once AI assistants sit on top of that mess, the mess becomes an output problem.
Hallucinations get all the blame, but the bigger issue is that the model is faithfully reflecting a chaotic, ungoverned knowledge estate.
Garbage in, plausible garbage out.
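To make the duplication point concrete, here is a minimal sketch of what catching near-duplicate pages can look like. It is illustrative only: the sample pages, the word-shingle approach, and the 0.8 threshold are all assumptions, not a description of any particular product’s scanner.

```python
# Illustrative only: flag near-duplicate pages using word shingles and
# Jaccard overlap. The sample pages and the 0.8 threshold are assumptions.
import re
from itertools import combinations

def shingles(text: str, size: int = 3) -> set[tuple[str, ...]]:
    """Break text into overlapping word n-grams ("shingles")."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(words[i:i + size]) for i in range(max(len(words) - size + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets: 1.0 means identical wording."""
    return len(a & b) / len(a | b) if a | b else 0.0

pages = {  # hypothetical knowledge estate: {location: body text}
    "confluence/refund-process": "To issue a refund, raise a ticket and wait for finance approval.",
    "sharepoint/refunds-v2": "To issue a refund raise a ticket and wait for finance approval!",
    "teams/paste-2023": "Refunds are self-serve now, no finance approval is needed.",
}

THRESHOLD = 0.8  # assumed cut-off; tune it against pages your owners have reviewed
fingerprints = {loc: shingles(body) for loc, body in pages.items()}
for (loc_a, fp_a), (loc_b, fp_b) in combinations(fingerprints.items(), 2):
    score = jaccard(fp_a, fp_b)
    if score >= THRESHOLD:
        print(f"possible duplicate: {loc_a} <-> {loc_b} (similarity {score:.2f})")
```

Real estates need smarter matching than this, but even a crude pass tends to surface duplication that no human has time to find by reading.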
This is why AI becomes a KB problem
When leaders say “we need more employees to use AI,” what they often mean is “we want faster output.”
But faster output is not the same as better decisions, better customer outcomes, or lower risk.
If your knowledge base is drifting, AI adoption amplifies the drift. It accelerates it. It makes the wrong thing easier to produce, easier to find, and easier to repeat.
So yes, AI creates a knowledge base problem, even if you never intended to build one.
And this is exactly where index comes in.
Why index is a necessity when knowledge is getting out of control
index is built around a simple premise:
If AI is going to sit inside your business, your knowledge has to be treated like a governed asset, not a content dump.
That sounds obvious, but it is rare in practice. Most companies either:
- invest in search, chat, tagging, and “findability”, or
- run one-off cleanups that drift back into chaos six months later.
index is different because we focus on the health of the knowledge itself, with evidence, governance, and continuous improvement.
Here is what that means in plain English:
- We map what knowledge you have, where it lives, and where the risk is hiding (Scope).
- We detect the rot at scale: outdated content, duplicates, contradictions, missing owners, missing review cycles, policy drift (Scan). A minimal sketch of this step follows the list.
- We fix it through governed workflows with human accountability, approvals, and audit trails, not random edits and good intentions (Solve).
- We help you move and consolidate knowledge safely when platforms change, because they always do (Shift).
- We keep it healthy, because knowledge is not a project, it is an operating system (Sustain).
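As a flavour of the Scan step, here is a minimal sketch that flags pages with no owner or an overdue review. The Page fields, the 180-day default cycle, and the inventory are assumptions for illustration, not index’s actual schema or rules.

```python
# Illustrative only: flag pages that are past their review date or have no
# accountable owner. The fields and the 180-day default are assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Page:
    path: str
    owner: str | None           # an accountable human, not "whoever wrote it"
    last_reviewed: date | None  # when an owner last signed it off
    review_cycle_days: int = 180

def scan(pages: list[Page], today: date) -> list[str]:
    """Return one finding per governance gap, ready to route into a workflow."""
    findings = []
    for p in pages:
        if p.owner is None:
            findings.append(f"{p.path}: no owner, so nobody is accountable")
        if p.last_reviewed is None:
            findings.append(f"{p.path}: never reviewed")
        elif today - p.last_reviewed > timedelta(days=p.review_cycle_days):
            due = p.last_reviewed + timedelta(days=p.review_cycle_days)
            findings.append(f"{p.path}: review overdue since {due}")
    return findings

# hypothetical inventory
inventory = [
    Page("wiki/expenses-policy", owner="finance-lead", last_reviewed=date(2023, 1, 10)),
    Page("wiki/ai-faq", owner=None, last_reviewed=None),
]
for finding in scan(inventory, today=date(2024, 2, 10)):
    print(finding)
```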
The point is not to create more content. The point is to stop content becoming pollution.
If you want the short version of our philosophy, it is in this post:
The pub test: would you trust this if it was about safety?
Here is a simple way to cut through the hype.
If your AI assistant gave an answer about something important, say safety, compliance, customer commitments, financial controls, or clinical guidance, would you trust it?
If the honest answer is “only after someone checks it,” then you have not solved AI. You have just created a faster way to generate drafts that require human verification.
That is not transformation. That is a verification treadmill.
The way off the treadmill is not “better prompts” and it is not “more training”.
It is knowledge governance. Ownership. Approval paths. Version control that means something. And a system that detects drift before it becomes normal.
So what should leaders do right now?
If you are rolling AI out internally, do these three things before you chase usage:
1. Decide what “approved” means, and where approved knowledge lives.
2. Put owners and review cycles on the knowledge that matters most.
3. Measure knowledge health like you measure security or uptime. A rough sketch of what that could look like follows.
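On the third point, “measure” can start very simply. Here is a rough sketch of a single health number built from ownership coverage, review freshness, and duplication. The inputs and the equal weighting are assumptions; swap in whatever your governance actually enforces.

```python
# Illustrative only: one trackable number for knowledge health. The three
# inputs and the equal weighting are assumptions, not a standard metric.
def health_score(total_pages: int,
                 pages_with_owner: int,
                 pages_in_review_cycle: int,
                 duplicate_pages: int) -> float:
    """Return 0-100 from ownership coverage, review freshness, duplication."""
    if total_pages == 0:
        return 100.0
    ownership = pages_with_owner / total_pages
    freshness = pages_in_review_cycle / total_pages
    uniqueness = 1 - (duplicate_pages / total_pages)
    return round(100 * (ownership + freshness + uniqueness) / 3, 1)

# hypothetical snapshot: 2,000 pages, 900 owned, 600 reviewed on cycle, 400 dupes
print(health_score(2000, 900, 600, 400))  # -> 51.7
```

The exact formula matters less than trending it over time, the way you would trend uptime.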
Then you can build AI on top of something stable, instead of building AI on top of a landfill and wondering why it smells.
AI is not going to reduce work by default. It will turn the volume up. If your knowledge is not ready, it will turn the chaos up too.
index exists for the part everyone tries to ignore until it bites them: keeping enterprise knowledge clean, governed, and trustworthy in a world where content is becoming infinite.


