Blog


The Most Common Ways AI Fails When Governance Is Too Weak
AI governance often fails in the same places: weak controls, poor knowledge, unclear ownership, and limited oversight. Here’s where AI systems break, and how to prevent it.

index
Mar 13 · 7 min read


index - why us? why now?
GenAI is massively increasing the volume of content businesses produce: policies, procedures, work instructions, customer guidance, internal comms, “quick drafts” that quietly become “official.” That’s the part everyone celebrates. The part they miss is what happens next: the more content you create, the harder it becomes to know what’s correct, current, approved, and safe.

index
Feb 11 · 2 min read


AI Won’t Reduce Work. It Will Turn the Volume Up, Unless Your Knowledge Is Ready
Everyone’s obsessing over the same question right now: “How do we get more people in the business using AI?” Because on paper it’s brilliant. It can bash out first drafts, summarise a mountain of info, untangle code, and generally take the boring weight off people’s shoulders so they can do the higher-value stuff.

index
Feb 10 · 3 min read


Honestly, If a Copilot Fixed This… We’d Be Out of a Job.
index is not “just an AI layer on top of your docs”
If you’ve taken some time to look at what we do and thought, “Hang on… can’t we just add a copilot / RAG / search upgrade and call it done?”, you’re forgiven, because we've heard this a few times now - hence the blog post!

index
Feb 5 · 5 min read


Video: Clean Knowledge In. Trusted Answers Out.
AI rarely fails because the model is “dumb”; it fails because the knowledge it’s pulling from is messy. If you’ve ever had an AI answer at work that sounded confident but made you think “hang on… is that actually true?”, you’ve hit the real issue: shaky inputs create shaky outputs. In this short video I explain why, in 2025/2026, enterprise AI is only as trustworthy as the knowledge base behind it, and why duplicates, contradictions and outdated docs quietly turn into errors…

index
Feb 2 · 1 min read


Clean Knowledge In. Trusted Answers Out.
Here’s the reality in 2025/2026: there isn’t one universal answer to “where does AI get its facts from?”, because most frontier labs don’t fully disclose their training mixes anymore. But we can anchor this in what’s publicly documented and what regulators and researchers keep pointing at. You know what people get wrong about AI? They think it “looks things up” like a clever librarian. Most of the time it doesn’t. Most of the time it’s answering from whatever got baked into it during training…

index
Jan 29 · 2 min read


From AI Hype to Real Value: Why We Must Keep Humans in the Loop
I recently saw a LinkedIn post that used an image as its eye-catcher to underline the article’s main point: a switch from hype to disciplined work creating real value. The switch depicted in the image, however, was physically incapable of making that change. The labels for what it could be flipped to were on the left and right, while the type of switch shown could only be flipped up and down. Do we switch it up, down, left, right? Does it always turn the writing to green and…

index
Jan 28 · 2 min read


Why the Future of AI Depends on "Vector Databases" (in Layman's Terms)
If you’ve been following the AI boom, you’ve probably heard the term Vector Database tossed around. It sounds like something straight out of a physics textbook, but it’s actually the "secret sauce" making modern AI - like ChatGPT or advanced recommendation systems - work so efficiently. But what are they, and why should you care? Let’s break it down without the jargon. What is a Vector Database, anyway? Traditional databases (the ones we’ve used for decades) store information…
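The teaser above is about vector databases, which at their core rank stored embedding vectors by similarity to a query vector. As a toy illustration only (not code from the post), here is a minimal similarity search in pure Python; the document names, vectors, and the `search` helper are all invented for the sketch, and real systems use learned embeddings with hundreds of dimensions plus approximate-nearest-neighbour indexes.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means 'same direction'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "vector database": each document is stored as an embedding vector.
# (Hypothetical 3-dimensional vectors; real embeddings are much larger.)
docs = {
    "refund policy":   [0.9, 0.1, 0.0],
    "shipping times":  [0.1, 0.8, 0.2],
    "api rate limits": [0.0, 0.2, 0.9],
}

def search(query_vec, k=1):
    """Return the k document names most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine_similarity(query_vec, docs[d]),
                    reverse=True)
    return ranked[:k]

print(search([0.85, 0.15, 0.05]))  # nearest neighbour is "refund policy"
```

The point of the sketch: the database never matches keywords, it matches geometry, which is why semantically similar questions can retrieve the same document even with no words in common.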

index
Jan 14 · 3 min read


Why AI hallucinates and why the perfect answer can be dangerously wrong
You ask an AI a question (ChatGPT or any other - they're all as guilty as one another) and it replies instantly with something that sounds like it came from a well-read expert who also happens to be polite, structured, and suspiciously confident. It's fluent. It's tidy. It even gives you bullet points.
And then you discover one awkward detail. It is wrong.

index
Dec 20, 2025 · 6 min read


Stop Blaming the Bot: The Real Problem Is Your Knowledge
We’ve all spent the last 18 months talking about “AI for customer service” like it’s a magic trick. Better bots. Smarter assistants. RAG for everything. And yet… handle times barely move, recontacts creep back up, agents still ask in Slack: “Which article is actually correct?” It’s not that the AI is bad. It’s that we’re asking it to reason on top of knowledge that’s fundamentally broken. Messy, contradictory, duplicated, outdated, scattered across Confluence, SharePoint, Se…

index
Dec 17, 2025 · 4 min read


Ontologies won’t save you from a messy SharePoint: why the ‘Truth Layer’ matters for AI
"Ontology is lining up to be the buzzword of 2026." I'm hearing that everywhere now, but really? Palantir’s rise has put ontological modelling back in the spotlight – their Foundry platform is built on it. Microsoft is now moving ontology into Fabric. The race is on. It makes sense. Ontologies give generative AI something it desperately needs: grounding. LLMs are brilliant at pattern-matching and language, but terrible at enforcing logic. They’ll happily smooth over contradictions…

index
Dec 17, 2025 · 4 min read
