Ontologies won’t save you from a messy SharePoint: why the ‘Truth Layer’ matters for AI

"Ontology is lining up to be the buzzword of 2026"

I'm hearing that everywhere now, but really?


Palantir’s rise has put ontological modelling back in the spotlight – their Foundry platform is built on it. Microsoft is now moving ontology into Fabric. The race is on.


It makes sense. Ontologies give generative AI something it desperately needs: grounding.

LLMs are brilliant at pattern-matching and language, but terrible at enforcing logic. They’ll happily smooth over contradictions or hallucinate details with total confidence. Ontologies provide a formal structure – a semantic map that tells your systems what’s related to what, and under which rules.


When you need precision over plausibility, ontologies are one of the most reliable ways to keep AI from going off the rails.


But there’s a hard truth we see every week:

A perfect ontology sitting on top of contradictory, decaying knowledge will still produce confident nonsense.

And that’s where most enterprises are right now.


Ontology is the map. Your knowledge is the terrain.

Simple analogy:

  • Your ontology is the city map – clear streets, zones, rules of the road.

  • Your knowledge bases (SharePoint, Confluence, ServiceNow, email, PDFs) are the actual city – buildings, signs, traffic lights, roadworks.


If the map says “one-way street” but the physical street has no sign, two lanes of traffic, and roadworks everywhere, the drivers don’t care how elegant the map is. They’ll still crash.

That’s what happens when you:


  • Update a refund policy in one knowledge base but not the others

  • Translate a procedure into three languages but only maintain one

  • Let critical articles rot with broken links and stale owners

  • Split responsibilities between KM, CX, ops and data… with no single “truth owner”


Ontology helps you model the concepts – RefundPolicy, CustomerType, Channel, Region – together with their relationships and constraints.
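
To make that concrete, here is a minimal sketch in Python of what such a model could look like. The class names and the “one canonical policy per market” constraint are illustrative assumptions for this post, not a real customer ontology or a prescribed modelling language:

```python
# Hypothetical sketch: RefundPolicy, Market and Channel are illustrative
# concept names, not a real enterprise ontology.
from dataclasses import dataclass


@dataclass(frozen=True)
class Market:
    code: str            # e.g. "UK", "DE"


@dataclass(frozen=True)
class Channel:
    name: str            # e.g. "web", "contact-centre"


@dataclass
class RefundPolicy:
    policy_id: str
    market: Market
    channels: list[Channel]
    source_url: str      # the knowledge article this concept points at


def assert_one_canonical_policy_per_market(policies: list[RefundPolicy]) -> None:
    """Constraint: each market has exactly one canonical refund policy."""
    seen: dict[str, str] = {}
    for p in policies:
        if p.market.code in seen:
            raise ValueError(
                f"Market {p.market.code} has conflicting policies: "
                f"{seen[p.market.code]} and {p.policy_id}"
            )
        seen[p.market.code] = p.policy_id
```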


But if the actual content behind those concepts is contradictory, outdated or missing, the best your AI can do is consistently reflect the mess.


The quiet problem: the unstructured “truth layer”

Most AI governance conversations are happening in data and analytics:

  • Data catalogues

  • Master data

  • Lineage

  • Fabric / Lakehouse design

  • Now: ontologies


Meanwhile, the unstructured knowledge layer that AI actually reads – the stuff agents and customers use – sits in KM, CX, ops, or “owned by everybody so actually owned by nobody”.

That layer is usually:


  • Spread across SharePoint, Confluence, ServiceNow, internal portals and legacy sites

  • Full of ROT (Redundant, Outdated, Trivial) content

  • Littered with broken links, duplicates and conflicting versions

  • Missing clear owners, SLAs and governance


From an AI point of view, this is the ground truth. This is what your RAG pipelines, copilots and virtual agents are pointing at.

And it’s exactly the layer most ontology conversations skip.


Where index AI sits: maintaining the terrain so the map actually works

At index AI Ltd, we don’t build ontologies.


We make sure the knowledge your ontologies (and AI) depend on is clean, consistent and explainable.


Think of what we do as a “truth maintenance loop” for unstructured knowledge:


1. index Scan – always-on knowledge health

index Scan connects to platforms like SharePoint, Confluence and ServiceNow and continuously looks for:

  • Contradictions in policies and procedures

  • Duplicates and near-duplicates across teams and regions

  • ROT and stale content

  • Broken links and missing references

  • Gaps in ownership and governance


Crucially, we prioritise by impact – traffic, risk, regulatory exposure – not just volume. So teams don’t drown in a 20,000-item issues list.
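
Index Scan’s internals aren’t the point here, but purely as an illustration of impact-based prioritisation, a toy version of the idea might look like this – the weights and field names are invented for the example, not the product’s actual logic:

```python
# Toy illustration of impact-based prioritisation (not index Scan's real logic):
# score each finding by how much it matters, not just how many there are.
from dataclasses import dataclass


@dataclass
class Finding:
    article_id: str
    issue: str           # "contradiction", "broken_link", "duplicate", "rot"
    monthly_views: int   # traffic to the affected article
    regulatory: bool     # does the content carry regulatory exposure?


ISSUE_WEIGHT = {"contradiction": 5.0, "broken_link": 2.0, "duplicate": 1.5, "rot": 1.0}


def impact_score(f: Finding) -> float:
    """Higher = fix sooner. Traffic and regulatory risk outweigh raw volume."""
    base = ISSUE_WEIGHT.get(f.issue, 1.0) * (1 + f.monthly_views / 1000)
    return base * (3.0 if f.regulatory else 1.0)


def triage(findings: list[Finding], top_n: int = 50) -> list[Finding]:
    """Return only the highest-impact findings instead of a 20,000-item list."""
    return sorted(findings, key=impact_score, reverse=True)[:top_n]
```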

This is where you discover whether your knowledge layer is stable enough to support the ontology you’re designing.


2. index Solve – from findings to governed fixes

Finding problems is easy. Fixing them at scale, with governance, is not.

index Solve turns Scan’s insights into structured, auditable remediation:

  • Create merge/retire/update actions with preview

  • Route them into tools like Jira or ServiceNow for owned workflows

  • Bake in approvals, change history and one-click rollback

  • Track what was changed, why, and by whom


This matters for ontology work. If your model says “there is one canonical refund policy per market”, index Solve is how you actually move the messy content estate toward that reality – safely and visibly.
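
As a mental model of what “structured, auditable remediation” has to carry, here is a hypothetical sketch – the fields are illustrative assumptions, not index Solve’s actual schema:

```python
# Hypothetical shape of a governed remediation action (illustrative only):
# every change is previewed, approved, attributable and reversible.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Literal, Optional


@dataclass
class RemediationAction:
    action_id: str
    kind: Literal["merge", "retire", "update"]
    source_articles: list[str]          # e.g. the duplicated policy pages
    target_article: Optional[str]       # canonical article after a merge
    reason: str                         # why the change is being made
    requested_by: str
    approved_by: Optional[str] = None   # set once the content owner signs off
    executed_at: Optional[datetime] = None
    previous_versions: dict[str, str] = field(default_factory=dict)  # article id -> version

    def rollback_plan(self) -> dict[str, str]:
        """Versions to restore if the change has to be reverted."""
        return dict(self.previous_versions)
```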


3. index Shift – moving knowledge without losing trust

Many ontology programmes go hand-in-hand with platform change:

  • ServiceNow → Confluence

  • Legacy portals → modern KBs

  • Regional consolidation


index Shift handles governed migration of knowledge between platforms:


  • Move cleaned content into the right target structures

  • Preserve permissions, redirects and audit evidence

  • Align content with new taxonomies or ontological concepts


This is how you avoid a “greenfield ontology, brownfield content” situation – shiny semantic model, same old chaos underneath.
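
For illustration, a governed migration record might carry something like the following – again, the field names are hypothetical rather than index Shift’s real data model. The point is that the move carries redirects, permissions and audit evidence with it, not just the content:

```python
# Illustrative sketch of a governed migration record (hypothetical fields).
from dataclasses import dataclass


@dataclass
class MigrationRecord:
    source_url: str                      # e.g. the old ServiceNow article
    target_url: str                      # its new home in Confluence
    redirect_created: bool               # old links keep resolving
    permissions_mapped: dict[str, str]   # old group -> new group
    taxonomy_tags: list[str]             # alignment with the new ontology's concepts
    checksum: str                        # evidence the content arrived unchanged


def audit_row(rec: MigrationRecord) -> str:
    """One line of audit evidence per moved article."""
    return (
        f"{rec.source_url} -> {rec.target_url} | "
        f"redirect={rec.redirect_created} | sha256={rec.checksum}"
    )
```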


Ontology + Knowledge Health: both, not either/or

So how should CDOs, Heads of KM and AI leaders think about this?

A few simple rules of thumb:

  1. Don’t skip the terrain. Ontology design workshops are pointless if your knowledge base is so inconsistent that you can’t answer “which document is right?”

  2. Measure knowledge health. Before patting yourself on the back for your new ontology, ask: How much of our content is ROT (redundant, outdated, trivial)? Where are our contradictions across products, regions and channels? How many critical links are broken or pointing to stale content? Do we have clear owners and SLAs for the knowledge AI actually reads? (A minimal scorecard sketch follows this list.)

  3. Close the loop. Ontology gives you the model of how things should relate. Tools like index Scan, index Solve and index Shift give you the operational loop to make reality match the model – and keep it there.
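
As a rough illustration of rule 2, a knowledge-health scorecard could be as simple as this – the metric names and the one-year staleness threshold are assumptions for the example:

```python
# Hypothetical knowledge-health scorecard: answers the questions in rule 2
# with numbers rather than gut feel.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Article:
    owner: Optional[str]
    last_reviewed_days: int
    broken_links: int
    is_rot: bool            # flagged redundant, outdated or trivial
    contradicts: list[str]  # ids of articles it conflicts with


def health_scorecard(articles: list[Article]) -> dict[str, float]:
    n = len(articles) or 1
    return {
        "rot_pct": 100 * sum(a.is_rot for a in articles) / n,
        "contradiction_pct": 100 * sum(bool(a.contradicts) for a in articles) / n,
        "broken_link_total": float(sum(a.broken_links for a in articles)),
        "ownerless_pct": 100 * sum(a.owner is None for a in articles) / n,
        "stale_pct": 100 * sum(a.last_reviewed_days > 365 for a in articles) / n,
    }
```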


If you’re building ontologies for GenAI, start here:

If you’re about to invest serious time and money, I’d start with three questions:

  1. Can you see where your current knowledge is contradicting itself?

  2. Do you have a governed way to fix those issues at scale?

  3. When you move or re-platform content, do you keep redirects, permissions and audit intact?


If the answer to any of those is “not really”, your ontology project is going to be doing a lot of heavy lifting over a shaky foundation.

That’s exactly the gap we built index Scan, index Solve and index Shift to close.

If you’d like to see what this looks like in practice, I’ve put together a 2-minute walkthrough video of Scan in a real enterprise knowledge estate – happy to share the link or talk through how this supports your ontology and GenAI roadmap.
