Blog


Video: Clean Knowledge In. Trusted Answers Out.
AI rarely fails because the model is “dumb”; it fails because the knowledge it’s pulling from is messy. If you’ve ever had an AI answer at work that sounded confident but made you think “hang on… is that actually true?”, you’ve hit the real issue: shaky inputs create shaky outputs. In this short video I explain why, in 2025/2026, enterprise AI is only as trustworthy as the knowledge base behind it, and why duplicates, contradictions and outdated docs quietly turn into errors…

index AI · 3 days ago · 1 min read


The Internet Is Filling With AI Slop. Your Company Might Be Too.
AI Slop Is Everywhere. The Fix Is Not “Better AI”; It’s Better Provenance. You know what’s changed in the last year or two? It’s not just that AI can make content; it’s that it can make plausible content at industrial scale, and the incentives online reward whatever gets clicks and shares. That might be harmless entertainment, but it becomes a serious issue the moment it influences business decisions, customer conversations, internal comms, or even what teams treat as “true”…

index AI · 3 days ago · 4 min read


Clean Knowledge In. Trusted Answers Out.
Here’s the reality in 2025/2026: there isn’t one universal answer to “where does AI get its facts from?”, because most frontier labs no longer fully disclose their training mixes. But we can anchor this in what’s publicly documented and what regulators and researchers keep pointing at. You know what people get wrong about AI? They think it “looks things up” like a clever librarian. Most of the time it doesn’t. Most of the time it’s answering from whatever got baked into it during training…

index AI · 7 days ago · 2 min read
