Is Your AI Investment Building on Quicksand?
- Barry Thomas
- Jul 28
- 5 min read
Have you been underwhelmed by AI in your business? Perhaps your AI systems produce responses that sound impressive but miss the mark. They confidently state things that aren't quite right. They seem to have learned your organisation's bad habits rather than its best practices.
In many cases, the problem isn't the AI technology per se. It's what you're feeding it.
As Phil Schmid from Google DeepMind has observed, "Most agent failures are not model failures anymore, they are context failures."¹ The most sophisticated AI in the world can't save you from garbage in, garbage out.

The Hidden Foundation Crisis
Most organisations approach AI implementation with a dangerous assumption: that pointing sophisticated models at existing knowledge repositories will somehow produce intelligent outcomes. "Just give it access to our SharePoint," they say. "The AI will figure it out."
This reveals a fundamental misunderstanding of how Large Language Models work. LLMs are exquisitely sensitive to the context they're given. They don't "figure things out" the way humans do through years of accumulated experience and judgment. Instead, they synthesise patterns from whatever information they can access.
Here's the crucial difference: when a human encounters outdated procedures, they often compensate with judgment. When an LLM does, it confidently synthesises fiction from fragments. As one developer put it, "Too little context (or missing pieces) and the AI will fill gaps with guesses (often incorrect)."⁵
The Amplification Effect
Poor documentation has always been a business problem. We've all learned to work around outdated process guides, conflicting policy documents, and knowledge bases that haven't been updated since 2019. Humans navigate this chaos through tribal knowledge, asking colleagues, and applying common sense.
But AI can't (yet) knock on someone's door to ask for clarification. It can't recognise that the procurement policy from 2018 has been superseded by informal practices. It simply ingests everything it's given and attempts to create coherent responses from incoherent inputs.
The result? Your AI amplifies every flaw in your documentation:
- Outdated procedures become confidently stated current practice
- Conflicting policies get blended into nonsensical recommendations
- Gaps in documentation get filled with plausible-sounding fabrications
- Edge cases documented once become standard operating procedure
The Dilution Problem
Even when organisations recognise the need for better documentation, they often make another critical error: assuming that more is better. Flooding your AI with every document ever created doesn't improve its performance; it destroys it.
Context engineering, the deliberate curation and structuring of information for AI consumption, requires understanding that too much context is almost as problematic as wrong context. Every irrelevant document, every outdated version, every conflicting piece of information dilutes the influence of accurate, current knowledge.
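To make the dilution point concrete, here is a minimal sketch of the selection step that context engineering implies: rank candidate documents by relevance and stop at a token budget, rather than passing everything to the model. The `Doc` class, the relevance scores, and the word-count token estimate are all illustrative assumptions, not a real retrieval pipeline.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    text: str
    relevance: float  # assumed to come from an upstream scorer, e.g. embedding similarity

def select_context(docs: list[Doc], token_budget: int) -> list[Doc]:
    """Keep only the most relevant documents that fit the budget,
    instead of flooding the model with everything available."""
    selected, used = [], 0
    for doc in sorted(docs, key=lambda d: d.relevance, reverse=True):
        cost = len(doc.text.split())  # crude token estimate: one word ~ one token
        if used + cost > token_budget:
            continue  # skip documents that would blow the budget
        selected.append(doc)
        used += cost
    return selected
```

The point of the budget is exactly the dilution argument above: a marginally relevant 400-page policy archive should lose its seat in the context window to the two pages that actually answer the question.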
Why You Can't See the Failure
The most insidious aspect of this problem is its invisibility. When your AI produces a response, it sounds authoritative. It uses the right terminology, references familiar concepts, and presents information in perfectly structured prose.
But subtle errors compound. The AI that learned from your outdated sales playbook is now generating customer proposals. The model trained on conflicting HR policies is answering employee questions. These outputs sound plausible enough to pass initial review, but they're building your future on quicksand.
Even worse, as AI-generated content begins to feed back into your knowledge systems, you're creating an echo chamber of compounded errors. Today's hallucination becomes tomorrow's training data. Developers have coined a term for this phenomenon: "context rot," where context quality degrades over time as it accumulates distractions, dead ends, and low-quality information.⁴
The good news is that the same precision required to make AI work properly also points to the solution.
The Path Forward: Context Engineering
The solution isn't to abandon AI or to embark on a years-long documentation cleanup project. Instead, it's to recognise that in the age of AI, knowledge curation has transformed from a nice-to-have to a critical business capability.
This emerging discipline has a name: context engineering. Tobi Lutke, CEO of Shopify, describes it as "the art of providing all the context for the task to be plausibly solvable by the LLM."² Andrej Karpathy puts it even more precisely: it's the "delicate art and science of filling the context window with just the right information for the next step."³
This is where AI itself offers a path forward. The same technology that's sensitive to poor context can also help create better context. But it requires human expertise: professionals who understand both the mechanics of LLMs and the principles of effective knowledge management.
Context engineering involves:
- Identifying and isolating your authoritative sources of truth
- Ruthlessly pruning outdated, redundant, or conflicting information
- Structuring information specifically for AI consumption
- Creating feedback loops to identify when AI outputs reveal documentation gaps
- Establishing governance processes that maintain context quality over time
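As an illustration of the pruning and governance steps above, a curation pass might drop stale or unapproved documents past a freshness cutoff, then keep only the newest document per topic so older versions can't contradict it. The metadata records here are hypothetical; a real pipeline would draw them from your document management system.

```python
from datetime import date

# Hypothetical metadata records for documents in a knowledge base.
docs = [
    {"topic": "procurement", "updated": date(2018, 3, 1), "status": "approved"},
    {"topic": "procurement", "updated": date(2025, 5, 1), "status": "approved"},
    {"topic": "leave-policy", "updated": date(2024, 11, 1), "status": "draft"},
]

def prune(docs, cutoff: date):
    """Keep only approved documents updated since the cutoff;
    everything else is a candidate for archiving or human review."""
    return [d for d in docs if d["status"] == "approved" and d["updated"] >= cutoff]

def newest_per_topic(docs):
    """Where several documents cover one topic, keep only the newest,
    so superseded versions cannot dilute or contradict it."""
    best = {}
    for d in docs:
        if d["topic"] not in best or d["updated"] > best[d["topic"]]["updated"]:
            best[d["topic"]] = d
    return list(best.values())
```

Note that the draft and the 2018 policy aren't deleted outright: pruning removes them from the AI's context, while the governance process decides whether to update, archive, or approve them.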
A skilled context engineer working with AI can create and maintain documentation at a pace and quality level that was economically impossible just two years ago. They can transform your organisational knowledge from a liability into a competitive advantage.
I speak from some experience here. My background spans creating FDA-compliant documentation at Cochlear in the 90s, to co-founding LIXI Ltd to establish data standards that transformed the Australian mortgage industry, and leading the technical standards team for Australia's Consumer Data Right at Federal Treasury. Each role taught me that the difference between information and actionable knowledge lies in structure, governance, and ruthless curation.
Now, with AI assistance, I can accomplish in a week what once would have taken months. The productivity gains are real, but only when AI is paired with the curation "taste" of a skilled human.
The Competitive Advantage
The good news? Getting context engineering right creates a compounding advantage. Organisations with clean, current, carefully curated context make better decisions faster because their AI actually understands their business. Their customer service AI accurately quotes this month's pricing, not last year's. Their contract generation AI includes the latest compliance clauses, not the ones legal revised six months ago. Their analytical AI spots real trends in current data, not patterns from stale reports.
Meanwhile, organisations with poor context engineering are teaching their AI systems to be confidently wrong at scale. But this gap represents an opportunity: those who act now to build proper foundations will find their AI investments consistently outperform expectations.
A New Core Competency
In the pre-AI era, poor documentation was an operational inefficiency. In the AI era, it can rise to the level of an existential risk. The organisations that will thrive are those that recognise context engineering as a new core competency, as fundamental to AI success as choosing the right tools or platforms.
This isn't just my view. The industry is rapidly converging on this understanding, with major AI frameworks like LangChain and LlamaIndex positioning context engineering as the most important skill an AI engineer can develop. Academic researchers are publishing papers on it.⁶ Companies from startups to enterprises are hiring for it.
The question isn't whether you need context engineering. It's whether you'll recognise its importance before or after your AI investments underperform.
Your AI is only as good as the ground it stands on. Is yours building on quicksand or solid foundations?
References
1. Phil Schmid, "Context Engineering is the new skill in AI" (2025). Available at: https://www.philschmid.de/context-engineering
2. Tobi Lutke, quoted in "The rise of 'context engineering'" by LangChain (June 23, 2025). Available at: https://blog.langchain.com/the-rise-of-context-engineering/
3. Andrej Karpathy, quoted in "Context Engineering for Agents" by LangChain (2025). Available at: https://blog.langchain.com/context-engineering-for-agents/
4. Workaccount2 on Hacker News, "context rot" concept discussed in "Context Engineering: Bringing Engineering Discipline to Prompts" by Addyo (2025). Available at: https://addyo.substack.com/p/context-engineering-bringing-engineering
5. "Context Engineering: Bringing Engineering Discipline to Prompts" by Addyo (2025). Available at: https://addyo.substack.com/p/context-engineering-bringing-engineering
6. Lingrui Mei et al., "A Survey of Context Engineering for Large Language Models" (July 2025). arXiv:2507.13334. Available at: https://arxiv.org/abs/2507.13334