Useful AI is a context problem
The difference between useful AI and dangerous AI is almost entirely about the context it has; output quality is bounded above by input quality.
Ask AI a question without the right background and you get a generic, often subtly wrong answer. Give it the relevant client files, the history, the specific constraints, the firm’s standards and preferences, and the output can be genuinely useful. The difference between those two scenarios is not the model. It is the context the model has been given to work with.
Why the mechanism matters
The ceiling on what AI can do for an organisation is set by the inputs it is given. Models improve over time, but for a given organisation and a given task, today’s usable output is a function of today’s available context. That makes context the leverage point.
The framing matters because most organisational conversations about AI treat the tool as the variable. “Which AI should we buy?” is the question; the quality of the context the tool will have access to is assumed, or deferred to later. In practice, the tool choice is often secondary. A well-chosen tool on top of fragmented, outdated, human-formatted documents produces confident wrong answers faster. A modest tool on top of well-structured, current, AI-accessible context produces genuinely useful output.
How the context problem manifests
Context problems produce four characteristic failure modes. First, hallucinations from hidden information: when the material the AI needs is present in the organisation but inaccessible to the tool — wrong folder, wrong format, buried in an attachment — the AI confidently produces an answer that omits or contradicts what the organisation actually knows. Second, overly generic outputs: absent specific context, the AI falls back on plausible-sounding general answers that do not reflect the organisation’s actual position, standards or history. Third, noise pollution: when the context includes irrelevant, out-of-date or contradictory material, the AI may draw on it indiscriminately and produce output that references defunct policies or superseded decisions.
Fourth, dilution from over-inclusion: giving the AI every document the organisation has ever created is not safer than giving it too little. Too much context is almost as damaging as wrong context. Irrelevant, outdated or contradictory material dilutes the influence of accurate current knowledge, and the AI synthesises across the whole set. The result is confident output that blends correct and incorrect material without distinguishing them.
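The noise and dilution modes can be made concrete with a minimal sketch: score candidate documents on relevance and freshness, and pass only the strongest few to the model rather than everything the organisation holds. Every name here (the document fields, `score`, `build_context`, the decay rate) is invented for illustration; real retrieval pipelines use embeddings and more careful ranking, not keyword overlap.

```python
from datetime import date

def score(doc, query_terms, today):
    """Crude relevance-times-freshness score for one candidate document."""
    relevance = sum(term in doc["text"].lower() for term in query_terms)
    age_years = (today - doc["updated"]).days / 365
    # Stale material decays towards zero so superseded policies drop out.
    freshness = max(0.0, 1.0 - 0.25 * age_years)
    return relevance * freshness

def build_context(docs, query_terms, today, limit=2):
    """Keep only the best few documents; over-inclusion dilutes the signal."""
    ranked = sorted(docs, key=lambda d: score(d, query_terms, today), reverse=True)
    kept = [d for d in ranked[:limit] if score(d, query_terms, today) > 0]
    return "\n---\n".join(d["text"] for d in kept)

docs = [
    {"text": "Current travel policy: economy class for all flights.",
     "updated": date(2025, 1, 10)},
    {"text": "Travel policy (superseded): business class permitted.",
     "updated": date(2015, 3, 1)},
    {"text": "Cafeteria menu for March.", "updated": date(2025, 3, 1)},
]
context = build_context(docs, ["travel", "policy"], date(2025, 6, 1))
```

With this toy scoring, the superseded policy and the irrelevant menu never reach the model at all, which is the point: the filtering happens before synthesis, not after.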
The four failure modes compound. A generic output may pass because nobody recognises the specifics that are missing; a hallucination may pass because it sounds right; noise-polluted output may pass for all of the above reasons; and dilution makes each of the first three more likely because it weakens the signal of the correct material. Each is a vector for AI's most dangerous failure mode: confident wrongness.
What follows
The practical corollary is that AI utility is at least as much a knowledge management question as a tool question. When a client asks which AI to procure, the more useful first conversation is about the state of the context they would be pointing it at.
That conversation tends to surface the usual uncomfortable truths: content scattered across systems, outdated versions, tacit knowledge never written down, human formatting that obstructs AI consumption (see Structure documents for AI consumption, not just human reading). The sequencing argument in Start with knowledge management, not tools follows from this heuristic: if the ceiling is set by context, invest in the context before investing in the tool that will hit the ceiling.
Context as a discipline
The practice of getting the context right — identifying authoritative sources, pruning what should not be there, structuring the rest for AI consumption, maintaining quality over time — is starting to be named as its own discipline. “Context engineering” is the industry’s working term, and major AI frameworks are positioning it as foundational. The label is useful because it distinguishes the activity from document management on one side and from prompt engineering on the other: it is neither storage nor instruction, but the deliberate curation of what a model gets to see. The related time-dependent degradation mechanism is set out in Context rot.
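The curation steps named above can be sketched as a tiny pipeline: keep only chunks from authoritative sources that have been reviewed recently, and structure what survives with explicit provenance so the model can tell current policy from background. The chunk fields, the `curate` function, and the plain-text provenance layout are all assumptions made for this illustration, not an established format.

```python
from datetime import date

def curate(chunks, today, max_age_days=365):
    """Keep authoritative, recently reviewed chunks; label each with provenance."""
    kept = [
        c for c in chunks
        if c["authoritative"] and (today - c["reviewed"]).days <= max_age_days
    ]
    # Provenance headers let the model (and a human reviewer) trace each claim.
    return "\n\n".join(
        f"[source: {c['source']} | reviewed: {c['reviewed'].isoformat()}]\n{c['text']}"
        for c in kept
    )

chunks = [
    {"source": "HR handbook v4", "authoritative": True,
     "reviewed": date(2025, 2, 1),
     "text": "Leave requests need two weeks' notice."},
    {"source": "old intranet page", "authoritative": False,
     "reviewed": date(2019, 5, 5),
     "text": "Leave requests need one week's notice."},
]
context = curate(chunks, today=date(2025, 6, 1))
```

The design choice worth noting is that pruning and labelling are maintenance activities with a time dimension: the `reviewed` date does the same work here that review cycles do in a real knowledge base.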