An 'Ask the Org' knowledge-base rollout in a mid-sized organisation
A mid-sized national organisation deploys an "Ask the Org" Claude project as a retrieval layer over its existing knowledge stack rather than migrating platforms; this case study covers the architecture, the pilot decisions, and the cluster of principles they instantiate.
knowledge-management, ai-adoption, organisational-readiness, ai-governance, strategic-framing
This case study describes the rollout of an “Ask the Org” project — a shared Claude project that staff query as the first place to ask any internal question — in a mid-sized national organisation of roughly 70 staff. The deployment is one operational instance of the pattern in Make the firm itself a Claude project; this account documents how the architecture and pilot were structured, and the decisions that shaped them.
The organisation and the starting position
The organisation operates a familiar enterprise stack: Microsoft 365 with SharePoint as the document store, Atlassian Confluence and Jira, Slack for messaging, and a long-standing operational platform handling its core record-keeping. The technology was working but the working was hard. Information sat in three to five places at different vintages; canonical documents were difficult to identify; staff routinely asked colleagues rather than searching, because searching surfaced more noise than signal. A documentation audit found genuine pockets of excellent practice alongside structural dysfunction — abandoned migration trees never cleaned up, sensitive material in personal storage, training resources years out of date.
Two contextual constraints shaped the approach. The organisation had a multi-year history of partially-completed technology programmes, and the resulting change fatigue was real. And the operational team responsible for the technology was capacity-stripped — running at a level the team itself described as well above sustainable load. Any new programme had to work within that capacity rather than add to it.
The decision: retrieval, not migration
The default pitch in this situation would have been a new knowledge management platform. The decision instead was to leave the existing document estate in place and build a retrieval layer over it. Three reasons. First, adding another platform would have made the fragmentation worse, not better — see A document store is not a knowledge management system. Second, the organisation’s history made any platform-migration narrative harder to land than a “your existing tools, just better” one. Third, the retrieval substrate was already in place: Confluence for structured wiki-style content, the AI vendor’s connector, well-tested in the early trials, and a viable path for SharePoint via a navigable surface layer per Architect AI around principles, not vendors.
The architecture
Three working components.

Structured first-stop. A single Confluence space — the operational knowledge base — became the primary retrieval target for the AI. New content followed a documentation standard mandating a two-sentence summary at the top of every page, on the basis described in Structure documents for AI consumption, not just human reading.

Unstructured fall-through. SharePoint was retained for project files and archival material, with a thin AI-readable surface layer planned to provide navigation pointers without requiring a tenant-wide restructure.

Policy via skill. AI policies sat in a designated SharePoint folder behind a dedicated skill, separated from operational content. The retrieval skill itself scanned every query for policy-trigger topics and surfaced relevant policy alongside the operational answer — see Embed the AI policy in the AI itself. Each response ended with a “Commentary on Sources” section rating its own confidence and naming gaps, on the principle that the tool should be explicit about what it does and does not know.
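In practice the skill is expressed as instructions to the model rather than as code, but the response-assembly logic can be sketched. The minimal Python illustration below shows the shape of it; the trigger topics, policy references, source fields, and confidence wording are hypothetical placeholders, not the organisation's actual configuration.

```python
# Illustrative sketch of the retrieval skill's response assembly.
# All names and trigger topics below are assumptions for illustration.

from dataclasses import dataclass

# Hypothetical policy-trigger map: query topics that should surface policy text.
POLICY_TRIGGERS = {
    "personal data": "AI policy section on handling personal information",
    "client record": "AI policy section on core-platform records",
    "external sharing": "AI policy section on sharing outside the organisation",
}

@dataclass
class Source:
    title: str
    summary: str        # the mandated two-sentence summary from the top of the page
    last_reviewed: str  # ISO date; older dates lower the stated confidence


def build_response(query: str, answer: str, sources: list[Source]) -> str:
    """Assemble the operational answer, any triggered policy, and the
    self-assessment footer into a single response."""
    parts = [answer]

    # Policy via skill: scan the query for trigger topics and surface the
    # relevant policy alongside the operational answer.
    triggered = [ref for topic, ref in POLICY_TRIGGERS.items() if topic in query.lower()]
    if triggered:
        parts.append("Relevant policy: " + "; ".join(triggered))

    # "Commentary on Sources": rate confidence and name gaps explicitly.
    if not sources:
        commentary = "No knowledge-base pages matched this query; treat the answer as low confidence."
    else:
        cited = "; ".join(f"{s.title} (last reviewed {s.last_reviewed})" for s in sources)
        commentary = f"Drawn from: {cited}. Confidence is lower where review dates are old."
    parts.append("Commentary on Sources: " + commentary)

    return "\n\n".join(parts)
```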
The pilot
A single front-line team was selected — process-oriented work, well-bounded scope, a team that would notice quickly if the AI was useful and equally quickly if it wasn’t. Documentation migration ran first; the AI was not put in front of staff until the content was ready, on the launch-readiness rule in Measure adoption, not just implementation. A structured evaluation framework defined four must-have success criteria — staff locate guidance without asking; staff use the AI as primary access route; staff report responses fast enough to be useful in the moment; the flagging mechanism actually gets used — and explicitly captured bypass behaviour as data per Treat AI-pilot bypass behaviour as evaluation data. Three feedback channels ran in parallel: passive Confluence page comments, a fortnightly Slack check-in with the team lead, and a structured pulse survey at week four. The success bar was deliberately set as “working well enough to replicate”, not “perfect”.
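The evaluation framework lived as a document and a pulse survey rather than as code, but recording it as structured data makes the "bypass behaviour as data" point concrete. The sketch below is illustrative only; the field names, thresholds, and replication rule are assumptions, not the pilot's actual instrument.

```python
# Illustrative sketch of the pilot evaluation framework as structured data.
# Criterion wording follows the case study; everything else is assumed.

from dataclasses import dataclass, field

@dataclass
class Criterion:
    name: str
    must_have: bool = True
    met: bool = False  # scored from the week-four pulse survey and usage data

@dataclass
class PilotEvaluation:
    criteria: list[Criterion] = field(default_factory=lambda: [
        Criterion("Staff locate guidance without asking a colleague"),
        Criterion("Staff use the AI as the primary access route"),
        Criterion("Responses are fast enough to be useful in the moment"),
        Criterion("The flagging mechanism actually gets used"),
    ])
    # Bypass behaviour is captured as evaluation data, not treated as user error:
    # each entry records what the person did instead of using the AI, and why.
    bypass_log: list[dict] = field(default_factory=list)

    def record_bypass(self, route_taken: str, reason: str) -> None:
        self.bypass_log.append({"route": route_taken, "reason": reason})

    def ready_to_replicate(self) -> bool:
        # "Working well enough to replicate", not "perfect": every must-have
        # criterion met, whatever remains on the nice-to-have list.
        return all(c.met for c in self.criteria if c.must_have)
```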
Adoption grew organically. New users started with a deliberate onboarding ritual: the first task was to ask the AI about the AI policy, which simultaneously introduced the tool and the rules.
What the case study illustrates
The shape of the engagement is reproducible. Several existing principles compose into a single delivery pattern: choose retrieval over migration when the existing estate is recoverable; structure the AI-accessible substrate as a wiki with summary metadata rather than as a document store; layer over change-resistant structures rather than restructure them; embed governance in the tool rather than train it separately; pilot one team and measure adoption rather than deployment; calibrate the governance ceremony to organisational scale. The composition is what makes this an “Ask the Org” engagement rather than just a Claude rollout.
Two structural notes worth flagging. The Knowledge Manager role described in Define a dedicated AI-facing knowledge manager role was identified as the long-term foundation but did not exist at pilot launch — it was included as a forward-planning item rather than a precondition, with the explicit acceptance that a content sprint without a maintainer is exactly the failure mode that AI treats documentation as authoritative predicts. And the post-approval governance work described in Policy approval is the start, not the end was sequenced explicitly, with the impact-assessment process tested on a real tool inside the pilot phase rather than deferred until later.
The deliberate slowness was the design. The case study is a counter-example to enterprise-style AI rollouts that put tools in front of staff first and work out the substrate later. The order — substrate first, structured first-stop, single pilot team, organic adoption, evaluation through use — is the load-bearing variable.