Case study

A tools-first AI rollout that plateaued

An abstracted composite showing what happens when a mid-tier firm buys AI tools without putting its information in order first.

Last updated: 24 April 2026 · First captured: 24 April 2026

knowledge-management · ai-adoption · professional-services

This case study is an abstracted composite drawn from our work with several mid-tier Australian professional services firms in 2025. It narrates a pattern we saw repeatedly across engagements, not a single firm’s experience. Identifying details have been removed and the specifics generalised.

The firm

A mid-tier firm of around 120 staff, split roughly two-thirds compliance work (audit, tax, statutory) and one-third advisory. Two founding principals, a small partner group, a decade-plus in market, profitable. Reputation for diligent delivery, not for innovation.

The rollout

In early 2025, responding to board pressure and visible industry movement, the firm committed to an AI adoption programme. Over eight months they licensed Microsoft 365 Copilot firm-wide, stood up ChatGPT Enterprise for a subset of advisory staff, trialled a document-summarisation product from a local vendor, and briefly piloted an AI-enhanced tax research platform. Individual tool choices were defensible. The sequencing was the issue.

The first six months

There were early wins. Several senior staff became visibly more productive with meeting summaries, email drafting, and first-pass analysis. The firm captured a handful of anecdotes that made good partner-meeting material. Training attendance was high, and the enthusiasm, early on, was real.

The plateau

By month nine, the picture had changed.

Adoption stalled at the staff who had been most enthusiastic to begin with. New joiners picked up the tools in narrow, predictable ways — email help, mostly — and stopped there. When asked to use AI for substantive work such as drafting a tax memo, preparing an advisory note, or analysing a client’s prior-year filings, staff found the tools produced confident but generic output that needed heavy rewriting. Several people quietly stopped using the tools for anything beyond the drafting shortcuts they had found useful in month one.

Partners noticed the shift but found it difficult to diagnose. Licences were in place, training had been delivered, and internal communications were positive. From the outside the programme looked successful. From the inside it was not landing.

The diagnosis

What the firm discovered on honest examination was that its knowledge base was not in a state AI could usefully work with. Client files were fragmented across email threads, SharePoint folders, local drives, and the heads of senior staff. Prior-engagement context was difficult to reassemble without interviewing the partner who had led it. Working papers used firm-specific shorthand that a generic model could not interpret. Regulatory memos were stored but not structured. Templates were numerous and, in many cases, out of date.

Put plainly, the firm had bought intelligence without having put its information in order. The tools were performing at the level their inputs allowed. The inputs were the ceiling.

The pivot

The firm paused further tool procurement and began a slower, less visible programme of knowledge restructuring. They audited what they actually knew. They decided which knowledge was worth investing in and which was not. They converted a subset of critical documents into structured plain-text formats that AI could consume efficiently. They put lightweight capture processes in place to prevent the next wave of institutional knowledge from walking out the door when staff turned over.
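To make "structured plain-text formats that AI could consume efficiently" concrete, a minimal sketch of one approach: wrapping a converted document in a plain-text note with a small key-value metadata header, so a retrieval or AI tool can identify client, engagement, and document type without parsing prose. The field names and helper below are illustrative assumptions, not the firm's actual schema.

```python
def to_structured_note(title, client, engagement_year, body):
    """Render a document as plain text with a key: value header block.

    Illustrative only -- the header fields are assumed, not a real
    firm schema. The point is that metadata is explicit and machine-
    readable rather than buried in filenames or email threads.
    """
    header = "\n".join([
        f"title: {title}",
        f"client: {client}",
        f"engagement_year: {engagement_year}",
    ])
    return f"---\n{header}\n---\n\n{body.strip()}\n"


note = to_structured_note(
    "FY24 tax position memo",
    "Example Client Pty Ltd",
    2024,
    "Summary of positions taken and supporting references.",
)
print(note)
```

Even a convention this lightweight is enough to let generic tools distinguish one engagement's working papers from another's, which is most of what "structured" needs to mean at this stage.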

Progress was slow. Senior leadership found it harder to narrate to the board than the initial tool rollout had been. Twelve months into this second phase, the firm was beginning to see the AI tools produce genuinely differentiated output on specific client-facing tasks — not because the tools had changed, but because the context had finally caught up.

What the composite shows

Tool selection without knowledge foundations produces a predictable plateau. The plateau arrives later than the initial enthusiasm and later than the first wins, which is part of what makes it dangerous: the signals look positive for long enough that the underlying gap is easy to miss.

The heuristic to extract is the one set out separately as "Start with knowledge management, not tools". When we meet firms eighteen months into a disappointing rollout, the cause almost always traces to a foundations gap that predates the tooling decision.