Shepherd Thomas · Wiki
Lessons from the work, published as we learn them.
A living set of patterns, heuristics and abstracted case studies from our AI adoption work with Australian mid-tier organisations. Curated, not comprehensive. About this wiki.
Patterns
The mid-tier AI adoption threshold
In mid-tier organisations, the daily pressure of business-as-usual sets a payoff threshold that typical AI gains do not clear, so adoption stalls even when tools and training are in place.

AI as a labour service bypasses the adoption problem
A delivery model in which vendors sell finished work product, not AI tools, removes internal-adoption friction from the buyer side and accelerates displacement timelines.

AI commoditises general expertise
AI is making publicly codified expertise abundant; the gap between an expert and a competent AI-equipped generalist is narrowing, and that gap is where professional fees live.

AI as an operational interpreter of purpose, vision and values
AI may offer a different mechanism for translating stated purpose, vision and values into daily operational decisions — continuous rather than episodic, contextual rather than general, and individually available rather than programme-delivered. Whether the mechanism proves durable in practice is an open question.

AI removes the practical ceiling on workplace surveillance
Comprehensive workplace monitoring was always theoretically possible but practically capped by human review capacity; AI removes that cap, and the capability itself reshapes behaviour whether or not it is used.

Compliance revenue is structurally threatened
Professional services firms that depend on recurring compliance revenue face structural margin compression as AI commoditises the underlying work.

AI's most dangerous failure mode is confident wrongness
AI's most dangerous failure is not silence but fluent, authoritative output that is wrong — making error detection a skilled, human task that cannot be deferred to the tool.

Context rot
As AI-generated content feeds back into the organisation's context — documents, transcripts, summaries — today's hallucinations become tomorrow's training data, and the quality of the context degrades over time unless the cycle is actively broken.
Defensibility lives in what AI can't access
What survives AI disruption sits in three categories AI cannot access without human participation — privileged client knowledge, trust, and institutional memory.

The first reader is an AI
A growing share of inbound material at mid-tier firms is first read by an AI before a human sees it; the human who engages does so through the AI's rendering, changing what the deliverable has to carry and how the sending firm should produce it.

Human work becomes relatively expensive as AI trends to free
As AI-generated work trends toward zero marginal cost, the relative price of human involvement rises; the value delivered by humans must visibly exceed the AI alternative for the premium to hold.

AI interfaces are generated on demand rather than fixed by design
The user interface layer, historically built as fixed buttons and menus bridging human intent and machine execution, is being replaced piecemeal by AI-generated surfaces built at runtime in response to specific requests; wrappers that sit between user and base model are increasingly a liability rather than an aid.

Knowledge management becomes an M&A and partnership signal
As AI pervades professional services, acquirers and partners are likely to treat the target's knowledge management as a due-diligence signal, because poor KM implies unreliable AI-assisted work product downstream.

Retrieval middleware is being absorbed into platforms at mid-tier scale
The middleware layer that vendors and consultants propose to build around frontier models — retrieval pipelines, evaluation harnesses, observability — is being absorbed into the platforms themselves at mid-tier scale; work commissioned to build it now is liable to be stranded by the vendor's own roadmap before it has paid for itself.
The relationship is the product
When the codifiable layer of professional work commoditises, the enduring product of a services firm is the relationship itself — the privileged context and the trust attached to it.

Surveillance-chilled collaboration degrades knowledge work
The collaborative behaviours that produce good knowledge work — thinking aloud, proposing imperfect ideas, showing uncertainty, offering dissent — depend on low-observation conditions that AI-enabled monitoring degrades.

Unvoiced staff resistance is the primary failure mode of AI initiatives
The most insidious threat to AI adoption is not technical or budgetary but behavioural — staff publicly support the initiative while privately declining to adopt it, expressing resistance through plausible non-compliance rather than open challenge.
Heuristics
AI literacy is not a training problem
Treat AI literacy as a durable mental-model shift, not an event — the judgement required to use AI well cannot be installed through a workshop.

Expect AI to surface authenticity gaps between stated and actual values
An AI system that takes an organisation's stated values seriously will quickly surface where stated and actual behaviour diverge; leadership should expect and plan for these findings before commissioning the work, because surfacing them without being prepared to respond is worse than not surfacing them at all.

Architect AI around principles, not vendors
Tools will keep changing; architectures tied to a specific vendor ecosystem age poorly and limit the organisation's ability to adopt what comes next.

Audit client agreements for AI silence
Most firms' client agreements were drafted before AI became a live question and are silent on both the firm's AI use in delivering work and the client's permitted AI use on the firm's output; that silence means the firm inherits defaults by omission and is left exposed under privacy regulation and professional guidance.

Channel shadow AI use as signal, not risk to suppress
In most organisations, staff are already using AI in ways leadership has not sanctioned; treating that shadow use as evidence of real work-in-context rather than as compliance risk reveals use cases, knowledge gaps and adoption blockers that top-down planning will not find.

Declining AI engineering commits you to content discipline
The argument for deferring a custom AI build — pipeline, integration, evaluation harness — because content quality is the real leverage point only holds while someone is actively doing the content work; declining the engineering is a commitment to the discipline, not a free deferral.
A document store is not a knowledge management system
Shelving documents in a repository is storage, not knowledge management; the presence of the repository often produces false confidence that the problem is solved.

Users assume AI has access to information it does not have
Users routinely overestimate the information AI has access to, treating it as if it were working from a complete picture; this overestimate compounds with AI fluency to produce misplaced trust.

Hire for durable AI judgement, not transient AI mechanics
AI skills split into durable judgement — when to use AI, how to structure problems for it, how to verify output, where not to use it — and transient mechanics — specialist prompt engineering, bespoke pipelines that platforms will absorb. Hire and train for the first; be sceptical of the second.

Expect current AI deployments to look primitive in retrospect
Current AI deployments mostly fit the technology into existing workflows; treat today's designs as transitional and expect later shapes to differ fundamentally.

Internal-adoption friction is no protection against external disruption
The organisational inertia that slows internal AI adoption offers no defence against vendors who have already absorbed the technology and deliver finished outcomes.

Involve sceptics early in AI initiatives
Sceptics are more valuable than advocates during the design of an AI initiative — they see the failures most clearly; involve them early in roles that protect against the failures they fear, rather than sidelining them as resistant to change.

Start with knowledge management, not tools
Audit and structure what the organisation knows before selecting AI tools; the limits of AI output are set by the limits of its input context.
Leadership team AI fluency must be collective, not individual
A single AI-fluent leader in an otherwise-unfluent team creates strategic blind spots rather than an advantage; fluency has to be built across the leadership team together, because uneven adoption at the top propagates as inconsistent AI strategy below.

Use a frontier LLM as a personal AI mentor
Use a frontier LLM as a conversational partner for learning about AI itself — ask it about its capabilities, limitations and appropriate use cases while doing real work with it. The self-directed, contextualised learning this produces outperforms the structured training programmes it replaces.

Make tacit knowledge explicit, or AI cannot use it
AI cannot interpret the unwritten assumptions that shape how an organisation actually works; operational self-description is a precondition, not polish.

Measure adoption, not just implementation
Deploying an AI tool and reporting success are not the same thing; track active use rather than availability, because the gap between the two is where unvoiced resistance hides and where the investment fails to earn its return.

Passive AI adoption is an implicit policy choice
Where an organisation has not made explicit decisions about how AI will be used, the defaults of the tools and vendors become policy by inheritance; "we haven't decided yet" functions as "we have accepted whatever happens".

Polish and volume no longer signal effort
The signals that used to tell reviewers about work quality — volume, polish, comprehensiveness — correlated with effort because effort was scarce; with AI the correlation breaks, and the questions that still discriminate are about process.
Restructure pricing for work where AI compresses hours
Where AI compresses delivery hours, hour-based pricing compresses firm revenue proportionally; the only response that extends past the current year is to restructure engagements so price is no longer tied to hours, which is a governance project entwined with how people are compensated for their time.

Sort clients by AI posture and serve each group deliberately
Client bases are splitting along AI-forward, moving-slowly and AI-averse lines; firms that run a single operating mode for everyone will produce the wrong shape of work for a growing share of their book, and need to classify and serve the segments deliberately.

Start AI governance imperfect; iterate rather than wait
AI governance should follow the same experimental posture as AI adoption — start imperfect, gather evidence, iterate — because waiting for clarity guarantees the technology gets ahead of the policy.

Structure documents for AI consumption, not just human reading
Human-formatted documents obstruct AI consumption; plain-text formats such as Markdown let AI work with the underlying knowledge efficiently.

Useful AI is a context problem
The difference between useful AI and dangerous AI is almost entirely about the context it has; output quality is bounded above by input quality.
Case studies
An ongoing AI advisory engagement with a growing firm
An abstracted single-engagement case study showing how a growing firm used an ongoing AI-strategy advisory relationship — covering market scanning, implementation oversight and staff coaching — to navigate AI adoption without diverting internal attention from operational delivery.

A regional bank's core banking selection delivered by an AI-amplified solo engagement
An abstracted single-engagement case study showing how a solo Shepherd Thomas consultant, AI-amplified, delivered a regional bank's core banking system selection on a compressed timeline, at lower cost than, and with quality comparable to, a major consulting team.

A tools-first AI rollout that plateaued
An abstracted composite showing what happens when a mid-tier firm buys AI tools without putting its information in order first.