Shepherd Thomas · Wiki

Lessons from the work, published as we learn them.

A living set of patterns, heuristics and abstracted case studies from our AI adoption work with Australian mid-tier organisations. Curated, not comprehensive.


Patterns

The mid-tier AI adoption threshold
In mid-tier organisations, the daily pressure of business-as-usual sets a payoff threshold that typical AI gains do not clear, so adoption stalls even when tools and training are in place.
Updated 24 Apr 2026 · ai-adoption · organisational-readiness

AI as a labour service bypasses the adoption problem
A delivery model in which vendors sell finished work product, not AI tools, removes internal-adoption friction from the buyer side and accelerates displacement timelines.
Updated 24 Apr 2026 · business-model · ai-disruption · ai-adoption

AI commoditises general expertise
AI is making publicly codified expertise abundant; the gap between an expert and a competent AI-equipped generalist is narrowing, and that gap is where professional fees live.
Updated 24 Apr 2026 · ai-disruption · professional-services · business-model

AI as an operational interpreter of purpose, vision and values
AI may offer a different mechanism for translating stated purpose, vision and values into daily operational decisions — continuous rather than episodic, contextual rather than general, and individually available rather than programme-delivered. Whether the mechanism proves durable in practice is an open question.
Updated 24 Apr 2026 · ai-adoption · organisational-values · strategic-framing

AI removes the practical ceiling on workplace surveillance
Comprehensive workplace monitoring was always theoretically possible but practically capped by human review capacity; AI removes that cap, and the capability itself reshapes behaviour whether or not it is used.
Updated 24 Apr 2026 · workplace-surveillance · ai-disruption · organisational-readiness

Compliance revenue is structurally threatened
Professional services firms that depend on recurring compliance revenue face structural margin compression as AI commoditises the underlying work.
Updated 24 Apr 2026 · business-model · professional-services · ai-disruption

AI's most dangerous failure mode is confident wrongness
AI's most dangerous failure is not silence but fluent, authoritative output that is wrong — making error detection a skilled, human task that cannot be deferred to the tool.
Updated 24 Apr 2026 · ai-limits · ai-adoption · ai-literacy

Context rot
As AI-generated content feeds back into the organisation's context — documents, transcripts, summaries — today's hallucinations become tomorrow's training data, and the quality of the context degrades over time unless the cycle is actively broken.
Updated 24 Apr 2026 · ai-limits · knowledge-management · ai-adoption

Defensibility lives in what AI can't access
What survives AI disruption sits in three categories AI cannot access without human participation — privileged client knowledge, trust, and institutional memory.
Updated 24 Apr 2026 · business-model · trust · knowledge-management

The first reader is an AI
A growing share of inbound material at mid-tier firms is first read by an AI before a human sees it; the human who engages does so through the AI's rendering, changing what the deliverable has to carry and how the sending firm should produce it.
Updated 24 Apr 2026 · ai-adoption · professional-services · strategic-framing

Human work becomes relatively expensive as AI trends to free
As AI-generated work trends toward zero marginal cost, the relative price of human involvement rises; the value delivered by humans must visibly exceed the AI alternative for the premium to hold.
Updated 24 Apr 2026 · business-model · ai-disruption

AI interfaces are generated on demand rather than fixed by design
The user interface layer, built historically as fixed buttons and menus that bridge human intent and machine execution, is being replaced piecemeal by AI-generated surfaces built at runtime in response to specific requests; wrappers that sit between user and base model are increasingly a liability rather than an aid.
Updated 24 Apr 2026 · ai-adoption · ai-disruption · strategic-framing

Knowledge management becomes an M&A and partnership signal
As AI pervades professional services, acquirers and partners are likely to treat the target's knowledge management as a due-diligence signal, because poor KM implies unreliable AI-assisted work product downstream.
Updated 24 Apr 2026 · knowledge-management · business-model · strategic-framing

Retrieval middleware is being absorbed into platforms at mid-tier scale
The middleware layer that vendors and consultants propose to build around frontier models — retrieval pipelines, evaluation harnesses, observability — is being absorbed into the platforms themselves at mid-tier scale; work commissioned to build it now is liable to be stranded by the vendor's own roadmap before it has paid for itself.
Updated 24 Apr 2026 · tool-selection · ai-disruption · strategic-framing

The relationship is the product
When the codifiable layer of professional work commoditises, the enduring product of a services firm is the relationship itself — the privileged context and the trust attached to it.
Updated 24 Apr 2026 · professional-services · business-model · trust

Surveillance-chilled collaboration degrades knowledge work
The collaborative behaviours that produce good knowledge work — thinking aloud, proposing imperfect ideas, showing uncertainty, offering dissent — depend on low-observation conditions that AI-enabled monitoring degrades.
Updated 24 Apr 2026 · workplace-surveillance · professional-services · knowledge-management

Unvoiced staff resistance is the primary failure mode of AI initiatives
The most insidious threat to AI adoption is not technical or budgetary but behavioural — staff publicly support the initiative while privately declining to adopt it, expressing resistance through plausibly deniable non-compliance rather than open challenge.
Updated 24 Apr 2026 · ai-adoption · staff-dynamics · organisational-readiness

Heuristics

AI literacy is not a training problem
Treat AI literacy as a durable mental-model shift, not an event — the judgement required to use AI well cannot be installed through a workshop.
Updated 24 Apr 2026 · ai-literacy · ai-adoption

Expect AI to surface authenticity gaps between stated and actual values
An AI system that takes an organisation's stated values seriously will quickly surface where stated and actual behaviour diverge; leadership should expect and plan for these findings before commissioning the work, because surfacing them without being prepared to respond is worse than not surfacing them at all.
Updated 24 Apr 2026 · ai-adoption · organisational-values · organisational-readiness

Architect AI around principles, not vendors
Tools will keep changing; architectures tied to a specific vendor ecosystem age poorly and limit the organisation's ability to adopt what comes next.
Updated 24 Apr 2026 · tool-selection · strategic-framing

Audit client agreements for AI silence
Most firms' client agreements were drafted before AI became a live question and are silent on both the firm's AI use in delivering work and the client's permitted AI use on the firm's output; that silence inherits defaults by omission and leaves the firm exposed under privacy regulation and professional guidance.
Updated 24 Apr 2026 · ai-governance · professional-services · organisational-readiness

Channel shadow AI use as signal, not risk to suppress
In most organisations, staff are already using AI in ways leadership has not sanctioned; treating that shadow use as evidence of real work-in-context rather than as compliance risk reveals use cases, knowledge gaps and adoption blockers that top-down planning will not find.
Updated 24 Apr 2026 · ai-governance · ai-adoption · organisational-readiness

Declining AI engineering commits you to content discipline
The argument for deferring a custom AI build — pipeline, integration, evaluation harness — because content quality is the real leverage point only holds while someone is actively doing the content work; declining the engineering is a commitment to the discipline, not a free deferral.
Updated 24 Apr 2026 · knowledge-management · tool-selection · strategic-framing

A document store is not a knowledge management system
Shelving documents in a repository is storage, not knowledge management; the presence of the repository often produces false confidence that the problem is solved.
Updated 24 Apr 2026 · knowledge-management · ai-adoption

Users assume AI has access to information it does not have
Users routinely overestimate the information AI has access to, treating it as if it were working from a complete picture; this overestimate compounds with the AI's fluency to produce misplaced trust.
Updated 24 Apr 2026 · ai-literacy · ai-limits · knowledge-management

Hire for durable AI judgement, not transient AI mechanics
AI skills split into durable judgement — when to use AI, how to structure problems for it, how to verify output, where not to use it — and transient mechanics — specialist prompt engineering, bespoke pipelines that platforms will absorb. Hire and train for the first; be sceptical of the second.
Updated 24 Apr 2026 · ai-literacy · organisational-readiness · strategic-framing

Expect current AI deployments to look primitive in retrospect
Current AI deployments mostly fit the technology into existing workflows; treat today's designs as transitional and expect later shapes to differ fundamentally.
Updated 24 Apr 2026 · ai-adoption · strategic-framing

Internal-adoption friction is no protection against external disruption
The organisational inertia that slows internal AI adoption offers no defence against vendors who have already absorbed the technology and deliver finished outcomes.
Updated 24 Apr 2026 · ai-disruption · strategic-framing

Involve sceptics early in AI initiatives
Sceptics are more valuable than advocates during the design of an AI initiative — they see the failures most clearly; involve them early in roles that protect against the failures they fear, rather than sidelining them as resistant to change.
Updated 24 Apr 2026 · ai-adoption · staff-dynamics · organisational-readiness

Start with knowledge management, not tools
Audit and structure what the organisation knows before selecting AI tools; the limits of AI output are set by the limits of its input context.
Updated 24 Apr 2026 · knowledge-management · ai-adoption · tool-selection

Leadership team AI fluency must be collective, not individual
A single AI-fluent leader in an otherwise-unfluent team creates strategic blind spots rather than an advantage; fluency has to be built across the leadership team together, because uneven adoption at the top propagates as inconsistent AI strategy below.
Updated 24 Apr 2026 · ai-literacy · organisational-readiness · strategic-framing

Use a frontier LLM as a personal AI mentor
Use a frontier LLM as a conversational partner for learning about AI itself — ask it about its capabilities, limitations and appropriate use cases while doing real work with it. The self-directed, contextualised learning this produces outperforms the structured training programmes it replaces.
Updated 24 Apr 2026 · ai-literacy · organisational-readiness · ai-adoption

Make tacit knowledge explicit, or AI cannot use it
AI cannot interpret the unwritten assumptions that shape how an organisation actually works; operational self-description is a precondition, not polish.
Updated 24 Apr 2026 · knowledge-management · ai-adoption · organisational-readiness

Measure adoption, not just implementation
Deploying an AI tool and reporting success are not the same thing; track active use rather than availability, because the gap between the two is where unvoiced resistance hides and where the investment fails to earn its return.
Updated 24 Apr 2026 · ai-adoption · organisational-readiness · strategic-framing

Passive AI adoption is an implicit policy choice
Where an organisation has not made explicit decisions about how AI will be used, the defaults of the tools and vendors become policy by inheritance; "we haven't decided yet" functions as "we have accepted whatever happens".
Updated 24 Apr 2026 · ai-adoption · ai-governance · strategic-framing

Polish and volume no longer signal effort
The signals that used to tell reviewers about work quality — volume, polish, comprehensiveness — correlated with effort because effort was scarce; with AI the correlation breaks, and the questions that still discriminate are about process.
Updated 24 Apr 2026 · ai-literacy · professional-services · ai-adoption

Restructure pricing for work where AI compresses hours
Where AI compresses delivery hours, hour-based pricing compresses firm revenue proportionally; the only response that extends past the current year is to restructure engagements so price is no longer tied to hours, which is a governance project entwined with how people are compensated for their time.
Updated 24 Apr 2026 · business-model · professional-services · ai-disruption

Sort clients by AI posture and serve each segment deliberately
Client bases are splitting along AI-forward, moving-slowly and AI-averse lines; firms that run a single operating mode for everyone will produce the wrong shape of work for a growing share of their book, and need to classify and serve the segments deliberately.
Updated 24 Apr 2026 · professional-services · strategic-framing · ai-adoption

Start AI governance imperfect; iterate rather than wait
AI governance should follow the same experimental posture as AI adoption — start imperfect, gather evidence, iterate — because waiting for clarity guarantees the technology gets ahead of the policy.
Updated 24 Apr 2026 · ai-governance · strategic-framing

Structure documents for AI consumption, not just human reading
Human-formatted documents obstruct AI consumption; plain-text formats such as Markdown let AI work with the underlying knowledge efficiently. A minimal conversion sketch follows this list.
Updated 24 Apr 2026 · knowledge-management · ai-adoption · document-formats

Useful AI is a context problem
The difference between useful AI and dangerous AI is almost entirely about the context it has; output quality is bounded above by input quality.
Updated 24 Apr 2026 · knowledge-management · ai-adoption · ai-limits
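To make the document-structuring heuristic concrete, here is a minimal sketch of the conversion step it implies: batch-converting human-formatted Word documents into plain Markdown. It assumes pandoc is installed on the machine and the pypandoc package is available; the source/ and markdown/ directory names are illustrative placeholders, not part of any particular firm's tooling.

```python
# Minimal sketch: convert human-formatted .docx files into Markdown so the
# underlying knowledge is easy for AI tools to consume.
# Assumes pandoc is on PATH and `pip install pypandoc` has been run.

from pathlib import Path

import pypandoc

SOURCE_DIR = Path("source")    # human-formatted documents (hypothetical folder)
OUTPUT_DIR = Path("markdown")  # plain-text Markdown for AI consumption

def convert_all() -> None:
    OUTPUT_DIR.mkdir(exist_ok=True)
    for doc in sorted(SOURCE_DIR.glob("*.docx")):
        target = OUTPUT_DIR / doc.with_suffix(".md").name
        # GitHub-flavoured Markdown keeps headings, lists and tables as plain
        # text structure a model can parse without visual layout cues.
        pypandoc.convert_file(
            str(doc),
            to="gfm",
            outputfile=str(target),
            extra_args=["--wrap=none"],  # one line per paragraph, easier to chunk
        )
        print(f"{doc.name} -> {target.name}")

if __name__ == "__main__":
    convert_all()
```

Any input format pandoc can read (docx, odt, HTML, EPUB) can be routed through the same step; the point is the output discipline, not the specific tool.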

Case studies