An ongoing AI advisory engagement with a growing firm
An abstracted single-engagement case study showing how a growing firm used an ongoing AI-strategy advisory relationship, covering market scanning, implementation oversight and staff coaching, to navigate AI adoption without diverting internal attention from operational delivery.
This case study is drawn from a single ongoing Shepherd Thomas engagement, abstracted to preserve client anonymity. The sector is stated generically, specific tools are not named, no staff are identified, and the write-up reflects the engagement at a point in time rather than as a completed narrative with final outcomes. The case is useful for two reasons: it illustrates how an advisory-partnership shape of work can serve a firm whose leadership attention is already fully committed to operational delivery, and it features one specific staff-coaching tactic that has worked unusually well.
The firm and the engagement shape
The client is a mid-tier engineering services firm in a rapid growth phase. The leadership team saw AI as both a productivity lever and a potential competitive vulnerability if it were implemented poorly, and they wanted a strategic view of the available tools and their evolution. Staff were busy with operational delivery and had neither the time nor the specialist knowledge to navigate the proliferation of AI products on their own.
Rather than a single project, the engagement was structured as an ongoing advisory relationship with three recurring activities.
Market scanning. Regular review of AI products relevant to the firm’s core operational tasks. The output was executive briefings condensed from vendor demonstrations and market research, so that leadership could make informed product decisions without spending time on non-essential detail.
Implementation oversight. Once products were chosen, supporting the implementation phase to ensure staff actually used the new tools rather than reverting to familiar processes. Operational pressure often pulls staff back to what they know; the oversight role kept the adoption work visible and named.
Staff coaching. Targeted capability-building for the people whose work the AI tools were meant to support. The specific tactic that worked best is described below.
The LLM-as-mentor tactic
The central move in the staff-coaching side of the engagement was encouraging key staff to use a frontier LLM not just as a task tool, but as a personal AI mentor. The tactic was to have staff ask the LLM about its own capabilities, limitations and appropriate use cases: effectively, learning about AI through daily conversation with the AI itself.
The approach worked better than the traditional AI training programmes the same firm had previously tried. The learning was self-directed and contextualised to each person's actual work, and staff built intuition about when the AI was reliable and when it was not through repeated contact with its output. The LLM became both the subject being learned about and the teaching aid, and that combination meaningfully shortened the literacy-building timeline. The underlying heuristic is set out separately in Use a frontier LLM as a personal AI mentor.
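The tactic needs no special tooling beyond a chat interface, but for teams that want a shared starting point, something as small as a reusable mentor-framing prompt can seed the habit. The sketch below is illustrative only: the engagement names no specific tools, so the OpenAI Python SDK, the model identifier and the prompt wording here are stand-ins for whichever frontier LLM the firm actually uses.

```python
# Illustrative sketch only. The case study names no specific tools, so the
# SDK, model identifier and prompt wording below are assumptions standing in
# for whichever frontier LLM the firm actually adopted.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical "mentor" framing: the staff member asks the model about its
# own capabilities and limits in the context of their actual daily work.
MENTOR_PROMPT = (
    "Act as my personal AI mentor. When I describe a task from my day, "
    "explain whether a model like you is reliable for it, where you are "
    "likely to fail, and how I should verify your output."
)

def ask_mentor(task_description: str) -> str:
    """Send one mentor-style question and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder for the firm's chosen frontier model
        messages=[
            {"role": "system", "content": MENTOR_PROMPT},
            {"role": "user", "content": task_description},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_mentor(
        "I need to summarise a 40-page engineering compliance report. "
        "Can I trust you to do that, and what should I double-check?"
    ))
```

The point of the sketch is the framing, not the plumbing: the same question asked in any chat window does the same work.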
A recurring architecture decision
One operational pattern recurred across tool decisions and shaped much of the advisory work: the balance between specialist “wrapper” products, which package frontier capabilities with task-specific features, and direct use of frontier models. Wrappers that looked valuable six months earlier often became less defensible as the underlying models absorbed the capabilities the wrappers were built to provide. Monitoring vendor roadmaps against platform movement became a standard part of the advisory role. See Architect AI around principles, not vendors for the broader heuristic, and Retrieval middleware is being absorbed into platforms at mid-tier scale for the specific category of absorption that showed up most often in this engagement.
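One way that heuristic shows up in practice is keeping a thin internal seam between the firm's workflows and any one vendor. The sketch below is a generic illustration of that idea, not the engagement's actual architecture; every class and function name is hypothetical, and the vendor calls are stubbed.

```python
# A minimal sketch of "architect around principles, not vendors": workflows
# depend on a small internal interface, so swapping a wrapper product for
# direct frontier-model access is a one-line change rather than a rewrite.
# All names here are hypothetical; the vendor calls are stubbed.
from typing import Protocol

class CompletionProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class WrapperProduct:
    """Stands in for a specialist wrapper tool the firm might license."""
    def complete(self, prompt: str) -> str:
        return f"[wrapper answer to: {prompt}]"  # real vendor call goes here

class DirectFrontierModel:
    """Stands in for calling the underlying frontier model directly."""
    def complete(self, prompt: str) -> str:
        return f"[frontier answer to: {prompt}]"  # real API call goes here

def draft_briefing(provider: CompletionProvider, topic: str) -> str:
    # The workflow knows only the interface, never the vendor.
    return provider.complete(f"Draft an executive briefing on {topic}.")

# If the wrapper's value erodes as the platform absorbs its features,
# the switch is confined to this one line:
provider: CompletionProvider = DirectFrontierModel()  # was: WrapperProduct()
print(draft_briefing(provider, "AI products for engineering QA"))
```

The seam does not remove the need to monitor vendor roadmaps, but it caps the cost of acting on what the monitoring finds.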
What the case study illustrates
The engagement concretely plays out several of the patterns linked above. Hire for durable AI judgement, not transient AI mechanics is the shape of the advisory role itself: the firm did not buy prompt engineers; it bought ongoing judgement about which tools to invest in, how to deploy them, and when to move off them.
The engagement is ongoing, and this write-up captures the pattern at a point in time rather than as a final outcome. What is visible so far is that sustained advisory attention, paired with the LLM-as-mentor tactic for staff, has produced adoption that conventional training programmes in comparable firms have not.