A regional bank's core banking selection delivered by an AI-amplified solo engagement
An abstracted single-engagement case study showing how a solo Shepherd Thomas consultant, AI-amplified, delivered a regional bank's core banking system selection on a compressed timeline, at lower cost, and at quality comparable to a major consulting team's.
This case study is drawn from a single Shepherd Thomas engagement. Sector and system type are named directly — a regional Australian bank selecting a new core banking system — because the client bank has since been absorbed into another institution, and that change has removed the identifiability risk that would ordinarily require those details to be abstracted. Other identifying dimensions — the bank’s name, its exact size, specific dates, named individuals on either side, and specific outcome metrics — remain omitted.
The firm and the engagement
A regional Australian bank, working to a tight commercial deadline to replace its core banking system. A prior engagement by a major consulting firm had recommended replacement and named a shortlist of candidate vendors; the selection work itself — RFP drafting, vendor proposal evaluation, shortlisting and recommendation to the Board — was the mandate to be awarded next. The same major consulting firm expected to pick it up.
A competing offer came from Shepherd Thomas: a single experienced consultant, backed by current frontier AI tooling, would deliver equivalent quality on a compressed timeline at lower cost. The offer was accepted.
The timeline was unusually tight — a few weeks from a blank page to a recommendation to the Board, spanning a holiday period that would ordinarily have lengthened vendor response windows.
What the AI-amplified process looked like
Requirements gathering, which would traditionally have run as weeks of stakeholder workshops, was compressed to a day of prompted drafting with a frontier LLM supplied with the relevant reference documents. A few days of cross-checking and hallucination-hunting followed, with client subject-matter experts validating the generated requirements against their operational realities.
When proposals arrived from most of the shortlisted vendors, LLM-assisted analysis reduced each proposal to a clean side-by-side comparison on the dimensions that mattered to the Board. One specific AI tactic proved unusually valuable: asking the model to identify vague or ambiguous sections of each proposal. Vendors have an incentive to be specific about their strengths and indistinct about their weaknesses; an AI reading the proposals with that question in mind surfaced a category of concern that manual review would likely have missed.
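The vagueness-flagging tactic reduces to a targeted prompt run identically across every proposal. A minimal sketch of what such a prompt builder might look like — the wording is illustrative, not the prompt actually used in the engagement, and `vagueness_prompt` is a hypothetical helper:

```python
def vagueness_prompt(vendor_name: str, proposal_text: str) -> str:
    """Build a per-proposal prompt that asks the model to surface vague,
    ambiguous or non-committal passages rather than to summarise.
    Illustrative wording only; not the engagement's actual prompt."""
    return (
        f"You are reviewing the core banking proposal from {vendor_name}.\n"
        "Do not summarise. Instead, list every passage that is vague, "
        "ambiguous, or non-committal; quote the wording verbatim; and "
        "explain what a precise version would have had to commit to.\n\n"
        "--- PROPOSAL ---\n"
        f"{proposal_text}"
    )
```

Running the same prompt over each shortlisted proposal yields comparable "vagueness reports" that can sit alongside the side-by-side comparison.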
Hallucination checking was built into the process. Every AI-generated report was fed back to the model for self-validation against the source documents. Rounds of self-checking caught several inaccuracies that would otherwise have travelled into the Board recommendation.
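The self-validation loop described above can be sketched as control flow. In the engagement the checking pass would be an LLM prompted with the draft report and the source documents; here `flag_unsupported` is a deterministic stand-in so the loop structure itself can be shown, and all names and example data are hypothetical:

```python
def flag_unsupported(claims, source_facts):
    """Stand-in for the LLM self-check pass: return claims with no
    support in the source material. A real run would prompt the model
    with the draft report and the source documents instead."""
    return [c for c in claims if c not in source_facts]

def self_validate(claims, source_facts, max_rounds=3):
    """Re-check the draft against sources for several rounds, dropping
    flagged claims, until a round comes back clean."""
    kept = list(claims)
    for _ in range(max_rounds):
        flagged = flag_unsupported(kept, source_facts)
        if not flagged:
            break
        kept = [c for c in kept if c not in flagged]
    return kept

# Example: a hallucinated claim is caught before it reaches the Board pack.
sources = {"Vendor A supports ISO 20022", "Vendor B hosts onshore"}
draft = ["Vendor A supports ISO 20022", "Vendor A is fully accredited"]
checked = self_validate(draft, sources)
# → ["Vendor A supports ISO 20022"]
```

The design point is the loop, not the check: each round's output becomes the next round's input, so an inaccuracy only survives if it passes every pass against the sources.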
The result
A recommendation was delivered to the Board on the original timeline, at a fee materially below the incumbent consulting firm’s quote. The specific recommendation was accepted. Client feedback — including from executives accustomed to traditional consulting engagements — noted that the work product was at least comparable in quality, with the additional benefit of faster iteration when the Board requested follow-up analysis.
What the case study illustrates
The engagement is one instance of several related patterns playing out together. "AI commoditises general expertise" is the underlying mechanism: the work of drafting, analysing and summarising is becoming available at low cost to anyone with access to frontier AI, not just to large teams. "Human work becomes relatively expensive as AI trends to free" explains the commercial gap that the AI-amplified solo engagement was able to open — a traditional team of human analysts paid on billable hours cannot match the unit economics of one experienced person and a model.
The pattern "Hire for durable AI judgement, not transient AI mechanics" is what makes the solo model durable rather than merely cheap: the value came from the consultant's judgement about which tasks to hand to AI, how to verify its output, and which parts of the work — regulatory framing, Board communication, stakeholder management — to keep firmly human. The mechanical parts of the work were compressed; the judgement layer was not.
One caveat worth naming. This was a single engagement with specific conditions: a regulated sector, a tight timeline, a well-scoped decision, a Board accustomed to clear recommendations. The AI-amplified solo model does not work equally well in all contexts, and a single observation of success is not evidence of a general commercial formula. The case is useful as an illustration of a pattern; it is not a recipe that can be relied on to reproduce.
A balancing observation is worth making alongside that caveat. The engagement concluded in early 2024. Frontier AI capability has progressed dramatically in the time since, and the tooling available to a solo consultant today is substantially more capable than what was in play then. The result achieved here — a solo engagement outdelivering a traditional consulting team on quality, speed and cost — is much easier to replicate now than it was when this engagement happened.