Is AI About To Replace The UI?
- Barry Thomas
- Apr 1
- 3 min read
Updated: Apr 2
A few recent experiences have me wondering if we’re on an unexpectedly rapid path towards a profound shift in how we interact with software.
One client, for example, invested in what appeared to be an ideal AI-powered document-generation tool: purpose-built, thoughtfully designed, and clearly ahead of the curve when it launched. But, only a few months in, the implementation is stuttering and staff are bypassing their shiny new toy. They’re finding it quicker and easier to work directly with a general-purpose AI model—simply opening ChatGPT and getting started.
I’m beginning to see similar patterns elsewhere too. Legal firms, for instance, are more cautious—understandably, given the risks of inaccuracy—but even there, I’m hearing quiet admissions that bespoke tools can sometimes be more trouble than they’re worth compared with the ever-growing power of the base models.
None of this is conclusive. But it does raise some interesting questions.

Could the User Interface Be Losing Its Grip?
For decades, our relationship with software has been defined by interfaces. Buttons, menus, dashboards—they’ve been the way we tell systems what to do. But what happens when we no longer need those predefined signposts? When the AI is capable enough to understand intent from natural language alone?
I’ve long expected we’d reach this point eventually. What’s surprising is how quickly it seems to be arriving. The pace of improvement in frontier models is exposing a fact that is obvious once stated: many traditional interfaces were built not for functionality, but for us, to help us bridge the gap between human intention and machine execution.
Now that the AI can handle more of that translation directly, the interface doesn’t need to do nearly as much of the heavy lifting.
That doesn’t mean visual interactions will vanish. We’ll still need dashboards, tables, diagrams, timelines. But instead of opening a dedicated application to find them, we may simply ask—and the system will generate what we need, on the fly.
“Show me last quarter’s performance by region.”
“Map out a project timeline based on these five bullet points.”
In this emerging paradigm, the interface isn’t fixed—it’s fluid. Built at runtime. Called into being by conversation.
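To make the idea concrete, here is a minimal sketch of a runtime-built interface. The `call_model` function is a canned stand-in for a real foundation-model call (an illustrative assumption, not any specific vendor's API): in practice it would send the natural-language request to a model and ask for a declarative widget spec, which the client then renders on demand.

```python
# Sketch: a natural-language request produces an interface at runtime.
# The model returns a declarative spec; the client renders whatever it asks for.
import json

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call. A real implementation would send `prompt`
    # to a foundation model and request a JSON interface spec in reply.
    # The data below is invented purely for illustration.
    return json.dumps({
        "widget": "table",
        "title": "Last quarter's performance by region",
        "columns": ["Region", "Revenue"],
        "rows": [["North", 120], ["South", 95], ["East", 143], ["West", 88]],
    })

def render(spec_json: str) -> str:
    # The "interface" exists only at runtime, shaped by the request:
    # no predefined screen, just whatever widget the spec describes.
    spec = json.loads(spec_json)
    if spec["widget"] == "table":
        lines = [spec["title"], " | ".join(spec["columns"])]
        for row in spec["rows"]:
            lines.append(" | ".join(str(cell) for cell in row))
        return "\n".join(lines)
    return "(unsupported widget)"

print(render(call_model("Show me last quarter's performance by region")))
```

The interesting design choice is that the application ships no fixed screens at all: it only knows how to render a small vocabulary of widgets, and the model decides, per request, which one to produce.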
What Does That Mean for “Wrapper” Products—And for People?
If this shift continues, it has implications not just for how we use software, but for the software industry itself.
There’s been a proliferation of AI “wrappers”—tools that sit on top of foundation models and offer task-specific interfaces. Many are genuinely helpful, especially for teams still building AI fluency. But I’m starting to suspect their window of advantage is narrowing. As base models improve, wrapper tools risk becoming more of a constraint than a bridge. Features that were once selling points—like hallucination controls or workflow templates—are increasingly handled natively by the models themselves.
And here’s the uncomfortable part: if wrappers can be rendered obsolete by smarter foundations, could the same be true for aspects of people’s jobs? If we no longer need a human—or even a user interface—to translate intent into action, where does that leave the roles built on mediating between idea and output?
I don’t think this necessarily points to large-scale human redundancy in the long term. But I do think it points to a deep shift in what kind of work remains human. The hands-on “doing” is shifting—being absorbed, reshaped, or reimagined by AI. What stays with us, at least for now, is the work of judgment: deciding what to do, why it matters, and how to carry others along the path. Many of us who want to remain in work are going to have a lot of personal reinvention to do.
What to Watch For
I’m not offering this as a prediction—just as a possibility that’s starting to echo in different corners of my work. I may be wrong. It may stall. But the pattern seems compelling to me.
Could we be entering a new phase of AI adoption—one where the tools don’t just live behind interfaces, but generate them on demand? Where the dominant skill isn’t learning how to use software, but learning how to ask for what you need from a near-infinite menu of options?
It's too early to say how fast this shift will happen. But it's worth thinking about now, before the change is obvious only in hindsight.
A Few Reflective Questions for Leaders:
Are there places in your organisation where people are already sidestepping official tools in favour of direct interaction with AI?
Presumably your current systems are designed to be used by humans—how well is this going to work when AI is doing the doing? (Note that this is related to points I've raised previously about the centrality of document management for AI.)
If your teams could “talk to” your systems instead of using them via complex applications, what would change?
Which staff roles are focused on execution today—and how might those shift as the doing becomes more automated?