The Future of Agentic UX Design | Monolith

Written by Lance Hambly | Nov 24, 2025 11:32:12 AM

Author: Lance Hambly, Senior Product Designer, Monolith

Read Time: 7 mins

This article reflects a research-driven product design perspective from our team and is intended to share how we think about building solutions at Monolith.

AI agents are quietly becoming part of our software, not in dramatic sci-fi ways, but as everyday features that will soon be everywhere. We're early in this change, but the direction is clear: products are shifting from static tools to spaces where users and agents collaborate.

Currently, AI shows up mostly as chat boxes, which makes sense: they are easy to use, capture intent well, and handle uncertainty gracefully. But chat keeps agents on the edge, responding with text instead of working where the real tasks happen. What comes next is more exciting.


As agents improve and integrate more, they'll move beyond chats to support workflows directly, showing info in context, automating tasks, and adapting to screen activity. This subtle shift means designing for a system of humans and agents. For designers and product thinkers, this creates a new focus on what users and agents can achieve together.

 

Sidebar: The AI Chatbot Falls Short

The challenge with chat isn’t that it’s unfamiliar or hard to use — it’s that it asks the user to work around it. Every request becomes a small detour: pause the task, explain the situation, translate what you see into typed language, and hope the agent interprets it correctly. It’s a pattern that fits the agent’s needs more than the user’s.

Chat also flattens everything into text, even when the task itself is visual, spatial, or structured. An agent might understand the context perfectly, but if the only way it can respond is through paragraphs of explanation, the user ends up doing the heavy lifting of interpretation. What should be a quick action turns into a miniature reading assignment, and the agent becomes a consultant sitting off to the side rather than a collaborator inside the workflow.


More recently, we’ve seen agents connected to the actual surfaces where work happens via Model Context Protocol (MCP) servers and their tools. Yet they often take longer to complete a task than the user would have alone, perhaps because they lack context, or simply because of the overhead of the understanding layer.

These aren’t fatal flaws — chat still plays an important role. But they make it clear that chat alone can’t support the kind of fast, fluid cooperation that agents are capable of. To unlock that, agents need ways to act within the interface, not just talk about it.

 

The Collaborative Future of Human and Machine

Embedded collaboration is where things get interesting. In the not-too-distant future, agents won’t sit at the edge of the interface waiting for instructions; they’ll operate alongside the user, contributing directly to the work as it unfolds. Imagine being able to hand off a long-running task and know it will quietly progress in the background without constant prompting. Or opening a complex workflow and finding that the groundwork has already been prepared (data gathered, drafts assembled, options narrowed) because your agent understood the broader goal, not just the last instruction you typed.

We’ll likely see something else too: multiple agents with distinct strengths working together, much like specialised teammates. One agent might refine copy, another might run analysis, another might manage a multi-step system process — all coordinated, all context-aware, all contributing to the user’s outcome rather than competing for attention in a chat thread.
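That coordination pattern can be sketched as a simple dispatcher that routes each kind of task to the specialist that handles it. This is a hypothetical illustration, not a real framework: the `Agent` interface, the `team` roster, and `dispatch` are all invented names.

```typescript
// Hypothetical specialist agents, each keyed by the kind of work it handles.
type TaskKind = "copy" | "analysis" | "process";

interface Agent {
  name: string;
  handles: TaskKind;
  run(input: string): string;
}

// A small team of specialists sharing one context rather than one chat thread.
const team: Agent[] = [
  { name: "editor", handles: "copy", run: (s) => `edited: ${s}` },
  { name: "analyst", handles: "analysis", run: (s) => `analysed: ${s}` },
];

// Route a task to the specialist that handles its kind.
function dispatch(kind: TaskKind, input: string): string {
  const agent = team.find((a) => a.handles === kind);
  if (!agent) throw new Error(`no agent for ${kind}`);
  return agent.run(input);
}
```

The point of the sketch is the division of labour: each agent contributes where it is strongest, and the user sees one coherent outcome rather than several competing conversations.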


And as agents become more capable on the execution side, the interface itself will adapt in subtle, dynamic ways: UI elements that appear when they’re useful, dissolve when they’re not, or even materialise from a simple prompt (“make this easier to compare,” “add a view for trends”). The boundary between what the user designs and what the agent assembles will become more collaborative than prescriptive.

What’s important here is not the novelty of it all, but the shift in responsibility. Users won’t need to micromanage systems step by step. Instead, they’ll direct, review, and refine, working with software that can actually participate in solving the problem, not just describe it.

This is the environment we’re now designing for: not tools with an AI feature, but products where humans and agents actively shape outcomes together.

 

Principles for Designing Agentic UX

As I’ve been working more closely with agents and the systems that support them, I’ve found myself needing a new mental toolkit to help guide decisions that don’t quite fit into traditional UX thinking. These principles started as notes to myself, a way to stay focused on what actually makes agents useful rather than ornamental. Over time, they’ve become a practical lens for designing products where humans and agents share the work.

Here’s what I keep coming back to:

1. Invisible First, Visible When Needed

The best agent support works quietly: preparing data, avoiding errors, and cutting busywork before the user sees it. When an agent acts, it should be clear, concise, and easy to override.

2. Enhance, Don’t Duplicate

An agent should use the product's existing functionality, not recreate it. By enhancing workflows instead of copying them, users gain power without losing control.

3. Action Beats Advice

If the agent can act, it should. Explanations and summaries matter, but they are no substitute for direct action. An agent's real value lies in execution, not conversation.

4. Design for Handovers, Not Just Inputs

Focus on how tasks flow between the user and the agent, not just how users ask for help. Interfaces must clearly show what the agent did, what the user must confirm, and how they might take control. Smooth handovers build trust and speed; poor ones cause confusion.
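One way to make handovers concrete is to model the task's lifecycle as explicit states, so the interface can always answer "who holds the task, and what happens next?" A minimal sketch, with invented names (`HandoverState`, `delegate`, `review`) rather than any real API:

```typescript
// Hypothetical handover lifecycle: the agent does work, the user reviews it,
// and the user can take the task back at any review point.
type HandoverState =
  | "user_working"     // user holds the task
  | "agent_working"    // task delegated to the agent
  | "awaiting_review"  // agent finished; user must confirm
  | "confirmed";       // user accepted the agent's work

interface Handover {
  state: HandoverState;
  agentSummary?: string; // what the agent did, surfaced at review time
}

// The user delegates a task to the agent.
function delegate(h: Handover): Handover {
  if (h.state !== "user_working") throw new Error("only the user can delegate");
  return { ...h, state: "agent_working" };
}

// The agent completes its part and hands back with a summary of what it did.
function agentComplete(h: Handover, summary: string): Handover {
  if (h.state !== "agent_working") throw new Error("agent is not working");
  return { state: "awaiting_review", agentSummary: summary };
}

// The user either confirms the work or takes the task back.
function review(h: Handover, accept: boolean): Handover {
  if (h.state !== "awaiting_review") throw new Error("nothing to review");
  return accept ? { ...h, state: "confirmed" } : { ...h, state: "user_working" };
}
```

Making every transition explicit is what builds trust: the agent's work is always visible, and control is always one step away from the user.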

5. Build Flexibility into Design Systems

Agents will increasingly generate, modify, and adapt interface elements. Designers must view components as flexible modules that agents can utilise without disrupting the experience. The UI should remain stable for users while allowing agents to participate meaningfully.
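One way to think about agent-ready components is as modules that declare which parts an agent may fill while their structure stays fixed for the user. The sketch below is illustrative only; `ComponentSpec`, `register`, and `fillSlots` are hypothetical names, not a real design-system API.

```typescript
// A component declares which parts an agent may vary and which stay fixed.
interface ComponentSpec {
  name: string;
  fixed: { role: string };                 // stable structure the user relies on
  slots: Record<string, "text" | "data">;  // parts an agent is allowed to fill
}

const registry = new Map<string, ComponentSpec>();

function register(spec: ComponentSpec): void {
  registry.set(spec.name, spec);
}

// An agent proposes content only for declared slots; anything else is dropped.
function fillSlots(
  name: string,
  proposal: Record<string, string>,
): Record<string, string> {
  const spec = registry.get(name);
  if (!spec) throw new Error(`unknown component: ${name}`);
  const accepted: Record<string, string> = {};
  for (const [slot, value] of Object.entries(proposal)) {
    if (slot in spec.slots) accepted[slot] = value; // keep only declared slots
  }
  return accepted;
}
```

The design choice here is the contract: the agent participates meaningfully through declared slots, but it cannot reshape the parts of the UI the user depends on for orientation.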

 

Rethinking the Craft

Designing for this future asks more of us as product makers. It means thinking beyond screens and components and into flows, handovers, trust, context, and the structure of the work itself. Users aren’t facing interfaces alone anymore — they’re working alongside systems that can genuinely pull weight.

And that gives us a rare opportunity: to define what this partnership feels like. When agents act, when they step back, how they reveal their thinking, and how they help people move faster and with more clarity than the software of the past ever could. This isn’t just an evolution of UX; it’s an expansion of it — one that reshapes how software behaves at a fundamental level.

Adding AI is easy. Designing for user–agent collaboration is the real craft. And that is the design philosophy we are carrying forward at Monolith as we continue integrating agents into our platform.


About the author

Lance is a Senior Product Designer at Monolith, responsible for blending research, UX, and UI craft to bring clarity to complex tools. He partners with end users and teams across the company to build products that feel intuitive, are grounded in real needs, and are crafted with a human-centred perspective that makes complexity feel simple.