Who’s Talking Now? Designing for AI as a Mediator
Published: October 3, 2025
Time to read: 5 min
The Problem: When AI Switches Sides, Users Get Lost
Imagine listening to a podcast with two guests who sound the same and never introduce themselves. Soon you lose track of who’s speaking. The discussion might be worthwhile, but keeping up with it takes effort. You keep rewinding, trying to figure out who said what.
This kind of confusion also happens in AI mediator apps. While designing for Gen X users, who value clarity and may not be familiar with modern UI patterns, I tested several customer service tools to see how they handled this issue. I noticed a common problem: the AI switches sides of the conversation depending on who it’s talking to, and its identity becomes unclear. For this audience, the UI should rely on simple, almost classic patterns so clarity is never lost.
Here’s a typical example. A multilingual AI chats with a service provider for you. It shows up in your chat and in the provider’s transcript, sometimes on opposite sides of the screen. Then you see a message: “I can schedule that for Friday.” But who’s speaking: the provider or your AI? There’s no quick way to tell.
This problem is more than a design detail. It interrupts conversations, causes confusion, and erodes users’ trust in the system. To fix it, we need to understand why it happens and how clearer identity signals can help.
The Core Insight: Roles Change, Identity Persists
The real fix is to always make the AI’s identity clear, no matter what role it plays. This way, users always know who’s speaking and can trust the system.
In your chat, the AI acts as your helper. In the provider’s transcript, it becomes your representative. The role changes, but the identity should stay the same. This brings us to an important design principle: Consistency over Context. Visual identity should remain constant across different roles to prevent confusion.

Many apps miss this. They move the AI around for layout reasons and drop important identity cues like color, avatar, or name. Without these signals, users have to work harder to follow the conversation. For example, I’ve seen apps where the AI appears as a blue bubble when helping you, but turns into a gray bubble with no avatar when talking to the provider. At that point, users have to guess whether the response came from the provider or their own AI. Changing the color, instead of clarifying the role, actually removes the AI’s recognizable face.
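To make the principle concrete, here is a minimal sketch of what this might look like in a chat data model, written in TypeScript. The names (Participant, Message, the assistant’s id, avatar, and color) are all invented for illustration, not taken from any real product. The only point is that identity lives in one object while role is a per-message attribute.

```typescript
// A minimal sketch, assuming a hypothetical chat data model.
// The AI's identity (name, avatar, color) lives in one place and never changes;
// only the role attached to each message varies by context.

type Role = "assistant" | "representative";

interface Participant {
  id: string;
  name: string;
  avatarUrl: string;
  accentColor: string; // stays constant across every transcript
}

interface Message {
  sender: Participant; // always the same Participant object for the AI
  role: Role;          // what the AI is doing right now, not who it is
  text: string;
  timestamp: Date;
}

// One shared identity, reused in both the user's chat and the provider's transcript.
const assistant: Participant = {
  id: "ai-mediator",
  name: "Ava",
  avatarUrl: "/avatars/ava.png",
  accentColor: "#2f6fde",
};

const inUserChat: Message = {
  sender: assistant,
  role: "assistant",
  text: "I can schedule that for Friday.",
  timestamp: new Date(),
};

const inProviderTranscript: Message = {
  sender: assistant,        // same identity
  role: "representative",   // different role
  text: "My user would like Friday at 10am.",
  timestamp: new Date(),
};
```

Because both transcripts reuse the same Participant object, a layout change can never quietly strip the AI of its name, avatar, or color.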
When the AI’s identity is clear and consistent, users can follow the conversation easily, which builds trust and lets them focus on what matters. So how can the chat layout itself support that clarity?
Why a Traditional Chat Layout Works Better
Most AI tools today use a prompt–response view: the user’s inputs appear as bubbles, and the AI’s responses scroll in a single pane. This approach works well when the AI has a single, fixed role.
But an AI mediator is different. It answers the user and also speaks for them to others. In this case, a regular two-party chat layout works better. It fits what people expect and makes it clear when the AI’s role changes.
Think of WhatsApp or Google Chat. Each person has a fixed spot in the conversation, so it’s always clear who is speaking, even when things move quickly. If an AI mediator switches sides in the same window, that consistency is lost. But if the AI always appears the same, with a steady avatar or color, it stays easy to follow, much as Teams and Slack keep identities clear in busy group chats.
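Here is a small, hedged sketch of how that separation could work: which side a bubble lands on depends on whose transcript is being rendered, while every identity cue still comes from the sender. The types mirror the earlier sketch, and the participant ids are invented.

```typescript
// Minimal types, matching the earlier sketch.
interface Participant { id: string; name: string; avatarUrl: string; accentColor: string; }
interface Message { sender: Participant; role: "assistant" | "representative"; text: string; }

type Viewer = "user" | "provider";

// The side a bubble lands on depends on whose transcript is being rendered;
// every identity cue still comes from the sender, never from the side.
function bubbleSide(message: Message, viewer: Viewer): "left" | "right" {
  // Hypothetical participant ids for the two human parties.
  const viewerId = viewer === "user" ? "end-user" : "service-provider";
  return message.sender.id === viewerId ? "right" : "left";
}

function renderBubble(message: Message, viewer: Viewer) {
  return {
    side: bubbleSide(message, viewer),       // layout: may differ per transcript
    name: message.sender.name,               // identity: constant
    avatarUrl: message.sender.avatarUrl,     // identity: constant
    accentColor: message.sender.accentColor, // identity: constant
    roleLabel: message.role,                 // role: labeled, not disguised
    text: message.text,
  };
}
```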

When the AI looks the same every time, users can spot it right away, even if it changes sides. This helps them stay on track in the conversation.
A Simple Framework for Multi‑Context AI
When AI operates across contexts, its role may change, but its identity must remain constant.
Identity First. Keep visual markers, such as color, avatar, and typography, consistent everywhere.
Clarify Role. Use small labels to indicate what the AI is doing, without altering its core appearance.
Progressive Context. Offer extra details on hover or tap, such as “Speaking for you” or “Relaying provider response.”
Accessibility. Ensure that assistive technology consistently identifies the AI as the same entity.
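As a rough illustration of how these four points might fit together, here is a hedged sketch of a single message component in React and TypeScript. The names (AiMessage, ROLE_DETAIL, the role-badge class) and the role wording are assumptions for illustration, not a prescription.

```tsx
import React from "react";

type Role = "assistant" | "representative";

interface Participant {
  name: string;
  avatarUrl: string;
  accentColor: string;
}

// Human-readable role descriptions, surfaced on hover/tap and to screen readers.
const ROLE_DETAIL: Record<Role, string> = {
  assistant: "Speaking for you",
  representative: "Relaying provider response",
};

interface AiMessageProps {
  ai: Participant; // identity: always the same object, everywhere
  role: Role;      // role: varies per context
  text: string;
}

export function AiMessage({ ai, role, text }: AiMessageProps) {
  return (
    <div
      // Identity First: avatar, name, and accent color never change.
      style={{ borderLeft: `3px solid ${ai.accentColor}` }}
      // Progressive Context: extra detail on hover or long-press.
      title={ROLE_DETAIL[role]}
      // Accessibility: assistive tech hears the same entity plus its current role.
      aria-label={`${ai.name}, ${ROLE_DETAIL[role]}: ${text}`}
    >
      <img src={ai.avatarUrl} alt="" width={24} height={24} />
      <strong>{ai.name}</strong>
      {/* Clarify Role: a small label, without altering the core appearance. */}
      <span className="role-badge">{ROLE_DETAIL[role]}</span>
      <p>{text}</p>
    </div>
  );
}
```

The key design choice is that identity and role arrive as separate inputs, so the role label can change freely without ever touching the markers users rely on to recognize the AI.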

The Takeaway
If AI identity isn’t handled well, the problems are bigger than just confusion. In important areas like healthcare, finance, or legal work, unclear identity can make users unsure about who approved a decision or gave key information. This can cause accountability issues, security risks like someone pretending to be the AI, and a loss of trust. As AI is used in sensitive fields, making its identity clear isn’t just about usability, but also about safety, ethics, and user confidence.
Similar design issues will show up in:
Multi-party negotiations (AI representing different sides)
Cross-platform assistance (same AI in multiple tools)
Role-switching interfaces (coach vs. analyst vs. teammate)
Good design isn’t about making the interface invisible. It’s about making the AI easy to spot every time. When users always know who’s talking, they can pay attention to the conversation instead of guessing who said what.

As AI joins group chats, the real issue isn’t that its role changes, but whether users can keep up with those changes. However many roles it takes on, its appearance should stay the same.
---
If you’re working on an AI product and need help making user roles and AI identity clear, feel free to get in touch. I can help your team design easy-to-use interfaces for tough problems.