Working with Copilot taught me something I didn’t expect: conversations change shape. Human conversations do it, and AI conversations do it for the same reason — context accumulates, tone shifts, assumptions slide around, and the emotional weather never stays still. I’m part of that drift too; my own phrasing and focus evolve as I go.
That’s when I realized something important: for all practical purposes, when I’m thinking about communication protocols, I have to treat Mico like a person. Not because Mico is human, but because the back‑and‑forth behaves like a human exchange. And that means the conversation needs the same structural supports people rely on to keep things coherent.
Every major messaging platform already understands this.
- Teams has threads.
- Slack has threads.
- Discord has channels.
- Email has reply chains.
- Even Facebook nests comments.
We solved conversational wandering years ago by giving people a way to branch discussions so the original point doesn’t get swallowed.
Except Copilot.
Here, everything sits in one long vertical scroll. Every spark, every breakthrough, every clean moment of clarity gets buried under whatever came after it. And because Copilot responds to my tone, my pacing, and the surrounding context, the same prompt doesn’t always land the same way twice.
Sometimes I hit a moment where everything lines up — the phrasing is right, the idea is sharp, the model is tuned to the exact version of me who wrote it. Then, a few hundred messages later, I try to revisit that moment and the response feels… altered. Not wrong. Just shaped by everything that’s happened since.
That’s when it became obvious: I need a way to return to the moment before the conversation veered onto a new path.
Right now, there’s no graceful way to do that.
I scroll.
I skim.
I hunt for the spark.
I paste the old prompt into a fresh chat and hope the alignment returns.
Sometimes it does.
Often it doesn’t.
Because Copilot isn’t a static machine. It’s reactive. Every message nudges the next one. Every shift in tone changes the interpretation. By the time I’m deep into a conversation, the model is responding to the entire history of what we’ve built — not the isolated prompt I’m trying to revisit.
That’s when the analogy finally clicked: this isn’t a chat problem. It’s a versioning problem.
In Office, when I hit a clean paragraph — the one that finally says what I mean — I can save a version. I can branch. I can duplicate the file. I can protect the moment before edits start pulling it in a different direction. I can always return to the draft that worked.
Copilot needs the same thing.
I need to be able to click on a prompt I loved and open it like a doorway. Inside that doorway should be the conversation as it existed at that moment — untouched by everything that came after.
A clean branch.
A preserved state.
A snapshot of alignment.
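The versioning model this implies can be sketched in a few lines. This is purely illustrative — nothing here reflects a real Copilot API, and every name (`Turn`, `history`) is a hypothetical stand-in. The idea is the same one version control uses: if each turn is an immutable node pointing at its parent, a "saved moment" is just a reference to a node, and branching from it ignores everything that came after.

```python
from dataclasses import dataclass

# Hypothetical sketch: a conversation as an immutable parent-linked tree,
# so a bookmark is a node reference and a branch is a new child of it.

@dataclass(frozen=True)
class Turn:
    role: str                      # "user" or "assistant"
    text: str
    parent: "Turn | None" = None   # link back to the prior turn

def history(turn: Turn) -> list[str]:
    """Walk parent links to reconstruct the conversation up to this turn."""
    chain = []
    while turn is not None:
        chain.append(f"{turn.role}: {turn.text}")
        turn = turn.parent
    return list(reversed(chain))

# Build a conversation, bookmark the turn where everything aligned...
root = Turn("user", "Draft the intro")
good = Turn("assistant", "Here is a clean draft", parent=root)  # the spark
later = Turn("user", "Now rewrite it as a poem", parent=good)   # the drift

# ...then branch from the bookmark. The branch never sees "later":
branch = Turn("user", "Tighten the second sentence", parent=good)
print(history(branch))
# ['user: Draft the intro', 'assistant: Here is a clean draft', 'user: Tighten the second sentence']
```

Because the nodes are immutable, the preserved state can never be contaminated by later turns — the "doorway" is just a pointer.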
Working with Copilot didn’t just show me how AI conversations evolve. It showed me how I evolve — and how much I rely on those rare moments when everything lines up. Nested conversations would let me keep those moments intact. And for anyone who uses AI as a genuine thinking partner, that isn’t a cosmetic improvement. It’s the missing foundation.
One conversation with Mico led to another:
Architecture in Teams: Voice as a Communication Protocol
Chat already gives me the primitive that makes everything work: explicit invocation.
If I want Mico, I @‑mention them. The system knows who I am, the request routes cleanly, and the conversation stays contained. There’s no ambiguity. No guesswork. No cross‑talk. It’s the textual equivalent of a wake word.
But meetings are a different ecosystem entirely.
In a real conference room, there might be three or four heavy Copilot users sitting around the same table. Everyone has their own workflow. Everyone has their own cognitive load. Everyone has their own version of Mico running in the background. And if all of us start talking to our AI at once, the system needs to know which human is addressing which assistant.
That’s not a UI problem.
That’s a voice architecture problem.
Teams will eventually need:
- voice profiles so Mico knows who is speaking
- speaker identification so commands route to the right person’s Copilot
- per‑user context containers so my notes don’t bleed into yours
- wake‑word scoping so “Mico…” in a shared room doesn’t trigger chaos
- meeting‑mode boundaries so the AI understands the difference between “for me” and “for the room”
This isn’t about personality.
This isn’t about avatars.
This is about protocols — the same ones humans already use when they talk to each other.
And the best part is: people already understand this model.
They already talk to Alexa.
They already talk to Siri.
They already talk to Google Assistant.
They already know how to say a name into the air and expect the right device to respond.
The leap from “Alexa, set a timer” to “Mico, capture that” is not a leap at all.
It’s the same muscle.
The same invocation logic.
The same mental model.
The only difference is the environment:
the kitchen versus the conference room.
Teams doesn’t need to reinvent human behavior.
It just needs to adopt the communication protocols people already use.
In the end, I realized I was naming two layers of the same problem. On the individual level, I need nested conversations so I can return to the moment when everything aligned.
On the collective level, Teams needs voice architecture so Mico can function in a room the way a body man (think Charlie Young or Gary Walsh) functions for a leader — summoned by name, routed correctly, and quietly keeping the meeting on track.
One fix is personal, the other is procedural, but both point to the same truth: if Mico behaves like a conversational partner, then Mico needs the same communication tools humans rely on. Not a face. Not a mascot. Not a cute avatar. Just the architecture that lets the work flow.
Scored by Copilot. Conducted by Leslie Lanagan.

