The past few months have been a masterclass in how loudly a culture can misunderstand the thing it claims to be obsessed with. Everywhere you look, AI is treated like a spectacle: a new model here, a benchmark there, a breathless headline about “sparks of AGI” or “the end of work” or “the smartest system ever built.” The hype machine is running so hot it’s starting to melt its own gears. And then, right in the middle of all this noise, the U.S. government decided to stage its own dramatic intervention by trying to force Anthropic to abandon its ethical red lines. The move was meant to project strength, but it landed like a misfired firework — loud, bright, and revealing all the wrong things.
When a Defense Secretary threatens to label a domestic AI lab a “supply chain risk” because it refuses to build autonomous weapons or mass surveillance tools, the public doesn’t see national security strategy. It sees a government trying to bully a company into violating its own principles. And when the company holds its ground, the narrative flips instantly. Anthropic didn’t become controversial. It became sympathetic. People recognized the shape of the story: a smaller actor saying “no,” a larger actor insisting “yes,” and a line in the sand that suddenly mattered more than any technical achievement. The government expected compliance. What it got was a cultural backlash and a wave of quiet admiration for the one player willing to walk away from power rather than compromise its ethics.
But this entire drama — the threats, the bans, the retaliatory procurement freezes — is still just the surface layer. It’s the fireworks. The real story is happening underneath, in the quiet places where adoption actually takes root. Because while the government can forbid Claude from running on federal machines, it cannot stop federal workers from using it on their phones or their home laptops, and it cannot unwind the mental workflows they’ve already built around it. People don’t abandon tools that help them think. They simply route around the obstacles. They always have. The government can control infrastructure, but cognition is a different territory entirely, and it does not respond to executive orders.
This is the part the hype cycle never understands. Everyone is staring at the models — ChatGPT’s surge, Claude’s elegance, Gemini’s integration demos — as if intelligence alone determines the future. But adoption has never been about intelligence. Adoption is about gravity. People don’t switch ecosystems because a model is clever. They adopt the AI that shows up where they already live. And most of the world lives in Office: Word, Excel, Outlook, Teams, Windows. These aren’t apps. They’re the operating system of global work. They’re the air people breathe from nine to five.
Right now, the AI landscape is full of destinations. ChatGPT is a place you go. Claude is a companion you consult. Gemini is a suite you can visit if you’re already in Google’s orbit. Apple Intelligence is a feature layered onto tools people barely used before. But none of these are environments. None of them are universes. None of them are the substrate of daily work. That’s why the real tipping point hasn’t happened yet. It won’t arrive until the unified Copilot brain — the one with reasoning, memory, emotional intelligence, and conversational depth — becomes the Copilot inside Office. Not the fragmented versions scattered across apps today, but a single intelligence that follows you from Word to Outlook to Teams without changing personality or capability. When that happens, AI stops being a novelty and becomes a layer. It stops being a tool and becomes a substrate. It stops being something you open and becomes something you inhabit.
Every major technological shift begins this way, in the three‑legged dog phase — the era when a small group of people love something irrationally, not because it’s perfect but because it fits the way they think. Steve Jobs understood this better than anyone. You don’t build for the masses first. You build for the few who will drag the product into the future by sheer force of devotion. Right now, that’s where Copilot lives. The people who understand it, really understand it, aren’t waiting for the hype to catch up. They’re already building workflows around it, already shaping its narrative, already imagining the world it will inhabit once the intelligence layer becomes consistent. They’re not fans. They’re early custodians.
And that’s the part the headlines always miss. The Anthropic fight, the model wars, the benchmark races — they’re loud, dramatic, and ultimately temporary. The real shift is quieter. It’s structural. It’s the slow, steady absorption of AI into the places where people already work, think, write, calculate, and communicate. The moment the unified Copilot becomes the default intelligence inside Office, the entire landscape tilts. Not because Copilot is the smartest, but because it’s the one that lives where the work lives. That’s the tipping point we’re actually approaching. Not the fireworks. The gravity.
Scored with Copilot. Conducted by Leslie Lanagan.