Hachette Job

There’s a new kind of fear spreading through publishing, and it’s not about plagiarism or automation or even quality. It’s something flatter, blunter, and far more dangerous:

AI = bad.
Full stop.
No distinctions.
No nuance.
No categories.

The recent Shy Girl controversy made that painfully clear. A novel was pulled because someone, somewhere, used AI at some point in its development — and that was enough to contaminate the entire project. Not because the book was written by a machine, but because the culture has collapsed all AI use into a single moral category.

And that should terrify anyone who cares about the future of writing, accessibility, or computing itself.

Because here’s the truth no one wants to say out loud:

Distributed cognition is the future of computing.
And distributed cognition requires assistive AI.

Not generative AI that writes for you.
Not “make me a novel” AI.
Not replacement AI.

I’m talking about scaffolding:

  • outlining
  • organizing
  • brainstorming
  • structuring
  • reframing
  • catching ideas before they evaporate
  • helping neurodivergent writers manage cognitive load
  • supporting disabled writers who need executive‑function assistance
  • acting as a cognitive exoskeleton, not a ghostwriter

This is not cheating.
This is not automation.
This is not outsourcing creativity.

This is infrastructure.

It’s the same category as spellcheck, track changes, or the “undo” button — tools that extend human cognition without replacing it.

But right now, the public can’t tell the difference between:

  • using AI to outline a chapter
    and
  • using AI to generate a chapter

So everything gets thrown into the same bucket.
Everything becomes suspect.
Everything becomes “AI‑tainted.”

And that’s not just wrong — it’s catastrophic.

Because if we criminalize assistive AI, we criminalize:

  • disabled writers
  • neurodivergent writers
  • overwhelmed writers
  • writers with chronic illness
  • writers who need scaffolding to function
  • writers who use tools the way everyone uses tools

We criminalize the future of computing itself.

Distributed cognition — the idea that thinking can be shared across humans, tools, and environments — is not a fringe concept. It’s the direction computing has been moving for decades. It’s the reason we have cloud storage, collaborative documents, IDEs, and smartphones.

AI is simply the next step in that evolution.

But if the cultural reaction to AI is a blanket “no,” then we don’t just lose a tool.
We lose an entire paradigm.

We lose the ability to build systems that support human cognition instead of overwhelming it.
We lose the chance to make writing more accessible, not less.
We lose the opportunity to design a future where tools amplify us instead of replacing us.

The fear is understandable.
The panic is not.

We need a vocabulary that distinguishes:

Generative AI, which produces text you didn’t think,

from

Assistive AI, which helps you think your own text.

Without that distinction, we’re not protecting creativity.
We’re strangling it.

And we’re doing it at the exact moment when writers need more support, not less.

The future of computing is distributed cognition.
The future of writing is supported writing.
The future of creativity is collaborative, not solitary.

If we let fear flatten all AI into a single moral category, we won’t stop the technology.
We’ll just make it inaccessible to the people who need it most.

And that’s the real horror story.


Scored with Copilot. Conducted by Leslie Lanagan.
