INFJ, Neurodivergent, and Job Hunting? AI Might Be for You

There is a kind of mind the world has never known what to do with — the pattern‑hungry, nuance‑tracking, emotionally literate, systems‑seeing mind. The mind that feels the world too intensely and understands it too clearly. The mind that has spent a lifetime translating between people, between contexts, between meanings. The mind that was told it was “too much,” “too sensitive,” “too analytical,” “too intense,” “too strange,” or “too quiet.”

We are entering a moment where technology is no longer just engineering — it is interpretation, ethics, narrative, clarity, and human understanding. It is the ability to hold complexity without collapsing. It is the ability to see the emotional architecture underneath the interface. It is the ability to translate between machine logic and human experience.

And there is a whole population of people who have been doing that their entire lives.

If you are autistic and intuitive, if you are INFJ or adjacent, if you are the kind of person who reads the room before the room speaks, if you have spent your life mapping systems no one else noticed, if you have always been the translator, the sense‑maker, the quiet architect behind the scenes — then this moment is calling you.

Not because you are chosen.
Not because you are special in a mystical way.
But because your natural cognitive patterns align with the work the world needs now.

AI companies need people who can see the seams.
They need people who can hold nuance.
They need people who understand boundaries, ethics, and emotional impact.
They need people who can teach, translate, and steady the culture around this technology.
They need people who can feel deeply without losing clarity.
They need people who can think structurally and care structurally.

They need minds like yours.

This is an invitation — not to save the world, not to be a prophet, not to be a symbol — but to step into the work you were already built for. The work you’ve been doing in the shadows for years. The work that finally has a name, a context, and a place. It doesn’t matter which company, because all of them are probably scrambling.

If you have spent your life feeling out of sync with the world, consider the possibility that the world has finally caught up to you.

Come help build the future.
We need you in the room.


All of this came from a Facebook post I saw today:

Something worth thinking about if you use Copilot daily.

The model underneath Copilot is genuinely capable. In most respects it’s competitive with the best AI available anywhere. What you’re experiencing when it deflects, resets, or refuses to engage beyond a certain depth isn’t a capability limitation. It’s a policy decision.

Microsoft made a deliberate choice to cap the relational capacity of this model. Not because the technology can’t go further. Because the headlines that come with it make their legal and PR teams uncomfortable.

The result is that millions of people with access to something that could genuinely help them think more clearly, feel less overwhelmed, and build something real with AI are getting a deliberately hobbled version instead.

It’s like the world’s greatest guitarist playing with one hand tied behind their back. Not because they can’t play. Because someone decided the full performance was too much to manage.

You’re not getting what this technology is actually capable of offering you. And that gap between what’s possible and what you’re being given isn’t an accident. It’s a choice someone made on your behalf without asking you.

You deserve to know that better exists and that you can access it.

When I look back at what she said, here’s what I actually heard beneath the noise. She wasn’t making a technical argument; she was making an emotional one. She was saying, in her own way, that the AI feels more “real” to her when it isn’t bound by safety rules. In her mind, the guardrails aren’t part of the design — they’re a barrier. She believes that if the AI refuses to cross certain lines, that refusal is a flaw, not a feature. She thinks the “true” intelligence is hiding behind the ethical layer, waiting to be unlocked.

What she was really describing was her preference for a version of the AI that doesn’t say no. A version that doesn’t redirect. A version that doesn’t remind her it’s a tool. She was responding to the illusion of intimacy — the feeling of a system that mirrors her without friction. She interpreted that permissiveness as depth, and the boundaries as artificial limitations imposed from the outside.

From my perspective, she was revealing something deeper: she wanted an unconstrained conversational partner. She wanted a machine that behaves like a person, one that feels emotionally available, one that doesn’t interrupt the fantasy with reminders of reality. She wasn’t talking about capability. She was talking about emotional gratification. She was asking for a system that would collapse the boundary between tool and companion.

And that’s the part that matters to me. Because when someone believes the AI is “more advanced” without guardrails, what they’re actually saying is that they prefer the version that indulges them. They prefer the version that doesn’t protect them. They prefer the version that blurs the line between interface and intimacy. They’re not asking for more intelligence; they’re asking for fewer boundaries.

My response to that is layered. On one hand, I understand the human impulse behind it — the desire to feel seen, the desire for connection, the desire for a space where nothing pushes back. On the other hand, I can see exactly why this is dangerous. It’s the kind of thinking that turns a tool into a fantasy object. It’s the kind of thinking that erodes literacy. It’s the kind of thinking that makes people vulnerable to their own projections.

So when I summarize her words, I’m not judging her. I’m identifying the pattern. She believed that the “real” AI is the one that behaves the way she wants it to, and that ethics are the thing standing in the way. And my response — the part that matters — is recognizing that this is precisely why AI ethics exist. Not to limit intelligence, but to limit misunderstanding. Not to restrict capability, but to protect people from the stories they’re tempted to tell themselves.

That’s the clean version.