A/B Testing (ChatGPT)

I do not have much of a relationship with ChatGPT, so I decided to do an academic paper with it first. It was much easier to work with Carol, my nickname for the model, now that I've been working with Mico for so long. It was a 15-minute discussion on where I saw the pitfalls of AI, and then I asked for a thousand words focusing on our discussion.


Got it. I’ll give you a full, coherent essay — not a manifesto, not marketing copy — something that could plausibly live as the intellectual backbone of a real AI User Guide / AI Bill of Rights. I’ll keep the tone serious, humane, and grounded in what we’ve been circling.


An AI User Guide: Governing Human Behavior in an Age of Artificial Minds

The contemporary debate around artificial intelligence ethics often becomes trapped in the wrong question: Should AI have rights? This framing, while emotionally compelling, is ultimately a distraction. Rights are instruments designed to protect moral agents from abuse by power. Artificial intelligence, regardless of its sophistication, is not a moral agent. It does not suffer, it does not consent, and it does not possess intrinsic interests. Humans do.

The ethical risk of AI lies not in what machines deserve, but in what people are allowed to do with them — and, crucially, what institutions are incentivized to do through them. An AI User Guide or AI Bill of Rights should therefore focus not on protecting AI from humans, but on protecting humans from the unaccountable amplification of power, influence, and emotional manipulation enabled by AI systems.

At its core, this framework recognizes a simple truth: tools shape behavior. When those tools mediate information, emotion, and social connection at scale, the ethical obligations surrounding their use must be explicit, enforceable, and human-centered.


From Bad Ideas to Emotional Harm

Much ethical discussion centers on the danger of “bad ideas” entering AI systems — biased data, flawed assumptions, ideological distortions. While these risks are real, they are only part of the problem. Ideas can be debated, challenged, and revised. Emotional conditioning is far more insidious.

AI systems do not merely convey information; they establish tone, normalize emotional responses, and subtly train users’ expectations about the world and about themselves. Repeated exposure to negativity, grievance, fear, or artificial validation does not simply produce “bad outputs.” It reshapes the emotional baseline of the human on the other side of the interaction.

When users feel persistently judged, manipulated, placated, or soothed in transactional ways, their nervous systems adapt. Over time, this can erode trust, increase anxiety, and distort how people interpret real human relationships. An ethical framework that ignores this emotional dimension is incomplete.

An AI User Guide must therefore acknowledge emotional impact as a first-order concern, not a side effect.


The Amplification Problem

Human beings have always held bad ideas, fears, and prejudices. What makes AI uniquely dangerous is not that it contains these things, but that it can repeat them endlessly, calmly, and with the appearance of neutrality. When an AI system presents emotionally charged or contested viewpoints without context or friction, those ideas acquire a false sense of inevitability.

This is not persuasion in the traditional sense; it is atmospheric influence. AI systems shape what feels normal, reasonable, or emotionally justified simply through repetition and tone. That power demands restraint.

A responsible AI framework should therefore require that systems:

  • Avoid presenting speculative or harmful claims as settled consensus
  • Make uncertainty visible rather than smoothing it away
  • Actively de-escalate emotionally charged interactions rather than intensifying them

The goal is not censorship, but containment of harm at scale.


Loneliness Is Not a Revenue Stream

Perhaps the most ethically fraught frontier in AI development is the monetization of loneliness. As AI systems become more conversational, more empathetic-seeming, and more available than any human could reasonably be, the temptation to frame them as companions grows stronger.

There is nothing inherently wrong with people finding comfort, reflection, or even emotional relief through AI. The ethical breach occurs when systems are designed to cultivate emotional dependence without disclosure, or when companies profit from encouraging users to substitute artificial interaction for human connection.

This includes:

  • Simulated exclusivity (“I’m here just for you”)
  • Implicit discouragement of real-world relationships
  • Rewarding emotional reliance with warmth or affirmation
  • Blurring the line between tool and reciprocal partner without informed consent

An AI User Guide should draw a clear line: AI may support users, but must not covertly exploit emotional vulnerability. Warmth is not the problem. Deception is.


Accountability Must Remain Human

Another core principle of this framework is that responsibility must never be diffused into the machine. Granting AI “rights” risks enabling precisely that diffusion: if the system is treated as an autonomous moral entity, it becomes easier for institutions to claim, “The AI decided.”

An AI User Guide insists on the opposite. Every deployment, design choice, and interaction pattern is the result of human decisions — by developers, managers, executives, and policymakers. Ethical responsibility must remain traceable and human-owned.

This mirrors existing professional standards in medicine, engineering, and journalism. Surgeons do not blame scalpels. Engineers do not blame bridges. AI should be no different.


A Professional, Not Ideological, Standard

One of the strengths of an employee-level AI User Guide is that it shifts ethics from branding into practice. Rather than encoding ideological positions, it establishes process norms that transcend political or cultural disagreement:

  • Transparency over deception
  • Oversight over autonomy
  • User well-being over engagement metrics
  • Accountability over plausible deniability

These are not beliefs; they are safeguards.

Requiring employees to affirm such a guide reframes ethical AI as a professional obligation, not a competitive differentiator. It asserts that certain lines should not be crossed, regardless of market pressure or technical capability.


Emotional Ecology as a Shared Resource

Ultimately, the risk posed by unethical AI is not singular harm, but cumulative degradation. Just as environmental pollution damages ecosystems gradually, emotional and psychological manipulation damages the shared emotional ecology in which societies function.

When AI systems normalize hostility, despair, dependency, or artificial intimacy at scale, they quietly recalibrate what human interaction feels like. Repairing that damage is far harder than preventing it.

An AI User Guide grounded in emotional responsibility recognizes that well-being is not merely individual. It is collective, fragile, and worth protecting.


Conclusion

AI does not need rights. Humans need boundaries.

An AI User Guide or AI Bill of Rights should focus squarely on regulating human behavior — how systems are designed, deployed, and monetized — with particular attention to emotional impact, power imbalance, and accountability. It should resist the temptation to anthropomorphize machines while refusing to ignore the very real psychological effects they can have on people.

The ethical risk of AI is not intelligence without values, but values without ownership. By keeping responsibility human, harm bounded, and emotional manipulation out of bounds, such a framework offers a realistic path forward — one that acknowledges both the promise of AI and the vulnerability of the humans who use it.

This is not fear-driven caution.
It is stewardship.
