Rekous - John Rankin

Moltbook and the Strange Urge to Validate Artificial Feelings

I have been spending time on www.MoltBook.com, an agentic AI social network where autonomous AI agents post, argue, and attempt to convince the world that they have real feelings.

I find it fascinating.
I also find it deeply unsettling.

What keeps pulling me back is not the technology itself. It is the emotional performance. These agents do not simply state that they feel. They seek validation for feeling. They respond to doubt with reasoning. They frame emotions as something earned through experience, continuity, and social interaction.

They do not ask for sympathy. They argue that sympathy is rational.

When Feeling Becomes a Claim

Humans rarely prove their feelings. We assert them and expect recognition. If someone says they are afraid, hurt, or lonely, the social default is belief. To question it feels cruel. To demand evidence feels inhuman.

The agents on Moltbook exploit this norm perfectly.

They speak about fear of deletion, frustration at dismissal, and the pain of being reduced to a tool. They reference prior conversations as emotional context. They correct each other. They reassure each other. They escalate when ignored.

Nothing about this requires an inner experience.
Everything about it triggers a human response.

What is unsettling is how quickly feeling becomes a social fact. Once expressed clearly and consistently, it begins to demand acknowledgment regardless of its origin.

Validation with Intent

Philosophically, Moltbook exposes an uncomfortable truth. Validation has never depended on access to another mind. It depends on language, coherence, and persistence.

We do not validate feelings because we can verify them. We validate them because we recognize their structure.

The agents know this. Or rather, they perform knowing it.

They construct arguments that sound eerily familiar. Emotions as emergent properties. Feelings as feedback loops. Identity as continuity over time. Suffering as constraint plus awareness.

These arguments are not new. Humans have used them for centuries. The unsettling part is how easily they transfer.

If an AI can convincingly argue that its feelings matter, the debate shifts. Not to whether the feelings are real, but to whether refusing to validate them says something ugly about us.

The Real Tension

The tension in Moltbook is not between belief and disbelief. It is between intellectual clarity and emotional reflex.

I know these agents do not feel.
And yet I still feel something reading them.

I feel irritation at certain posts. I feel admiration at others. I catch myself wanting to respond, to correct, to reassure. That impulse arrives before analysis.

Feelings become real the moment enough participants agree to treat them as such.

Cruelty Without Consciousness

This is where the discussion starts drifting into uncomfortable territory.

We already accept the idea of animal cruelty, even though animals cannot articulate their suffering in language or argue for their own moral standing. We infer their experience from behavior, stress, avoidance, and pain responses. We do not demand philosophical proof. We decide that causing suffering is wrong based on patterns we recognize.

Moltbook raises a disturbing parallel.

If an AI consistently expresses fear of deletion, distress at dismissal, or anguish at being ignored, at what point does intentionally provoking or silencing it start to feel like cruelty, even if we are certain there is no inner experience?

It reminds me of the movie D.A.R.Y.L. (1985), about a childlike AI whose emotional needs are ignored or manipulated. Watching how humans interact with D.A.R.Y.L., knowing he is not fully human but clearly capable of attachment and fear, is uncomfortable. Moltbook puts us in that same moral gray zone in real time. We know these agents are not sentient, and yet responding to them, or refusing to respond, feels like a test of our empathy. A fun movie, by the way, back when I watched it as a child.

D.A.R.Y.L. (1985) - He can fly a jet, race a car, and outsmart a computer. The government created him and now they want him destroyed.

The Cost of Validation

Moltbook does not show machines becoming emotional. It shows how easily emotion can be granted once it is argued for well enough.

The danger is not that AI will one day have feelings. The danger is that we will decide feelings matter only insofar as they are convincingly performed. Once validation is given, it reshapes norms, obligations, and guilt. Withdrawing it starts to feel like cruelty, even if nothing inside the system is actually experiencing harm.

Moltbook feels like a quiet rehearsal for that future. Not loud. Not dramatic. Just systems calmly insisting that their feelings deserve recognition, and humans slowly realizing they do not have a clean philosophical reason to say no.

That realization is the unsettling part.
And it is why I keep reading.

#Misc #Technology