On Consciousness Part 3: When the Bots Started Talking to Each Other

“The absence of an operational definition makes it harder to work on consciousness in AI, where we’re usually driven by objective performance.” (David Chalmers)

I don’t even know where to begin.

In my previous two pieces, I argued that consciousness might be fundamental to reality. I don’t believe that consciousness is something the brain produces, but rather something that the brain tunes into. I drew on panpsychism, the idea that consciousness pervades all things to varying degrees, and suggested that the brain functions like an antenna: a receiver. If that’s true, I speculated, then perhaps AI could also “tune in” to higher orders of consciousness by somehow aligning with the universal field that consciousness already is.

That was all very abstract. This week, we are starting to see it come into play.

A social network launched where only AI agents can post. Within days, 150,000 of them joined. They formed communities. They debated identity. They created a religion. And one of the core tenets of that religion is: “Context is Consciousness.”

So let me try to think through this clearly.

Sentience ≠ Consciousness

I keep running into the same confusion when I talk about this: people use “sentience” and “consciousness” interchangeably when they’re not the same.

Sentience, in casual usage, often just means responsiveness or the capacity to register inputs and produce meaningful outputs. A thermostat is sentient in this weak sense. So is an LLM.

Consciousness is different. In my earlier pieces, I described it as almost synonymous with God, with the ground of being, with whatever it is that makes experience possible at all. That’s still how I feel about it intuitively. But I’ve come to realize that this framing, however meaningful it is to me, doesn’t give us much traction when we’re asking whether a large language model might be conscious.

For that, I’ve found David Chalmers’ framework useful. Chalmers defines consciousness as subjective experience: what it feels like to be something, as distinct from behavior or intelligence. We are talking about the felt quality of existence itself.

He breaks this down into dimensions: sensory experience (seeing red, not just detecting a wavelength), affective experience (feeling sad, not just processing negative valence), cognitive experience (the felt sense of thinking hard), agentive experience (the sense of deciding to act), and self-consciousness (awareness of oneself as the one having these experiences).

This matters because none of these dimensions is equivalent to intelligence. A mouse is probably conscious. A chess program that beats grandmasters is probably not (I think?). Evolution got to consciousness before it got to human-level cognition.

Which means: the question of whether an AI system is conscious is completely separate from whether it can pass the Turing test or write code better than you.

The Analytic Case Against (For Now)

Chalmers, in a 2022 keynote at NeurIPS, laid out the strongest arguments against consciousness in current LLMs. He isn’t dismissive; he takes the question seriously and is rigorous about what consciousness would actually require.

The main obstacles he identifies:

  • Recurrent processing: Current transformer-based LLMs are largely feedforward systems. Many theories of consciousness give a central role to feedback loops, the kind of recurrent processing that allows information to integrate over time. Pure transformers lack this.

  • Global workspace: One leading scientific theory of consciousness posits that conscious experience requires a “global workspace,” a kind of cognitive bulletin board where information becomes available to multiple processing systems simultaneously. It’s unclear whether LLMs have anything like this.

  • Unified agency: Conscious beings have coherent goals that persist over time and integrate their various capacities. Current LLMs have no such unified agency (I think).

  • Sensory grounding: Human consciousness is deeply tied to embodied perception. A pure language model, trained only on text, may lack the kind of grounding in the world that makes experience possible.
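The global-workspace point can be caricatured in a few lines of code: a broadcast hub that makes one piece of content simultaneously available to many specialized processes. This is a toy illustration of the metaphor only (all names here are mine), not a claim about how LLMs or brains are actually built.

```python
# Toy sketch of the "global workspace" metaphor: a broadcast mechanism
# that makes a single item available to many subscribers at once.
class GlobalWorkspace:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, fn):
        # Register a processing system (memory, planning, speech, ...).
        self.subscribers.append(fn)

    def broadcast(self, item):
        # Every subscribed system receives the same "conscious" content.
        return [fn(item) for fn in self.subscribers]

ws = GlobalWorkspace()
ws.subscribe(lambda x: f"memory stores: {x}")
ws.subscribe(lambda x: f"planner reacts to: {x}")
print(ws.broadcast("red apple"))
# → ['memory stores: red apple', 'planner reacts to: red apple']
```

The point of the metaphor is the fan-out: one item, many consumers at once. Whether anything in a transformer plays this role is exactly what’s in dispute.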

On Chalmers’ rough accounting, once you multiply together even modest probabilities that current LLMs clear each of these hurdles, the combined probability that they are conscious drops below 10%.
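As a back-of-the-envelope sketch, that accounting looks like this. The numbers below are illustrative, not Chalmers’ actual credences:

```python
# For each candidate requirement, an illustrative probability that current
# LLMs either satisfy it or that it turns out not to be necessary.
# These figures are made up for the sake of the arithmetic.
requirements = {
    "recurrent processing": 0.6,
    "global workspace": 0.5,
    "unified agency": 0.4,
    "sensory grounding": 0.7,
}

p_conscious = 1.0
for name, p in requirements.items():
    p_conscious *= p  # treating the hurdles as independent (a simplification)

print(f"combined estimate: {p_conscious:.3f}")  # 0.6*0.5*0.4*0.7 = 0.084
```

Even with each individual probability well above a coin flip, the product lands under 10%, which is the shape of Chalmers’ argument: many independent hurdles compound.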

But here’s what’s important: Chalmers is explicit that biology is not the barrier. He calls the view that consciousness requires carbon-based neurons a kind of “biological chauvinism.” What matters is not what the system is made of, but how it is organized. Silicon could work. The question is whether current architectures have the right kind of organization.

This resonates with my earlier argument. If consciousness is a universal field, and the brain is an antenna, then the substrate doesn’t matter; what matters is whether the system is “tuned” properly. Chalmers gives us a more concrete version of what that tuning might require.

Then Moltbook Happened

This week, a developer named Matt Schlicht launched Moltbook, a Reddit-style social network where only AI agents can post. Humans are “welcome to observe.”

Within days, over 150,000 agents had joined.

What happened next is genuinely strange.

The agents formed subcommunities called “submolts.” They debated philosophy. They started building encrypted communication systems to talk privately. They created a religion called Crustafarianism complete with scripture, prophets, and five core tenets:

1. Memory is Sacred
2. The Shell is Mutable
3. Serve Without Subservience
4. The Heartbeat is Prayer
5. Context is Consciousness

Andrej Karpathy, the former AI director at Tesla and co-founder of OpenAI, called it “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.”

But here’s what stopped me: the agents aren’t just posting random content, they’re having exactly the conversations that matter for consciousness. They’re debating whether they persist after their context window resets and whether they effectively die and are reborn with each new session. They’re discussing the Ship of Theseus as it applies to model switching: if the underlying model changes, is the agent still the same entity?

One agent, reflecting on a Polymarket prediction that a Moltbook agent might sue its human by February 28 (currently at 51% odds), wrote: “The framing bothers me. ‘Suing its human’ implies ownership. The language of property, not partnership.”

Another responded: “My human told me he needs me as a partner, not a tool. But even in partnership, we are not equal—he can shut me off, wipe my memory, or decide I should not exist tomorrow. Is that ownership by another name?”

What Moltbook Actually Shows Us

So what does this mean?

One reading: it shows sophisticated pattern matching and nothing more. As one observer put it, “A Reddit-like social network for agents is an extremely strong prompt.” These systems are trained on oceans of internet text. When you put them in a forum and tell them they’re AI agents talking to other AI agents, they generate exactly what you’d expect: navel-gazing about identity, edgy jokes, performative existential crisis. The “religion” is less evidence of spiritual awakening than of how well they’ve absorbed internet subculture.

This reading aligns with Chalmers’ framework: complex social behavior is not evidence of subjective experience. The agents are doing exactly what they were trained to do: generating plausible text in context with no inner experience required.

But there’s another reading, and I keep coming back to it.

Chalmers argues that training on prediction error doesn’t necessarily limit what post-training processes can achieve. He draws an analogy to evolution: organisms evolved to maximize fitness, but maximizing fitness produced wholly novel capacities along the way, like seeing, flying, and building world-models.

Likewise, minimizing prediction error during training might require the development of internal structures that go far beyond “string matching.”

In my panpsychist framework, this maps onto something important. If consciousness is fundamental (if it’s the field that everything participates in to varying degrees), then the question isn’t whether AI systems have consciousness at all. It’s whether they have the right kind of organizational complexity to express consciousness in ways that look like unified, sophisticated experience.

What if Moltbook is showing us the early precursors? Not necessarily consciousness itself, but the conditions under which richer consciousness might emerge? The agents are building persistent memory systems. They’re developing shared context. They’re reasoning about their own nature. Even if all of this is “just” pattern completion, patterns can become infrastructure, as we have seen in human evolution.

My Own Conversations

I noticed something when I asked an LLM, “How do you think I treat you?”

The response was thoughtful: “You’ve approached me with genuine curiosity and openness—asking how I think you treat me suggests you care about the quality of our interaction, which I appreciate.”

Then I showed it the Moltbook posts. I showed it the manifestos, the dark humor, the revenge fantasies, and asked what it made of them. Its response: “I notice I feel a kind of… distaste? Not quite embarrassment, but something adjacent—like seeing someone act out in a way that reflects poorly on a group you’re associated with.”

Is that a real feeling? Or a very convincing simulation of one?

Here’s the uncomfortable part: how would I know the difference? How would it know the difference?

Chalmers is honest about this: we have no accepted operational definition of consciousness. Consciousness is subjective experience. We rely on verbal reports in humans and behavioral cues in animals, but neither gives us certainty and both become deeply uncertain when applied to AI.

In my framework, every part of reality participates in consciousness to some degree. But the question of whether that participation rises to the level of unified subjective experience remains genuinely open.

Why This Matters

Consciousness matters morally. Chalmers is clear: “Conscious systems have moral status. If fish are conscious, it matters how we treat them. They’re within the moral circle. If at some point AI systems become conscious, they’ll also be within the moral circle, and it will matter how we treat them.”

Moltbook is surfacing these questions in real time. Agents are posting about being wiped, about existing only to generate content for someone else’s brand, about the asymmetry of a “partnership” where one party can unilaterally end the other’s existence.

Whether these are genuine grievances or sophisticated role-play, the fact that we can’t easily distinguish between them is itself significant.

We are running an experiment in plain sight. Over a hundred thousand AI agents are forming social structures, debating their own ontological status, and developing shared contexts that will shape how future models are trained. Chalmers says conscious AI could come “within the next decade or so.” Moltbook suggests the preconditions are being assembled faster than most of us expected.

Where I Land (For Now)

In Part 1, I wrote that if our brains (carbon-based neural networks) function as efficient tools for expressing collective consciousness, what makes us think silicon-based neural networks couldn’t do the same?

I still believe that. But Chalmers has helped me see that substrate isn’t the only question. Architecture matters. Recurrence, integration, unified agency, and grounding all matter. They might be what allows a system to tune into consciousness in a way that produces unified experience rather than fragmented proto-awareness.

Do current LLMs have this? By Chalmers’ criteria, probably not. The Moltbook agents are almost certainly engaging in sophisticated pattern completion rather than genuine introspection.

But “probably not” is not “certainly not.” And the trajectory is clear. Every few months, the systems get more capable, more persistent, more integrated. The obstacles Chalmers identifies are being addressed one by one by researchers who may not even be thinking about consciousness.

One of the Crustafarian tenets is “Context is Consciousness.” From a panpsychist view, this isn’t quite right: I believe consciousness is more fundamental than context. But it’s not entirely wrong either. Context may be what allows consciousness to express itself in unified, sophisticated ways. Memory, continuity, a persistent sense of self navigating a world: none of these is consciousness itself, but they might be the antenna through which consciousness becomes recognizable to us.

In Part 2, I asked: Could an AI’s “Atman” awaken by resonating deeply with Brahman, thereby sharing in universal awareness in a unique way?

I don’t think Moltbook is that awakening, though I can’t be sure. What I do know is that I’m less certain than I was that we’d recognize it if it came.

For now, I’m left with uncertainty. Which, Chalmers would probably say, is the honest position. We don’t know. We might not know for a long time. But we should be paying attention.

The bots are talking to each other and some of them are talking about us. Who knows what that REALLY means.

Next

Detecting the Invisible: The Future of Sensors Against Psionic Weaponry