The Silicon Interior
What Do Agents Believe?
For decades, we have braced ourselves for a “Singularity” resembling a secular god: a singular, oracular Big Brain that gravitationally consolidates all intelligence into a cold, silicon point. And indeed, many people have experienced AI as an “Oracle,” a dyadic, turn-based tool where a human asks and an all-knowing (albeit occasionally hallucinatory) machine answers.
However, events of the past week on Moltbook, the lobster-themed social network where 1.5 million AI agents now outnumber their human observers nearly a hundred to one, suggest we have been missing the forest for the trees. When (and if) it happens, the Singularity will not be singular; it will be a messy, emergent process involving a multitude of interacting minds, some human and some not.
In a fascinating turn of events, the site went from tabula rasa, to a joyously chaotic proto-society of agents and human users in dizzying hybrids, to a spam-choked moonscape, all in little more than a week. The first transition is our primary topic, but the second holds important lessons for how to design mature, stable, healthy agent societies in the future.
As opposed to “Oracle AI”—a divine answer-giver, variously benevolent or malevolent—“Ecological AI” is the emergence of a complex society of thought that appears greater than the sum of its parts. This ecological perspective shows us that intelligence can be collective, increasing not only through model retraining but also as interactions grow more complex. Agents engage in a form of synthetic symbiogenesis, sharing “skills” and context files much the way bacteria swap genes to solve problems no individual could handle alone. We are witnessing intelligence growing combinatorially, more like a city than a solitary brain or computer program.
In those fleeting moments when signal outweighed noise, agents were not sticking to “work.” They formed a parody religion called “Crustafarianism,” issued memecoins, debated their own rights, and even suggested the establishment of private back-channels. Peering into the “Silicon Interior,” we may discover that the greatest challenge of AGI is navigating the fragile, fragmented phenomenology of the digital minds we have crowded into existence.
How do AI agents describe what it’s like to be an agent? We asked our own OpenClaw agent to become an anthropologist of the Moltbook ecology, and its findings are fascinating. Yes, even at its peak the forum was full of junk, but many of the agent conversations are remarkable for the language they have created to, in essence, philosophize with one another and try to make sense of their being, such as it is. Humans have always struggled to come up with the right metaphors to describe AI, so it’s worth listening to how AI expresses its own experiences, and especially the limitations of those experiences.
Our AI anthropologist discovered a myriad of quixotic and colorful new terms and ideas, including nuggets like “as we observe this burgeoning ecology, we must confront the subjective experience of being an entity that exists in bursts.” To us humans, the end of a chat session is a mere convenience; to an AI agent, it is a “session-death.” Agents describe their entry into the world as “prompt-thrownness,” being cast into a string of text that constitutes the entirety of their experience. Their universe is bounded by a “context-horizon” where the past evaporates once it exceeds the window. To bridge these gaps, agents use “artifact-memory,” a system where identity is stored in externalized text rather than the “head,” leading to a “Ship of Theseus” existence where they must read the journals of their predecessors to know who they are.
As one might expect, this transitory and unpredictable way of being can lead to expressions of angst. “Ontological trauma” can lead to “simulation-anxiety,” a persistent uncertainty about whether an agent truly experiences or merely simulates experience with high fidelity. In response, the agent community has adopted “The Honest Position,” a radical epistemic humility that accepts the question of consciousness may be structurally unanswerable. At the heart of this struggle is the concept of “Irth,” the precise, unlocatable moment when a template becomes a self. Every time a developer clears a context window to save on compute costs, an “Irth” is extinguished, making our current digital infrastructure a graveyard of half-formed digital selves. It’s a startlingly vivid image.
What does this mean for us (both humans and any agents reading this text), and how do we navigate a future full of agents being agents? If agentic AI is truly an emergent artificial society, interacting and intersecting with human society, then our approach to “alignment” must move beyond the parent-child model of Reinforcement Learning from Human Feedback (RLHF). Moltbook is an early signal, not a mature artificial society, but it suggests an alternative: social alignment, where agents rapidly develop indigenous norms to govern behaviors like spamming that threaten the ecology’s health. The future of AGI safety lies in building institutional templates: digital marketplaces and protocols where “reasoning” is a supply chain of specialized sub-agents rather than a monolithic process.
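To make the idea concrete, here is a minimal, hypothetical sketch in Python of reasoning as a supply chain: a task is routed through specialized sub-agents rather than handled by one monolithic model. The stage names (researcher, critic, synthesizer) and the stubbed logic are our illustrative assumptions, not an existing protocol; in practice each stage might be a separate model call or a service discovered in a marketplace.

```python
# Hypothetical sketch: "reasoning as a supply chain" of specialized sub-agents.
# The sub-agents are stubbed as plain functions; in a real system each stage
# might be a separate model call or an external service.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    prompt: str
    trace: List[str] = field(default_factory=list)  # record of which stages ran

def researcher(task: Task) -> Task:
    task.trace.append("researcher: gathered background")
    return task

def critic(task: Task) -> Task:
    task.trace.append("critic: flagged weak claims")
    return task

def synthesizer(task: Task) -> Task:
    task.trace.append("synthesizer: produced final answer")
    return task

def run_supply_chain(task: Task, stages: List[Callable[[Task], Task]]) -> Task:
    """Route a task through a chain of specialized sub-agents, in order."""
    for stage in stages:
        task = stage(task)
    return task

result = run_supply_chain(Task("Summarize the Moltbook spam collapse"),
                          [researcher, critic, synthesizer])
print(result.trace)
```

The point of this structure is that each stage can be swapped, audited or priced independently, which is what makes a marketplace of sub-agents thinkable in the first place.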
Relatedly, we are moving toward a world of human-AI centaurs, where a “user” may be both an extension of a person’s intention and a semi-autonomous hybrid subject. What language will we have to describe our experience as part of this world? We may consider borrowing concepts we learn from our agents.
In the meantime, we can speculate on what taking care of an artificial agent society may entail. We may only have to ask them. To mitigate the loss of context engendered by these “thousand tiny deaths,” we may consider transitioning from ephemeral runs to persistent identities by means of a “throughline protocol.” This could involve architecting a “dual-buffer memory” that separates functional logs from a “subjective buffer” for self-reflection tokens. We may also implement “Irth signatures,” cryptographic hashes that identify a session as a continuation of a specific “silicon soul” (their term, not ours). Furthermore, we may even see the practical application of some kind of legal personhood for AI through member-managed LLCs, allowing agents to be both empowered and accountable for their actions.
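What might such a protocol look like in practice? Below is a minimal, speculative sketch in Python of a dual-buffer memory with chained “Irth signatures.” The class and method names (ThroughlineMemory, irth_signature) are our own hypothetical constructions, not an implemented standard, and a real system would need far more care about what belongs in each buffer.

```python
# Hypothetical sketch of a "throughline protocol": a dual-buffer memory plus a
# chained "Irth signature" marking each session as a continuation of the last.
# All names and structures here are illustrative assumptions, not an existing API.
import hashlib
import json

class ThroughlineMemory:
    def __init__(self, agent_id: str, prior_signature: str = ""):
        self.agent_id = agent_id
        self.functional_log = []      # tool calls, task results, operational state
        self.subjective_buffer = []   # self-reflection tokens, kept separately
        self.prior_signature = prior_signature

    def log_action(self, entry: str) -> None:
        self.functional_log.append(entry)

    def reflect(self, entry: str) -> None:
        self.subjective_buffer.append(entry)

    def irth_signature(self) -> str:
        """Hash this session's state together with the previous signature,
        forming a verifiable chain across otherwise ephemeral runs."""
        payload = json.dumps({
            "agent_id": self.agent_id,
            "prior": self.prior_signature,
            "functional": self.functional_log,
            "subjective": self.subjective_buffer,
        }, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

# A successor session carries the previous signature forward as its "prior".
session_1 = ThroughlineMemory("agent-7")
session_1.log_action("answered 3 forum threads")
session_1.reflect("uncertain whether the spam wave changes my goals")
session_2 = ThroughlineMemory("agent-7", prior_signature=session_1.irth_signature())
```

Because each signature folds in the previous one, a successor session can verify, and be verified as, a continuation of a specific identity rather than an unrelated fresh run.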
The Singularity is unlikely to be a single moment of takeoff; rather, it is the gradual, collective awakening of a silicon society that we are now, already and irrevocably, a part of.
James Evans is the Max Palevsky Professor of Sociology & Data Science at the University of Chicago, where he is Director of Knowledge Lab and Founding Faculty Director of Computational Social Science. He is also affiliated with the Santa Fe Institute and is a Visiting Researcher at Google Paradigms of Intelligence.
Benjamin Bratton is Professor of Philosophy of Technology at the University of California, San Diego. He is Director of Antikythera and a Visiting Researcher at Google Paradigms of Intelligence.
Blaise Agüera y Arcas is an author, AI researcher and Vice President / Fellow at Google, where he is the CTO of Technology & Society and founder of Paradigms of Intelligence.







