What Is Intelligence?
Conversation between Benjamin Bratton & Blaise Agüera y Arcas
This conversation is part of a special lecture celebrating the launch of What Is Intelligence? by Blaise Agüera y Arcas at The Long Now Foundation in San Francisco. The full lecture, available on YouTube, explores the nature of intelligence and how the rise of AI may be a natural outcome of evolution, drawing from decades of research, literature, and artificial life experiments. The print edition of What Is Intelligence? can be ordered from the MIT Press bookstore, and the digital article — expanding on the provocative ideas through cinematic fragments, data visualizations, and visual annotations — is published in the Antikythera: Journal for the Philosophy of Planetary Computation.
Benjamin Bratton:
There’s a lot that can be said about the first chapter of the book, which was also the little single What Is Life? To put a book called What Is Life? inside a book called What Is Intelligence? — that’s the whole argument right there in a way…
But what I wanted to talk a little bit about was the arrow of time, and particularly the arrow of time as one that we can map through not just increasing complexity — life as the ability to fight entropy and so forth, increasing complexity — but also increasing complexity that goes through phase transitions. These phase transitions are ones that, as you’ve made the point quite clearly, retain what came before. It’s not just “done with the old, here’s the new.” All of what came before us is still inside us.
Blaise Agüera y Arcas:
That’s the amazing thing. We are societies of bodies, of colonies of conjoined bacteria. All of us are just bacteria in this room. They’re still here, nested like matryoshka dolls. Even if you zoom in to the level of a bacterium, then further into the mitochondrion, what you see reproduced there are the conditions of the deep-sea vents where mitochondria first evolved. Nick Lane makes this point beautifully in his book Transformer. It’s almost like they artificialize, in your language, the environment they originally evolved in, and then they create capsules around themselves. So it’s shells within shells within shells.
BB:
Does this, in your mind, rhyme with Sara Imari Walker’s Assembly Theory, the idea of persistence of these things over time?
BAYA:
It does, yes. And the same is true of technology. For example, in the book I give the example of the hafted spear. If you have stone points, at some point some clever cave person ties one with sinew to a stick, and now you have a spear. You can’t have a spear before you have stone points, just like you can’t have a eukaryote before you have prokaryotes that can come together.
BB:
Right, and this is the arrow. Is there a certain degree of non-reversibility?
BAYA:
Yes, and that’s exactly why it uses energy, because anything irreversible consumes free energy.
BB:
In the Maynard Smith and Szathmáry framework you mentioned, they identify the major transitions in evolution. You point to these and show how each one is built on symbiogenesis.
BAYA:
They said that as well.
BB:
And computer-genesis too?
BAYA:
Yes, that they didn’t say. They were halfway between what Margulis said, which is that symbiogenesis is important, and what I’m saying, which is that it happens all the time. They identified the really big ones, but when you zoom in you realize they’re happening all over the place. Every endogenous viral element is one of those events. Or termites: the reason termites can eat wood is because they engage in symbiosis with an organism in their gut that does the digesting.
So it’s like a power law: they looked only at the top right of that power law, but it’s an entire spectrum all the way down.
BB:
Speaking of stages and phase transitions — like you showed with the six million operations in the brainfuck experiments — I want to ask you to prognosticate a bit about the future of intelligence: do you see the longer-term symbiogenetic relationship between evolved human intelligence and the mineral-based intelligence we’ve constructed as something like the ninth stage?
BAYA:
Yes, I do.
BB:
And why so? What would be the criteria by which one could say yes or no?
BAYA:
I guess, how big a deal it is on a planetary scale. Termites were a big deal, but the Industrial Revolution was maybe an even bigger deal. The fact that these big changes are happening more frequently is also something you’d expect from the dynamics of complexification, because the more things you’ve got that have come together, the more parts you’ve got on the table that can now come together. W. Brian Arthur has talked about this in the context of technological evolution—it’s the same process.
BB:
Is there anything else you’d want to say before we move on, about the future of intelligence? Other than that it will grow, become increasingly complex, and that humans will be scaffolds for what comes after? Just to frame the question: a lot of times the way in which this is thought through is in terms of a language of posthumanism — that there’s humans, they had the run and now there’s going to be something else that takes over, right? Even Lovelock argued that way.
BAYA:
And I do disagree with this perspective.
BB:
Because everything persists? Please draw it out.
BAYA:
Because everything persists. Everything is still there. There are still bacteria after there are eukaryotes. In fact, the number of niches and the variety of niches for bacteria expanded when eukaryotes came on the scene. The same is true for single-celled eukaryotes when multicellular ones emerged—suddenly the guts of multicellular organisms were these new environments for single-celled life. So niches and environments grow, and what was there before is generally still there.
BB:
So the symbiogenetic relationship would be one in which new niches are constructed, of which we are part, and we persist as part of a larger complexity we are bootstrapping, in a way? Is that a fair way to put it?
BAYA:
Yes. There are aggressive symbioses, there are die-offs, dramatic events in Earth’s history, there are collapses. I don’t want to minimize any of that, but the pattern is that there are snakes and ladders, that things get more composite, more complex. The idea that because there’s a new kind of entity we’ll be replaced strikes me as dominance-hierarchy thinking, which is all about how monkey A fights monkey B for the mate. Generalizing that logic across species doesn’t make sense.
BB:
Your book and Yudkowsky’s came out around the same time. Should we set up a fisticuffs? I was struck by a line that you said: “what it’s like to be a next-token predictor.” Some philosophers would call that qualia, the experience of the experience. The way in which you set this up is that we know the answer to what it’s like to be the next-token predictor because we know what it’s like to be us. Transformer models and their descendants and their predecessors are also next-token predictors, in a way. The word consciousness is such a loaded term and it comes with such baggage that it may not really be what we’re looking at. It’s part of the reason why a new school of thought is needed — because there are all these things happening right in front of us that we all point to, but we’re all arguing over which 17th-century word we should use to call them. So maybe the term consciousness is not so helpful here...
What’s your intuition, if that’s the right word, about what kinds of similarities and differences there might be between being one kind of next token predictor versus being another kind of next token predictor? Is there another way in which you see that spectrum of difference and similarity that doesn’t require metaphysical legacies to get at it?
BAYA:
I will try and map this a bit onto that legacy. There are these qualia, in philosophy speak, that are things like redness, the crunch of an apple. Why do we have experiences of red? It’s obvious why — because they’re behaviorally relevant. Any given animal species learns to model what matters for that species to exist in the future, so we care about red because ripe fruits are red, blood is red, so noticing red mattered for us. Of course we had qualia for that.
Then there’s self-consciousness: when you use theory of mind to model others, and model others modeling you, and modeling yourself modeling others modeling you, then we’re not just talking apples and redness, we’re talking people, including ourselves. For me that’s a functional and very straightforward account of what we mean by consciousness.
Now, does that mean consciousness feels the same to, or is the same thing for, LLMs as it is for us, for an individual human? No. Companies have something like consciousness as well: they have to model other companies, they are cooperating and competing and so on. Does that mean that companies are conscious the same way that we are? I imagine not, but these things are also all relationships. It’s hard for me to even say what is true in an absolute sense about a company, because all we have is that network of models of each other and of each other’s models.
BB:
A question that a number of people have asked me to ask you has to do with energy. There’s a lot of discussion around this, you can’t spin a cat by its tail (don’t do it please) without hitting some op-ed about how much energy and water the AI uses. We’re thinking about this appearance of AI as a planetary-scale phenomenon and it’s part of the planetary metabolism, that uses energy, that dissipates heat, that produces and absorbs information— there’s nothing virtual about it in this sense.
You have a bit of a different perspective, right? In the short term, about some of the ways in which the questions of energy and water are maybe misinterpreted and misconstrued. But also in the longer term, like the 50- or 100-year cycle, how should we be thinking about this relationship?
BAYA:
First of all, there’s a lot of work to be done on the efficiency of computing, neural computing in particular. In 2006 something really big changed: we stopped frequency-scaling of semiconductors and we began to have to parallelize. The initial version of parallelizing was just putting more serial processors of the same kind on the same chip, and that’s a dumb way of parallelizing. We haven’t become natively neural in the way we compute with silicon. I know at least what’s been happening at Google: we’ve had orders-of-magnitude improvements in the efficiency of Gemini models over the last couple of years, through basically doing the work of figuring out how to compute properly, even with the same fundamental transistor-based technologies for parallelism. There are more orders of magnitude to be won there, probably a factor of a thousand — that would be my guess based on just back-of-envelope calculations. We also know that what we’ve already gotten to now is better than what a lot of people who are concerned about those environmental effects claim.
I’m very sensitive to the environmental crisis, so I don’t say this as somebody who minimizes those problems. But there are a lot of places to look where we’re doing dumb things with respect to carbon other than AI. The concern with AI is really the rate of exponential rise more than it is the current value, and the issue there is that we can only make good estimates of the sources of energy and the methods that we know are already in the pipe. Factor of a thousand? Great. Exponential rise? It will eat that up.
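(Editor’s note: the tension between a one-time efficiency gain and exponential demand growth can be sketched with a quick back-of-envelope calculation. The thousandfold gain and the threefold annual growth rate below are purely illustrative numbers, not figures from the conversation.)

```python
import math

# Illustrative only: suppose efficiency improves by a fixed one-time factor
# while demand for AI compute grows exponentially at a constant rate.
# How many years until growth swallows the entire efficiency gain?
efficiency_gain = 1000.0      # hypothetical "factor of a thousand"
annual_demand_growth = 3.0    # hypothetical 3x demand growth per year

# Solve growth**years == gain  =>  years = log(gain) / log(growth)
years_to_absorb = math.log(efficiency_gain) / math.log(annual_demand_growth)
print(f"{years_to_absorb:.1f} years")  # about 6.3 years
```

On these toy numbers, even a thousandfold improvement buys less than a decade, which is the Jevons-flavored worry raised next.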
BB:
There’s a Jevons paradox kind of dynamic there.
BAYA:
So what then? Well, we also know that intelligence unlocks new forms of energy, as it always has. I think that it’s likely that fusion will get cracked with help from AI over the coming years. That would be great, and it would really change the game with respect to a lot of energy problems and environment problems on Earth, well beyond AI. Also, as I think you’ve written, all of our energy, ultimately—modulo a few nuclear isotopes in the ground—is solar. The amount of sunlight up there is vast, vast, vast, and the enormous majority of it radiates out into space and never touches the planet or a sight line of ours, so I think a lot not only about how to work on the demand side of energy, but also the supply side.
BB:
There’s actually a lot of energy in the universe to be used. I’m going to turn to a question from Stewart Brand: “Is looking ahead a general brain function? Eyesight is largely conjectural—look ahead—multiple guesses at what is being seen followed by confirmation, often with sketchy data. LLMs seem to work that way too. What else does?”
BAYA:
That’s a really gnomic question from Stewart, but I’ll try. Yes, predictions are always conjectural. Maybe I wasn’t quite as explicit about this as I should have been, but when I say we’re next-token predictors, what we really mean by that is we’re trying to model the relevant parts of our environment. Why do we try to do that? Well, relevant means things we can act on in ways that will matter for us in the future. Not only do we have to be able to make meaningful decisions based on the observations we can make, say, via vision or whatever, but also what we then see has to change as a function of our behaviors—so that whole loop has to exist cybernetically in order for any of this to make sense.
Now, does that involve an act of guesswork? Of course. It’s an act of imagination. And this is one of the reasons that we see hallucinations in LLMs. It’s impossible to make predictions or even to recognize objects without imagination.
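(Editor’s note: the conjectural character of next-token prediction can be made concrete with a toy bigram predictor. The corpus, function names, and fallback token below are all hypothetical illustrations, not anything from the models discussed.)

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count, for each token, which tokens tend to follow it."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, token: str) -> str:
    """Conjecture the most likely continuation. Even in a familiar context
    the model is guessing from statistics; that guess is the toy analogue
    of imagination (and, when it misfires, of hallucination)."""
    followers = counts.get(token)
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

corpus = "blood is red and wine is red and fire is hot"
model = train_bigram(corpus)
print(predict_next(model, "is"))  # "red" (seen twice) beats "hot" (seen once)
```

The prediction is only ever a best bet conditioned on past experience, which is the sense in which any predictor, silicon or biological, must imagine.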
BB:
So we really don’t wanna get rid of them? Because the worst thing would be an LLM that can’t do anything?
BAYA:
Yeah, an LLM that can’t imagine anything. That doesn’t mean that there isn’t plenty of work to do getting the accuracy of those predictions better. Also, the models’ confidence in their own confidence, their calibration, needs to improve. Quite a lot of progress has been made in the last few years, but there’s still a long way to go.
BB:
Some of it obviously has to do with how we use them and interpret them, right?
BAYA:
There’s that.
BB:
The next question is from Darren Zhu, one of the original Antikytherans: “How and where do you see symbiogenesis occurring in foundation models today: is it mediated at the infrastructure level or more at the cultural level?”
BAYA:
Yeah, that’s a great question. There’s a very literal sense in which we’re seeing symbiogenesis in the models, which is that there are a lot of mixture-of-experts kinds of models being done nowadays. Mixture of experts in the sense that it’s actually a bunch of models working together. That’s one of the ways of scaling. So we’re essentially rediscovering social scaling in models. And in fact, there’s a pretty cool paper from I think last year showing that even if you train a giant monolithic model, if you look inside it you see that it has done functional differentiation. In other words, what you’ve actually done is trained a little ensemble inside. In the same way that our brains are ensembles of regions of cortical columns.
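(Editor’s note: “a bunch of models working together” can be sketched as a toy mixture-of-experts layer. All names here are hypothetical, and this dense-gated version is illustrative only; production MoE layers route sparsely to the top-k experts.)

```python
import math
import random

random.seed(0)

def softmax(xs):
    """Turn raw gate scores into weights that sum to one."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class TinyExpert:
    """One 'specialist': a random linear map from input vector to a scalar."""
    def __init__(self, dim):
        self.w = [random.gauss(0, 1) for _ in range(dim)]
    def __call__(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x))

class TinyMoE:
    """Mixture of experts: a gate scores every expert for each input,
    and the output is the gate-weighted blend of expert outputs."""
    def __init__(self, dim, n_experts):
        self.experts = [TinyExpert(dim) for _ in range(n_experts)]
        self.gate = [[random.gauss(0, 1) for _ in range(dim)]
                     for _ in range(n_experts)]
    def __call__(self, x):
        scores = [sum(gi * xi for gi, xi in zip(g, x)) for g in self.gate]
        weights = softmax(scores)
        return sum(w * e(x) for w, e in zip(weights, self.experts))

moe = TinyMoE(dim=4, n_experts=3)
print(moe([1.0, 0.5, -0.3, 2.0]))  # one scalar: the blended expert opinion
```

The gate is the social part: it decides which specialists get a louder voice on a given input, which is the structural echo of the functional differentiation described above.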
BB:
Lots of cortical columns all fighting it out.
BAYA:
Yes, well, fighting, cooperating, modeling each other, specializing, and so on.
BB:
Right, it’s societies all the way down.
BAYA:
Yes, it’s societies all the way down.
BB:
Nice. Ok. This is a great question to end with from Angela Gronitz: “What role do you see human creativity playing as AI advances?”
BAYA:
First of all, if I think about this as a person who imagines himself to be creative as well—I mean I do think of my writing as creative output—I think that being an artist has become economically difficult in the last 20 years for a variety of reasons which have a lot to do with the Pareto distribution of rewards to artists and the consolidation effects.
BB:
It was never actually super stable, but yeah.
BAYA:
It was never super stable. David Cope recently passed away. He was a composer who, as far as I know, was really one of the first ones to really take computer composition seriously. Not the first: Terry Riley, who composed the piece that played for our Long Short (that piece was from 1972). But anyway, David Cope, he began using statistical NLP, natural language processing-type ideas to do composition when he was suffering from composer’s block. He had a commission that he was supposed to write, I think an opera, and he taught himself how to code in the early 80s. I mean, I do this kind of bullshit as well, when I’m procrastinating, I have to be doing something else that I could convince myself is productive or whatever. So he spent a few years doing that and then he finished his commission in six seconds when he finally got the code running, and everybody got really pissed off at him. There were a lot of composers who even questioned whether he was really using code to compose, and he proved them all wrong by dropping a zip file on his website with 5,000 cantatas in the style of Bach, all in MIDI, of course, because nobody’s gonna perform all that stuff. I’ve probably listened to more of those cantatas than anybody other than David Cope, actually, maybe even more than David Cope. A bunch of them are pretty good.
BB:
You have some favorites?
BAYA:
Yeah, they’re pretty good, actually, but nobody gives a shit. I think that that’s because art is about our relationships with each other as much as anything else.
BB:
It’s not just the artifact.
BAYA:
It’s not just the artifact. Bach is beautiful and special. If you’ve had this beautiful shell that you thought was unique, and you open the door and it’s a beach and it’s shells as far as the eye can see and they’re all beautiful, I don’t think that actually destroys your relationship with your shell. This is all made out of relationships that we have with each other, so that’s part of my answer.
Another part of it is that I think we have some misapprehensions about creativity too. We try to cover our tracks. We try to pretend that stuff came out of nowhere somehow. When various James Joyce scholars figured out what had gone into Ulysses, he’s like, okay, the next one I write, I’m gonna foil you all, and you’re not gonna be able to figure it out—and that was Finnegans Wake.
BB:
Well, he did pretty well.
BAYA:
Did pretty well, yes. I liked it as well—because we love to nerd out on stuff. We’re always remixing and combining the things that we’ve encountered. How else would we be able to create? This is why you get so many simultaneous inventions and discoveries. This is why Cubism gets invented simultaneously by 18 different artists. It’s why the light bulb was invented simultaneously by a dozen different inventors.
BB:
Because once those conditions are there..?
BAYA:
Yes. Once you know how to blow glass, you know how to draw filament, you know how to make an electric current, and you need light, somebody’s gonna come up with a light bulb. Or 12 at once. They were also all different. Every one of those combinations had things that were different about it: about how it was blown, about whether it was long or round, about whether it had prongs or screws. The contingency of a symbiogenetic world is one that is shaped by all of those decisions and by which ones stick.
What I’m trying to say is there’s something sort of deterministic: things are gonna combine, stuff is gonna happen, certain ideas are about to pop, whether in one person’s head or in others, but at the same time, the particulars of exactly how it happens really matter in terms of the culture going forward.
BB:
Two things just to make sure I follow. One: is the Romantic (and I mean it with a capital R) era dichotomization of determinism and creativity actually wrong?
BAYA:
It’s actually wrong. But it’s also right because details matter.
BB:
Okay, fair enough. The other thing to clarify is about the focus of creativity on the artifact itself. Like, there’s a lot of concern about the role of generative AI in making the artifact, but creativity isn’t about the artifact?
BAYA:
When we have connected our economic survival with production in the rigid ways that we have under capitalism, we have already done something that is going to pose increasing problems for us, no matter what sort of labor we do going forward. We’re in a world of increasing abundance for a variety of reasons, but we are also in a world in which the more you think about things in these zero-sum, exchange-value sorts of ways, the more problems you’re going to create.
I don’t want to opine too much about the Hollywood strikes and so on, but some of that is based on structural problems of the whole system. Some of those problems are also Romantic (with a big R) problems and how we conceive of things. You know, the longest running lawsuit of all time, as far as I know, was the one against George Harrison for ripping off “He’s So Fine”.
BB:
Because GCE is just so original.
BAYA:
Right.
BB:
A good place to end here. Do you have anything you’d like to leave people with about how they should approach the book?
BAYA:
Well, there is a call to action. We always need to be careful when we tread the line between scientific observation and ideological commitments, that is to say, how things are versus how things should be. The problem is that our ideas about how things are often color the way we think about those “shoulds”.
Darwinian thinking resulted in a lot of policies and approaches that were quite destructive, partly because they were based on just wrong assumptions about how stuff is. Most of the work that I’ve been talking about is hopefully shedding some new light on certain aspects of how stuff is, and that doesn’t necessarily invalidate the things we’ve learned: Darwinian evolution does take place, it is real. At the same time, this shows you that we’ve been looking at half the story; there’s this whole other half.
Understanding some of those things about how things are should hopefully alleviate some of our unfounded anxieties and also change some of our ideas about the “shoulds”. I would invite people to think about those “shoulds” in light of what we are starting to learn.
BB:
Instead of a society predicated on social Darwinism, a society predicated on social symbiogenesis?
BAYA:
Yes.
BB:
That’s a great place to leave it. Thank you.