The following is a rough transcript which has not been revised by The Jim Rutt Show, Sara Walker, or Lee Cronin. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guests are Sara Walker and Lee Cronin. Sara is an astrobiologist and theoretical physicist at Arizona State University. She is deputy director of the Beyond Center for Fundamental Concepts in Science and a professor in the School of Earth and Space Exploration. She’s also on the external faculty at the Santa Fe Institute and a fellow at the Berggruen Institute. Lee is Regius Chair of Chemistry at the University of Glasgow in Scotland and CEO of Chemify. I looked Chemify up when I was preparing for this episode, and man, that looks like an interesting damn company. I’ve had a longstanding interest in artificial chemistry, directed chemistry, computational chemistry, et cetera. And I personally believe that the first payoff from quantum computing is going to be in computational chemistry, for better or for worse. But anyway, Chemify is the name of his company. He’s the CEO, so he’s like a big dude, and he does have some real experience at this stuff. So anyway, welcome Sara and Lee.
Sara: Hi. Thanks for having us.
Lee: Yeah, it’s good to be here.
Jim: Yeah, it’s good to be here. And I’ll tell you, I scan a lot of stuff, read a lot of stuff, and when I read this article you guys wrote, “Time Is an Object,” in Aeon magazine, my head snapped back. I go, “Whoa.” It’s a well-written article. And a lot of pop science is just that: pop, right? This is not too pop, but it’s very readable. But it goes quite deep and engages the idea of time, what time may be. Y’all trace the history of the idea reasonably well, and you lay out a new concept of time that has really gotten me thinking, and I immediately reached out to you. It took a while for us to arrange this, but I got to say this was a 9.6 on my signal, out of 10, of reading random papers, which I do all the time, and people send me stuff to read.
I think this one I found on my own. There’s also a somewhat deeper journal article in the journal Entropy called “Formalizing the Pathways to Life Using Assembly Spaces” by Marshall et al. Sara and Lee are co-authors on that paper, I believe. Yes, that’s correct. And also there’s a website on assembly theory at www.molecular-assembly.com. And of course, as usual, those links and other links relevant to this discussion will be on the episode page at jimruttshow.com. So we’re entering into the discussion about time. This has been a concept that people have been talking about for a long time, pun intended. So why don’t we start with a little bit of the history of the idea of time.
Sara: That’s a very long history. It depends on-
Jim: Well, let’s keep it short. Five minutes, no more.
Sara: Yeah. Lee, do you want to start or do you want me to start on that one? I can. So I’m trained as a card-carrying physicist, so I think I was indoctrinated into certain concepts of time because they were invented in certain centuries and just kind of stuck. But the thing that recently deeply intrigued me about the concept of time itself is how it has changed over different cultures, over different time periods, and also how it depends on the technology of the time. So our first clocks were made of sand and wind and shadows, but when we built mechanical clocks, then we had this clockwork universe that Newton came up with, and this idea of time as being this ultimate clock that the universe just moves through, that there’s this ticking.
But there’s other concepts of time that physics has invented over the centuries, like relative time in Einstein physics, which leads to the block universe that all times exist everywhere, all at once. And so I think one of the things that Lee and I’ve been most interested in is that none of these concepts of time really have it as a property of objects or a property that actually itself has some materiality to it. It’s always just some kind of fluid the universe moves through or some kind of concept like that. So it might be time to reinvent time yet again.
Jim: Yeah, Lee, you got anything you want to add to that?
Lee: Yeah, I would say, I’m not a dualist, I’m a materialist, but I would say that time is a very… So the second law, which we can talk about, the universe expanding, and the fact that time travel is not allowed, only forward, not backwards, okay, tells me there is something outside spatial dimensions that is beyond just a measurement of time with things passing. And so that means that there is… I view it like a fabric, which isn’t space, but it is the capacity for things to happen, and the capacity for things to happen is increasing. And that is separate from the measurement of time. So you could think of time as a pyramid on its head, expanding into something, and from that expansion comes space. And that’s super hard for people to understand, because automatically you start becoming dualists straight away. That’s not correct.
I think there is this fundamental fabric, that the capacity for things to happen is increasing, and the ability to measure those things happening is what we call time now. And once you get it in your head that there is this capacity for more things to happen in the future than there was in the past, the concept of time, the second law and everything, makes a great deal of sense, because otherwise the second law is one of these religious doctrines you have to adopt. It has no fundamental basis. There is no mechanism. And what we are proposing is a mechanism by which all this comes about. And sadly, time travel is not possible. And the fact that physicists can basically make up random stuff, that time travel is possible, and yet be very condescending to people about, say, the conservation of energy, for me it’s completely baffling, but-
Jim: Interesting. Yeah, that’s interesting. Obviously listeners to my show know that I am an arch materialist as well. In fact, probably my most well-known line is, when I hear the word metaphysics, I reach for my pistol. Meaning metaphysics in the Aristotelian or Kantian sense, not the newfangled sense that people use it today to mean anything that doesn’t have to do with hard stuff. So yeah, we will stick strictly to the materialist realm today. I want to reflect back on what Sara mentioned, which is the physics perspective: time, originally, totally reversible. Your basic Newtonian mechanics work just as well forward and backward. Well, guess what? Turns out there’s this one weird little detail physicists hate, which is kaon decay. There is one subatomic particle, the kaon, that decays, half of one percent of the time, in an irreversible way that breaks CP symmetry, which is nerd talk. So there’s been this one anomaly sitting out there for the idea that physics time could go forward or backward. I’ve always been a skeptic on it because it just doesn’t seem reasonable.
And then Einstein’s block world is even more radical. You just mentioned it in passing; I want to highlight this for the audience. The Einsteinian view is that the universe is a block. Think about the universe being a plane that’s moving through time, and therefore creates a block. Let’s make the plane a rectangle, just to keep the geometry simple. And so we end up with a rectangular solid, extending infinitely into the future. And amazingly, the view of Einstein, and a lot of other physicists, is that the block has always existed and that it’s unchanging, and that is the nature of time. And it’s more or less an illusion that we feel like we’re passing through time. Is that unfair to Einstein, Sara?
Sara: No, no. I think that’s an accurate description of what most physicists think. I just think it’s an unreasonable viewpoint to take. And I’ve always found it really interesting with the theories of physics we have, how we make assumptions about the reality that they describe, and sometimes overextend the descriptions to describe things that maybe they’re not relevant to. So Einstein’s theory really didn’t have anything to do with life, which clearly has some directionality to it. And so I consider Einstein’s universe a dead universe maybe [inaudible 00:09:14] look like that.
Jim: And we’ll get to this later. But yes, if you guys are right, you actually kill Einstein’s block universe. Your theory and his theory are incompatible. Not a bad day’s work. Take down old Albert on time, pretty-
Lee: Well, yeah. Before we kill Einstein, I think Einstein did something very, very interesting in terms of giving us relativity, okay? Relativity still works in a world in which time is irreversible. And I think the problem is that when he got to the block universe by extension, that is a hard thing to make testable. And the thing which I’ve always said about my intuition on time is, it’s an argument we can have, but we have to somehow make it testable. So we only kill the block universe if we come up with a reasonable… It doesn’t have to be testable to the billionth degree, but it has to come up with a new mechanism and understanding that makes sense of the world in a new way, and it would be great if it became predictive as well as explanatory.
Jim: Yeah. And you guys are pointing in that direction. That’s why I was so excited when I read the article. This is not yet another minor tweak to science; this is a big one. And that’s what we like to talk about here on the Jim Rutt Show. So before we move on, I did a little bit of reading on theories of time, refreshing my memory, and some of the historical models of why time seems to have this arrow, why it seems to be irreversible. The famous example from high school physics is an egg dropped on the floor and splattering. You say, well, you can’t run that movie backward and have it make any sense. So that’s essentially the thermodynamic arrow of time, Boltzmann equations and all that. Disorder increases. And somehow the increase of disorder itself is the arrow of time.
It sounds like it’s something, but not quite right. And then there’s the idea of the cosmological arrow of time. Again, this takes the big bang literally, that the world started as a singularity. I’m not quite there, but something very, very small at least. And some kind of explosion happened and everything’s been moving out from that. And so moving from the state of a point to the state of something big itself provides an arrow of time. Which is closely related to the next one, and I’d say that you guys fall into this family: the causal arrow of time. Somebody described history as one thing after another. And you could say “after another” implies time. And if causality is real and causality is sequenced, then one could say, at a minimum, that the sequence of causal relationships has something to do with the apparent arrow of time. And then of course, us physics fanboys love our kaon decay as a particle physics explanation.
There are people who take that seriously, and the arrow of time is nothing but the effect of a few weird, rare asymmetries in particle decay. I find that unlikely. But it’s a theory. And then the last one, it’s a fuzzy theory, and it’s related to the causal arrow theory, I would argue: that quantum decoherence, the fact that quantum states collapse to classical states all the time, itself is a driver of what appears to us to be the arrow of time. And I would suggest that you have to combine that with the causal arrow family of arrows of time to come up with something useful. So anyway, is that a fair way to categorize the space of arrow-of-time theories?
Sara: I think that is. I just want to make a point clear about the causal arrow of time that I find really interesting, which is that irreversibility needs to be an emergent property, in the sense… I mean emergent, whatever that word means, but in the sense that in order for something to be caused, you have to have a mechanism that can cause it in one direction. So it could be the case that reversibility only occurs if you have something that can cause the same process to happen both forwards and backwards. And those might be implemented in different machines.
So this is one of the reasons that in biology where all your objects are evolved, that you may not have mechanisms for reversing the process because they would have to evolve too. So you’d actually have to instantiate, in a physical object, something that could have the information or pattern to reverse a particular process that evolved. And that’s a non-trivial feature. I think in physics, reversibility is easy because none of those processes require memory to ever form. But if you want to have a reversible process in more complex systems, you actually have to have the memory for the forward process and for the reverse process.
Jim: Now we’re getting up to the edge of your theory, which is that… Well, why don’t you guys take it away? Let’s now lay out, in as succinct a fashion as possible, despite the fact that you’re both professors, the essence, the core of assembly theory.
Lee: Yeah. Okay. Before that, just to add onto your time thing: people say that causation is evidence of time, or that it isn’t needed for time to exist, and that’s why people get really confused. Bertrand Russell used to refer to causation as being like a religious artifact, not needed. And I found that really insulting to how we should think about the world. But in terms of assembly theory, I guess, should we go right back to the beginning? I think assembly theory is easiest to understand if we explain the theory, but I also think it’s good to explain how we built the idea in the first place. And it really is the idea of saying: if I take an object, how unlikely is this object to form probabilistically, by chance, when this object can, A, have a large number of parts and, B, be present in a large number of identical copies?
And I think it’s easy to imagine a one-off object that has lots of complicated parts, that has no impact, that’s just you assigning parts. It could be the man in the moon, some cheese, whatever, or some lines on Mars. Or it could be a load of identical sand grains: not very many parts, just one part, so they’re all identical, and lots of them. So it’s that intermediate space where you have objects that are ostensibly, by eye or by some analytical method, identical. I don’t know, take Apple. I’m not sponsored by Apple, but hey, they can give me some money if they want. They have a little bit. Go to the Apple Store and you pick up, like, 10 identical iPhone 14s.
They’re identical by every measure that you can think of, yet they have billions of parts. And so you say, right, if I go to Mars and I find one iPhone 14, what’s the chance that could occur randomly? You would know something is up, something is tricky. But if you find 10 identical iPhone 14s, for sure that couldn’t have formed by chance. And that was the basis of assembly theory in terms of thinking differently. And again, I’ll name those two components: number one, how many parts does an object have, and number two, how many copies of that object occur. I will pause for a second. I think Sara and I can tag-team it, because we will add on bits to the theory as we go.
Sara: I’m not sure I want to add anything quite at this stage. I think I’ll let you get on and then I can-
Lee: Well, I think the key is the recursive decomposition, right? If you take an object and then… So the next thing from that is, how would you know if one object… Let’s take the word abracadabra, which has 11 letters in it, and instead take 11 letters and go, AAAA and so on, right? So which one is more complicated? Well, of course it’s abracadabra. Why? Because it’s not just one letter, it’s got several letters, right?
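[Editor’s note: Lee’s letter example can be made concrete with a small toy. This is an illustrative sketch, not the authors’ code: treat an “assembly pathway” as a sequence of joins in which each new string is made by concatenating two parts that already exist, either single letters or strings built earlier. The function name and the pathways below are invented for illustration.]

```python
def verify_pathway(target, steps):
    """Check that each join uses only already-available parts; return the step count."""
    available = set(target)  # individual letters come for free as building blocks
    for left, right in steps:
        if left not in available or right not in available:
            raise ValueError(f"{left!r} + {right!r}: part not yet built")
        available.add(left + right)
    if target not in available:
        raise ValueError("pathway never produces the target")
    return len(steps)

# A 7-step pathway for "abracadabra" that reuses the previously built part "abra":
path = [("a", "b"), ("ab", "r"), ("abr", "a"), ("abra", "c"),
        ("abrac", "a"), ("abraca", "d"), ("abracad", "abra")]
print(verify_pathway("abracadabra", path))  # 7

# Eleven identical letters need only 5 steps, by doubling and reusing blocks:
path_a = [("a", "a"), ("aa", "aa"), ("aaaa", "aaaa"),
          ("aaaaaaaa", "aa"), ("aaaaaaaaaa", "a")]
print(verify_pathway("a" * 11, path_a))  # 5
```

The point of the contrast: a run of identical letters can be rebuilt in fewer steps than abracadabra despite having the same length, because it offers maximal reuse, which is why abracadabra counts as the more complex object here.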
Jim: Well wait a minute, let’s pause a second. At the Santa Fe Institute, we believe that the idea of measures of complexity is still a very unsettled field.
Lee: All wrong.
Jim: Okay, good. I’d love to hear this one. Now what you’re describing sounds a lot like Kolmogorov complexity, which is a measure of-
Lee: Not computable.
Jim: Which is computational complexity, essentially. So anyway, why are we wrong at the Santa Fe Institute that there’s a good measure of complexity?
Lee: I don’t know, Sara, I don’t know.
Sara: I’m at the Santa Fe Institute now too. So we’re not all technically wrong.
Lee: Yeah, no, Sara’s not exactly… I know we play good cop and bad cop, but Sara, go on.
Sara: No, no, no. So I think there’s a couple of things. One is, traditional measures from computer science are more about what is the program-size complexity, and can you find a minimal machine to run it on? So it’s actually a feature of the interaction of a program with a machine that you can find the minimal length. And this is one of the reasons it’s uncomputable: you have to search over all possible machines to make sure that you have the minimal program description. And in our case, we don’t really care about a computer being able to compute an object, we care about the intrinsic properties of the object. So we think that it’s actually a physical feature of the object itself. And maybe you want to think of the universe as the thing computing that object and bringing it into existence. So it might have some parallels there, but there’s only one universe that can generate objects, that we know of.
And so we actually have to have a really strict definition based on the laws of physics and how they operate in our universe. And we’re not looking for the minimal description of an object; we’re looking for the minimal set of causal pathways for making the object. And there are some very specific constraints that we pose on that because of the physics. One of those is this recursivity: you can only use parts you’ve built in the past, which means that every object encodes its own memory. That’s one of the reasons we come up with a new concept of time from this, because in assembly theory that minimal path is a physical attribute of the object. Which means the object itself is extended in time as a recursive, hierarchical, modular object that has that structure. And that’s very different from the interpretations from computer science and complexity theory.
And something that we’re working on really delineating now, that’s coming out of the theory, is that traditional measures of complexity can’t distinguish random from complex. This is always the problem: random things look very complex because they’re unstructured, maximally unstructured. In assembly theory, random objects are really hard to make, and in fact we don’t really expect them to be evolved. They might be a sign of intelligence, and we can talk a bit more about that, because that really comes from thinking about these properties. But what we look for is objects that have reuse of parts. So evolution is constantly reusing parts to make new things. And the only way that we conjecture you can get to these high-complexity objects in the first place is by having a memory of what existed in the past and reusing those features.
So assembled complexity requires high copy number, and it also requires a high copy number of parts. So the copy number is actually very deeply embedded in the causal chain. And so you end up having objects that have a lot of symmetries in their parts, because they reuse parts, which is very different from things that are totally random. So what we’re talking about is an evolved complexity, not “I just look at the statistics of patterns of systems and say this one looks complex versus that one.” We are actually looking at the causal chain: how complex is that, and can that even form based on finite resources, finite time? Can we even explore this region of the possibility space?
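[Editor’s note: Sara’s random-versus-complex distinction can be illustrated with two crude proxies. This is a toy with made-up stand-ins: zlib compression as a rough program-size measure, and the fraction of repeated four-letter substrings as a rough stand-in for reuse of parts. Neither is the formal assembly measure.]

```python
import random
import zlib

random.seed(0)
rand = "".join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(200))
modular = "abracadabra" * 18  # 198 chars, built by reusing one part over and over

def compressed_size(s):
    """Rough compressibility proxy: size of the zlib-compressed bytes."""
    return len(zlib.compress(s.encode()))

def repeated_4mers(s):
    """Fraction of length-4 substrings occurring more than once: a crude reuse proxy."""
    grams = [s[i:i + 4] for i in range(len(s) - 3)]
    return sum(grams.count(g) > 1 for g in grams) / len(grams)

# The random string barely compresses (it looks "complex" to a program-size
# measure), but it has almost no reusable parts; the modular string is the opposite.
print(compressed_size(rand), compressed_size(modular))
print(repeated_4mers(rand), repeated_4mers(modular))
```

The design point: a program-size measure scores the random string highest, while a reuse-of-parts view scores the modular string as the one bearing structure, which is the distinction Sara is drawing.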
Jim: Lee, you want to add to that anything?
Lee: Yeah, so look, I’m digging in my Outlook here. I want to read you an email, if I may, that Freeman Dyson wrote to me, right?
Lee: Look, I really am not going to make any friends with this, but complexity theory, none of it’s correct, and it’s one of the biggest errors that a lot of people have made. Let me read it to you. While I was talking to Freeman, he wrote this to me on the 20th of February 2018: “I should add, the whole literature of complexity theory suffers from the same deficiency. The experts call it complexity theory, but in fact there is no theory. There are a lot of interesting examples of complex objects and complex systems, but no general understanding of their behavior. And a collection of examples is not a theory if it provides no understanding.”
And so my problem with complexity theory in general is that it was generated by computationalists fascinated with computation and Turing machines. Fantastic. And measuring things without really understanding the context in which they’re doing it. It’s a game. And so those are pretty strong words, and I think it’s time that young people stop playing around with complexity theory, because it doesn’t give them anything. And rather than making people angry, particularly the people out there with lots of complexity measures, it makes them angry, you can’t compare them. And Sara’s entirely right. So, being charitable, Kolmogorov complexity is an incredibly interesting concept, but it requires a computer. And there are lots of other things that people try to do to get statistics, and I think they’re thinking in the wrong way. It’s a collection of things. I’ve been interested in complexity theory since I was 12, maybe a bit older, on chaos theory and dynamical systems.
And I think people quite rightly have an interest in complex dynamical systems. But what do we mean by complex? We mean lots of parts, and we mean interesting behavior that we can’t predict a priori when we coarse-grain them. So complexity basically is an excuse to not actually do your accounting properly. It is an excuse to basically write programs to collect things together and bin them and say, “I’ve got a number.” And it’s an excuse to try to compare things that aren’t comparable. And I think that assembly theory finally gets away from that, because we can harness the desire to measure complexity between domains and actually ground it in terms of: how hard was it to make this object? How complicated is it?
What’s the difference between a complex architecture that has been built and a mixture of stuff you just found on the moon? And I think that… And I’m obviously being provocative because I hope that we’ll get interesting discussions, not abuse, because I don’t want abuse, I want to basically provoke people to think. But I think we have to go deeper than just having a Turing machine do some stuff and think of an imaginary oracle.
Jim: Yeah, indeed. Well, interestingly, in significant part I actually agree with you, in that I do not think there is a useful theory of complexity yet. I think of the domain of complexity science as a field that’s exploring the kind of phenomena you discussed, and there is no unified theory yet. In fact, people at the Santa Fe Institute almost all will admit that. But there are little bits and pieces of interesting findings, some of which may provide some insights, et cetera. In any case, we probably agree more than you might guess, but let’s move on, because we could talk about this all day and we don’t have all day. In assembly theory you have the concept of steps as at least a rough index on complexity. Talk about what your steps are, what steps mean.
Lee: Yeah, that’s important, so let me mention that very quickly. I think this is where it gets really interesting. Let’s take molecules first, because as a chemist I’m pretty boring, I can only think in molecules. So say let’s take a molecule, and for this molecule, draw a picture of the molecular graph, break the molecule up into its components, and then ask yourself: what is the shortest route I can take to re-form that molecule, in the least number of steps, where you are able to reuse components? And what was pretty astounding, it’s obvious now, is that there is a shortest path to make a molecule conceptually using those [inaudible 00:26:50] units. And what we found out is that it is actually experimentally measurable.
In fact, assembly theory was invented as a process of measuring this decomposition and then rebuilding it. And so the concept of the step is super important, because this tells you there was some memory somewhere, or some kind of decision on a tree that allowed you to go there, there, there, there, there: oh, that molecule. And on this decision tree, if you like, if you have a good number of steps, say 15 steps, the more steps you take into the space, the more improbable it is that you took each of those correct branches on the tree by chance. So basically, the larger the number of steps a molecule has, the more improbable it is that it would have formed by chance in just a random mess. And I don’t know, I think Sara also has a spin on this.
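[Editor’s note: the improbability Lee describes scales exponentially in the number of steps. A minimal sketch, with an invented branching factor purely for illustration: if each step on the decision tree offers b equally likely branches, the chance of randomly walking one specific n-step pathway is b to the minus n.]

```python
from fractions import Fraction

def chance_of_specific_path(branches, steps):
    """Probability of taking one particular pathway when every step has
    `branches` equally likely options (a deliberately naive model)."""
    return Fraction(1, branches) ** steps

print(chance_of_specific_path(10, 5))   # 1/100000
print(chance_of_specific_path(10, 15))  # 1/1000000000000000
```

So at 15 steps, even a modest ten-way branching makes one specific outcome a one-in-a-quadrillion accident, which is why finding such an object in high copy number points to selection rather than chance.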
Sara: Well I want to reiterate what you said because I think this is one of the most important things for me as a theorist, thinking about these problems, is that this theory was developed starting from the measurement. And so a lot of people thinking, in my space particularly, about the nature of life and how do we build theories of life. They always start from abstract concepts and they don’t really try to bridge it to what could you measure in the lab. And so I think Lee’s really profound insight was, I can go in the lab and I can measure stuff. What could I measure that’s relevant to building a theory of physics that will explain life? And so that’s actually really how the theory started, and I think, as a theorist, that gives it much more ample playground for actually developing concepts that are physically relevant because nature’s the ultimate set of creative constraints.
And the fact that we could do this is, to me, pretty profound. So the other thing I just want to point out, because we usually make this point in most discussions about assembly theory, is that I think people underestimate how big chemical space is. We don’t even have a conception of it. We can see pictures of the Hubble Deep Field and go, “Oh wow, the universe is big,” but we don’t see a picture of the possibility space of the things that can be made on Earth. And it’s exponentially huge, because it’s a combinatorial space, and we can make all these kinds of objects. And chemistry is really the first place that you can really see that. So even conservative estimates of the number of small molecules are just astronomically large, and there’s just not enough resources in the entire universe to make every possible small molecule. So what assembly theory is doing, that’s super interesting I think, is actually grounding that by talking about the number of steps for recursively making objects.
It’s now giving a structure to that space that’s almost like a coordinate in the possibility space that tells you how hard is it in the space of possibilities to get to this molecule. And then the copy number gives a weight to that probability distribution to try to say that the fact that these exist is really unexpected based on the structure of this space being built up and these expanding possibilities, the deeper we get into that space by the number of steps. So there is a sense of a physical space that we can think about in assembly in terms of how molecules or anything that can be constructed by a finite number of steps can actually be arranged in a physical volume now. Which I think is super interesting, because physicists had to invent concepts of time and clocks to measure it. And we also had to have rulers to measure physical space. But this actually gives a way of measuring possibility space in a meaningful way with respect to what we expect evolution to be able to produce.
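[Editor’s note: Sara’s point about the size of combinatorial spaces can be made with back-of-the-envelope arithmetic. The numbers here are an illustration using linear chains from a fixed alphabet, not an estimate from the literature; the universe figure is the common order-of-magnitude value.]

```python
# Combinatorial spaces grow exponentially with length, so even a "small"
# construction alphabet quickly outruns any physical resource budget.
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80  # common order-of-magnitude figure

alphabet = 20  # e.g., the 20 standard amino acids
for length in (10, 40, 80):
    space = alphabet ** length
    print(length, space, space > ATOMS_IN_OBSERVABLE_UNIVERSE)
```

By length 80 the chain space already exceeds the number of atoms in the observable universe, which is why exhaustively sampling chemical space is ruled out and why a coordinate like the assembly index is useful for navigating it.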
Jim: Very good.
Lee: Yeah, and I would say the… Go on.
Jim: Go ahead.
Lee: I was going to say that actually Sara and I started this discussion way further back. Her argument to me, one of the first times I met her, was that there’s missing physics. And I was like, “Oh, this sounds kind of cool. What does this mean exactly?” And she’s like, “Well, it’s kind of missing.” So I was like, “Okay.” And then there was this causation and information stuff, and my initial feeling was that it wasn’t quite right, but it wasn’t wrong. And the notion in my head was: there are molecules that are interesting and they seem to appear. And so I think what we did, in our own way, for a little while, was try to put those two things together and say, “Well, okay, can we identify evidence of information in the universe in the absence of biology?”
And I actually think this was Sara’s insight. I’m just a random chemist who could set fire to things. “Can it burn? Oh, it burns. Awesome. Well done.” And then, when I was thinking about these mass spec experiments, that became the avenue we started to go down. And the nice thing is, this concept of the number of parts in the molecule is now measurable, not just by one technique but by three different techniques. You can actually measure it. And all the chemists, I think, still think that this is a nonsense number, because it’s got nothing to do with synthesizability; it’s got nothing to do with anything other than the probabilistic bounding. But the fact is, you can count the number of different parts in the molecule. So you can shine light on it and see the way it absorbs the light at different wavelengths.
The more colors that get absorbed, the more complex the molecule. There’s another technique, magnetic resonance: the more different discrete radio frequencies you can observe it absorbing, the more parts you have. And in mass spec, where you weigh a molecule and cut it up by smashing it into bits, the more bits you get when you hit it. Again, the assembly index is proportional to those three measurements. So it’s super cool that we had a fundamental concept that we were playing with, and we could then actually measure it, and it seems to be right. Each molecule, we know, has a minimum assembly index associated with it.
Jim: Yeah. That was another thing that caused my head to come up: not just random speculation, but backed up by actual results, presumably replicable results. But now let’s go into the other thing that was hugely interesting to me, and you mentioned it in passing. I’d like to dig into this more deeply. And this is the idea that at some level of number of steps, it becomes important to have memory in a system. And I’m going to give you my take on an example of what this memory might be that would help our audience understand it. And tell me if I’m full of shit, because I often am, but I’ll at least take a whack at it, which is: big organic molecules. The memory that results in big organic molecules is essentially the deep memory in DNA and the local memory in the cytoplasm and the metabolism within a cell. And those are essentially the memory that you’re talking about that’s a prerequisite to the creation of large organic chemicals in cells. Is that close enough?
Lee: Yeah, I like that. That’s a great way to explain it. The genes encode for the proteins, the proteins do stuff, and in the cytoplasm there are local environmental things going on that can shape that, and you get the molecules you need.
Jim: Ribosomes, scaffolding, et cetera. But all that is memory. And some of it’s very, very deep memory. Some of it goes all the way back to LUCA, our last universal common ancestor, three and a half billion years ago. Some of it is current, the epigenetics of what’s going on in your cytoplasm, for instance. And so these are memories at many scales that provide a prerequisite to take N number of steps. And this actually ties in closely to the work of David Krakauer. I don’t know if he’s aware of assembly theory, but I’m going to have him on my podcast in a couple of weeks; I’ll ask him. These are his ideas about the evolution of information processing in the universe. I see this as closely related, because an implication of your theory is that as structures that allowed memory to get deeper and deeper and more complicated, if not necessarily complex, increased over the evolution of the universe, the ability to take more steps increased along with that. And the two may well be dependent; extra steps may well depend on memory depth.
Sara: Yeah. I think we’re both fans of David’s work and we’ve talked with him a bit about assembly theory. So it’s really cool to see the intersections there.
Jim: Yeah. Yeah, I’ll be [inaudible 00:35:33] Yeah. All right, so now we’ve talked about memory, we’ve talked about number, which actually is interesting because it rules out the occasional fluke, something that happens with very, very low probability by randomness, and such things do happen by randomness. So you have a large number of copies of a species, and a species that has a high complexity count by your step mechanism, which you then give an assembly index. So the two together say that this is a representation of high assembly. Is that approximately correct?
Sara: That’s right.
Lee: Yeah. So we came up with the assembly equation, which unifies those things. The assembly equation, roughly speaking, is a summation over the exponential to the power of the assembly index, times the copy number of the object minus one. Why minus one? Because if you only have one copy, it has zero contribution: one minus one is zero, so it goes to zero. And then some kind of normalization. And we have a higher weighting on the assembly index, because that gives you more memory. But I actually think the copy number is nicely parsimonious. You should be able to measure the assembly-ness. When we were coming up with this word, Sara and I were debating what to call it. I was going to call it ass entropy or arse entropy.
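[Editor's note: Lee's description matches the published form of the assembly equation, A = Σ_i e^{a_i} (n_i − 1) / N_T, where a_i is the assembly index of object type i, n_i its copy number, and N_T the total number of objects. A minimal sketch, assuming that form; the function name and input representation here are illustrative, not from the papers:]

```python
import math

def assembly(objects):
    """Assembly of an ensemble: sum over object types of
    e^(assembly index) * (copy number - 1), normalized by the
    total number of objects observed."""
    total = sum(n for _, n in objects)  # N_T, the normalization
    return sum(math.exp(a) * (n - 1) for a, n in objects) / total

# A single copy contributes nothing, however complex the object:
print(assembly([(15, 1)]))              # 0.0
# Many copies of a high-index object dominate the sum:
print(assembly([(15, 100), (2, 100)]))
```

The (n − 1) factor is the point Lee makes: a one-off fluke object, no matter how complex, contributes zero, so high assembly demands both depth and reproduction.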
Jim: I like that.
Lee: That didn’t work. My mobile phone’s name is still arse entropy. So occasionally I get an Apple notification saying, “Where is arse entropy?”
Jim: I like it.
Lee: But it is arse… it’s arse entropy forever now, for me. Assembly is literally a measure of the amount of evolutionary selection that’s gone into the objects that you’ve considered over the volume. Maybe at the end of the universe, or if there is no end of the universe, if we are right, it will just be more and more evolved; the assembly just goes up. So there’s that in there, and it’s super interesting to bring those together, and we’re still digging into that, because assembly tells you about selection, I think. If nothing else, it tells you about the selection and memory needed to get there.
Jim: And I would add it probably also points to selection for memory.
Jim: Because if things that have more steps at least sometimes provide a selective advantage, then that would say there’s a meta-evolutionary, evo-devo context going on for the evolution of more memory as well.
Lee: So assembly theory, the most profound outcome of assembly theory that, I think, Sara and I were thinking about, Sara’s missing physics in essence, was that selection has to predate biology as we know it. Without selection, you need a local creationist to come up with life, if selection didn’t occur before biology, because where’s all the complexity come from? So selection builds in steps, randomly to start with, and then by chance the mechanisms for memory get built by the universe. And we can talk about what those loops are; they’re very simple. We’re still bottoming that out in the lab right now. Sara’s becoming an experimentalist as fast as I’m becoming a theorist, and vice versa. Because I think it’s useful to design experiments; I like theorists designing experiments because it makes it much more grounded. So assembly theory allows us to identify selection before biology. And then there’s a critical event that happens at biology, where biology is a selection amplifier.
Jim: Well, it becomes a big enough autocatalytic network that has closure, and the ability to resist entropy sufficiently not to be dissipated. And that magical moment occurred, as far as we know, just once, or at least only one lineage has survived. Now, regular listeners know origin of life is one of our regular recurring themes here. We’ve had Eric Smith on, we’ve had Bruce Damer on a number of times, and other people talking about origin of life. And so the idea of selection in the prebiotic is something that regular listeners would know about. But I’m glad your theory goes there, because it must, as you realize, otherwise you do have “and then a miracle occurred,” and that’s not a good way to-
Sara: Well it’s not a miracle in assembly theory. There’s actually a mechanism to talk about why-
Jim: Yeah. Bruce Damer lays out his own mechanisms, and there’s multiple… Eric Smith has his, and Harold Morowitz, who was a dear friend of mine, had his. And all that was [inaudible 00:40:04]
Sara: I think there’s a difference in a mechanism of explaining the origin of life itself, which some people think is a chemical phenomenon, where you want to explain why maybe a replicating molecule or an autocatalytic set emerged from a geochemical environment. But there’s another bar of explaining the origin of life as a cascade over 4 billion years of building complexity on this planet, with the origin of life as a recursive transformation that is continually still happening to this day, which we call evolution. And so I think that the problem that we’re trying to solve with assembly theory is it’s addressing the origin of life, but it’s addressing the origin of life from a different explanatory framework than what we would traditionally consider origin of life science. So in some sense it is a theory for the origin of life, but it’s actually much broader, and it’s an attempt to explain features of life that so far have been unexplained.
And for me this was always the most interesting thing, going into the origin of life field: there were all these people that were trying to define life, and then use their definition to design an origin of life experiment. And no one was really starting from first principles to say, well, what is the phenomenon of life? And then how could we understand the transition from non-life to life from more fundamental principles? Although I think some of the people you named have certainly been thinking in that space, but I think that’s a really tall bar. And then the question becomes, what is the experimental proof that you’ve solved the origin of life? And Lee and I are in agreement that the real experimental proof is actually evolving alien life in the lab, which is not just recapitulating the steps we think happened on early Earth but having a totally [inaudible 00:41:37] origin of life event that might explore some other region of chemical space.
Jim: Aah. We’ll talk about that a little bit in more depth later. But let me get onto something closely related.
Jim: Which is, again, another thing that caused my head to… My head almost whipped off reading this damn article. So many interesting ideas. You found this sharp line between non-biotic and biotic steps in chemistry, I believe it was 13 or 14 that was essentially the flip point. And you said pretty strongly, “15 steps or above, it must be biotic,” or, as we’ll talk about next, the equivalent of biotic. Because I do think y’all have a little bit of biotic myopia: there could be other processes that had similar complexity-generating attributes that aren’t life, but at least for now the only one we know about is life. So talk about this idea of a sharp phase transition and how that might be useful in various ways.
Lee: Yeah, I think it’s tough to say something definitively when you’ve got an N of one, and obviously some people say that life is a continuum or life doesn’t exist. I was at a meeting with Sara when someone said life doesn’t exist, and we’re like, “That’s useful,” and life is a continuum, that’s also useful. But what we do see when you take the chemistry in life and inorganic chemistry and compare them is a sharp phase transition in the combinatorial space. That’s great. That doesn’t mean to say, before we go and discuss this, that it is impossible to get from inorganic chemistry to life. It must happen gradually, in steps.
But what I think happens is you take small steps, and it builds those systems, and it’s happenstance, and then actually when the systems become established through selection and evolution, you establish this new technology, if you like, that is so far to the right, say left is non-life, right is life, that a clear gap opens up in the middle. So I should say that. So it is not magic, there is a continuum in complexity, but then life has a habit of amplifying that and just running away. It’s technological takeoff. It’s like going now to look for old phones, phones that are plugged into the wall. We know they were there, but everyone’s wireless. Just want to make that point.
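[Editor's note: the "steps" being counted here can be made concrete in a toy model where molecules are strings and each step joins two objects already built, with intermediates reusable. A brute-force search for the minimum number of joins — a sketch of the idea only, not Cronin's actual molecular assembly algorithm:]

```python
def assembly_index(target, alphabet="AB"):
    """Minimum number of joining steps needed to build `target`,
    where each step concatenates two objects built earlier and the
    basic letters are free. Brute-force search, fine for toy strings."""
    best = [len(target)]  # loose upper bound: build one letter at a time

    def search(built, steps):
        if steps >= best[0]:
            return  # can't beat the best path already found
        if target in built:
            best[0] = steps
            return
        for x in built:
            for y in built:
                xy = x + y
                # only build pieces that appear inside the target
                if xy not in built and xy in target:
                    search(built | {xy}, steps + 1)

    search(frozenset(alphabet), 0)
    return best[0]

print(assembly_index("ABAB"))    # 2: A+B -> AB, then AB+AB -> ABAB
print(assembly_index("ABABAB"))  # 3: reusing AB and ABAB cuts the cost
```

The reuse of intermediates is what makes the index a memory measure: "ABAB" costs only 2 because the "AB" built in step one is remembered and used twice.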
Jim: Well, maybe I live in a place that doesn’t have cell coverage. It’s a 20-minute drive to the closest cell coverage.
Lee: Exactly. So there’s still examples, but the phase transition to smartphones, if you were to take all the phones on earth and plot them against the one-
Jim: Very, very quick [inaudible 00:44:12]
Lee: … you will have [inaudible 00:44:13] phase transition, as in inorganic to… So when we saw that we were like, “Gee, that’s really interesting.” And it looks like on earth, the chemistry in biology has this characteristic, and the chemistry not in biology has this other characteristic, and there’s a sharp difference. Now, is it the same on Titan, if there’s life on Titan, or on planet X, where planet X has life? We don’t know, because it’s going to be perhaps a feature of the chemistry that’s being cooked. But we think looking for a phase transition between abiotic and biotic chemistry is going to be a very good signature in the future. And I don’t know, Sara, if you agree that that makes sense. I’m trying to cover the contradiction of how did you get there if it’s not allowed?
Sara: Yeah. Well, I think just a few points of clarification. One is, life can make low assembly stuff. So anything below the threshold can also be produced by life, but it’s just not a definitive signature that living physics is present, because we would expect it could possibly be formed by random chance chemistry, not selection. And so this boundary is this constraint that we expect the universe to have, where selection, or whatever physics is governing life, that mechanism must be in place in order to observe those high assembly objects. Otherwise, there’s no prior expectation we should have that they would ever be produced. And that comes from the argument of the number of steps, and the fact that the space is combinatorially exploding. So at every step you could make a misstep, and also the number of objects in your past is increasing. So there’s this double exponential growth of the possibility space, or exponential if you actually constrain it to be physically realistic.
So then you’re selecting things out of an exponentially growing space. And then we’re also trying to observe them with high copy number, which means that whole causal chain had to be selected to produce it. So that memory that we’ve been talking about has to be instantiated in physical objects. So you can think of assembly theory as basically saying life is a stack of objects that’s very deep in time, objects that are basically assembling other objects. And that entire stack is necessary to get to these high assembly things. And so this is one of the reasons that we see a threshold, and we can theoretically predict there should be a threshold. And actually we’ve done a little bit of that work in past papers, but one of the things that we’re doing now is really fleshing out what this transition from non-life to life actually looks like in assembly theory, and quantifying its properties and some of the mechanisms of crossing that boundary.
Which, for me, is super exciting because I think it’s an interesting way of framing the origin of life problem that you can actually quantify when you think it should happen, and what the kinds of conditions of the chemistry are to mediate the transition. So I think those are predictions we can make from the theory. But also I think when you’re talking about the origin of life, you’re assuming life is different than non-life, in some sense, because you’re expecting a sharp transition. And I think what assembly theory does is because it unifies the non-living and living universe with this description in terms of selection, and how much time is embedded in an object, that we actually now have a meaningful way of talking about when systems cross that boundary, because we have the same descriptive language for describing both the non-living and living universe. And I think that that’s an incredibly powerful way of approaching the origin of life problem. Whether the things we’re doing turn out to be correct, I think that’s the right way of framing it.
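[Editor's note: the combinatorial explosion Sara describes is easy to see in the same kind of toy string model: count every string over two letters reachable within k joining rounds if you pool everything ever built. This pooling makes it a generous upper bound on any single pathway, and it is purely illustrative:]

```python
def reachable(steps, alphabet=("A", "B")):
    """All strings obtainable in at most `steps` rounds of joining,
    where each round may concatenate any two objects already built.
    Pooling everything built so far upper-bounds a single pathway."""
    built = set(alphabet)
    for _ in range(steps):
        built |= {x + y for x in built for y in built}
    return built

for k in range(4):
    print(k, len(reachable(k)))  # 2, 6, 30, 510 - roughly 2^(2^k)
```

Doubly exponential growth in the option space is exactly why observing a high-assembly object in high copy number is so improbable without selection: random exploration is spread ever thinner over the possibilities.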
Jim: Yeah, very nice. That makes a lot of sense to me. So 13 or 14 is the prebiotic range, 15 is in the biotic. How big does biotic stuff get in terms of steps?
Lee: So it gets pretty big. And it’s interesting, we were getting trolled actually on one of our papers, and they were like, what is… And no, I won’t necessarily go into the troll just now, because never feed the trolls. And they were complaining about the numbers and so on. But actually, if you look at the paper we published in Nature Communications a few years ago, we take a load of different biological samples, and E. coli and Taxol are on there. Now, Taxol is a molecule that you can purify and put in, and E. coli is obviously a cell, and you crunch it all up. So the label on that graph for E. coli is all the molecules in E. coli, and there’s a lot of them, and Taxol.
But to answer your question directly: there’s only so much complexity you can compact into a molecule. And then some really interesting things happen, because molecules get big and they become polymers. And when they become polymers, there’s a different way of counting things that we are working out at the moment. But basically you can get molecules that have complexities going up to 30, 40, 50 or 60. So super huge. But there might be another phase transition. And this is something that Sara was… Again, a way Sara and I came together, because she also likes thinking about techno signatures, which I think is super cool because the [inaudible 00:49:21]
Jim: We’re talking about next.
Lee: Yeah, so I’ll let Sara talk about that. So there seems to be another phase transition: the transition from random to evolutionary. Evolution is not that smart, but it is that persistent, and it is going to go, and it doesn’t… But evolution, at some point, builds platforms, and that platform allows abstraction. You can redesign things from scratch. So me, as a chemist, I could go in a lab and make a really complicated molecule that has an assembly index of 50. How could I do that? I just put 50 different elements together in one molecule and I make it a high [inaudible 00:50:00] number and I’m like, “There you go, assembly index 50,” which is quite hard, I would say. But biology could never do that. So suddenly you might even be able to use assembly not just to tell the difference between dead and alive, but between life and technology, and technology implies intelligence.
Jim: I love this, because I’ll tell you what, when I first showed up at the Santa Fe Institute as a retired business guy in 2002, I made an assertion that a lot of people poo-pooed at the time, which is that there are two pretty bright lines in the history of the universe. One is the evolution of life, and the second is the evolution of what we would now call general intelligence, probably closely related to language or something very close to language. And this sounds like your theory actually confirms I was right: that there’s a clear phase transition in assembly theory between biotic and non-biotic. And there may be, and I would say probably is, between… Because I was going to exactly ask you that question. Putting your Chemify hat on, if you throw all your venture dollars at it, what’s the biggest damn assembly number you can come up with using the very hottest, newest synthetic chemistry techniques? Is it thousands? Is it millions?
Lee: No, it won’t be thousands or millions, because it becomes uncountable, because the molecule becomes so big. So here’s a really interesting thing. This is why assembly theory is actually a fairly good theory for being precisely computable. Assembly theory is good when the object can be found in identical copies. And in chemistry you talk about moles of molecules, identical copies. If your molecule is unstable, then it becomes difficult to have a lot of identical copies to write down a molecular assembly. And if your molecule becomes so big there’s lots of errors, like in a protein, where you could have mutation, then suddenly the theory has to change and go up a level. So I think there is, and I’m just finishing a paper on this right now, a finite limit to the amount of information you can store in a molecule.
It’s the information limit in a molecule, and it requires a redefinition of what a molecule is. And it’s so cool. The chemists are angry with me as it is. They’re going to be double angry after this and say, “Ah, this is rubbish.” But I think it’s actually not rubbish; it’s to do with the error… Because let’s just say you and I start making molecules, and as fast as… So I make the molecule and you’re quality control. So I throw a molecule at you, and you go, “Yeah, that’s correct, that’s correct. Make me a bigger one. Bigger one, Lee, come on, bigger molecule.” And then I make a molecule that’s so big, I make an error. And you’re like, “No, there’s an error. Correct it.” So I correct that error and you’re like, “No, there’s an error down here now. Come on, get it right, man.” So if I come-
Jim: If the mutation rate is high enough, you’ll never get a clean copy.
Lee: Exactly. So there is a fundamental limit to the amount of information you can cram into a volume in molecular assembly.
Jim: Yeah, if you assume a mutation rate. Now, my academic, my scientific home field is evolutionary computing, so I’m well aware of error catastrophes and things like that. Which actually reminds me, when we talk about memory, memory’s got to be measured in some effectiveness term, right? Because again, my hypothesis has always been, and I had an amazing four-hour conversation with Stuart Kauffman about this, we were both sitting there going: error catastrophe is a big barrier to the origin of life, in that it’s a mathematical result of mutation rates being too high. The ability for evolution to build very high is limited; it slows down to almost zero pretty quickly. And that’s a phase transition when the error rate per unit of information is above X.
So I’m going to ask, does your theory of memory include some effectiveness index, like fidelity of copying of core informational components such as DNA? The conversation I had with Kauffman was, “Okay, how in the hell did we get from a world without error correcting copying of DNA to error correcting copying of DNA, without error correcting copying of DNA?” And we both agreed this was a very narrow ridge, the passage of which is not entirely clear. So talk to me a little bit about what your memory… what an index on that memory, a quantity of memory, might include on the qualitative side. Things like high fidelity information copying, error correction, et cetera, if that makes any sense.
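[Editor's note: the error catastrophe Jim and Kauffman discussed has a classic quantitative form, Eigen's error threshold: a genome of length L copied with per-site error rate ε outcompetes its mutants only while (1 − ε)^L times its selective advantage σ exceeds one, giving L_max ≈ ln σ / ε. A rough sketch, with illustrative numbers not taken from the conversation:]

```python
import math

def eigen_max_length(per_site_error, advantage):
    """Eigen error-threshold estimate: the longest genome a replicator
    with selective advantage `advantage` can maintain when each site
    mutates with probability `per_site_error` per copy.
    The condition (1 - e)^L * s > 1 gives L_max ~ ln(s) / e."""
    return math.log(advantage) / per_site_error

# Without enzymatic proofreading (error ~1e-4 per base), genomes
# are capped near RNA-virus size:
print(round(eigen_max_length(1e-4, advantage=10)))  # ~23026 bases
```

The "narrow ridge" follows directly: long genomes need error-correcting machinery, but encoding that machinery already requires a long genome.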
Sara: Totally. Lee do you want to take it?
Lee: I’m talking too much.
Sara: I know, but I think you have more to say on this as a chemist than I do as a physicist. But I [inaudible 00:55:03]
Lee: I would say, at the moment, the error correction is actually something you can only see looking back, because you build these machineries. And what we are doing at the moment, and I think Sara alluded to this, is actually looking at, and maybe Sara, this is what you can mention, the scaling and the way selection is building things, and we are going to come up with… And we’ll use that to go into the lab. So that’s a very good way of saying, “Sure, it’s a problem, but we exist and technology exists, so it’s not an insurmountable problem. But rather than speculate with large numbers, we’re actually going to come up with some manifold and then go backwards.” And that’s what Sara’s team is leading at the moment, and we are helping, and vice versa.
Sara: The other thing, so I want to build on what you’re saying, but the other point that I was going to make, so I wanted Lee to start because he’s usually more concrete and I’m much more abstract. But the abstract way I think about it is, usually the canonical error threshold in this error correction is distributed over the object. So if I have an object that’s a sequence, I’m worried about getting the precise sequence of bits in the string. Whereas in assembly theory, the error is distributed over the causal chain. So it’s a different sense of what actually needs to be corrected. Because what you actually want to preserve is the stack of objects making other objects. And the things that are on the tail end of that causation you almost don’t care about, in some sense. You want to preserve that whole evolutionary history.
So I think the error threshold is capturing, if you coarse-grain out all the history of the object, that there’s some issue with reproducing the fidelity of this object. But what we care about is actually how you even get that stack of causation to build those objects in the first place. And as Lee mentioned, this post-selected part of it. So this is one of the ways assembly theory is a most radical departure from traditional theories of physics. When you write a theory of physics, usually you have a law of motion and you have an initial condition. So you have to fine-tune the initial state of your universe and then apply this external law. You have some great programmer in the sky or a God-like entity that can just run the system. In assembly theory, we don’t do it that way, because we’re trying to build a physics that explains observers inside the universe, like living things.
So we look at objects that exist and we deconstruct their history, and then build the local volume around those objects that exist. And then talk about where are those objects in the space of things that could exist. And so in this sense, I think it will have some very meaningful things to say about the structure of why certain objects were selected in the biosphere when they were as a function of time. But some of the traditional framings of questions in origin of life are, they look totally different in this framing. And some of them become very relevant, and some of them are actually just not questions that are the right ones to be asking because they might be too detail oriented about the particular architecture of what evolved.
And what you really want to ask questions about, from our perspective, is how do you get an evolutionary process, and selection should do the work of building these structures, if you have some way of maintaining memory of the past and building up new objects? Because there’s some kind of preservation. Objects that exist can continue to exist, so they get selected for persistence. And so sometimes I think of assembly as a physics of existence. Why do some things exist and not others? And then the explanation for that is always in terms of these causal chains and the local relationship between objects. And that’s another feature I really like, is that some things can only exist with other things. They can’t exist on their own.
Jim: That’s the auto catalytic network.
Sara: Yes, exactly right. I actually think autocatalysis will be an emergent property of some other assembly theoretic principles. And we’re working on that too now. Because I was always a big fan of Stu’s ideas, and I’ve talked with him a number of times over the years about some of these things as well, and so has Lee. But I think one of the things that’s really hard about autocatalytic sets is how brittle they are, and how you have to fine-tune the reaction conditions. You can get closure on a graph, but then you have to put dynamics on it. And it’s really hard to get these things to be stable in the lab. And I think that there are some ways we can explain why you get certain features of autocatalysis that you do in real living systems by embedding them in this evolutionary context that we have in assembly theory. Again, work to be done, because we have so much to do now.
Jim: Such interesting stuff. I have a little bit of time left, and I’ve got two topics I want to get into. So let’s cut this off, even though there’s lots more to be said here. Next: you called this a theory of time. I’m not so sure it actually is. It’s a theory of measuring something that happens over time, but it seems like it could easily be compatible with at least some deeper physics of time. I had Lee Smolin on some time back and we talked about all kinds of crazy stuff, but we also talked about his work with Unger on the reality of time: screw all these guys, there actually is a universal clock everywhere, tick, tick, ticking. It’s not obvious to me that your work is incompatible with that theory, and perhaps other theories on deep fundamental what’s-happening-at-the-Planck-time kinds of things.
Sara: No, no, I think it’s very… Lee and I have both had… this Lee, we have both had conversations with Lee Smolin about this, and I think he probably comes the closest to how we think about time as any-
Lee: I think assembly theory is a theory of time.
Sara: Yeah. And so do you want to take it Lee or can I say a couple things?
Lee: No, no, no. Go ahead. I can then-
Sara: There are some parallels between assembly theory and causal set theories of quantum gravity, in terms of having a causal directed structure, except they treat events and we treat objects. So we think the physical things are the actual objects you see in the universe as physical structures. But I think your criticism is, why not just treat assembly theory as a measure of this thing? Why do you have to treat these things as intrinsic properties of objects? And this, to me, is really the part that’s touching on this new physics, because the problem I always had coming into the origin of life is I thought, intrinsically, the concept of information was really important. But information is this really abstract thing. Nobody can point to information as a physical thing. And information theory itself has lots of problems. And some of the easiest, obvious ones that assembly theory resolves are things like dealing with novelty creation.
So information theory already assumes you have a frequency distribution over objects. So they already have to be abundant in order to even talk about information. And assembly theory is really about how you even get there in the first place. So there are some interesting connections between canonical concepts of information that you might find in complex systems and assembly theory, and what assembly theory explains. But the deeper story, for me, is: can you make information material? And I could trade time as an object for information as an object. But the point of information is that information is accumulated over time. So it’s a temporally embedded structure.
And so if you want to treat the objects of evolution as material objects, which I would prefer to do rather than saying they’re disembodied and they’re not physical, then you have to treat this temporal dimension as a physical feature of the object. And so that embeds the object in time, but it embeds the object in time in the sense that its informational properties, the properties about how it was caused to come into being, the properties about how it’s constrained to exist in the space of all possibilities, all these things that might sound information theoretic in origin, are embedded in this shortest path. So that’s an informational structure. And now you’re saying the molecule is not the thing I hold in my hand, it’s actually this extended temporal structure that captures these evolutionary informational properties.
And to make a new material category of nature, to say time or information in this embedded structure is an object, I think people think is really radical, but they forget that all of the things in our theories of physics have exactly that property. They’re things that we can measure and then we build a theory out of them. So mass didn’t really become relevant as a physical quantity until we were trying to write down equations of motion and equations of gravity, and it became the relevant parameter for describing features of objects over their color or their size. And we treat objects as point particles with a mass. That’s a weird abstraction to do, but it works for gravity. And then we say objects have mass as a physical property, because it’s something we can measure.
And in assembly theory, we say some concept of information or causation is relevant for describing the physics of evolution and how objects come into existence. In assembly theory, we have a way of measuring this, at least for molecules, in terms of shortest path and copy number. So let’s treat that as the object of our theory and really take seriously the physical properties that it has for constructing a theory. And I think for me, this is why I’m willing to take that leap, because I don’t think information is disembodied. And I don’t think when we see abstract properties, even the words we’re communicating, that they really are non-physical. I think they’re just deeply embedded in time, and what we’re seeing is evidence of time.
Jim: Let me point out here, that this is where, if you’re right, you refute Einstein’s block universe.
Jim: Einstein assumed it all happens simultaneously somehow. But if this theory is correct, some things have to happen before other things can happen. So the two are fundamentally incompatible. Lee, why don’t you take that.
Lee: Yeah, I was going to say, so this is really why… So exactly same thing. Assembly theory is to time what mass is to gravity, or assembly index, whatever you want to call it, or assembly.
Jim: Okay. So that’s why you are arguing it is a theory of time rather than an application of time. Because I might argue it’s a way to represent trajectories in time as embedded in artifacts or something like that.
Lee: Yeah, yeah. And I have a great deal of sympathy for saying it’s a tertiary measure and all this stuff, but we are making a leap and taking the extension back to that. Because let me give you a point: in a block universe, novelty doesn’t exist, right? And the future-
Jim: At least it’s not explained. There’s no rational basis for novelty.
Lee: No, no. In the block universe, you can slide around, we can go up and down, you can go to novelty, you can go to the Boltzmann brain, the block universe… A natural consequence of the standard canon of physics is that novelty doesn’t exist. Free will doesn’t exist. There is no future. There is no past. There is just the ability to move around. That’s the consequence. And physicists put all sorts of mumbo jumbo in there, whether it’s the quantum mechanics mumbo jumbo or Everettian interpretations, many worlds. But let me get back to a couple more concrete things. The universe is expanding in time, and that means that, although I am a materialist, space probably doesn’t exist. Space is a local phenomenon that requires this thing that is time, which creates options.
So there are more options in the future than in the past. And so at the birth of the universe, the universe didn’t even have the compute capacity for intelligence. It had to create enough states to find an assemblage that is a brain, and then find assemblages like ourselves and all these other things. And so it’s very important that we understand that these things emerge together. And I’ll just say one further thing. The physicists require four things to be true for the universe to be. You have to have a big bang, which requires… Order is suddenly, “Bing, here’s a load of order, guys, there you go. There’s your second law.” So order at the beginning. Thanks for that religious experience. Number one. Number two, time is emergent. Number three, causation is emergent. And then time is emerging. So we have all these things. You can replace all those four assumptions with just one thing: the universe is intrinsically asymmetric, which, I hasten to add, particle physics already tells us in CP violation.
Lee: So we see CP violation, and yet we put all this other stuff in. So why does assembly theory reveal time? Well, I think Sara puts this very well, in that she sees complex objects as having depth in time. A cell couldn't just suddenly spring into being; it comes from a cell division. There's a lineage going all the way back, a 4.2 billion year old lineage, give or take. What's 200 million years between friends? So when you see that object, you see the depth of all those states that you have to go through. And that's why it's super interesting that we have to reveal time in this way.
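To make this "depth in time" idea concrete: the central quantity of assembly theory, the assembly index, is the minimum number of joining steps needed to build an object when parts built along the way can be reused. The published work defines this for molecules; the toy Python sketch below is our illustration, using strings as the objects, and shows how reuse makes repetitive objects cheaper to build.

```python
def assembly_index(target: str) -> int:
    """Toy assembly index for strings: the minimum number of join
    operations needed to build `target` from single characters, where
    any fragment built along the way can be reused for free.
    Brute-force search; only practical for very short strings."""
    best = [len(target)]  # upper bound: join one character at a time

    def search(pool: frozenset, steps: int) -> None:
        if steps >= best[0]:
            return  # cannot beat the best path found so far
        if target in pool:
            best[0] = steps
            return
        # try joining any two fragments already in the pool
        for a in pool:
            for b in pool:
                joined = a + b
                # only joins that appear in the target can be useful
                if joined in target and joined not in pool:
                    search(pool | {joined}, steps + 1)

    search(frozenset(target), 0)  # single characters come for free
    return best[0]
```

Reuse is the point: "abab" needs only 2 joins (build "ab", then join "ab" with itself), while "abcd", with no repeated structure, needs 3. A high assembly index is the kind of depth in time Lee describes: many states the universe had to pass through to make the object.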
Jim: Very cool. Well, now let's move on to our last topic. Regular listeners know I am obsessed with the Fermi paradox and the search for extraterrestrial intelligence. And of course I get guests to speculate on them even if they have no knowledge whatsoever of the topic. This time I actually have guests whose theory is closely aligned with these questions. And Sara indicated that she'd been thinking about this when she mentioned technosignatures, which in the SETI world is the idea of looking out into the universe not necessarily for messages from little green men, but for Dyson shells around stars or other technosignatures: signs that some massive technology built something that could not be explained by prebiotic physics or the equivalent of biology. So who wants to start with the implications of assembly theory for SETI and the Fermi paradox? Lee, go ahead.
Lee: No, I was going to say Sara has much more to say on that.
Jim: Oh, okay Sara, I was going to say, I thought you were raising your hand. You were pointing to Sara.
Sara: Well I think there’s two conversations to be had. One is the actual, what are possible resolutions to the Fermi paradox? And another is like what would be evidence of technology? And I think-
Jim: Yeah, let’s do the first and then the second.
Sara: Yeah. So I've actually been writing about this a little bit because I'm writing a popular science book, and one of the chapters is titled The Great Perceptual Filter, which is a play on the great filter. And I also wrote about some of these ideas in an essay, AI Is Life, in Noema Magazine recently. But I think people don't understand that technology itself evolves when they're talking about making first contact, because they think of technology as a static concept, like radio waves, the canonical target for SETI. But if you think about all of our sensory perceptions that have evolved in the biosphere as themselves technologies, it's very clear that our ways of seeing have dramatically changed over 4 billion years. So for example, the first cells might have had a single photon receptor, and then the multicellular eye evolved. And then human societies came along and built telescopes and microscopes that allow us to see the most distant parts of the universe and some of the smallest.
And I think the great filter is actually more about the technology of perception: we haven't yet developed the perceptual apparatus to see the structure of the physics of life. And you already see this in AI discussions. If you think life is about causation, we can't yet build into our technological systems a notion of causal structure, of how to actually see objects as extended in time or as causal structures. And so my take on the Fermi paradox is that it has nothing to do with aliens going extinct or their frequency distribution; we actually don't know what we're looking for, because we haven't solved the physics of what life is.
And the analogy I usually make is to gravitational waves, which have been permeating this planet for billions of years. But until we had Einstein's theory of relativity to predict their existence, and then the hundred years it took to build interferometers to detect them, we couldn't make first contact with that phenomenon. We needed this new technology of perception, as Claire Webb calls them, of gravitational wave interferometers, to make contact with that phenomenon and actually see the universe in gravitational waves. So for me, I think alien contact is still potentially in the future.
And as I alluded to before, I think first contact actually might be doing an experiment in the lab, making aliens from scratch, because then we really understand the physics. It's just as particle physics and cosmology are tightly married, where we make new particles in the lab and then have new understanding of the large-scale structure of the universe. I think life is a sufficiently deep phenomenon in terms of the fundamental structure of our universe that if we solve that physics here on Earth, then we might have new lenses and new ways of seeing the universe: to actually resolve where other life might be, or whether it's out there, or to be able to make predictions and testable bounds on why we're not seeing it.
Jim: How about remote sensing? Is it possible with spectroscopy to find assembly numbers greater than 15 in exoplanetary atmospheres, things of that sort?
Lee: Yeah, that's what we're working on just now. So I have three complementary things to add to what Sara's saying. We talk about this quite a lot. The first one is the boring one: if the universe has grown in capability over time, because there are more options available, maybe the first time it was possible to make intelligence, it did, and here we are. That's number one. And if you think about it, go all the way back to the start of the universe, and there's space coming up, "Boom." Well, I shouldn't make pictures because it's all audio. But you've got this point where the universe started, the universe expanded, and then stars formed, and as soon as they could, they exploded and produced heavy elements. Those heavy elements then [inaudible 01:13:35] around other stars to form planets. Those planets got cooking chemistry. However, think for a second about the minimum amount of expansion from the origin required.
So the universe had already been expanding for, say, a billion years. Let's say the first stars blew up quickly because they were the right mass to do so. And then another billion years passed. So you've already got 2 billion years of expansion. So that's quite a lot. That's one reason why you've got a Fermi paradox: the universe started making life everywhere at roughly the same time. Why are we unable to detect it right now? Well, because of the assembly chain. Because basically we have a common… It's not evident that aliens will have any reasoning or any mathematics or any language that we can ever understand in principle, because we have a different origin. And we might have to go back to the origin of the universe to understand the alien. And this may seem weird, but maybe assembly theory will help us decipher this; it will provide a Rosetta Stone for complexity.
But we're only just applying it now; it's emerging literally this year. Before that we did not have the ability to spot the difference between noise and assembly in the universe. And I'm hoping that James Webb and SETI and various others, as we collaborate with them, and the people who take the theory on, will go there. So I think I'm saying two things. Number one, life probably required a certain chain of events: stars forming, exploding, planets forming, and how far away they are from each other. And number two, because each origin of life is a unique event, aliens may work on a different timescale than we do on this planet. Thoughts may take longer, cultures are different. And so the great perceptual filter is not a great… Well, it is a great perceptual filter, but only with respect to Fermi's imagination. And we are here to make it better.
Jim: Sara, looks like you have something to add to this.
Sara: Yeah, I was just going to say, just to give a visual to what Lee's describing. If you think about life as a trajectory through the space of possibilities, alien life is a completely different trajectory through that space. So our ability to recognize it really depends on overlapping histories between objects, or how they share information. Aliens, by definition, are things that are not in our causal chain, and this is, I think, one of the reasons they're so hard to recognize. But I wanted to go back to your question about exoplanets, because it is something we're actively working on, and I've had at least one PhD student in my lab actively working on exoplanet observations for a number of years. And I think the real challenge there is, one, the observations are super hard. And two, our planetary models are not designed with the problem of life detection in mind.
So a lot of the biosignatures people are thinking about right now are these simple biosignature gases. What assembly theory offers there is the idea, which you were just alluding to, that we might actually look for complex molecules in an atmosphere. But what we're trying to do right now is figure out, from first principles, how you would model a planet's atmosphere, or even a planet's evolution, in terms of assembly-theoretic principles and what you can measure from telescopes. And this is a super hard thing to do, but basically the program there becomes: how do you detect how much memory a planet has in it of past states, in some sense? Or what is the structure of evolving complexity on a planet, and how do you detect that remotely? So I'm super interested in being able to do that, because I also think life is a planetary-scale phenomenon.
The idea is detecting a biosphere or technosphere, rather than an individual instance of an individual lineage within that structure. So we have ways of thinking about it, but again, this theory is so new, and we have so much to develop to marry theory with experiments on Earth, and then theory with observations of other worlds. And Lee and I are both very hard-nosed in how we want to do the science, because we always want to connect back to what's actually observable and testable. So the bar there is very high for answering some of these questions in the short term, but we're working very diligently and have amazing postdocs and students in both our groups trying to make those things happen.
Lee: Our imaginations are bigger than our resources [inaudible 01:17:59]
Sara: That’s a nice way-
Jim: That’s a good thing, that’s a good thing.
Lee: But fortunately we have the privilege of training, and collaborating with, a lot of smart people, not always younger, who are willing to work with us and adopt these ideas and carry them on. But I think-
Sara: And do the craziest things. They’re all crazy, but it’s great.
Lee: This podcast actually gave me an idea; we should talk about this later, Sara. Let's just assume that Earth was the first chance the universe got to make life. Then look at that radius from the singularity, the big bang, draw a circle around it, and don't look back, look over there. I don't even know if that makes any sense. But I'm pretty sure that if we do our jobs properly, using James Webb and the telescopes that follow in its heritage, we will have some chance of finding living planets. And I think we must go beyond just looking for things like oxygen or methane; we should look for more complexity. With Sara's notion of planetary atmospheres changing, applying assembly theory models there and looking for weird phenomena that can be explained by an assembly-theoretical approach, we have a non-zero chance of doing it in the next decade or two.
Jim: All right. Now, last question. Your work perhaps gives you some perspective, though neither of you are origin-of-life experts. How hard is it for life to have formed? In the Drake equation we have all these various terms, the pre-filters, and then Robin Hanson's post-filters, et cetera. As any good 14-year-old nerd who read a lot of science fiction, I, of course, assumed there had to be hundreds of thousands of sentient species in the universe. Now I'm much less sure. It might be one, right? Because of issues like the razor's edge needed to get over the error catastrophe with DNA. And then another one is the emergence of the eukaryotes. How the hell did that happen? I'll make the question brisk: does assembly theory tell us anything useful about the probability of life evolving on an otherwise suitable planet?
Sara: I think it will. People always cite these examples of very rare chance events that happened one time in the history of life on Earth. But almost every event that's happened in the 4 billion years of Earth's life is an incredibly improbable event. So when we see improbable events within that space happen multiple times, like the origin of multicellularity, we go, "Oh, that must be easy." But of course you have to scaffold off that entire evolutionary history, and maybe, from all the stacked objects that make multicellular organisms, it becomes an inevitability. And maybe in some cases there are still things that only happened once, because once was enough to fill the space of possibilities and keep the cascade going. So I think the notions of probability we use when we make arguments about the single sample of life on Earth are completely flawed, because the entire causal chain is one structure, and you can't say there was a rare event in that structure, because that structure can't be decoupled from everything else that's happened.
And the second point is, I think, that with assembly theory we can make a prediction about the likelihood of life emerging in certain chemical environments, with certain sets of constraints. And that's something that Lee and I are working on. Lee is trying to build the experiments, based on the same technology he's developing with Chemify, to do these large-scale origin-of-life experiments. And from the theory side, we've talked about this already on the podcast, but we anticipate there to be something like a phase transition, a transition in the physics, at a certain assembly threshold. So the question is, how can we force a system to cross that, and how do we understand the physics of that transition? That's something we're working on now that I think will make predictions about how easy or hard that transition is.
But I think you cannot make statements about the likelihood of the origin of life until we solve that problem. And people will do exactly what you were saying: they'll say, "Oh, there are tons of Earth-like environments, there are probably billions of them out there. Surely life must be an inevitability?" But that doesn't mean anything if the probability of the origin of life is one in a trillion planets. So until we solve that problem, I don't think we can bound the probability of aliens at all. Which is why I think the alien life problem and the origin of life problem are exactly the same problem.
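The structure behind this exchange is worth spelling out: the Drake equation is a plain product of factors, which is exactly why Sara's point bites. One poorly constrained factor, such as the probability that life originates on a suitable planet, dominates everything else. A minimal sketch in Python; every number below is a placeholder for illustration, not an estimate from the guests.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: expected number of detectable civilizations.
    R_star: star formation rate; f_p: fraction of stars with planets;
    n_e: habitable planets per such star; f_l: fraction of those on
    which life arises; f_i: fraction developing intelligence;
    f_c: fraction that communicate; L: communicating lifetime (years)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Identical placeholder assumptions except for the origin-of-life term f_l:
optimistic = drake(2, 0.5, 1, 1.0, 0.1, 0.1, 10_000)     # ~100 civilizations
pessimistic = drake(2, 0.5, 1, 1e-12, 0.1, 0.1, 10_000)  # effectively zero
```

With f_l unbounded, the product can land anywhere from "the galaxy is crowded" to "we are alone", which is the sense in which the alien-life problem and the origin-of-life problem collapse into one.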
Lee: Yeah, I don't know, I think Sara would also claim to be an origin-of-life chemist. I was one once. I stopped being an origin-of-life chemist for one reason: we became historical. Let's go back and look at what could have happened, using the phrase "prebiotically plausible" to justify experimentation. People are doing great work and trying to get at the answers to some things. I just think, for me, I want to be able to solve the problem in real time. And I think there are some things we don't yet understand in the physics and in assembly theory.
And I think the Drake equation was a Sunday, maybe a Friday afternoon, "let's do this" kind of thing, and it has taken on a life of its own. I have a slightly different opinion to Sara. I think that matter is persistent, that matter drives toward complexity, and the only process to get complexity is through what assembly theory explains: selection and so on. Selection is occurring everywhere in the universe. So I think we need to reframe the question: if selection predates biology and evolution, how much selection is required, in the origin of life, to start making that transition? And then we can start thinking about looking for that everywhere. And I agree with Sara that on planet Earth everything is unique. If you take a load of sand and you draw a picture in the sand and then some wind blows it, that's unique, that's unique, that's unique.
What isn't unique, what's common in the universe, is that there is persistence and there is stuff that is going to undergo selection. And so I think we really need to start framing it in that way. And I think that we are… One reason for the long time of the [inaudible 01:25:06] ups and then via photosynthesis and oxygen [inaudible 01:25:12] Maybe Earth has a low global IQ over time because the G was wrong, and on some planets where G is less, you don't need such metabolic effort to move around, so you get to intelligence quicker. We just don't know. But what we do know, or at least have a very strong impression of, is that selection is as fundamental for the emergence of biology as gravity is for the emergence of stars.
Jim: Very cool. This has been one of my favorite episodes ever. This was just fantastic. So I really want to thank Sara Walker and Lee Cronin for a hugely interesting conversation. And check out the episode page at jimruttshow.com for numerous links to learn more about the ideas of assembly theory.