The following is a rough transcript which has not been revised by The Jim Rutt Show or by Jessica Flack. Please check with us before using any quotations from this transcript. Thank you.
Jim: Howdy. This is Jim Rutt and this is the Jim Rutt Show.
Jim: Listeners have asked us to provide pointers to some of the resources we talk about on the show. We now have links on our website to books and articles referenced in recent podcasts, and we also offer full transcripts.
Jim: Today’s guest is Jessica Flack, resident professor at the Santa Fe Institute.
Jessica: Hey Jim. Thanks for having me on the podcast.
Jim: Hey, great to have you on. I'm sure we're going to have a really interesting conversation. Jessica received her PhD from Emory University in 2004, studying animal behavior, cognitive science, and evolutionary theory, and spent thousands of hours recording primate behavior at the world-famous Yerkes National Primate Research Center. Today she's the director of the C4 Collective Computation Group at SFI, which describes its audacious mission as: "we work on how nature collectively computes solutions to problems, and how these computations are refined in evolutionary and learning time. We explore these ideas at all levels of biological organization; from societies of cells, to animal societies, to markets, to machine-human hybrid societies."
Jim: “C4 research projects sit at the intersection of evolutionary theory, cognitive science and collective behavior, statistical mechanics, information theory, and theoretical computer science.” This is some serious shit here, I’m telling you. We’re going to get into it here.
Jim: Jessica's also long been interested in some of my favorite things: the foundations of complexity science and the nature of causality. Now, let's start out with our first question. One of your new research ideas, which you worked on with some other folks at SFI and elsewhere, is that the long-held view that complexity emerges from interactions amongst simple components is wrong. That's a big challenge to the convention. Indeed, our mutual friend Murray Gell-Mann used to remind the SFI community regularly that the full name of what SFI did wasn't complexity science, but rather complexity from simplicity. Tell us about your new view.
Jessica: Yeah, so I don't want to say Murray was wrong, because of course Murray is never wrong, but there's a nuance that we need to add to that understanding of how complexity arises. And I think one of the things that we're coming to understand, with better and better observation and measurement tools, is that lower levels of organization are not necessarily simpler than higher levels of organization. As an example, if you look at a eukaryotic cell… and this is really for people that are not cellular biologists, obviously… it's not just 10 or so organelles floating around in some cytoplasm. It's actually an incredibly complex thing, with thousands of molecular motors and densely packed organelles. Everything's optimally packed. When you look at images of these things today, they actually look like cities do.
Jessica: Especially if you look at a city sped up, like in Godfrey Reggio's films, such as Koyaanisqatsi. So, amazing complexity at the cellular level. And just recently it was discovered, for example, that individual cells, certain neurons, can compute the XOR function. So individual cells, individual neurons, are capable of incredibly complicated behavior, behavior that we thought required a larger system to perform. And so that's the first observation: that in fact, lower levels of organization are a lot more complex than we've been giving them credit for. Now, one could say that, well, the higher levels of organization contain the lower levels, and so even if the lower levels are complex, the higher levels by definition are more complex, in a sort of Russian doll model of complexity. But that's not really, at least as far as I'm concerned, a very interesting notion of complexity, that Russian doll model.
Jessica: And I think what's really going on is that you've got this incredible complexity, lots of parts doing complicated things at lower levels, and then you've got the components trying to figure out how the world works, trying to estimate regularities. And through that process they're doing what we call coarse-graining: getting a handle on what those regularities are, what the things are, in all of that complexity, that they should pay attention to. And you get this kind of compression of that microscopic complexity. And even though the compression, or the coarse-graining, is lossy, and so by definition when you coarse-grain you lose information, the system also gains information, because it gains information about what the right strategies are given those regularities. And as a consequence of this, you get what I call a macroscopic expansion. That is, the capacity for the system to perform new functions, because it has a better understanding of what strategies work in a given environment.
Jessica: So you have microscopic complexity, you have what I’m calling an information bottleneck by virtue of the coarse-graining or the compression of the regularities at the microscopic scale. And then because of that information bottleneck, you get this macroscopic expansion of function. That’s where the emergence comes in. So as far as Murray’s points about complexity from simplicity, they’re still sort of correct in the sense that the information bottleneck is simplifying. So you’re getting the complexity through this coarse-graining, in a way. But unfortunately I think what’s happened is the way people interpret that is they think that it means the microscopic scale is simple.
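The compression-then-expansion picture can be made concrete with a toy sketch in Python. The hidden bias, the sample size, and the yield/challenge strategy here are all invented for illustration; this is not any published model, just the shape of the idea.

```python
import random

random.seed(0)

# Microscopic scale: a long, noisy sequence of interaction outcomes.
# The hidden regularity is a bias; individual outcomes fluctuate.
p_hidden = 0.8
micro = [1 if random.random() < p_hidden else 0 for _ in range(1000)]

# Information bottleneck: compress the whole history into one slow
# variable, the estimated bias.  This is lossy -- the ordering and
# detail of the microscopic outcomes are thrown away.
slow_variable = sum(micro) / len(micro)

# Macroscopic expansion: the compressed estimate supports a function
# (picking a strategy) that no single microscopic outcome supports.
strategy = "yield" if slow_variable > 0.5 else "challenge"
print(round(slow_variable, 2), strategy)
```

The point of the sketch is that the strategy choice is only possible at the coarse-grained level: it is a function of the slow variable, not of any one microscopic event.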
Jim: Perhaps it is simple if you go lower, right? Maybe atoms and quarks are sort of simple, but maybe not. Maybe we dig into them further, we’ll find out that they’re just as complex. Hard to say.
Jessica: That's an interesting point, Jim. I think before we get to atoms and quarks, we know that prokaryotes are a lot more complex than we thought, too. It's been discovered that prokaryotes can perform complicated metabolic functions, and they have compartments that act as something like rudimentary organelles. So again, we underestimate the complexity there.
Jessica: Now, in terms of atoms and so forth, I think one of the things we always have to remember is that physical systems, by virtue of having had many more hundreds of years of good science studying them, look to us now a lot simpler than perhaps they did when these things were first being worked out, right? So as you say, it's not entirely clear to me that physical systems are as simple as we like to think. I think Murray's Eightfold Way provides some backing for that, right?
Jim: Yeah. He added some complexity downward, basically, right? From what used to be a very simple object… okay, here's our proton, or neutron… and then we go, nope, not quite that simple.
Jim: As it turns out, every time you look further, what do we find? What’s the most likely answer? The further you look, not quite so simple. So let’s get back to this idea of micro complexity and a bottleneck. It’d probably be helpful for the audience to have an actual example, and maybe something at the level of a cell moving up. I don’t know. What would be a good example?
Jessica: Let me start with the example that I've worked on, where some of these ideas originally started and where I developed them, and that's actually in primate societies. You mentioned I did thousands of hours of work observing primate societies, macaque and chimpanzee societies. And in the course of that work, I saw that the monkeys were sort of trying to figure out… this was well known in animal behavior… monkeys have a power structure, what's typically called a dominance hierarchy. They have fighting abilities that are kind of intrinsic, that develop over time, very slowly, but those fighting abilities are invisible to the other monkeys. The monkeys have to sort of infer what the fighting ability is through fights. That's how they learn about who's weaker and stronger. They can't just see it. It's not like a bird that has a badge on its chest, where the badge sort of tells the other bird how capable that bird is of winning a fight.
Jessica: The monkeys have to infer this underlying fighting ability through fights. And some macaques, like the pigtail macaque, which is one of the species that I've worked with, have this history of fights with other individuals. And over that history, one individual will learn that it's likely to lose to the other. And if the asymmetry between them is large, so that one individual knows with high probability that it's going to lose, that individual emits what's called a subordination signal. And it emits this signal outside of the agonistic, or fight, context, and it tells the receiver of the signal that the sender recognizes that it's likely to lose, and has agreed to yield if a conflict arises in the future. The sender is agreeing to a subordinate state in the relationship. So this subordination signal summarizes the fight history; it's a coarse-grained representation of that history, and then the two individuals use it.
Jessica: They reference it to make decisions about how to behave in the future with each other. Fights do continue, but they continue at a much lower rate, and the idea is that it's like a background processing [inaudible 00:00:08:41]. The fights are continuing just so that if something changes in a monkey's fighting ability or circumstances, that dominance relationship, that subordination contract, can be reversed. Now, everyone in the group is doing this. They're all learning about each other's fights, and they're exchanging these signals, and the signal is highly unidirectional, meaning that only one individual in the pair gives a signal. So it's a very reliable indicator of this role, and now there's a network of these things, a circuit, if you like. A circuit of these unidirectional signals, and in that circuit is encoded information about the distribution of power. It's essentially a mechanism that the monkeys have for voting on who they collectively perceive to be the most capable individual of winning fights.
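The voting idea can be sketched in a few lines of Python. The individuals and signal edges below are invented, and counting distinct signalers is a far cruder consensus measure than the network measures used in the actual research; it only illustrates how power can be read off a circuit of unidirectional signals.

```python
from collections import defaultdict

# Hypothetical subordination-signal network: an edge (sender, receiver)
# means the sender has agreed to the subordinate role.  Names and
# edges are invented for illustration.
signals = [
    ("dot", "eli"), ("ava", "eli"), ("ben", "eli"),
    ("cam", "eli"), ("ava", "ben"), ("dot", "cam"),
]

# A crude consensus measure of power: how many distinct individuals
# "vote" for you by signaling subordination to you.
power = defaultdict(int)
for sender, receiver in signals:
    power[receiver] += 1

ranking = sorted(power, key=power.get, reverse=True)
print(ranking)
```

With four of five group members signaling to the same individual, that individual sits alone at the top of the consensus ranking, the toy analogue of the heavy tail discussed next.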
Jim: I like that. So you start out with a bunch of monkeys fighting, which is a complicated, many-to-many relationship, complex, I would say, formally. And that gets cooked down, through presumably some genetic inclination, to at least a fuzzy signal by which individuals say "I am submissive to you", a sort of dyadic connection. And then all the monkeys give out those signals, at least all the submissive monkeys, anyone below the alpha, presumably. And then that set of signals unifies across the field of all monkeys in that particular troop to establish the kind of complicated network relationship of the dominance hierarchy. Did I get it pretty close?
Jessica: You did. And I'm just going to change a few things that will [inaudible 00:10:08] understand. The first is that fighting ability, and the ability to win any particular fight, is governed both by this intrinsic ability that develops over an individual's lifetime, so it's a slowly changing thing based on experience, and by some genetic things, like how big the individual is. But that's, of course, also a little bit environmental. And it's based on things like how a particular monkey feels on a given day, or whether its alliance partners are around, and so forth. So, things that are temporally variable. And the reason why the monkeys need a bunch of fights to figure out what the regularity is, is because there are all these fluctuations. So that's why this coarse-graining and use of the signals is an important mechanism: it gives the monkeys a reliable indicator of the overall state of the relationship.
Jessica: It's an important distinction, too, that it's a subordination signal, not just submission, because the monkeys do use similar signals in the fight context to indicate they're going to yield in that particular fight. But by moving the signal outside the fight context to the peaceful context, the monkeys are reducing the uncertainty in the receiver that the signal stands for subordination, and not just submission in the immediate context. So that's a really important innovation, and it relates to the evolution of language… we can come back to this later… and how individuals learn the meaning of signals that are temporally and spatially divorced from their referents. A very complicated and interesting problem.
Jessica: Okay, so now the monkeys have exchanged these signals, and you've got this network of signals, and you can ask, or the monkeys can, by looking over the network of who gives signals to whom, figure out how much agreement there is, based on the way the signals are being exchanged, about who can use force successfully. So this is fundamentally a collective problem, and individuals can make errors in their assessment. That's very important, right? So the power structure that's arising out of this process can be imperfect.
Jessica: It may not [inaudible 00:11:55] directly onto the underlying distribution of fighting abilities. It's the outcome of this collective assessment process. That's super interesting. So the idea is not just that we're recovering a ground truth, because it's hard to see the underlying distribution of fighting abilities, but that we're recovering a kind of collectively constructed ground truth, or view of who's powerful. So in the groups that I've studied… and this isn't true in every group or every macaque society, some are very different… the power distribution that's encoded in this circuit of signals is heavy-tailed, meaning that there are a few individuals who sit out in the tail, who are different from the rest of the group, who are perceived by everyone, more or less, as disproportionately powerful. So they're like our billionaires.
Jessica: And those individuals, because they're perceived this way, pay almost no costs; they get no aggressive response to their interventions in fights. So they can perform functions like conflict management that could not be performed if the distribution were more normal or uniform, where there are none of these billionaires, so to speak. These individuals out in the tail can do things, by virtue of being perceived as so different from everybody else, that they wouldn't be able to do if the distribution were different. That's your emergent function.
Jim: That’s interesting. It shows that diversity has its uses. It basically produces classes that could have different roles than if everything were uniform.
Jim: Another question for you, one of my old favorites. There are some arguments in cognitive science that our reasoning is roughly Bayesian. If that were also true for macaques, or whatever monkeys these were… presumably, if the fighting-ability differential was large, the number of conflicts necessary before one emits a signal ought to be relatively small, and when the fighting abilities are closer to similar, by Bayesian analysis it will take more conflicts before you get a reliable signal. Is that what the data shows?
Jessica: Yeah, that basically is what the data shows. Also, individuals who are very similar in fighting ability often don't signal at all, and they sometimes avoid each other. You would need to fight more to figure out what the difference is, as you point out. And the fluctuations and contextually variable stuff, like how you're feeling today, or the weather, or the presence of your allies, would make so many fights required that sometimes the strategy they adopt is just not to interact.
Jim: That's interesting. That would basically point toward my old favorite, roughly Bayesian, right? Of course, they don't sit there and run Bayes' theorem, but they say, this guy is too close to me. It would take a long time to prove my dominance, right? So it's not worth getting beat up that many times, even if I win.
Jessica: Jim, you raise a very interesting point. You say they wouldn't be sitting there computing Bayes' theorem, and yes, consciously, they're probably not computing Bayes' theorem, but their brains may be doing that. That is a possibility that we, I think, get confused about in the social cases, where we think that we're consciously performing the computations, rather than just layering an explanation on top of a computation our brain is performing.
Jim: Yeah, that was my point, that I am a believer, more or less, that we are unconsciously running Bayes’ theorem in the background for… roughly… for a lot of things. So that’s why I asked the question. It happens to be a kind of a horizontal, interesting idea that I follow up on whenever I happen to get a sniff that it might be relevant.
Jessica: Yes, absolutely. I agree, and there’s a lot one can discuss there.
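The Bayesian intuition here, that a large ability gap resolves in a few fights while a near-even matchup may never resolve, can be sketched with a simple simulation. The win probabilities, uniform prior, and confidence threshold are invented for illustration; nothing here is a model of actual macaque cognition.

```python
import math
import random

random.seed(2)

def prob_stronger(wins, losses):
    """P(true win probability > 1/2) under a Beta(wins+1, losses+1)
    posterior, using the exact binomial identity for integer-parameter
    Beta distributions."""
    a, b = wins + 1, losses + 1
    n = a + b - 1
    return sum(math.comb(n, k) for k in range(a)) / 2 ** n

def fights_to_settle(p_win, confidence=0.99, max_fights=200):
    """Simulate fights with true win probability p_win; count how many
    a Bayesian observer needs before it is confident either way."""
    wins = 0
    for n in range(1, max_fights + 1):
        wins += random.random() < p_win
        pr = prob_stronger(wins, n - wins)
        if pr > confidence or pr < 1 - confidence:
            return n
    return max_fights  # never settled; such pairs may simply not interact

few, many = fights_to_settle(0.95), fights_to_settle(0.55)
print(few, many)
```

Typically the 0.95 matchup settles in a handful of fights while the 0.55 matchup takes far longer or exhausts the budget, which matches the observation that closely matched individuals often just avoid each other.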
Jim: Absolutely. But we’ve got a lot of things to talk about, so maybe we’ll come back to that. Let’s talk a little bit about causality. I know that’s something you’re interested in. It seems like every time anybody digs deeply into complexity, if we’re being honest, we run into causality, or we decide to sweep it under the rug. What are your thoughts on causality?
Jessica: That’s such a big question.
Jim: Give us a big answer.
Jessica: Okay. I've thought about causality in a few particular contexts, which I can mention. One of them is in the context of what's called downward causation, which is this idea that higher levels of organization somehow mysteriously impact lower levels of organization. This has been a debate because often the higher levels are thought to be statistical, and it's not clear materially how this works, how information can influence mass and materials, so to speak. There's been a lot of nonsense around this downward causation discussion. One of the things that we've sort of hit upon in our work is that what's happening… it comes back to the coarse-graining I mentioned… is that as the components of an adaptive system are estimating regularities, that is, doing this coarse-graining, they start to use these coarse-grained representations of the environment, rather than the raw environment, to make decisions.
Jessica: Even if those estimates are wrong, they're still using them to tune their behavior. When the computational capacity of the components of the system is similar, and when the environments that the components are observing are somewhat similar, then these estimates can start converging, and you get what I call collective coarse-graining.
Jessica: And so, through this kind of collective coarse-graining, these statistical regularities start to converge. Everyone's using roughly the same estimates to make decisions. And in that sense, you're starting to get this downward causation. The components are still doing the work; they're reading these global variables that they're constructing through their estimates of the world, and they're tuning their behavior based on them. So there's no mystical issue here; we've simplified the problem, we're being very operational. It's still materially instantiated. The components are doing the work. They're reading the variables. But you're getting this higher level starting to form as those estimates converge, as those coarse-grainings converge. You get one picture, even if it's wrong. More or less one picture, in the system, of how the world works.
Jim: Which we both know, downward causality is something people love to talk about, and are often very slippery about it. Let’s dig in to your view about this a little bit.
Jim: The example I use for downward causality, as a thought experiment, and correct me if I'm wrong on this, is: let's imagine I, me, whoever that might be, decide to move my hand. Now, the atoms in my hand would normally, in the course of life, follow Newton's laws, and just sit where they are. They jiggle around a little bit, [inaudible 00:18:20] and motion, but they certainly wouldn't move 18 inches from right to left all of a sudden. The reason they did that is downward causation, where somebody, maybe it was me, decided, maybe using something called free will, which we'll get to later, to move my hand. And so that downward causation caused those atoms to move. One, is that a reasonable description of downward causation? And if so, how would you apply your thinking to that?
Jessica: I distinguish between simple feedback, apparent downward causation, and effective downward causation. Your atom case is closest to what I would call simple feedback. So there is some consequence, because the world has changed for those atoms, but the atoms are not reading anything about the world and making decisions based on that reading. In apparent downward causation, you're getting tuning: the components are reading, they're making their coarse-grainings, and they're using those coarse-grainings to make decisions. So they're reading, essentially, global variables. But the tuning can be partial and imprecise. In this weaker form, the components need only be tuning their behavior to these estimates of environmental or aggregate properties, and there's some kind of downward causation as soon as that happens. But it doesn't demand that the estimates of the [inaudible 00:19:38] properties are correct, or even good predictors of the system's future state.
Jessica: So their theories for how the [inaudible 00:19:43] works could be wrong. This apparent downward causation becomes effective downward causation, a much stronger form, when the coarse-grained representations, or these aggregate properties, are predictive of the future state of the system. Another name for these variables… we call them slow variables… they're the aggregate properties [inaudible 00:20:03]. The slow variables are robust to small perturbations, the estimates of the variables are used by all components, or most of them, to tune decision-making, and the components or individuals are mostly in agreement about these variables, right?
Jessica: And the estimates are converging, so that there's an increase in what we call the mutual information between the microscopic behavior and the macroscopic property. Your atoms are doing none of these things. In the atoms case, you just have simple feedback. The environment has changed, and that has some consequences for the atoms, but they really haven't had a role in that. In order for it to be downward causation, the components have to have a role in it. They have to be perceiving or measuring some regularities and using those regularities to tune their behavior. That's the critical aspect of it, in my view.
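The mutual-information criterion can be shown directly with a plug-in estimator on synthetic data: when microscopic behavior ignores the macroscopic variable (simple feedback), the two share zero bits; when behavior tracks the macro variable (effective downward causation), they share a full bit. The data below are synthetic and only illustrate the measure.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log2((c / n) / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

# Simple feedback: microscopic behavior x ignores the macro variable y.
independent = [(x, y) for x in (0, 1) for y in (0, 1) for _ in range(25)]

# Effective downward causation: behavior tracks the macro variable.
coupled = [(x, x) for x in (0, 1) for _ in range(50)]

print(mutual_information(independent), mutual_information(coupled))
```

The independent samples give exactly 0.0 bits and the coupled samples exactly 1.0 bit, which is the increase in mutual information between microscopic behavior and macroscopic property that the stronger form of downward causation requires.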
Jim: Okay. Let's take my example one step further. Maybe we can get into that realm. How would you describe the way the muscles behave relative to the concept in the head that says "move your arm," presumably influenced by signaling over the nervous system, et cetera? Is that coming closer to your idea?
Jessica: I don't know enough about the neurophysiology of muscle movement to answer that question, but something like backpropagation in neurons, where you have feedback from a higher level to a lower level and the neurons have to actually make estimates of those regularities for the backpropagation mechanism to work, would be an example. So it all comes down to this: whether the components are estimating regularities and then using those estimates to tune their behavior. And when a majority, or a large number of them in the system, are doing this, that's when I call it effective downward causation.
Jim: So that would certainly fit the case of the muscles, because they’re in a constant recurrent loop with the higher levels of the brain, and they’re constantly adjusting as they go.
Jessica: Yeah. So this doesn't require any… and I have to be careful… sophisticated cognition, because actually, I think there's a lot of sophisticated cognition going on at lower levels. But this isn't just at the individual level; it can describe muscle cells, in principle, and neurons. It certainly seems to describe some neural systems where there are these feedbacks, with the neurons estimating regularities.
Jim: Gotcha. And the regularities in this case… the brain state? What is the regularity that they're estimating?
Jessica: For example, it could be something like a brain state encoded on some timescale, or some circuit property. A simple example would be the random dot experiment. Basically, you have a lot of dots on a screen, moving left or right, and you can control the number that are moving in each direction. And a subject, a monkey, is watching the screen, and has to estimate which direction the dots are moving. And so the neurons receive some input, some signal, that's based on the monkey's intake of this visual scene. And then the neurons have to decide how to respond to that signal, how to fire.
Jim: Okay. And then, so then the state is the dots on the screen?
Jessica: In that case, the state would be, yes, something in the environment. The dots on the screen.
Jim: And the downward causality would be the behavior of the monkey based on those dots?
Jessica: Well, no. In the case of the neurons here, the neurons have to sort of estimate which way the dots are moving, and then they have an opinion about this, and that opinion is encoded at the population level. And an interesting question is, if the neurons have different opinions, how do they come to consensus? And then, presumably, in a little bit more complicated example, that consensus view of what's going on is passed on to another system, another population of neurons in the brain, that maybe controls the motor output, and the direction the monkey looks indicates to the experimenter whether the dots are moving left or right. So the downward causation always has to be from a higher organizational level to a lower organizational level. And so I'm not sure, in that case, whether it's occurring in the population of neurons that's encoding which direction the monkey should look, or further downstream.
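A toy version of this population-level opinion: many noisy model neurons each cast a vote on dot direction, and the consensus is the majority vote. The voting rule and the mapping from coherence to vote probability are invented and far simpler than real motion-sensitive neurons; the sketch only shows how a reliable population opinion can emerge from unreliable individual votes.

```python
import random

random.seed(4)

def population_decision(coherence, n_neurons=101):
    """Each model neuron casts a noisy vote on dot direction; the
    population's consensus is the majority opinion.

    coherence in [0, 1] is the fraction of dots moving coherently
    rightward; each neuron votes "right" with probability
    0.5 + coherence / 2 (an invented tuning rule)."""
    p_right = 0.5 + coherence / 2
    votes = sum(random.random() < p_right for _ in range(n_neurons))
    return "right" if votes > n_neurons // 2 else "left"

# High-coherence stimuli are decided reliably trial after trial; at
# zero coherence the consensus is a coin flip from trial to trial.
trials = [population_decision(0.8) for _ in range(20)]
print(trials.count("right"))
```

With 80% coherence, all 20 trials come out "right": the population consensus is far more reliable than any single neuron's vote, which is what makes it a usable input for the downstream motor system.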
Jim: Okay. It's getting there. It's still a difficult topic, right? It's still not clear in my head. Maybe it is in yours, but I think we all have to do a little bit more thinking about this.
Jessica: Well, the monkey example affords a really easy version of this, in the sense that for the monkeys, the downward causation occurs in multiple contexts. It occurs in the signaling context, when they reference the signal, the subordination signal, for decision-making.
Jessica: So they've estimated the regularity from the fights. They encode that regularity with the signal exchange. They reference the signal to make decisions about how to interact in the future. And then, as the signal exchange consolidates in the network and they start computing the power distribution, they reference the power distribution for decision-making: what's the cost they're going to pay in fights, based on how they're collectively perceived in the group in terms of capacity to win a fight? And so you get downward causation from the power distribution to their decisions during conflicts and other aspects of social interaction. You have downward causation at two levels: one at the signaling level and one at the power distribution.
Jim: Okay. I will call out to our audience that Jessica has written a quite accessible paper called "Coarse-graining as a downward causation mechanism." Is that still a paper that you'd recommend people read to get a sense of your thinking?
Jessica: Oh yeah, absolutely.
Jim: Yeah. I've found that to be quite good, actually. Final question on causality. Are you familiar with Judea Pearl's work, his attempt to unify probability and causality?
Jessica: Yes. I'm not an expert on Judea Pearl's work, but yes, I know his book and his papers, and it relates to some work that we do on robustness.
Jim: What’s the essence of his idea about unifying probability and causality?
Jessica: Well, I'm not sure that I want to summarize that, but I can say a little bit about causality in the context of robustness and some of the challenges that we face in measuring robustness. So, of course, robustness is the ability of a system to maintain a function, or a state, or a structure in response to perturbations. And one of the challenges in measuring robustness is that it's often not visible until there is a perturbation, and you have to be very careful about certain kinds of things. For example, you could do an experiment where you perturb an element or component of a system and you see no change to the system, and you ask yourself, "Why might this be?" Is it that the system is robust to this perturbation? Possibly. Or an alternative explanation is that the component that you perturbed makes a very small causal contribution to the function of interest.
Jessica: So let me give you an example from our monkey society, or a gene regulatory network. In the monkey case, one of the things we did is we knocked out the policing mechanism: this conflict management mechanism where the individuals in the tail of the power distribution break up fights among other individuals in the group, impartially. So they don't take a side, and they can do this because they're perceived to be so powerful by the individuals in the group. And so we knocked out this policing function by removing these individuals from the group and confining them. The removal was partial: they had vocal and visual access to the group, but they couldn't actually perform any interventions. And we had to sort of show in advance that they made a causal contribution to fights, that they actually were impartially breaking up these fights.
Jessica: And so one of the issues in robustness, as I said, is knowing, if you observe no change to a system as a result of the perturbation, whether the component you disabled or perturbed was making a causal contribution. And then what you want to measure, ideally if you can, is the magnitude of that causal contribution. And then you want to measure what we call the exclusion dependence. This is work with David Krakauer and [inaudible 00:27:43]. The exclusion dependence is the change to the system function, or the target function, when you do the perturbation. So now you have the causal contribution prior to the perturbation, and the exclusion dependence: what happens to the system when there is a perturbation. And it's the difference in these two quantities that tells you whether the system is robust. So, in other words, you have to knock out a node that makes a causal contribution, and you have to observe whether, when it's removed, the system changes.
Jim: And so in the case you’re talking about here, you took out these highly dominant animals that were playing the policing function and when you took them out, something changed.
Jessica: And we knew in advance that they were making a causal contribution, by virtue of their intervention behavior, right? And then, yes, we disabled the system and measured the exclusion dependence. You take these two quantities into account so that you're not measuring trivial robustness, essentially.
Jim: Gotcha. I like it. That’s good.
Jessica: So one thing that can happen when you do the perturbation is the system… If you don’t set the timescale of measurement correctly, the system could rewire itself to recover the function in an alternative way. And that would also make it seem like the system had not changed. But if you knew that the components that you were removing were making a causal contribution, then you would question your results. So again, this distinction between the causal contribution prior to perturbation and then the exclusion dependence as a result of perturbation. These are two different concepts and they’re very important to keep in mind.
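The distinction between causal contribution, exclusion dependence, and slower-timescale rewiring can be sketched with a toy policing model. The fight-rate function, the effort numbers, and the compensation rule are all invented; this is only the logic of the measurement, not the actual study.

```python
# Toy policing model: individuals contribute "policing effort" and the
# system-level function is the group fight rate.  All numbers invented.
effort = {"alpha": 4.0, "beta": 1.0, "gamma": 0.05}

def fight_rate(total_effort):
    """System function: more total policing effort, fewer fights."""
    return 10.0 / (1.0 + total_effort)

def exclusion_dependence(node, compensate=False):
    """Change in the system function when `node` is knocked out.
    With compensate=True, the remaining policers rewire to absorb the
    removed node's effort -- a slower-timescale response that can mask
    the knockout if you measure too late."""
    remaining = sum(v for k, v in effort.items() if k != node)
    if compensate:
        remaining += effort[node]
    return fight_rate(remaining) - fight_rate(sum(effort.values()))

# alpha makes a large causal contribution, so its knockout produces a
# large exclusion dependence: the system is not robust to its loss.
print(round(exclusion_dependence("alpha"), 2))
# gamma's near-zero exclusion dependence is trivial: it made almost no
# causal contribution in the first place.
print(round(exclusion_dependence("gamma"), 2))
# Measured after rewiring, alpha's knockout looks harmless, which is
# why the timescale of measurement matters.
print(round(exclusion_dependence("alpha", compensate=True), 2))
```

The three prints separate the three cases: a genuine fragility (large contribution, large exclusion dependence), trivial "robustness" (tiny contribution, tiny exclusion dependence), and masked fragility when the measurement timescale lets the network rewire.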
Jim: Okay, that’s good. Since we’ve been talking about slippery topics, let’s go into one of the slipperiest. What are your thoughts about this thing some people call free will and maybe how does it relate to consciousness?
Jessica: Okay. So I have a somewhat, I think, slightly different take on this, and it is definitely related to two dominant ideas in neuroscience, which I'll come to in a second. But I basically think about free will as the feeling, and it's very important, the feeling, that we can make choices. And then choice is just the ability to select among alternatives, due to the inherent stochasticity in the world and errors in information processing, which I think are common because of all that complexity we discussed earlier. So I distinguish between choice and free will, where choice is the ability to select among alternatives given the stochasticity in the world and a variety of other things, and free will is the feeling that we can make choices. By saying feeling, it doesn't mean that it's an illusion.
Jessica: The important point is that it's a kind of emotion, a feeling about whether we can make choices. And it relates to consciousness in the sense that, for me, and this is again a slightly different take, although it's been talked about probably since humans could talk and I'm sure someone has articulated it this way, consciousness is essentially the degree of access to computations in our brains at the system level. Whether your system is the whole brain of the whole organism or some sub-system, however you define your system, it's the degree of access to computations in the brain at the system level, captured by an effective theory, which is essentially a theory for how the computation works, whether right or wrong. So it's about whether we can assess or understand how the computation is being made. It's not about perception, which I'll introduce too.
Jessica: Perception is about the state of the world, how we perceive the state of the world and elements in those states. You often hear about consciousness as this integrated thing, and I would say that it's perception that can be integrated; consciousness really is about the degree of access to computations in our brain. So we have less access to our motor control computations, to explaining or articulating them, than we have, in principle, to our understanding of social interactions. And one more point I want to add is that this effective theory, or access to these computations, can be given in a variety of different ways. It needn't be in natural language. It could be in mathematics, or it could even be expressed musically or in terms of visual imagery or texture. It's just about an alternative representation. So again, consciousness is the degree of access to computations in the brain.
Jim: What does it mean to access? To be able to see them, to understand them or to use them?
Jessica: To have a sort of theory for how the computation is performed, whether it's right or wrong. So when you walk down a set of steps, you often have no access to the computations your brain is making to ensure you put your foot down in the right place. Your brain is doing that computation but you can't describe it. Now, if you were pushed by a friend, or you tripped, and somebody asked what happened, you might be able to make up an explanation, based on logic or some understanding of geometry, for how your brain performed that computation. So I say our conscious understanding of many of our motor control movements is very low, but perhaps we have better theories for social interaction, so our conscious understanding of social interaction seems to be a bit higher. Although that could be a little bit of an illusion.
Jim: Well, this seems like a pretty high level sort of evolutionary definition of consciousness. I mean, I’m thinking here, the way you’re describing it only applies to humans.
Jessica: No, I don’t think so because that’s why I said that the representation needn’t be in natural language.
Jessica: It could be mathematical, it could be visual imagery, it could be geometric. There are many alternative representations that other organisms could be using to have a theory for how their brain is performing these computations.
Jim: [inaudible 00:33:09] the theory. I like to use Gerald Edelman's distinction between primary consciousness, as he would describe it, the kind that we are pretty confident a dog has, versus extended consciousness, which is the kind that humans for sure have and maybe some of the great apes and maybe some whales and dolphins have. His definition of primary consciousness for a dog is essentially that it's in its own movie of the world, right? It's in a scene and it acts and it feels that it's in this scene, and it doesn't necessarily theorize about that. While extended consciousness, in Edelman's terminology, does start to bring in concepts like self-awareness and the ability to theorize. Is your definition either of those two or something else?
Jessica: Yeah, I mean the definition of consciousness as the ability to access those computations and have a kind of effective theory for how they're being performed is related to Edelman's second notion. But I would not restrict that to humans. I think we don't know the answer to this question yet. And like I said, by allowing many different types of representations, we're not ruling this kind of consciousness out in other animals just because we're focusing on the ability to do this in natural language or mathematics, right? So that's an important point. Now in neuroscience there are two big theories of consciousness, as I understand it. One comes from some, I think mostly French, neuroscientists, and it's basically called the global neuronal workspace idea of consciousness. And it's similar to what I'm articulating in the sense that it suggests there are many different processes going on in your brain, and when you're conscious of something you sort of have access to those computations in many different parts of your brain.
Jessica: And it comes from an idea that, I think, originally came out of the early days of AI, where you have a kind of blackboard and different processes basically post information to this blackboard, and when it's posted to the blackboard it's so-called globally available information. The blackboard of course is small and finite, so you can't post everything there all at once. And the more you can post there, the more conscious you are, so to speak. That's one take on the global neuronal workspace, which comes from these French neuroscientists. I think it's very interesting, and that's related to what I'm saying, although not identical. And then the other idea comes from Tononi and Christof Koch and people like that: integrated information theory, which is, there are different takes on this, but more or less the idea that consciousness is about sophisticated mechanisms that, through integration and modularity, allow you to represent a web of causal effects.
Jessica: And the higher this measure called Phi is, the more conscious you are. The idea is that a higher Phi means you have a network structure that can represent more of these causal interactions. So I would say that what I'm saying is closer in some respects to the global neuronal workspace, except I'm also including this idea of the effective theory, so that you have this theory for how your brain is performing these computations, which brings in a little bit of that Tononi perspective, right? So you've got, on the one hand, the global neuronal workspace, which says you get access to these computations when they're posted to this blackboard, this kind of global posting, and then Tononi, which is emphasizing this web of causal interactions that's made possible by a particular type of network structure where this Phi is maximized.
Jim: Yep. Yeah. I would add a little bit to that, which is that global workspace theory was actually, I think, originally fully articulated by Bernard Baars, who's an American, though it is being pushed today by French researchers like Stanislas Dehaene, etc.
Jessica: Exactly. That’s right. Baars, Dehaene and [Shen Xu 00:36:52], I think it’s the same?
Jim: Yeah, I don’t know that one. In my own work in consciousness, I basically follow Baars with some extensions and corrections and certainly we have Tinoni and Koch with the integrated information, but there’s the third real important one, which is the more bodily emotional theories of people like Antonio Demasio. One of the books that I strongly recommend to people if they’re interested in understanding his perspective is The Feeling of What Happens. He argues that the seed of consciousness is not in the cortex are the thalamic cortical connections or in perception or anywhere else, but it’s deep in the brainstem, in the feeling of having a body and that that is where consciousness comes from. So he would argue consciousness goes way, way, way back, evolutionary. I would say those are the three differing arguments from the neuroscience community. And then of course we have other arguments and people like David Chalmers about the hard problem, which we don’t really need to get into today because that’s above our pay grade.
Jessica: I think what we've covered actually tackles Chalmers' hard problem. What I would add is that I don't really see that there's a hard and an easy problem as Chalmers articulated; I think that's false. The point I made earlier, that free will is a feeling that we can make choices, is related to the embodiment points you're making. And this mind-brain distinction comes back to the coarse-graining and the global neuronal workspace points, which is that by coarse-graining you're coming up with these sort of summary statistics, global variables essentially, that you're posting to this blackboard. And that's where you get the notion of mind, essentially. The summary statistics are coarse-grained representations of neural dynamics, Chalmers' so-called easy problem, that we have some secondary access to.
Jessica: And when we have that access, that’s what I call consciousness. And so like I said, I’m not going to restrict it to humans because I’m not going to limit it to those representations being stated in terms of natural language. They can be any type of representation. So dogs, amoeba, anything that can have a representation of the computations that it’s performing that’s secondary would be a candidate for having some level of consciousness. I just don’t think that Chalmers saying… I don’t believe in that. Again, it’s like the downward causation. It’s mystical.
Jim: Yeah. That's my take as well, even though I can't prove it. My intuition, having been working in this space now for eight years, is that there is no hard problem. Guys like Daniel Dennett would agree with that as well, but we haven't quite been able to figure out exactly how to articulate it. I like your approach to thinking about things like free will. Let me try to put your words into some different words and tell me if I get it or not: if we think of free will as the ability to make choices within a certain level of abstraction, or what you would call coarse-graining, then we can say that we do have free will. I'm currently reading Brian Greene's new book, an explanation of everything about science from the Big Bang to Beethoven essentially, and he argues, "Ah, there's no such thing as free will. It's all fundamentally reductionist physics all the way down." But your analysis would say that at the right level of coarse-graining, to the degree we have choices within that coarse-graining, we have free will.
Jessica: Yes. I think that's basically what I'm saying. I would highlight two things. One, our choice space could be larger than we imagine or understand, and in that case our feeling of free will is less than it could be if we fully understood our choice space. That's one nuance. And the other is that, yes, how you coarse-grain will impact how large you perceive that choice space to be.
Jim: I’m going to branch off here, something not in my notes. So could something like meditation affect our ability to access our free will or change how we feel it or experience it?
Jessica: Yeah, I think there are definitely ways that we could. So let's say we're highly stressed, and so we don't perceive the full choice space. We think there are certain options that are closed to us, but they're not actually. And by making certain interventions, meditation might be one, exercise could be another, where we calm our brains or change our attentional mechanisms or quiet our responses, we might be able to perceive a larger choice space and then increase that feeling of being able to make choices, and in that way, so to speak, increase our free will, possibly. I mean, that's really speculating, and it's a fun thing to think about.
Jim: People who listen to the show regularly know that I'm fairly skeptical about spirituality, I call it the S word, and I've had some very interesting conversations with people who are practitioners of what they call spirituality. I will say I have learned from them and I hope they have learned from me. Next time I run into one of these discussions, I'm going to present your concept of free will and ask, "Explain to me how contemplative practice or other forms of spirituality may or may not expand or refine or improve this concept of free will." So interesting. I'm going to move right along here a little bit. You're a coauthor, with a bunch of my other favorite complexity science folks, on a paper titled The Challenges and Scope of Theoretical Biology. It's a paper I'd also recommend to our listeners; despite its incredibly deep ideas, it is quite accessible and, as I recall, has no math at all. I know it's a huge topic, but could you tell us what you can about it?
Jessica: Yeah, so this is obviously a huge point of interest at the Santa Fe Institute. There seem to be lots of regularities and law-like phenomena in physical systems, and one question is why there aren't as many, or why we haven't identified as many, laws in biology and adaptive systems. This paper is an attempt to address that, and it's a big theme in my work, in fact.
Jessica: So I guess I would step back and start with this observation: energy was at the core of 20th century science, and maybe the preceding, say, 300 years. The focus on energy in physics, of course, culminated with the splitting of the atom during the Manhattan Project. And in biology, this is a really big picture way to get at this difference of why there seem to be fewer laws of biology, for the 20th century and before, the emphasis has also been on energy: how organisms get it, process it, and convert it into offspring.
Jessica: Information has always been recognized as important, right? You've got Wheeler, for example, arguing that bits in fact precede matter in that lovely paper he wrote. And then you've got Jaynes connecting entropy, the number of options available to a system, to statistical mechanics and information theory. And you've got people like Seth Lloyd writing books about the universe itself being a computer. But I'd say that despite this, the overwhelming focus has been on energy, and I've argued, and we sort of deal with this in our paper, that to make progress in biology, information has to be more central. To see why that's the case, it's useful to step back and think about some distinctions between physical and biological systems. So physics is for the most part dominated by concepts like pressure, temperature and entropy that emerge from fairly simple collective interactions, and these kinds of concepts give deep insight into the behavior of the physical universe, right?
Jessica: And so physical particles have properties like position, velocity and mass, and their collective properties [inaudible 00:44:14] world of temperature and pressure and so forth. And you get thermodynamics, a kind of theory for the relationships among the macroscopic or aggregate variables at equilibrium. And you get regularities like the ideal gas law, which is an equation of state that relates the amount of gas to its pressure, volume and temperature. So you get these regularities or laws. Now, biology also makes use of comparable collective concepts, as physics does, but in biology these are concepts like metabolism, conflict management and robustness, some of the things we've already mentioned. And in contrast to physical systems, these are very functional properties with consequences, through downward causation and so forth, for the system and its components. And we're going to get to the laws point in just a second.
Jessica: Physics is essentially producing order through the minimization of energy, and biology is doing this with the addition of information processing. Now, when we look around biology we see not a lot of laws, but ordered states are ubiquitous; there are law-like relations, it seems, all over the place. Why adaptive systems have this extra step of information processing, and whether it's the reason why we see fewer laws in biology, are big open questions that people ask at SFI.
Jessica: So one of the things you want to understand is: what does information processing do in biology? A nice entry point into this is to think of the scaling work of Geoffrey West and his colleagues, Jim Brown and Brian Enquist and others, right? Kleiber, of course, observed a long time ago, a hundred years or so ago, this robust scaling relationship between mass and metabolic rate, such that metabolic rate scales with mass to the three-quarter power. So this is one of the laws we have in biology. And then Geoffrey and his colleagues came along and derived this from first principles, from axioms that capture the energetic constraints, like how energy is distributed through the circulatory system of mammals. So we get this law.
Jessica: But it's a law for a case where there's a strong energetic constraint. Now, like I said, we also have information processing all over biology. So Geoffrey and crew shifted to cities and started asking about scaling in cities, questions like: how do crime rate and patent generation and so forth scale with population size? And one of the interesting things they find is that for a lot of these city variables, the scaling is superlinear; that means there are increasing returns to scale, in contrast to the sublinear three-quarter scaling that was found for the metabolic systems. So here's an interesting observation: in these systems where information processing is important, and that tend to be collective, you get superlinear scaling, and in strongly energetically constrained systems you get this sublinear scaling.
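The sublinear versus superlinear contrast can be made concrete with a quick sketch. The prefactors here are made up; the exponents 0.75 and 1.15 are the sort of values reported for metabolic scaling and for socioeconomic city variables, and we recover them by fitting a slope in log-log space.

```python
import math

# Kleiber-style sublinear scaling: metabolic rate B = B0 * M**0.75
masses = [10.0 ** k for k in range(1, 7)]
metabolic = [3.0 * m ** 0.75 for m in masses]

# City-style superlinear scaling: output Y = Y0 * N**1.15
populations = [10.0 ** k for k in range(4, 8)]
patents = [0.01 * n ** 1.15 for n in populations]

def fit_exponent(xs, ys):
    """Least-squares slope of log y against log x: the scaling exponent."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

print(fit_exponent(masses, metabolic))    # ~0.75: sublinear, economies of scale
print(fit_exponent(populations, patents)) # ~1.15: superlinear, increasing returns
```

An exponent below 1 means doubling mass less than doubles metabolic demand; an exponent above 1 means doubling a city's population more than doubles its output.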
Jessica: So there does seem to be this difference, or this implication, of having information and collective effects in your system. And then the last point I want to make brings us to collective computation, and back to our point about complexity at the beginning of this talk, when you asked about the information bottleneck and the macroscopic expansion. So what does information processing do for the system? Well, my suggestion is that a lot of information processing is erroneous. The components are making mistakes, or have restricted computational capacity, so they cannot perfectly or optimally estimate regularities in their environment. So what do they do? Instead of doing it just on their own, they collectively compute the regularities, they collectively coarse-grain. And in this way they solve the problem that information processing introduces, and that problem is subjectivity.
Jessica: And so the reason why we haven't seen or identified a lot of laws in biology yet is because, in order to see those laws, we need to understand how the system is processing information through collective computation. Once we have that, we'll have a better idea of what the relevant macroscopic variables are, all right? So the basic idea is that because the system is performing these computations, without taking the system's point of view we will never see the laws; in physical systems that is absolutely not required.
Jim: Could you expand that just a little bit? The difference between physical systems and the biological? I didn’t quite get that last sentence.
Jessica: Okay. Right. So the microscopic world is complex, we sort of established at the beginning of this conversation. So how do components get regularity? So they are doing information processing, they’re estimating regularities despite all of that complexity, yet they’re error prone, so they don’t make these estimates perfectly.
Jessica: So information processing in adaptive systems is introducing subjectivity. It's introducing idiosyncratic views of how the world works because of the errors the components are making. And one of the arguments we make in our work is that one way to overcome this subjectivity and produce ordered states is by collectively computing: pooling our opinions of how the world works and getting the consensus view. Now there are consequences of that. One is that we might get a better picture of how the world works. But another is that sometimes we're not recovering a ground truth in the environment, something like the height of the Eiffel Tower, over which we have no control; instead we're computing a social regularity like power. So my point, when I was going through the monkey example earlier about power being kind of the outcome of a collective perception, is that the view of power is in some sense being created as the system moves through its dynamics.
Jessica: So it's collective, and in that sense the power distribution is not a ground truth. You're not recovering a ground truth, you're recovering a collectively constructed variable that is a result of information processing. My point is that in physical systems, where there is much less of this information processing and it's not as error prone, or not error prone at all, the law-like regularities are easy to observe from the observer's point of view. But in adaptive systems, law-like regularities are a little bit hidden, because in order to see them you have to understand how the system perceives the world, how it's processing information, what errors it's making, and how those errors impact what the components, through their interactions, are creating, like in the power case, or in our decision to elect Trump, or whatever. So these things are not based on a ground truth out there in the world; they're based on the collective dynamics, or are an outcome of information processing in the collective dynamics.
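The pooling idea can be sketched in a few lines. This is a minimal illustration assuming Gaussian noise; the 330-metre figure and the noise level are made-up stand-ins for an external ground truth and error-prone components.

```python
import random
import statistics

random.seed(1)

GROUND_TRUTH = 330.0  # e.g. the height of the Eiffel Tower, in metres

def private_estimate(noise=60.0):
    """An error-prone component: its private 'theory' of the quantity
    is the truth plus idiosyncratic noise."""
    return random.gauss(GROUND_TRUTH, noise)

individuals = [private_estimate() for _ in range(500)]
consensus = statistics.mean(individuals)  # the collectively computed value

individual_error = statistics.mean(abs(e - GROUND_TRUTH) for e in individuals)
consensus_error = abs(consensus - GROUND_TRUTH)

print(f"typical individual error: {individual_error:.1f}")
print(f"consensus error         : {consensus_error:.1f}")
# Pooling washes out the idiosyncratic errors, so the collective
# estimate lands far closer to the ground truth than a typical
# individual's does.
```

For a socially constructed variable like power the same pooling happens, but there is no external ground truth to recover: the consensus itself is the variable.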
Jim: And one could say perhaps you have to even take a subjective point of view from the perspective of a component, does that somehow make sense?
Jessica: Exactly. You have to try and figure out how the components perceive the world, what their theories are for how the world works, and how those theories combine to produce their social structure and their collective enterprise. You have to take their point of view, and I think once we start doing this more seriously, we will identify laws in adaptive systems. Another way of putting this is that there are many more candidate macroscopic variables than in the Kleiber case. Kleiber's observations were not mechanistic, they were aggregate-level, macroscopic observations: that mass and metabolic rate scale. In the cities case, there are many, many variables that you could measure. How do you know which macroscopic variables describe the qualities of cities? How do you know what the right ones are?
Jessica: Well, the right ones in some sense are the ones that have downward-causation consequences in the system, the ones that the components are constructing through their interactions and can perceive or read, so they can tune to them in response. So without taking the system's point of view, titrating in some sense between the microscopic interactions and the macroscopic world they're creating, we can't know what the right variables are a priori.
Jim: This is very good. I really like that. That made it ring for me. Let's move on now to another topic. It's actually from the same paper; I will quote, "How much of biological nature can be predicted from basic physical law?" This question is simple to answer: effectively zero. This connects directly with the thorny question of emergence. So we're getting to another slippery topic here. What are your thoughts about the thorny question of emergence?
Jessica: You're going to detect a theme here, but I believe all of these things, free will, consciousness, and emergence, are overly mystified, like the mind-brain problem, right? And I think it comes down to being a little bit more materialist about this. So emergence, for me, is when you have a reduction in uncertainty, and generally slow variables, which is to say coarse-grained variables, are what's giving you that. So again, if you take my macroscopic expansion idea and the power example, which were fairly easy to think through, you get this distribution of power that's sort of encoded in this network. Typically, with emergence, we think there are some kinds of non-linearities that make it hard to predict, but that's not the only thing. So in the power case, the algorithms that are used to quantify the power structure, that sort of tell you how it's encoded in the network or circuit of signaling interactions, have some non-linearities in them, we think.
Jessica: So the power distribution itself has a little property of emergence to it. But the real emergence comes in the functional consequences of having this heavy-tailed distribution, with these few individuals sitting way out in the tail and being able to do this policing. So you get the slow variable and the uncertainty reduction in the power distribution, and then the surprise comes from the performance of this new function, policing. So again, for me emergence is not that mysterious: it comes from slow variables and coarse-graining and uncertainty reduction, and the production of surprise through new function.
Jessica: And I don't think there's more to it than that. I think one thing that's hard to understand sometimes is why it doesn't occur, and there are perhaps other ways than what I just described for it to be generated. For example, there was a study that came out the other day in Nature Physics on schooling behavior in fish that showed that you can get the schooling through noise. The alignment decisions of the fish, when they're noisy, can push the fish from a poorly aligned state to one where they're highly aligned. In that sense, that's a very different mechanism than what I've been suggesting in the power case with the coarse-graining; there's no coarse-graining necessary there, and that's interesting. But you have emergence potentially in both cases.
Jim: Yeah, very interesting. For those interested in really digging into emergence, one book I'd recommend is The Emergence of Everything: How the World Became Complex by Harold Morowitz. He basically describes our whole universe as something like 28 emergences, from quarks up to social systems, and it's actually quite interesting. Have you read that one?
Jessica: I’ve looked through it. I knew Harold of course, great guy.
Jim: Yeah. He was actually my first mentor in the complexity area before I even came out to Santa Fe when he was at George Mason. When I first retired from business, he was the first guy I ran into.
Jessica: Well, I didn’t know that.
Jim: He had a huge influence on me, to say the least. This one we can punt on if you don't have good thoughts on it, but Prigogine and his far-from-equilibrium dissipative structures: do you find that in any way useful in thinking about or understanding emergence? Prigogine's idea, and this of course is not in the Santa Fe tradition, it's what they call the Brussels tradition, I think, is essentially that complexity emerges in far-from-equilibrium systems, and essentially it's the way the universe dissipates far-from-equilibrium energy flows: the complex systems get spun up, and they use up the energy and degrade it via the second law, and that's what they do. And that's essentially where complexity comes from.
Jessica: I haven't read Prigogine in many, many years, but I will say one thing. We obviously don't really have the frameworks to deal with non-equilibrium behavior in adaptive systems, but I do think adaptive systems recover some of those equilibrium properties by having kind of overlapping timescales, so that they create these slow variables that I keep talking about, these coarse-grained variables, which essentially give an effective equilibrium. The slow variable creates a background that, from the point of view of the component, is more or less stable or constant, and against which it can adapt. So on the one hand we need better formalisms and approaches for working non-equilibrium ideas into adaptive systems, for sure. On the other hand, I think adaptive systems have developed ways of recovering essentially equilibrium states through these kinds of effective [inaudible 00:56:24] slow variables, having that kind of effective equilibrium as a consequence of these slow variables.
Jim: Okay, that's good. Yeah, I always throw in a Prigogine question with SFI people because I notice that most of them either push back or don't know too much about it. I do think it's an area of complexity science that I wish the SFI people would learn more about, because I do think it has some merit. That's my editorial comment for the day.
Jessica: I think that's a fair point, Jim. It's one of the things that we sort of covered or learned about in early days, in our early work on complex systems and at the summer school and so forth, and I haven't really thought about it much since. I should take a look at it again.
Jim: Cool. Let's move on to the next topic. In your Edge bio, you say a central philosophical issue behind this work, meaning your work, is how nature overcomes subjectivity, there's that word again, inherent in information processing systems, to produce collective ordered states.
Jim: That's pretty much a summary of a lot of things you've talked about, but you do pose it as a philosophical issue. So what do you think are the philosophical implications of your work? Are there truly metaphysical issues that arise, or can complexity, including emergence, be described strictly in the language of science inside a realist program with no recourse to metaphysics?
Jessica: I think that in the end, science is sufficient. I do believe that. But most of the work that I do, and some of us at SFI fall into this camp, is very much in the business of taking concepts from philosophy, or that are sort of outside of science, and turning them into questions that can be approached using scientific methods. So all of my work sits on that edge, on the border between philosophy and science. We're very much in the business of taking these questions that maybe seem ill- or poorly-defined from a science perspective and turning them into a model that can be analyzed mathematically or approached with data. So I'm at that border for sure, and there are other SFI people who are not anywhere near that border and are much more interested in developing profound methods for thinking about, say, networks, but their questions are very clearly defined from the get-go. So that's my partial answer to your question.
Jim: Give me an example of part of your work that’s actually informed by an actual philosophical question.
Jessica: Certainly this distinction between a collectively constructed macroscopic reality and a ground truth, right? And downward causation is another one. Most biologists, or most people who work in adaptive systems, are generally thinking about the system learning about states of the world, where the states of the world are exogenous to the system, outside its control. And I think we need to take seriously that, by virtue of this error-prone information processing, a lot of the world is being collectively constructed. Those ideas originally come out of certain areas of philosophy, and definitely some of the structuralist and early 20th century anthropologists were thinking about these kinds of things, Claude Lévi-Strauss and others, Ruth Benedict. And I studied that work early on. There was a lot of influence from those literatures in the early days on my original thinking about this stuff.
Jim: Yeah, that’s good. Yeah, it is interesting because some of the most interesting problems are right in there in that liminal space where we don’t even quite know how to fully state the problem correctly.
Jessica: That’s right.
Jim: At least initially, and so we should iterate on what the problem is exactly. And once you get the problem stated right, sometimes it’s not that hard to figure out.
Jessica: Yeah, that’s sort of what we do. It’s exactly that.
Jim: It’s actually one of my insights into artificial general intelligence: a lot of the work being invested in today by companies like Google is in so-called question answering capabilities. I keep pushing back and saying I want to see question asking capability. That would be far more impressive, actually.
Jim: People look at me like I’m nuts, but I think you get it?
Jessica: Yeah. No, I highly value that, that’s true.
Jim: I’ve found, frankly, in my working with scientists that the really best ones are the ones that are the best at asking questions. And the variance in their ability to answer questions is less than the variance in their ability to ask questions.
Jessica: That’s good because I feel that’s my skill and sometimes I have no ability to answer my own questions.
Jim: Well, that shows you that you’re brilliant. So, that’s good by the Rutt rule of how to measure science, so that’s a good thing.
Jessica: Only if it were really true.
Jim: I love it. Another thing you wrote recently, I don’t know if it was recently or not, is chapter nine in a work called Social and Economic Engineering.
Jessica: Mm-hmm (affirmative).
Jim: I read that and it reminded me of the possibly false road many of us nerdy kids went down when we read the Foundation trilogy way back yonder, with Hari Seldon and his psychohistory and social engineering. And of course it’s worth noting that the Foundation trilogy, it’s just been announced, is going to be made into a TV series from Apple. That should be interesting. How the heck are they going to make a science fiction series from a really deep intellectual story that has no sex and no violence? That may take some work.
Jim: But anyway, to the topic of social and economic engineering, you seem to take the line, if I read it correctly, that, hey, maybe we can now do some social engineering and economic engineering. What say you to that?
Jessica: Yeah. So I’ve written a couple of little essays on this, and I think a lot of the engineering that we’ve done has been sort of reactive and qualitative. With big data, with some methods and concepts that I’ll talk about in a second, and with machine learning and so forth, I think we are at a point where there could be a big change in how we engineer things. You see a little bit of the foreshadowing of that with the Facebook stuff, both from the science side and the intervention or interference by Russia or whoever on Facebook. Basically what’s happening is that we now have vast quantities of microscopic data on individual behavior, and we’re in a position, for the first time, and it comes back to these micro to macro maps.
Jessica: An aside, and it’s an important aside: in Geoffrey West’s work on scaling, they had the macroscopic observable, mass scales with metabolic rate, and they had some axioms from which they derived the microscopic, so they built this micro to macro map. What I’ve been saying for information processing systems is that you can’t build that micro to macro map without having some idea of how the components are processing information, how they’re building their effective theories for how the world works. So to do that, you need good microscopic data. You need data on what they’re doing, how they see the world, what their strategies are. And so with Simon DeDeo and David Krakauer, we developed an approach called inductive game theory a while ago, which was basically a kind of negative response to simple game theory models, where the strategies and the payoffs, everything, are being posited in toy models. And we said, well no, we want to go into the data and actually pull the strategies out.
Jessica: So you have this rich time series on individual interactions and behavior; go into that time series and try to extract the strategies that the individuals are using to make decisions in their social environment. Once you have these strategies, you build up how they collectively interact, and to do this we built circuits of how these strategies interact to produce a macroscopic property of interest in an animal society, like fight size or the power distribution. It could be a city variable, anything really. And so it’s a very different way of building a micro to macro map than in the mass and metabolic rate case. One of the reasons we’ve wanted to build these maps, in addition to the foundational reason, which is getting at laws in information processing systems and figuring out what those laws are, is that we’re taking the system’s point of view, having all of those strategies and data from the system itself.
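The extraction step Jessica describes, pulling decision strategies out of a time series of fights, can be sketched in a few lines of Python. This is a toy version: the function name, the simple pairwise "follow" rule, and the threshold are illustrative assumptions, not the published inductive game theory method.

```python
from collections import defaultdict

def extract_pairwise_strategies(fight_series, threshold=0.5):
    """For each ordered pair (a, b), estimate the probability that b
    appears in the next fight given that a appeared in the current one.
    Pairs above `threshold` are kept as candidate 'join' strategies."""
    joint = defaultdict(int)     # (a, b) -> times a at t and b at t+1
    marginal = defaultdict(int)  # a -> times a appeared at t
    for current, nxt in zip(fight_series, fight_series[1:]):
        for a in current:
            marginal[a] += 1
            for b in nxt:
                joint[(a, b)] += 1
    return {pair: joint[pair] / marginal[pair[0]]
            for pair in joint
            if joint[pair] / marginal[pair[0]] >= threshold}

# A toy time series of fights (sets of participants at each observation).
fights = [{"A", "B"}, {"B", "C"}, {"A"}, {"B", "C"}, {"C"}]
strategies = extract_pairwise_strategies(fights)
```

The surviving pairs are the raw material for the circuit-building step: each one is a candidate edge in a circuit that maps individual decision rules to a macroscopic observable like the fight-size distribution.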
Jessica: Another reason is that we thought having these micro to macro maps, and then understanding how you simplify them, or how the system simplifies them to make decisions, would tell us in a more informed and potentially highly quantitative way what interventions we could perform to induce changes at the macroscopic level, in social structure for example. So if you had a theory for how the power distribution was arising in a given system, and, coming back to our causality discussion, you knew what the dominant causal contributors are to a particular power distribution, then you could intervene in a more informed way on those variables or individuals to change the type of power distribution that you see.
Jessica: And so the basic idea is that by developing methods for studying micro to macro maps in information processing systems, we could make more informed interventions and potentially engineer outcomes in social systems. And I think we are going to be able to do that. Now of course, one complexity that arises is that everyone can do this. When everyone adopts this approach and builds better models using microscopic data, you’ll have multiple different groups, nations, corporations, whatever, intervening together, and you’re going to create a lot more complexity. And because there are all these coupled systems competing, it might become very hard to control the consequences of your intervention.
Jim: Oh, coevolution. Well, we’ll have a coevolution of complex interventions essentially.
Jessica: Exactly. And I think that is going to happen now.
Jim: Yeah, because it’s already happening.
Jessica: Yeah, exactly.
Jim: We just don’t know it yet. Very good. Very good. I like that. That last insight is actually the coolest thing I think that we’ve said on this show is that as we get greater insight in how to control complex social systems, inevitably there’ll be a co-evolutionary war of such interventions.
Jessica: Yeah, it’s just another version of Goodhart’s law.
Jim: Yes, indeed. Let’s get back to the Hari Seldon thing, just for fun. Both Jessica and I are really serious Lord of the Rings nerds. I mean, I think I’ve read it 39 times.
Jessica: You’ve [inaudible 01:06:32] me.
Jim: That’s all right. [inaudible 01:06:33] as Lord of the Rings nerds. As I recall, in some of our conversations we said, “Okay, what character are you?” I think you said you were [Elegon 01:06:40], is that correct?
Jessica: Yeah, I’m [Elegon 01:06:41].
Jim: That’s right. I’m Tom Bombadil without a doubt, right?
Jessica: Oh, that’s a good one, I think that’s a great choice. I’m a little jealous.
Jim: But anyway, back to the Foundation trilogy. Of the folks working on this kind of work that we know about, I’ve just decided that Simon is Hari Seldon.
Jessica: That’s funny you say that because, so Hari, when I give the talk … I haven’t given it in a couple of years, but I used to give the talk on inductive game theory and the circuits, the micro to macro map, pretty regularly, and I did that work with Simon. And every time, particularly in audiences with physicists, I would be accused of being Hari Seldon.
Jim: Yeah, so maybe you’re Hari Seldon.
Jessica: Well, or maybe it’s the inductive game theory, that’s Hari Seldon
Jim: Though Simon looks like Hari Seldon.
Jessica: He looks like Seldon, exactly.
Jim: That’s Simon DeDeo by the way, who was my very first guest on the Jim Rutt show. So go look at episode number one and you can hear a very, very interesting far ranging conversation with Simon DeDeo AKA Hari Seldon. Let’s go on to another topic. Agent-based models have long been part of the work at SFI exploring complexity. Do you have a view of the strengths and dangers from that approach?
Jessica: Yeah, so I have the sort of canonical view of the strengths and dangers. The standard thing you hear everyone complain about is that you put anything in and anything comes out: garbage in, garbage out. And so I think there’s a lot of bad agent-based modeling out there. There are these little agent-based modeling environments, and even without thinking hard, the same happened of course for statistics with SPSS and so forth, anyone can use these programs to build an agent-based model, and so a lot of bad science gets done.
Jessica: Now of course there’s good agent-based modeling, and Epstein and Axtell and some of these other guys are doing very thoughtful work. But at the crudest level, an agent-based model is just tracking individual-level behavior, tracking individuals. That could be the simplest definition of an agent-based model: it tracks individuals. And so in that sense, a lot of modeling frameworks fall under agent-based modeling; they’re just not what we canonically think of as agent-based modeling. In that sense too, the inductive game theory that I just mentioned falls in that space. And the difference between, I’d say, the way that we do inductive game theory [inaudible 01:09:03] and the way that agent-based modeling is often approached is that the strategies that we use in our models come directly from the data. And so I’m fine with, and in fact advocate, this approach if your strategies are empirical, and if you are rigorous and thoughtful about how many of them you put in, right? You don’t want to take into account a huge number of strategies. You want to have some understanding of what the most important ones are and begin with those. So yeah, agent-based modeling is powerful. It can give you a lot of insight. It’s just that it’s easy to abuse.
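The contrast with posited toy-model strategies can be made concrete with a minimal forward simulation, where the agents’ only rules are empirically estimated pairwise probabilities. Everything here, the dictionary format, the restart rule, and the names, is an illustrative assumption for the sketch.

```python
import random

def simulate_fights(strategies, agents, steps, seed=0):
    """Forward-simulate fight sizes from empirical pairwise strategies:
    strategies[(a, b)] is the estimated probability that b joins the
    next fight given a is in the current one."""
    rng = random.Random(seed)
    current = {rng.choice(agents)}  # seed with one random participant
    sizes = []
    for _ in range(steps):
        nxt = set()
        for a in current:
            for b in agents:
                if rng.random() < strategies.get((a, b), 0.0):
                    nxt.add(b)
        if not nxt:                 # fight died out; restart with one agent
            nxt = {rng.choice(agents)}
        current = nxt
        sizes.append(len(current))
    return sizes

strategies = {("A", "B"): 0.9, ("B", "C"): 0.8, ("C", "A"): 0.7}
sizes = simulate_fights(strategies, ["A", "B", "C"], steps=50)
```

Because the rules come from data rather than being posited, the simulated fight-size distribution can be compared directly against the observed one.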
Jim: Yeah, that’s a very good point. I recently attended a workshop organized by Josh Epstein and Rob Axtell and some others called Inverse Generative Social Science.
Jessica: In January, right?
Jim: Yep. Yep. And one of the points was, hey, let’s start with the data when we can. What a concept, right? And then let’s generate things, maybe with genetic programming from the data, and then let the agents be those, right, as a way to just not make stuff up. Because I remember when I first retired from business and started playing pseudo-scientist, one of the first things I played with was agent-based modeling, and my models would have 47 variables, which I just made up.
Jim: When I got out to SFI, I quickly got schooled: 47 variables? That’s ridiculous. Three. Three variables, and two or one is better, right? And they were right. So you’ve got to be really careful when you do this stuff, but you can get some amazing insights. And you mentioned Josh. By the time this episode comes out, his episode will be out. I have an episode with Josh Epstein, that is, which will come out on the 16th of March.
Jessica: Yeah, I mean, I’m a pluralist. I think that we gain insight by using lots of different tools and approaches, and so agent-based modeling, or tracking individuals, coupled to an extremely reduced, more phenomenological model that you can analyze. You want multiple perspectives on a problem; they each do different things, and it’s sort of the collective outcome that helps you understand what’s going on.
Jim: Very good. Well you’ve written a lot about social policing and its effects on social systems. I really like your paper Policing Stabilizes Construction of Niches in Primates, back to our apes and monkeys. Tell us about that.
Jessica: What we did in that study is we wanted to assess the contribution of policing to social system robustness. So I already talked a little bit about that, and we developed a set of experimental techniques to identify the consequences of the policing for robustness by taking into account both the causal contribution and the exclusion dependence that I mentioned earlier. Basically, in that study we had a colony of macaques in an enclosure about half the size of a football field. There was a heavy-tailed distribution of power, and the individuals in the tail performed policing behavior.
Jessica: And we removed them temporarily over the course of about 20 weeks, on randomly chosen days, once every week or so. And we studied how social networks and various social variables changed when they were removed, and inferred from that the contribution of their policing behavior to things like how connected the networks are, how negative and positive behaviors spread over those networks, how clique-ish the societies were in the presence and absence of the policing mechanism, and how often individuals showed aggression versus socially positive behaviors as a consequence of the policing.
Jessica: So yeah, it was one of the first of what I would call behavioral knockouts. Most knockout work is done on genes or in molecular systems, and there really hadn’t been rigorous knockouts in social systems; this was one of the first. There had been much earlier studies in the 60s where scientists just removed the individuals they thought were strong or powerful, without really a theory for what would happen as a result of those removals.
Jessica: But, we had a lot more of the ideas worked out when we did this knockout study. It was fun. And what we found was that in fact, what the policing mechanism did is it made the system more integrated. Individuals had a larger number of partners in their social networks. They had better relationships with those partners and there was less volatility, right? So there was less aggression. The policing seemed to allow individuals to build better local social environments.
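The logic of the knockout can be sketched as a before/after comparison on a social network. This is a toy version under simple assumptions (mean degree as the only robustness measure, made-up individuals); the actual study used richer network and behavioral measures.

```python
def mean_degree(edges, nodes):
    """Average number of social partners per individual."""
    degree = {n: 0 for n in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return sum(degree.values()) / len(nodes)

def knockout(edges, nodes, removed):
    """Remove a set of individuals (e.g. the policers) and all their
    edges, mimicking a temporary behavioral knockout."""
    kept_nodes = [n for n in nodes if n not in removed]
    kept_edges = [(a, b) for a, b in edges
                  if a not in removed and b not in removed]
    return kept_edges, kept_nodes

# Toy network: policer "P" connects otherwise weakly linked individuals.
nodes = ["P", "A", "B", "C", "D"]
edges = [("P", "A"), ("P", "B"), ("P", "C"), ("P", "D"), ("A", "B")]
before = mean_degree(edges, nodes)
after_edges, after_nodes = knockout(edges, nodes, {"P"})
after = mean_degree(after_edges, after_nodes)
```

The drop in connectivity (together with the rise in aggression) when policers are absent is what quantifies policing’s contribution to robustness.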
Jim: So the hippies were wrong. You can’t off the pigs.
Jessica: Yeah. So I don’t know. I mean, I think there are lots of things to discuss here about inequality and egalitarian societies and our sort of knee-jerk reactions to the 1% and so forth. I think we really need to approach that in a more thoughtful way. It’s not clear to me that having a system with billionaires is necessarily problematic. It may in fact be that what you need are certain kinds of regulations to incentivize good behavior. But when you have individuals who are really out there in the tail, their ability to do things faster or more efficiently than, say, government, might be quite a bit higher.
Jessica: And so these are trade-offs we need to really think hard about, if we’re going to have a system where government can do certain things and individuals out there in the tail can do other things. But because they could potentially go rogue, we want to create incentives so that they do the right things.
Jessica: So for me, it’s an empirical question what the best system is, not an ideological one. And I wish there were more empirical studies of what the consequences of these different social structures are. There are some in economics, but I think we’re really only beginning to get a sense of the full picture.
Jim: Yeah. Folks we know, Sam Bowles and Herb Gintis have done work along with some other folks that strongly imply that the kind of work that you saw in primates also applies to humans in some of the simple games that they have set up all around the world cross-culturally, where the existence of the ability to punish bad actors reduces bad actions. Do you have any thoughts on how this kind of work applies to humans?
Jessica: Well, you mentioned punishment. One of the things that was very important in the primate robustness and conflict intervention work is that the policers are perceived to be disproportionately powerful, so no one’s going to challenge them. But it seems to be critically important that they actually don’t use force when they intervene. When they intervene, it’s impartial, and they generally use, if any, very low levels of aggression, not much more than a threat.
Jessica: Now on the one hand, they don’t need to. But if you look at the sociological theories of power, and one of the big influences on me was this guy Bierstedt, who wrote some very nice work on power and what it is, some of the cleanest perspectives on that. I read quite widely in that literature.
Jessica: When you use power, you risk losing it. That’s one thing. And another is that it’s a credibility thing. These individuals are out in the tail, and they’re out there because of the collective dynamics and, to some extent, choices made by the other individuals in the system. So if they were abusing their position, they would probably receive fewer signals because they would be avoided, and so the opportunity to give the signals wouldn’t be there.
Jessica: And so they would actually have less power, because the signals are the key to the stability of their power, right? So this is a really important point: they risk losing power when they use force in these contexts. That’s one. And another is that to some extent the signaling dynamic is really like voting in the human case, where you’re making a choice for one leader over another, and you can influence that outcome by avoiding an individual and not giving the signals.
Jessica: And you see that quite clearly in chimpanzee groups, unlike the macaques. Remember, in one group at Yerkes there was an alpha and a beta male. The beta male gave a subordination signal, it’s called a pant-grunt in that case, to the alpha male. But the alpha male was very aggressive. And so in the dyadic relationship between the alpha and the beta, the beta had to give the signal, but the rest of the group did not like the alpha.
Jessica: And so alpha is defined by receiving a signal from everybody else. But the most powerful individual in the group is the beta male. So the beta male, even though he gives a pant-grunt to the alpha male, receives more pant-grunts from a larger or more diverse group of individuals than the alpha male received. So we said that beta male was the one who was perceived to be the most powerful individual in the group even though technically the alpha male is the one that receives pant-grunts from everyone.
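The distinction between formal rank and perceived power can be made concrete with a consensus-style score: count how many distinct individuals send subordination signals to each receiver. This is a crude proxy with hypothetical names; the power measure in the published work is more sophisticated.

```python
from collections import defaultdict

def power_scores(signals):
    """Score each individual's power by the number of distinct senders
    of subordination signals (pant-grunts) it receives."""
    senders = defaultdict(set)
    for sender, receiver in signals:
        senders[receiver].add(sender)
    return {individual: len(s) for individual, s in senders.items()}

# The beta gives a pant-grunt to the alpha, but receives pant-grunts
# from a larger, more diverse set of individuals.
signals = [("beta", "alpha"), ("c", "alpha"),
           ("c", "beta"), ("d", "beta"), ("e", "beta"), ("f", "beta")]
scores = power_scores(signals)
```

On this score the beta outranks the alpha, matching the observation that the diversity of signal sources, not the formal dyadic relationship, tracks perceived power.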
Jessica: And what you would actually see the chimps doing is they would pass by the alpha male, within his sight, in his field of vision, and give, in this sort of very irreverent way, pant-grunts to the beta male.
Jim: Ah ha.
Jessica: [inaudible 01:17:54] pant-grunts to the alpha male. I never published any papers on this because I didn’t have enough data, so this is a little bit speculative, but it did really seem like a choice on the part of the chimps in terms of voting. They were saying, look, I’m not voting for you. I’ll give you a pant-grunt if I have to, but I’m not really voting for you. I prefer this guy, whose interventions are more diplomatic, actually.
Jessica: And he was a better manager of the group’s social interactions than you, who use force too much. So, really interesting stuff. This notion of power, how you maintain it and how you use it, I think it needs a lot of careful thought.
Jim: Yeah. It sounds like we take that model, it’s an argument for kind of self organizing, network based authority as opposed to top down role based authority.
Jessica: Yeah, I mean, it can become top down if you develop institutions and norms and rules that allow it, that comes back to the slow variable point, and make it hard to change, right? It effectively transitions into something that is top down even if it started bottom up.
Jim: Yep. Let’s move on to our next question. We’re getting close to our time limit, so it will be one of our last, maybe not the last. You’ve dug as deeply as anybody I know into what might be called the foundations of complexity. I sometimes warn folks that complexity science is still a baby science. Now I will say, since I dug deeply into it myself, it’s been at least 10 years, and I’m sure a lot’s happened. Is it still true that complexity science is a baby science, and if so, what do you think are the next areas where the foundations might become solid, or will they ever?
Jessica: Well, I mean, I think it’s definitely still early days, compared to physics, for example. And then there’s a distinction, too, between people working in complexity science, maybe some theoretical biology and so forth, and the rest of biology and the social sciences in particular. And so things might be moving kind of fast within complexity science, but the results and approaches and ways of thinking still haven’t spread as far as they need to.
Jessica: So I can still go to a biology department and give a talk and find that they’ve never walked over to the physics department, or that the idea of using a maximum entropy framework or an Ising model to study a biological system seems silly, and sometimes it is silly, because they’re used inappropriately or without any attention to mechanism.
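For readers unfamiliar with the Ising models Jessica mentions, here is the smallest possible example: an exact Boltzmann distribution over three binary units, computed by exhaustive enumeration. It is purely illustrative; real biological applications fit the couplings to data and require approximate inference for large systems.

```python
import itertools
import math

def ising_distribution(J, h):
    """Exact Boltzmann distribution for a tiny Ising model with energy
    E(s) = -sum_{i<j} J[i][j]*s_i*s_j - sum_i h[i]*s_i, s_i in {-1, +1}.
    Only feasible for a handful of units (2**n states)."""
    n = len(h)
    states = list(itertools.product([-1, 1], repeat=n))
    def energy(s):
        pair = sum(J[i][j] * s[i] * s[j]
                   for i in range(n) for j in range(i + 1, n))
        field = sum(h[i] * s[i] for i in range(n))
        return -(pair + field)
    weights = [math.exp(-energy(s)) for s in states]
    Z = sum(weights)  # partition function
    return {s: w / Z for s, w in zip(states, weights)}

J = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]  # ferromagnetic chain of 3 units
h = [0.0, 0.0, 0.0]
p = ising_distribution(J, h)
```

With positive couplings and no field, the aligned states come out most probable, which is the sense in which such models capture collective order from pairwise interactions.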
Jessica: But yeah, what’s the expression? The future is here, but it’s not evenly distributed. [inaudible 01:20:32] complexity science, so that’s one issue. But then in terms of the topics and the questions, I always tell people at the public lectures and in the summer school talks that we have two types of talks at the summer school, and in public lectures to some extent.
Jessica: The one type of talk is on the complexity science that’s now been around for a while and on which there are textbooks, so for example, network science, some theoretical computer science, and, you know, dynamical systems, right? So you get Liz Bradley, and Chris Moore writing his book on computation coming from theoretical computer science, not computational biology, not computation by biological systems, which is what we do.
Jessica: And you get Mark Newman doing this absolutely outstanding work on networks. That stuff has been hammered on, and it’s gotten to the point where it’s pedagogical. So there’s cool stuff on the horizon to do, but there’s a core of agreed-upon concepts and methods developed by these guys that can be conveyed in a fairly accessible, pedagogical way to junior people or lay people.
Jessica: And then there’s this other stuff, and the other stuff tends to be more at the borders, as we were talking about earlier, between philosophy and biology, or just where all the currency is in working out the question, and then taking that question and making it amenable to modeling and theory and gathering data. And that’s the space where another group of researchers sits.
Jessica: Certainly my work and David’s work sit there. And for the collective computation stuff that we do, as you know, I often give that long list of disciplines that inform collective computation. In maybe one or two of those disciplines do I know as much as my peers. Maybe one of them. In most of them, I really don’t know that much, but I know a lot about collective computation, and increasingly a lot about how it relates to ideas about computation in theoretical computer science.
Jessica: And so this thing that I study sits in this liminal space, a word you used earlier that I quite like. It sits in this sort of intersection between many different fields, and you have to be very comfortable drawing from those fields and working with collaborators, which is super important, who know more about those fields than you do, and being comfortable not knowing everything.
Jessica: And that’s a general feature [inaudible 01:22:48] SFI in order to make progress in these new areas. And so I think there are a number of new areas. Another one is the thermodynamics of computation, which sits adjacent to collective computation and which people like David Wolpert work on. They’re building on old ideas at SFI, but they feel new again.
Jessica: They feel new and very young, and so I’m always trying to convey to people the difference in the summer school talks: here you’re going to be exposed to this new set of ideas. It’s not well worked out. The person giving the talk does not know all the answers, and maybe knows more of the questions. And you’re also going to be exposed to some more pedagogical stuff, and you shouldn’t confuse the two.
Jim: Very good. Wonderful distinction. Yeah, for instance, the network science work of Mark Newman, who has a really good book called Networks, I think, or something like that. That’s a good example of one that’s well worked out, and then a lot of the rest of this stuff is still a work in progress, so that’s probably a good way to think about it. So here’s our last question. You’ve spent thousands of hours with primates of various sorts, and I’ve long been fascinated by the idea that we have such close relatives here on Earth. What do we think it is between us and chimps, a 1%, one and a half percent difference in genes, something like that?
Jessica: Mm-hmm (affirmative). Gene expression.
Jim: And frankly, I’m saddened in a pretty deep way about their fate in the wild. You’ve spent thousands of hours with them. What are chimps like? And other primates? What are they like?
Jessica: When I was in graduate school, I was in the group of Frans de Waal. Frans is a famous Dutch ethologist, and Frans would make this point, and I think it’s just so important. He would hire a graduate student or someone to help collect data on the monkeys or the chimps, macaques [inaudible 00:15:31]. One of the questions he would ask them, after they spent a couple of days up there on the tower, is: what did you think of the animals? What did you think of the monkeys in the group, or the chimps? And if they said, “Oh, I love them all,” they weren’t hired. Right?
Jessica: So one of the first points Frans made, and it’s an incredible insight about social dynamics and individuals and individual behavior, is that just like in human groups, there’s this incredible heterogeneity. They have complex personalities and very interesting social structures. And if you don’t see that, then you shouldn’t be an ethologist. You shouldn’t be collecting these data.
Jessica: And the chimps, to be honest, there’s a lot of variability in how likable they are. There are some extremely likable and very intelligent chimps, and others that are quite Machiavellian and just never seem happy. Just like humans, I see the full range there as well. It’s a little less so in the macaques, but there too.
Jessica: And so chimps, I mean, there’s this stereotype: chimps are the aggressive, hierarchical ones and bonobos are the free-love, egalitarian ones. These distinctions are, of course, stereotypes, and they’re amplified a little bit by the politics within the animal behavior community of who studied what: the Wranghams of the world, Wrangham’s at Harvard, and the Franses of the world, or the [Thomas Ellis 00:01:25:44], and their particular viewpoints on things.
Jessica: But there is something to these distinctions, and it’s not necessarily that these distinctions have a genetic basis. They might have some genetic basis, but they are distinctions that get amplified by social structure. To give you an example, a reference, a story by Robert Sapolsky. Sapolsky’s a neuroscientist; he works on cortisol and stress and health, and he’s done primate work studying the role of cortisol and serotonin and other things in behavior and dominance and related phenomena.
Jessica: And for a while he had a field site in Kenya, and I thought about going there and working with him when I was in grad school. But I remember he had this great story. He had these groups of baboons at this field site, two groups of baboons, I think. And there’s a garbage dump, and in these groups of baboons were these very aggressive, dominant males. They weren’t the type of males who, according to the story, modulated their power. They abused it regularly.
Jessica: And so there probably was a fair bit of change. But these aggressive males, of course, by rights got access to the food at this garbage dump, and they got tuberculosis. They died. Nobody else ate the garbage, and so the only males left were these docile, as Sapolsky relates the story, these docile, less aggressive males.
Jessica: And the social culture of the baboons changed. And the interesting question was, well, was it just going to last only as long as these young males lived, as they became bigger and older? Or would it be culturally inherited, where, and maybe there are some gene expression issues and so forth. And according to Sapolsky it was: the next generation after these younger males also maintained these social norms, so to speak, in terms of the style of interaction they had with other animals.
Jessica: And so, the chimps do seem to be more aggressive, maybe a little bit more hierarchical than, say, the bonobos. But I think this is a little bit overstated, and also it’s not necessarily just a species-specific thing. There’s a lot of plasticity in social structure, and, coming back to downward causation, behaviors get reinforced by the norms and institutions that are in place.
Jessica: And when those change, when the mechanism or context arises for those things to change, you can get big changes in the system. And that brings us full circle to the emergence and social engineering questions you asked earlier: how do we intervene? Who do we remove? What behavioral changes do we want to induce? What’s the right way to get a different social structure, to move from a more despotic society to, say, a more egalitarian one?
Jessica: And I think by having these more informed micro to macro maps for information processing systems, we’ll be in a better position to do that. So there’s a long answer to your “What are chimps like?” They’re like what their social structure demands, to some extent, and to some extent what their genetics give them. But one shouldn’t discount the importance of social structure and social institutions in animal societies to the behavior that we see.
Jim: And I would add humans, as well.
Jessica: Oh, of course.
Jim: Yeah, indeed. Well, that was, I think, a good wrap up question. It managed to bring together a lot of the themes that we’ve talked about. I’d like to thank you for a truly interesting wild ride into all the things that you work on and the stuff that’s done at Santa Fe Institute and elsewhere in these areas. I think our listeners will really like this. Thank you a lot.
Jessica: Yeah, thank you, Jim. It’s wonderful talking to you. You have such an informed and wide-ranging mind that pretty much no topic is off limits. I like that a lot.
Jim: That’s what we try to do here. So thanks.
Production services and audio editing by Jared Janes Consulting. Music by Tom Muller modernspacemusic.com.