Transcript of Episode 105 – Christof Koch on Consciousness

The following is a rough transcript which has not been revised by The Jim Rutt Show or by Christof Koch. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is Christof Koch. Christof is a neuroscientist and the chief scientist at the Allen Institute for Brain Science. For many years, the scientific study of consciousness has been one of his main fields of inquiry. Welcome Christof.

Christof: Thank you Jim for having me.

Jim: Yeah. As people who listen to my show know, one of my areas of strongest interest is the scientific study of consciousness, and I underline scientific. We’re going to talk about all kinds of things, some of them pretty far out, but all of them will be from the perspective of science. And I’d also like to announce that this is the second in our series on the science of consciousness. Our first was with Emery Brown from MIT on consciousness and anesthesia. Coming up in a few weeks is Bernard Baars, who is the creator of the Global Workspace Theory of consciousness. And next month we’ll have Antonio Damasio to talk about his brand new book, hot off the press at that time, with his own theories on the scientific basis of consciousness. But today we’re going to talk with Christof about the scientific study of consciousness, and a lot of it is going to be based on his relatively recent book, The Feeling of Life Itself: Why Consciousness Is Widespread but Can’t Be Computed.

Jim: I should also note, the end notes at the end of each chapter are very much worth reading in this book. One of the things I really appreciate is when end notes are more than just dry bibliographic information. If you read the book, don’t skip the end notes. They’re good; in fact, there’s some humor in them. So they’re definitely worth reading. Anyway, let’s get right down to it. Consciousness is one of those very slippery concepts. I talk to people about consciousness a fair amount, and they have radically different ideas in their heads about what they mean when they use the word. I’m sure you’ve run into that.

Christof: Somewhat. Although most philosophers and scholars who study it agree that it’s really closely related to the way life feels: when I’m angry, when I’m in love, when I see something, when I hear something, when I dream of something, when I have an ecstatic experience, when I have a near-death experience, when I have a psychedelic experience and I see colors and hear sounds that aren’t really there. Those are different aspects of conscious experience. And the central mystery is, and has always been since Aristotle first wrote about this 2,300 years ago, how is it that these feelings get into my head? It’s clear, or at least in principle it’s clear, why if I see something I can reach out with my hand and grab the coffee cup, but why should I actually have a picture, and why does this picture feel like something? I can look at my coffee cup and it looks like there’s a movie, and this is the movie of my life.

Christof: And then there are these voices and sounds; how do these voices come about? How do these sounds come about? And then I have feelings: I can be angry, I can be in love. How do these feelings come about? And the mystery is, I can imagine sort of all the events. Let’s see, if I go back to this image of the coffee cup in front of me: I can see the photons strike my retina, and in the retina, in the eye, they set up electrical excitation, and that travels up the optic nerve that leaves the eye into the brain proper, and this electrical activity gives rise to other electrical activity in downstream neurons in my cortex. And ultimately that activity gives rise to activity in my motor cortex that translates into activity in my spinal cord that activates the muscles in my hand, and I reach out.

Christof: That chain of events, at least in principle, we can imagine as mechanistic events happening. But then in addition to this, there is feeling. It’s like the genie: you rub the brass lamp and suddenly there’s a genie there. Wait a minute, how did these feelings get there? Nothing in physics tells us, nothing in quantum mechanics, nothing in relativity tells us about feelings, nothing in the periodic table of elements in chemistry. Nothing in the endless ATGC chart in our genes tells us about these conscious feelings, but here they are. So, that’s the central mystery.

Jim: Indeed. Going back into the history of the idea: in the very early, just pre-modern period, Rene Descartes came forth with his idea of Cartesian dualism. Could you maybe tell us what that is, and maybe your thoughts on what that did to the thinking about consciousness?

Christof: We owe an immeasurable debt to Rene Descartes, the Frenchman who in fact lived most of his adult life in Holland, because there he felt he was relatively safe to do what he wanted to do. So he’s one of the fathers of the Enlightenment, and he really put the modern formulation of the mind-body problem on the map. It’s really due to him, and ever since then, over the last 350 years, it has involved terms that he conceived, or he’s certainly one of the first who conceived them in the modern world. So he clearly distinguished two magisteria, two domains. One is the stuff that has physical extension, physical stuff. He called it res extensa: everything that has physical dimensions, stars, dogs, trees, including our bodies, including our brains.

Christof: He actually studied the brain. But then in addition there is this other domain, and that’s res cogitans, cognitive stuff. And he said only we have it, only humans have it, and this is the ability to think. This is what we would today call consciousness. This is reasoning, this is speaking, and for him of course, also the soul; he was a Roman Catholic, and the soul was associated with this cognitive stuff. And he said, “Okay. So everything in the world is either physical stuff, what today we’d call matter, or it’s cognitive stuff.” And intuitively this makes a huge amount of sense, because there are two things, and the physical is quite different from the mental. The physical has attributes that we can all observe. It may be very tricky to observe, because it depends on fancy instruments like colliders or telescopes or microscopes or fMRI, but we can all agree the physical is sort of concrete. It’s not private, you can measure it. It has so-called third-person properties that scientists can measure in the lab, like brain activity. That’s an instance of a physical property of a physical system, the brain. But then there is this other thing, the mental.

Christof: And the mental is the most real thing there is for me, because my entire world that I experience is mental. I see things. I hear things. I reason about things. I hear voices. I think about my mom. I project myself into the future. All of those are mental operations. And the question is, how does the physical relate to the mental? Now, his view is called dualism; it’s also called classical dualism or Cartesian dualism, after him, because he postulates two sorts of stuff, mental and physical. Now, most modern philosophers, particularly philosophers in Anglo-American university departments, are analytic philosophers.

Christof: And they don’t like dualism. There is this very strong bias towards what used to be called materialism, but is now also called physicalism: that ultimately there’s only one sort of stuff, everything has physical properties. There isn’t anything above and beyond physics ultimately. That the laws of physics, properly expressed, should explain everything, including the mental. And that has been the challenge. So the challenge with classical dualism is, how do the physical and the mental stuff relate? We know they closely relate because, if I hit you on your head hard enough, you’ll become unconscious. That’s one trivial way we can see that the physical, your brain, and the mental, your consciousness, have this close relationship. You can have a specific stroke in your brain and you lose specific aspects of what you see; Oliver Sacks has written many beautiful stories about that.

Christof: So we know there is a very intimate relationship between the physical and the mental. But how does the physical exactly influence the mental, and then, more important, how does the mental influence the physical? If the mental is really this ineffable stuff that can’t be measured, that doesn’t have physical properties, well, how can it make my brain do things? I want to lift my right hand and lo and behold, my right hand goes up. I want to lift my left hand and lo and behold, my left hand goes up. But that’s magic. It seems like magic, because here it is, my… it’s a mental concept. I want to raise my right hand, and the next thing I see, my right hand shoots up.

Christof: Well, somehow there must’ve been a mental-to-physical jump. People call it mental causation. The mental has to have caused something physical, but how does that happen? That’s the mystery. And so you can do away with it by saying, “Well, there isn’t any dualism, there’s only one sort of stuff. Everything is physical.” But then you have this great challenge of explaining, well, what is this mental stuff? It seems radically different. For one, it’s private: you never have access to my mental state. You can infer my mental state by saying, “Well, you’re telling me you’re hungry. So I assume you’re hungry.” But I could be an actor. I could be faking it, right? That makes it very different from the physical, which is not private, which has only third-person properties. So, that’s been the challenge: relating the physical to the mental, the brain to our conscious mind. That’s really the heart of the mind-body problem.

Jim: Indeed, and it reminds me of the intellectual confusion around people trying to understand complexity science of the sort we do at the Santa Fe Institute. I often use this very simple analogy to try to explain what complexity is about, and I think it may well apply here, at least as one angle of attack. Reductionism is thinking about the dancer: what’s the dancer about, got two legs, can do this, weighs this much, is this agile. While the dance is the thing that a number of dancers do together, and the dance itself has an existence separate from the dancers. I tend to think the over-reductionism of some earlier science got people confused about the mind-body problem, in that they didn’t seem to feel naturally that the concept of an emergent dance from 50 billion dancers is something that’s relatively commonplace, at least in the biological world.

Christof: Well, Jim, that’s one possible answer, that consciousness “emerges.” Right? This is what many people say: it emerges from large complex systems like the brain. So in principle, that sounds fine. Obviously we find it in very big brains like our brains, or the brains of monkeys and apes and dogs and cats and mice and whatnot. But what exactly do you mean by emergent? Do you mean, if you take 41 neurons, you don’t have consciousness, and then you add one neuron and suddenly you get consciousness, right? Then you still have to explain, what is that consciousness? Is it just the collective property of all the… Or is it just something mechanistic, 42 neurons interacting? But again, it seems to be different. And so this emergence, which used to be a very popular sort of solution, and many people still talk about it, from a metaphysical, ontological point of view it’s not quite clear what exactly you mean by that.

Christof: It is clear if you look at things like wetness. People say, “Well, the wetness of water emerges not if you have one molecule of H2O, or two or three, but if you have many, you get this idea of surface tension.” And this is really ultimately that water clings to the walls, let’s say of a glass or a pipette or something like that. That’s what we think of as wetness. It’s a collective property. But then you have to ask, “Well, what aspect of collective properties is consciousness? Do you just need enough neurons?” But clearly that can’t be the case, because we know some parts of the brain… In fact the little brain at the back of my brain, the so-called cerebellum, has four times more neurons than my cortex.

Christof: Yet, if you lose the cerebellum due to a stroke or tumor or something, you will not complain about loss of consciousness. People who, for example, either never had a cerebellum due to a birth defect, or where the cerebellum had to be removed surgically due to a tumor, they walk funny, they talk funny, they’ve lost sort of the smooth integration of their motor systems, but they don’t complain that they lost consciousness. So it can’t just be, “Well, if you put enough neurons together, you get this collective emergent behavior.” It has to be much more specific than that.

Jim: Now, the guy I have followed historically on this has been John Searle, and he would argue that it is a biological function. As he likes to say, “If you think about consciousness as like digestion, you may not be far from the point.” In fact, I like to add the [guttural 00:14:05] point that, at least in humans, the end result might be very similar to digestion. You know, it gradually bootstrapped over many a long epoch of evolution. And our consciousness isn’t the same as a reptile’s or an amphibian’s or a dog’s, but they’re all on an evolutionary tree, and they all serve relatively similar, or at least analogous, evolutionary purposes. And there’s a really cool book, by two guys I don’t think you actually reference, Feinberg and Mallatt, on the ancient evolutionary history of consciousness.

Jim: They make a pretty interesting case that, at least on our tree, we are evolutionarily connected back at least to the amphibians and maybe further. They also make, of course, another very interesting argument, that perhaps consciousness evolved at least twice. They’d argue that the cephalopods, some of the bigger ones like squid and octopi, probably have a different form of consciousness; it’s not on the same evolutionary tree. And they actually demonstrate a large evolutionary gap between our tree and their tree, which is kind of interesting. Does that say that complicated brains tend to produce consciousness? I don’t know. But anyway, enough on that; we’ll dig back into these topics in a lot more depth soon. Before we move on deeper, why don’t we define a term that we’re going to use later on, which is the neural correlates of consciousness. You kind of hinted at it with your talk about the cerebellum.

Christof: Yeah. So this is a term from Francis Crick, the person who co-discovered the structure of DNA, the molecule of heredity. He and I worked together in the late ’80s and the early 1990s, when he had left Cambridge and come to the new world, to La Jolla in Southern California. He was very interested in consciousness and he always asked, “Why aren’t neuroscientists interested in consciousness? They should study consciousness, because it’s sort of the most central aspect of our existence, and to [inaudible 00:16:08], that just seems a cop-out.” And so we wrote many papers together and worked closely together for many years. And we originated this idea of what’s today called the NCC, the Neural Correlates of Consciousness, which is the minimal set of biophysical/physical mechanisms that are jointly necessary for any one conscious percept. So if you hear my voice, my Germanic-flavored voice, there will be some neural mechanism in your brain that’s active. Let’s say, some set of neurons are firing.

Christof: That seems sort of a simple intuition. And if I somehow turn those neurons off by some intervention, by drugs or an electrode or whatever, then you wouldn’t hear my voice anymore, although I would still be talking. Conversely, if you can trigger those neurons, let’s say by an electrode during surgery, you should be able to hear my voice, even though I wasn’t speaking. And so you can ask, “Where are those neurons?” You can ask, “Where are the neurons that correspond to my voice, versus your voice, versus the voice of someone else?” You can ask, “Where are the neurons that encode visual perception, not auditory but visual? And do they look similar? Are they of the same cell type? Are they in a similar part of the brain? Are they in the same layers? Do they have a common set of genes? Do they use a common set of neurotransmitters?”

Christof: And you can ask that for every conscious perception, like the conscious perception of touch or pain. The dominant ones that are studied in the labs, because they’re relatively easy to study, are sound, vision and touch. But every one, the feeling of wanting, the feeling of desiring, all of those people have studied under various conditions in the lab. And this project is sort of neutral with respect to any particular ontology. So no matter whether you’re a physicalist, or whether you’re a dualist, or whether you’re a panpsychist, you can always ask. Well, there is some set of… We know this from the clinic. We know this from particular studies, like your previous guest from MIT who studies anesthesia. There are mechanisms in the brain that are specifically associated with conscious sensation.

Christof: So what is their nature? Can we track them down? Will that help us understand the central mystery of consciousness, the link between the physical and conscious experience? That remains unclear, but it is clear that this by itself is a valid scientific project that’s done now in many labs, tracking the Neural Correlates of Consciousness. Are they in the front of the brain or the back of the brain? I think we’ll talk later about that as part of the adversarial collaboration. When do they occur relative to the onset of the stimulus? How can I manipulate them by electrodes or by transcranial devices? How can I manipulate them using psychedelics and other drugs?

Jim: Maybe give us a brief summary of what you believe the evidence shows about where the main Neural Correlates of Consciousness are.

Christof: Well, it’s important to distinguish between enabling factors, background conditions that have to be present for you to be conscious at all. For instance, if your heart doesn’t beat, you will faint within five to 10 seconds, because your brain doesn’t have any extra power reserves and you will lose consciousness. That’s why, if somebody chokes you for instance, within 10 seconds you lose consciousness. Now, unlike the ancients, including Aristotle, we don’t believe that your mind is in your heart, but your heart is necessary to suffuse oxygen into your blood and to power the brain. So, that’s an enabling factor. Likewise, in the brainstem, there are enabling factors that are absolutely essential for being awake and aroused. And if you lose them, let’s say you have a stroke, even though the stroke can be very small: if you have a stroke in the brainstem, particularly if it’s bilateral, you’re in bad shape. You may be in a coma and never wake up again, although your cerebral cortex is perfectly intact.

Christof: So we know that there are parts of the brainstem that are absolutely essential, but they by themselves do not mediate any conscious percept. For instance, if your brainstem is intact but your cortex is destroyed, again, the evidence shows you’re not conscious. You have the background conditions in principle to be conscious, but there is no one home. So it turns out that all the clinical evidence points towards cortex, this outermost layer of the brain. It’s sort of highly convoluted, right? If you unfold it, it’s like a pizza. So each brain hemisphere is like a pizza. If you look at it, the brain looks somewhat like a walnut, and you can unfold this big walnut, and unfolded, it’s 12 to 14 inches across, like a pizza. Roughly the size of a pizza, with the thickness of a pizza with toppings, two to three or four millimeters.

Christof: And you have two of them, and they’re interconnected by these fibers called the corpus callosum, 200 million fibers that connect the left half with the right half. And it’s really this tissue, the most complex piece of active matter in the known universe, this two-dimensional tissue, that gives rise to consciousness in humans. That’s not to say that other animals, the lizards and reptiles and amphibians we talked about that don’t have a classical neocortex, aren’t conscious; they may have a different NCC. But in humans in particular, and mammals in general, consciousness seems to be closely tied to excitation in this tissue.

Christof: Now, even here in cortex, certain regions are more closely aligned with consciousness than other regions. Again, we know that partly from the clinic, because you can lose some parts of cortex and you might have some deficits, but you don’t necessarily lose consciousness. And there’s now a big debate in the field, this adversarial collaboration, over whether consciousness in the cortex is in the front of cortex or more in the back. The front is more associated with cognitive functions like intelligence and reasoning, while activity in the back of cortex is more associated with things like sensing: seeing, hearing, touching, imagining, visualizing.

Jim: Now, the thalamus is also involved, at least from what I’ve read: if you whack the thalamus, at least in certain nuclei, consciousness will disappear pretty quickly.

Christof: Yeah. That is so. If you have lesions in your thalamus, this small egg, like a duck egg or a quail egg, in the middle of the brain, divided into 40 different nuclei, sort of sub-assemblies of neurons. They supply the different parts of cortex with upgoing information and with downgoing information. And if you have a large-scale lesion there, you can also lose consciousness. What is not clear: if you take this cortex, this two-dimensional tissue, think of it like a carpet, and you suffuse it with the various neurotransmitters that are conventionally emitted by thalamus, can you replace the thalamic input? So is the thalamus ultimately also an enabling factor, or is it really critically involved? That’s an open question, at least to my mind. Because there is this possibility that if you came to replace the thalamic input in a normal person, in a normal waking state, with some arousal factors that you sort of suffuse onto the cortex, then essentially just this tissue by itself, this pizza-sized, very, very dense, very complex tissue of roughly 16 billion neurons, would give rise to consciousness.

Jim: And you mentioned that there’s disagreement about where in the brain, but you have your own views, what you call the hot zone. Where’s the hot zone?

Christof: Yeah. So the clinical evidence, particularly based on lesions and on the electrical stimulation that neurosurgeons or neurologists do, for example to test for the presence of epileptic seizures: stimulation there typically gives rise to conscious sensations, and removing it gives rise to loss of consciousness. It points to sort of the back of the brain. Not all of the back, but sort of the posterior region: occipital, temporal, parietal. And this really is the most critical part for the conscious experience of everyday life, seeing and hearing the sounds and sights of life. Sort of 99% of all conscious experience depends on activity in this part of the brain; that ultimately is its substrate.

Christof: And by the way, it’s not because of any bias I have for the back versus the front; I mean, it doesn’t really matter what’s in the front or the back. The claim ultimately is that one part is more important for consciousness than the other because the intrinsic connectivity is very different. That’s really, ultimately, what matters, at least according to the theory I espouse, the integrated information theory. It’s not that I take a brain and turn it around by 180 degrees, and now the front is the back and the back is the front, and consciousness shifts. No, it has to do with the fact that the intrinsic connectivity in the back of the brain is quite different from the intrinsic connectivity in the front of the brain, and that is ultimately what makes the difference for conscious experience.

Jim: And as we’ll talk about later, we may distinguish these two leading theories. So let’s move a little bit from the material substrates to the more subjective aspects. The word that’s often used by philosophers of consciousness and scientists of consciousness is qualia, the plural of quale. There are arguments about it; in fact, some philosophers of consciousness like the Churchlands argue that it’s more or less an illusion, that it’s epiphenomenal. What do you have to say about qualia and its reality?

Christof: Well, as I said, people particularly in the modern era have challenges with the mental. What is the mental? How does the mental relate to the physical? So one way of eliminating that challenge is just to say, “Well, the mental actually doesn’t exist. You’re just confused about it.” Pat and Paul Churchland, and of course Dan Dennett, are the most famous advocates of that. So that’s like saying, “Well, you’re simply all confused about it.” That simply doesn’t… I had this correspondence with Dan Dennett while I was on a climb in the Sierras, and I had this really bad toothache. It was so bad, I had to abort my climb and go home. And you’re lying in your tent, you can’t sleep, you’re in complete misery, and your entire consciousness is filled with the agony of this inflamed tooth.

Christof: And to say, “Well, sorry Christof, you’re just confused, there isn’t anything like pain.” All the suffering, all the pain that people experience, that’s just all illusion, it’s all fake, it doesn’t exist. Well, that simply doesn’t cut it, to say the least. I mean, it just seems to be the most irrational manifestation of the human mind to declare the central aspect of my life, which is my conscious experience, an illusion, to just say, why, you’re just confused about it all, because I don’t right now have a rational way to explain it. There is this syndrome called Cotard’s syndrome, where people claim that they’re dead, due to some psychiatric condition. And this seems to be sort of the philosophical equivalent of that: people say, “Well, you’re actually dead.

Christof: There isn’t any feeling at all. I know you claim it, but you’re just all confused.” So that doesn’t cut it, so we have to confront it. We have to confront the fact that there is this consciousness. It’s different from the physics, it’s closely tied to the physical, right? I don’t think we need supernatural explanation to understand it, we just need to sort of enlarge in the remit of what we call physical. That perhaps the physical, just like in the 15, 16, 17th century, people had to include this new thing called magnets and magnetic fields into their description of physical reality. And early on in the 20th century is we included the thing called spin that particles can have a spin up or spin down and some particles can have fractions of a spin. Just so I would suggest to explain consciousness, we have to enlarge somewhat what we mean by physical. Physical ultimately is anything that has causal power on other things, ultimately is physical. That’s one way to think about it. This goes back to a comment in one of the Plato dialogues. Anything that exists, exists because it has causal powers onto other. That’s how I know things exist, because they have causal power on me, or I may have some minute causal power on them. Like the moon, it exists because it has causal powers. It leads to the rise and fall of the water in the ocean.

Christof: It’s clearly visible. I have a minute influence on it, but it has a lot of influence on me, and that applies sort of to everything. Consciousness also has causal power, though at least per this one theory, integrated information theory, it has causal power upon itself. But it is something physical. In that sense John Searle is entirely correct: consciousness ultimately is like digestion. It is something ultimately associated with physical stuff, with certain types of physical organization. Not any physical organization, not a dune or a hill of sand, but it’s associated with certain systems, particularly with very complex, highly networked systems. It is something physical, and ultimately, if we sort of properly expand our remit of what we mean by physical, it’s just going to be part of physicalism. So you just expand your notion of what is physical, that’s it.

Jim: That’s good. That’s a very clear explanation. Let’s move on to the next piece, which I thought was very well done. Very interesting; it made me think about consciousness a little bit differently. You very carefully distinguished consciousness from intelligence and cognition. Maybe take our audience through that a little bit.

Christof: Yeah. So when most people think about consciousness, they think about, well, particularly consciousness in adults, particularly in literate adults, people who buy books. And then you think it’s about reasoning and imagining future scenarios. And clearly, by this notion, it seems to be more present in sort of more intelligent people, and it seems to relate to intelligence. But then think about the consciousness, the feeling, of staring at an empty black screen, the black mirror, right, of the show? That already is full of the consciousness of space, and we never think about it. An empty canvas like a black screen already has all this: I mean, when you look at it, there’s space, there are distance relationships, there’s closer-by and farther-away. There’s inclusion, there are neighborhood relationships. All of that is already expressed just in a blank canvas without anything present; you don’t need a fancy scene.

Christof: You get it of course also when you have a fancy picture, but even without a picture, there are already notions of relationship: there are linear relationships, there’s inclusion, there’s exclusion. So if you think about it in this way, consciousness of smell, consciousness of hearing, when I hear this entire 3D space around me, those are very complicated things, and they do not necessarily relate to our conventional notions of intelligence, like an intelligence quotient. The claim, once again, of integrated information theory is that they are a priori completely dissociable, that you can sort of plot them. You can make a two-dimensional plot: on the one axis you plot a measure of intelligence, like IQ or a g factor or anything else.

Christof: No matter how you want to measure intelligence, and psychologists have devised measures, you plot it on the X axis, and then consciousness, if you have a measure like integrated information theory’s phi, you plot on the Y axis. Now, if you look at it evolutionarily, there seems to be this relationship: if you take complex animals with complex brains like ours, they have high intelligence and high consciousness. And then you take a dog. A dog is smart; it has canine intelligence, but it seems to be less than ours, and its consciousness presumably is also less than ours. And so you can go down the evolutionary ladder: maybe you get to cephalopods, and then you look at some simple snails, and you look at the fruit fly, and maybe you look at a Paramecium. There’s a sort of monotonic relationship: as the size of the brain grows, its intelligence grows and its consciousness grows. But then you can take creatures like brain organoids, right?

Christof: So we now have stem cell engineers who are very good at taking cells from under your arm, away from the sun, putting them in a dish, adding four transcription factors, waiting nine months, putting them in an incubator at the right temperature, suffusing them with nutrients, and you can grow these things into so-called brain organoids, cerebral organoids, that share certain aspects with the cells in your brain, in fact even in your cortex. They even begin to show electrical activity now. So now you have people that can grow them to maybe half a million cells, and they begin to show something like a very, very simple EEG. But of course, there is no intelligence there, because there’s no input, there’s no output, there’s no way to manipulate anything. And so we can’t measure whether it’s intelligent, by any measure. But at least conceptually, it could certainly feel like something. Conversely, you can take something like AlphaGo Zero, the various computer programs that DeepMind has built, that certainly by some measures are intelligent. They can teach themselves to play Go and chess better than any living person.

Christof: So by some measure they're intelligent, but there's no evidence at all that they're conscious. And so in this plane, you can have the idea that computers in principle could have very high levels of intelligence but no consciousness, while conversely you could in principle imagine growing brains. If you think again about what we talked about earlier, these cortical carpets, you could grow them to a relatively large and complex size, where they may have no intelligence but a high level of consciousness. And then if you think about what is really meant by the terms consciousness and intelligence, it becomes clear that they denote very different things. Intelligence, ultimately, is about the ability to manipulate our environment.

Christof: It's to rapidly take information in, to infer something, and then to act upon that immediately, on the short term, on the medium term, on the long term. To plan for the long term, that's intelligence. And some people have a lot, other people have less, and it changes across species. Consciousness is a sensation, this feeling, these qualia; it's very different from behavior. One is essentially about doing: intelligence is fundamentally about doing. Consciousness is fundamentally about being. They're really two very different things, certainly conceptually. So it depends on your theory now, but at least at the conceptual level, we need to sharply distinguish between intelligence, which is about doing, and consciousness, which is about being: being in a state of seeing, or being happy or being angry or being upset or being unconscious.

Jim: Let me run an analogy by you that I came up with when I was chatting with a friend over the weekend. He was asking me what I was reading, and I told him about your book. He was fascinated by it, and I wondered what Christof would say to this. Here's the analogy: consciousness, at least in the frame of conscious cognition, what we do in our System 2 type reasoning, consciousness is like the building site and intellect is the building. Humans are a very complicated building, like an oil refinery; maybe a reptile is like a small portable sawmill or something like that, but all buildings need the ground beneath them. So it's essentially the substrate on which we do conscious cognition. Is that tolerable as an analogy?

Christof: You neglect aspects of consciousness that are much more basic. Let's go back to the toothache, okay? Have you ever been in bad pain?

Jim: Oh. Yeah.

Christof: Okay. So what is cognitive about pain? When you really have bad pain, it is a God-awful feeling. It can be so bad that you want to kill yourself to end it, but there's very little cognitive about it. It's not abstract. It's not an abstract manipulation. It may not involve memory or planning or reasoning or any of those other things. It's just a God-awful state of being. Conversely, think about sex and orgasm. What about the orgasm itself? It's this intensely pleasurable thing. What is cognitive about it? It is intense; it is this state which you want to last forever. Or if you take a drug like 5-MeO-DMT, right?

Christof: Your entire being is reduced to this point of unbearable... everything is gone, except this point of icy blue light, which you can't turn away from. You look at it because it's the only thing that exists. Everything else is gone. You're gone. Your ego is gone. There's no more you. There are no more dreams, desires, fears. It's all gone. What there is, is the presence of this icy blue point of unbearable intensity and a feeling either of ecstasy or dread, or a combination of ecstasy and dread. Again, there's nothing cognitive about it. So these are all highly, highly conscious states that are either aversive or appetitive; they're either very positive or very negative or somewhere in between, but without associated cognitive operations. That's not to say that you can't also have cognitive operations. I can do mathematics, I can think about the metaphysics of consciousness, and I have certain conscious experiences with them. But we tend to emphasize those and write books about those, while the many other forms of consciousness are completely neglected, yet they make up the very feeling of life itself.

Jim: Yeah. That's why I used the distinction between the building site and the building: when thinking through, let's say, a math proof, we may use the machinery of consciousness. And of course, this is closer to Global Workspace Theory, that we shuffle in and shuffle out various conscious contents onto this plane of consciousness, which by itself is much more subjective, but is able to have conscious contents within it. I know that's more GWT and less IIT, but that's where my analogy came from.

Christof: All right. Yeah. Again, I would challenge you when you say it's less subjective. What do you mean by that? Because for you as a conscious observer, you, Jim Rutt, it's the only thing that is. In fact, how do you know a world exists at all? I mean, surely this makes it very clear: the precondition for us doing physics is the fact that I can see these microscopes, I can see images, I can read off a voltmeter or a scale, right? So for me to know the existence of a world, I depend on my conscious subjective feelings. So for me as an observer, it's ultimately the only thing that is; it's just subjective feelings. Now, of course, we also have science, and science deals in these so-called objective things, in other words, things that you and I can agree on. We can agree that this brain has such and such texture and has this and this mass, et cetera. But for you as an individual observer, or for me as an individual observer, ultimately the only reality is the reality of my conscious experience. I cannot escape my conscious experience.

Jim: Yeah. I don't think we're disagreeing there at all. Take, for instance, adding two numbers in your head, two small numbers, five plus three; it's all in your head. But I think one could argue that the concept of five and the concept of three are conscious contents, which live in the same space of our consciousness, and we can manipulate them using conscious tools, et cetera. So there's a relationship between the ground and the superstructures, is what I'm getting at.

Christof: Yes, there you are entirely correct, otherwise it wouldn't have evolved. It's not random. It is, in fact, a highly, highly precise mapping between, I guess what you would call, the superstructure and the ground; otherwise it simply wouldn't have evolved.

Jim: Yeah, exactly. That's the point I made earlier, that it's got to have a function because it's not cheap, right? The brain is about 3% of the body's mass, but it uses 20% of its energy. If you assume something like the default mode network is somehow involved with consciousness, which some people think it is, that's a non-trivial amount of the total firings, if not the total mass, of the brain. So the energy budget to support consciousness is probably pretty large, and maybe there's an equally large genetic information cost. If it's not useful for something, it's kind of hard to imagine why it's persisted for more than 200 million years. We'd love to keep talking about this stuff, but unfortunately we only have a certain amount of time. So I'm going to skip ahead across some things, and let's get now to the theory that you expound with great elegance, which is the integrated information theory that was developed by... How do you pronounce his first name?

Christof: Giulio Tononi, an Italian-American neuroscientist and psychiatrist. He developed it over the last 20 years. He's at the University of Wisconsin–Madison.

Jim: Yep. I read his paper when it first came out and I said, "Wow, this is really interesting." And it's gone through some iterations. So let's start slow, but let's see how far we can go on explaining, first, what integrated information theory is, and why we might believe it has something to do with consciousness.

Christof: IIT is a fundamental theory, in the sense that it starts from axioms, then it proceeds from the axioms to postulates, and the postulates then have certain consequences that you can measure. Let's take a step back. There are many people who say they have a theory of consciousness. Most of the time what they have is a hunch. They have a hypothesis. Famously, of course, Francis Crick and I had this idea that 40-hertz gamma oscillations are one of the critical signatures of consciousness, one of the NCCs. Neurons can often fire periodically, roughly 40 or 50 times a second; it's called the gamma or the 40-hertz oscillation. Now that wasn't a theory, that was a hunch. It may have been right, it may have been wrong, we still don't fully understand, but that's not a theory.

Christof: So many people in that sense have a "theory." They have a hunch that, "Oh, quantum mechanics is important." Again, that's not a theory. A theory really has to start with a few basic assumptions and then describe and explain as much of consciousness as possible. And particularly you want a good theory to make predictions and extrapolations that weren't assumed at the beginning, particularly to explain new phenomena. So IIT starts out with this axiomatic formulation: consciousness exists for itself. This is famously the Cartesian insight, cogito ergo sum. I'm conscious, therefore I am. Consciousness exists for me. It doesn't require an observer. It doesn't require God or my parents or you or anyone else. It exists for itself. And it is structured; every conscious experience, no matter which one you're talking about, is structured.

Christof: So right now, I'm looking at this black screen. There's left, there's right, there's up, there's down, there's far away. There are elements that have qualia. There are objects. They can be close by or farther away, et cetera. It is the specific way it is; every conscious experience is specific in a particular way, right? I have a particular type of pain, which is different from a different type of pain. There's only one at any given point in time. So it's holistic; any conscious experience is one, it's unitary, it's holistic. Those are different terms that people use. And it's definite, which means it is very precise, even a fuzzy feeling that something is wrong: I'm going into this dark garage and I realize something is funny here, I can't tell you what, but I know something is funny.

Christof: That itself is a very precise feeling, and it's definite in the sense that there's this entire universe of conscious experiences that I'm not having. So it has definite borders. And then you take your five axioms, so every conscious experience exists for itself, it's structured, it's specific in the way it is, it's one, and it's definite, and you say, "Well, let's translate each into a postulate, where I take a mechanism." So a set of neurons or a set of transistors, or any physical system in a particular state. It doesn't really matter what it is. Typically it's some simple neurons, because we know consciousness seems to be associated with neurons, but it doesn't have to be neurons; it could be transistors, or it could be the states of some crystals.

Christof: And ultimately you ask, what does it take for a physical system to exist for itself, in the way conscious experience does? And you see that consciousness is ultimately about causal power. Ultimately any theory has to make a commitment, an ontological commitment: what exists? And ultimately, in the view of IIT, the only thing that exists is causal power. Either causal power onto others, that's extrinsic causal power, and physics only deals with extrinsic causal power, whether that's Coulomb's law or the gravitational law or the laws of electrodynamics; that's always causal power onto others. Consciousness is similar, except it's causal power upon itself. So the theory says anything that has causal power upon itself, that has some causal power at the system level upon itself, any system like that is conscious, whether it's biological, whether it's organic or non-organic, doesn't really matter. It has nothing to do with the substrate.

Christof: Any system that has causal power upon itself is conscious. And you formulate that in a mathematical formalism. You take a system, you have a transition probability matrix that describes how the system will evolve in time. Given that the system exists right now in a particular state, where some neurons are off and other neurons are on, you can compute the state it most likely came from in the past, and you can compute the most likely state it will evolve to in the immediate future, and that gives you a measure of its causal power. And then the theory says, for any one system, these 10 neurons or these 86 billion neurons, you have to look for what is ultimately the substrate of consciousness. Because that also will have definite borders, just like consciousness has definite borders. The theory says, ultimately, you're looking for the system that has a maximum of causal power.

Christof: And so for any one system, you look at, let's say, the brain. In practice it's difficult because the numbers quickly explode combinatorially, but in principle you look at all possible combinations of neurons, and you always ask: what is the set of neurons, or the set of mechanisms, that has maximum causal power? The way you measure that is you compute this number called phi; it's a number between zero and, in principle, infinity. If it's zero, the system doesn't exist for itself; it's fully reducible to smaller systems. The bigger it is, the more irreducible the system is, and the more conscious it is. And for any one system like the brain, you ask what set of mechanisms gives rise to the maximum, and only that exists. That is the substrate of conscious experience. And that substrate has certain neurons that are in, those are part of the NCC, and other neurons that are out, even though those neurons connect, of course; in the brain, everything is within seven degrees of connection.
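To make this "enumerate subsets, keep only the maximum" prescription concrete, here is a deliberately toy sketch. The three-node network, its update rules, and the score function are all invented for illustration; the score (how many distinct futures a subset of nodes can be steered into) is a crude stand-in for phi, not the real IIT calculus, which requires a full cause-effect analysis over partitions.

```python
from itertools import combinations, product
import math

# Toy 3-node boolean network. The update rules are invented for
# illustration; real IIT works on a transition probability matrix
# with a far richer cause-effect analysis.
def update(state):
    a, b, c = state
    return (b ^ c, a & c, a | b)

states = list(product([0, 1], repeat=3))
transitions = {s: update(s) for s in states}  # deterministic "TPM"

def score(nodes):
    """Crude stand-in for phi: log2 of how many distinct next states
    the chosen subset of nodes can be steered into."""
    images = {tuple(transitions[s][i] for i in nodes) for s in states}
    return math.log2(len(images))

# IIT's prescription in miniature: enumerate every candidate subsystem
# and keep only the one where the measure is maximal ("the whole").
candidates = [c for r in (1, 2, 3) for c in combinations(range(3), r)]
whole = max(candidates, key=score)
print(whole, round(score(whole), 3))
```

In this toy network the full three-node set wins, but with other update rules a proper subset can come out on top, which is the point of the exclusion step: only the maximum "exists for itself."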

Christof: Everything ultimately is connected to everything else, but certain parts are part of the substrate and other neurons are not part of the substrate; that is the NCC. And this you can measure. In fact, the theory has given rise to a practical technique that's now being tested in a variety of clinical centers, called "zap and zip." Essentially, you take a person like the ones Emery Brown studies, an anesthetized person, or you take a patient who may be in a persistent vegetative state, what's now called a behaviorally unresponsive state. So you don't know whether there's anybody home. What you do is send a magnetic pulse into the brain; you zap it with a device called transcranial magnetic stimulation. And then you look at the reverberations in the EEG. So this patient has a 64-channel EEG, sort of this net of EEG electrodes on his head.

Christof: You shock it and then you see how the waves propagate over the cortex. And it turns out that those people that are deeply asleep or deeply anesthetized, totally not present, let's say totally in a coma, have a very low brain complexity. Their phi, so to speak, is relatively low. While people who are dreaming, which is a conscious state, or people who are not properly anesthetized, or who are anesthetized using ketamine, which is not a true anesthetic, it's a dissociative, so they're actually conscious but unable to talk, disconnected from the external world, or people in a minimally conscious state who might not be able to talk but are actually conscious: in all those people, the brain complexity is relatively high, above a particular threshold.

Christof: And so this theory has come up with a practical measure that's now being tested to see whether it's a reliable way to test the presence or absence of consciousness in subjects who can't talk to you: because they're disabled, because they're anesthetized, because they're babies, because they're catatonic, or because of various other neurological or psychiatric conditions. The theory has a number of very unusual predictions about split brains, about brain bridging, about consciousness in connected minds. It also shares a number of intuitions with panpsychism, because it says that under certain conditions, consciousness may be much more widespread in the animal kingdom, or in biology, than we typically like to think. So it makes all sorts of predictions that people can test, and some are being tested.
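The "zip" half of zap-and-zip can be sketched roughly. The published perturbational complexity index uses Lempel–Ziv complexity on significance-thresholded source activity; the toy version below substitutes plain zlib compression on fabricated binarized responses, just to show why a stereotyped cortical echo scores low and a differentiated one scores high. All data here are made up for illustration.

```python
import random
import zlib

def complexity(binary_matrix):
    """Compressed size relative to raw size: lower = more stereotyped.
    A rough zlib proxy for the Lempel-Ziv step of the real PCI."""
    flat = bytes(bit for row in binary_matrix for bit in row)
    return len(zlib.compress(flat)) / len(flat)

random.seed(0)
channels, samples = 8, 256

# Stereotyped echo (deep sleep / anesthesia): every channel repeats
# the same simple square wave.
wave = [1 if (t // 32) % 2 == 0 else 0 for t in range(samples)]
simple = [wave for _ in range(channels)]

# Differentiated echo (wakefulness / dreaming): each channel responds
# with its own varied pattern (random here, for the toy).
rich = [[random.randint(0, 1) for _ in range(samples)]
        for _ in range(channels)]

print(complexity(simple), complexity(rich))  # rich compresses far less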

Jim: As I was reading it, I realized I had read earlier versions of Tononi's papers. I don't think I've read the current one, but I certainly read the first one and the second one. Could you dig a little deeper into this concept of internal or intrinsic causality? It's kind of a non-obvious concept. Maybe put a little bit more color on that.

Christof: So, how much of a difference can the system make to its own state? Right? If you have a system that can only be in one state, it has very little causal power over itself. But if a system is in a particular state and has access, depending on the exact transition probabilities, to a very large variety of different states which it could in principle evolve to, then it has more causal power over its own fate. Unless, for example, all these connections are subject to noise, right? If they aren't reliable, and the transitions to all these other states are totally determined by chance, then again it will have very little intrinsic causal power. So causal power is an ability of any physical mechanism; there's nothing supernatural about it. It's not some sort of fuzzy ectoplasm.
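The effect of noise can be illustrated with a toy calculation (my own construction, not the IIT formalism): treat "intrinsic causal power" as how many bits of uncertainty about the next state the current state removes. A deterministic rule removes all of it; pure chance removes none.

```python
import math

# Toy illustration: intrinsic causal power as the number of bits of
# uncertainty about the system's next state that its current state removes.
N_STATES = 8  # e.g. three binary units

def next_state_dist(state, noise):
    """With probability (1 - noise), move to a fixed successor state;
    otherwise land on a uniformly random state."""
    successor = (state * 3 + 1) % N_STATES  # arbitrary deterministic rule
    return [noise / N_STATES + ((1 - noise) if s == successor else 0.0)
            for s in range(N_STATES)]

def causal_power(noise):
    """Average reduction of next-state uncertainty, in bits."""
    max_entropy = math.log2(N_STATES)
    reductions = []
    for state in range(N_STATES):
        dist = next_state_dist(state, noise)
        entropy = -sum(p * math.log2(p) for p in dist if p > 0)
        reductions.append(max_entropy - entropy)
    return sum(reductions) / N_STATES

print(causal_power(0.0))  # 3.0 bits: fully determines its own future
print(causal_power(1.0))  # 0.0 bits: pure chance, no grip on itself
```

Dialing `noise` from 0 to 1 slides the power smoothly from 3 bits to zero, mirroring the point above: unreliable transitions erode a system's grip on its own fate.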

Christof: You take any mechanism, even transistors in principle, and copper wires, and you hook the transistor gates together, et cetera. They will have some ability to influence their own state, and how much they influence their own state, and how many different states they can transition to, gives rise to either higher or lower causal power. And it turns out, if you look at the connectivity of computers, of the CPU or the ALU, typically one transistor is connected to three or four other transistors. Then you look at a network connectivity like the brain, where one neuron gets input from 10,000 neurons and projects to 10,000 other neurons, and that output overlaps: let's say 9,900 of those also get input from a neighboring neuron, and from its neighbor, and from its neighbor. The possibility for the system to choose one out of a gigantic number of states, depending on the exact overlap of the inputs and the outputs, is vastly larger for the neural network with this 10,000-fold fan-in and 10,000-fold fan-out,

Christof: And with this heavy overlap, than it is in a computer. So in fact, you can formally show that you can have two systems: a very simple computer, like a three- or four-bit computer, and a simple neural network, where under certain conditions they compute exactly the same input-output function. The causal power of one can be very low, in the limit it can go to zero, while the intrinsic causal power of the other can be very high. It's not about the input-output function. This is very important to understand: it relates to how the system transitions from its current state into the next state.
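The fan-in comparison above is easy to put in back-of-the-envelope numbers; the figures below are just the orders of magnitude from the conversation (a few inputs per logic gate versus roughly 10,000 synaptic inputs per cortical neuron), nothing more.

```python
import math

# Back-of-the-envelope arithmetic: a binary unit with k inputs can in
# principle be driven by 2**k distinct input patterns.
gate_fan_in = 4          # a transistor gate wired to a handful of others
neuron_fan_in = 10_000   # order of magnitude for a cortical neuron

gate_patterns = 2 ** gate_fan_in
# 2**10000 is too big to print usefully; count its decimal digits instead.
neuron_pattern_digits = int(neuron_fan_in * math.log10(2)) + 1

print(gate_patterns)           # 16
print(neuron_pattern_digits)   # 3011: a number with over 3,000 digits
```

Sixteen patterns versus a number with over three thousand digits is the combinatorial gulf being described, though the argument in the text is about intrinsic state transitions, not raw pattern counts.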

Jim: This is where I was a little unclear, because, I mean, according to the Church–Turing thesis, we could simulate the whole human brain on a Turing machine, one of the simplest possible computing devices. And according to IIT, if we did that, the simulation would have essentially zero phi, very, very low phi. And yet if we calculate it in the brain, it has a very high phi. Well, it turns out that the neurons actually run on their own substrate. They run on a substrate of biochemistry, and I know a little bit about the biochemistry, particularly of metabolism in the cell. It has a very, very different kind of interaction and connection scheme than the neural architecture does, and at least roughly gauging it, I would suspect it has a much lower phi. So why is simulating the brain on a Turing machine, or let's say not even simulating the brain, let's say running an artificial consciousness on a Turing machine, impacted negatively with respect to its phi by its substrate, while the way IIT is usually presented, they don't talk about the biochemical substrate of the neurons?

Jim: What’s the difference between the two?

Christof: It's a very good question, with quite a bit to unpack. So, A, what is the relevant substrate, and what's the relevant level of granularity? Because ultimately, of course, below the biochemistry there is the molecular level, then the atomic, then the subatomic, et cetera, until we come to superstrings or membranes or whatever the current best theory is. So why not just do it all the way down at the bottom, right? Well, IIT says the relevant level of granularity, is it atomic? Is it molecular? Is it cell organelles? Is it single neurons? Is it groups of neurons? Is it columns? Is it groups of columns? It is the one that maximizes phi. Same thing for the timescale: is it nanoseconds? Is it milliseconds? Is it minutes? Is it hours?

Christof: So in principle, for every physical system, it's a maximum principle, just like many other principles in physics. It is the substrate, and the level of temporal and spatial granularity, that maximizes phi. And it is only at that level that it is proper to speak about the substrate of consciousness. Our intuition right now, and we might be wrong, is that it's at the level of groups of spiking neurons. That's what most scientists think, but that's an intuition, not a proof. And ultimately, if IIT is a correct theory of consciousness, it will have to be shown that that is the level that maximizes phi.

Christof: And in principle you could do that; you can compute these interactions at various levels of granularity. You could do it at the millisecond. You could do it at tens of milliseconds. You could do it at the nanosecond if you have enough compute. So at least in principle, I think there's a very precise answer to that question: it is the level at which phi is maximal. Your other point is, of course, about a brain simulation, let's say a whole-brain emulation, let's say of a mouse. We can't even do that right now, but say 20 years from now we can do a mouse, and then maybe in another 50 years we can do a human. Well, if this is a correct simulation at the micro-functional level, where I simulate every neuron and do it correctly, then of course the simulation should open its eyes, or its cameras or whatever the equivalent is, and say, "Hey, I'm conscious."

Christof: And if programmed correctly, it will do so. Alexa does this already; talk to Alexa, ask her whether she's conscious. She'll tell you, but it's all a deepfake. And why? Because it's not about the input-output, and this is the subtitle of my book: Why Consciousness Is Widespread but Can't Be Computed. Consciousness is not a particular type of computation. It is not an input-output transformation. It is ultimately a state of being; it's ultimately the causal power of the system, not what it does. So yes, computers will in principle be able to do anything I can do, but they won't be what I am, because of what a computer simulation is in that sense: unless you build it into neuromorphic hardware, it turns out the connectivity of conventional CPUs, ALUs, TPUs, et cetera, is very, very low. So as you said, yes, you can simulate consciousness, in principle; in practice, who knows. But in principle, yes, it's going to be a deepfake, just like Alexa.

Jim: Well, let me dig into that a little further. If the neural consciousness that we have is a level above biochemistry and is able to preserve its phi, why does a... Let's not talk about a simulation of a consciousness. Let's talk about an artificial consciousness that actually is hooked up to inputs and outputs and is functioning as a consciousness in that system. Why does its one-level-down substrate of transistors, which, I agree, I ran the math very quickly and yes, it would have a much lower phi, why does that lower-level substrate of computation pull down the phi of the higher-level software that's running, when the lower phi of biochemistry doesn't pull down neural computation?

Christof: It's about the physics. It's not about some abstract, high-level software. It's ultimately about the physics. So you take your CPU, or whatever your robot is, okay, and you analyze it at all the relevant physical levels. You can take the level of the atoms. You can take the level of the solid state, the silicon dioxide and the electrons in the gap [inaudible 01:02:20], et cetera. You can simulate it there. You can try to compute it at the various physical levels. It comes down to the physics of the device, not its abstract mathematical function.

Christof: That's different; that's input-output. That's what gives rise to modern civilization, information, transformation, all of that. But that is a very different notion. And this is fundamentally where IIT conflicts with the other dominant theory of consciousness, Global Neuronal Workspace, which I know you're also going to discuss at some point in the future. Global Neuronal Workspace believes and makes the claim that ultimately consciousness is just one particular type of computation, associated with a particular type of structure called a global neuronal workspace, and once you instantiate that, you will get consciousness. IIT is more physical. It's not about an abstract computation; it's about the physics of it. In principle, if you build what's called neuromorphic hardware, let's say some silicon hardware or gallium arsenide or whatever your favorite hardware is, that has a similar type of connectivity as the brain, particularly the human brain, then yes, this thing would also be conscious.

Christof: There's nothing magical about consciousness. It doesn't require a soul or something like that. It just requires the relevant substrate, particularly this very high level of integration, but at the level of the physics, at the level of the metal, not at the level of the apps, the computation. Jim, there is a good analogy here: look at a computer simulation of the central mass at the center of our galaxy, right?

Christof: There was just a Nobel Prize awarded for that, Sagittarius A-star, right? There is a 4-million-solar-mass object at the center of our galaxy. Now you can, and people have done this, run computer simulations of the gravitational dynamics of stars orbiting around that black hole. But funny, you don't have to be concerned that those programmers are going to be sucked into their computer simulation. Well, why not? If it simulates gravity, why doesn't the entire computer disappear into a black hole? Because it doesn't have the causal power. There is this abstract mapping between certain variables in the computer software and the space-time tensor and gravitation, et cetera. But it doesn't have the same causal power as mass to bend space-time. Same thing with consciousness: you cannot simulate it. In that sense, again, John Searle is entirely right: you have to instantiate it. You actually have to put it into the physics.

Jim: But again, I'm still confused: there are all kinds of stacks between the neuron and quarks, right? And that's as much a stack as any. And that's why I said an artificial consciousness rather than a simulated consciousness; I do take your point that simulating a black hole doesn't produce one. I still don't get it, but I'll think about it. We've got to move on, unfortunately. Well, maybe we can talk about this another time. Oh yeah, this is the other part that I know you write about very clearly, so I understand what you're trying to say, but I don't quite understand what it means. And that is the single pattern of connections that has the largest phi, what you call the whole. It could easily be, and in the human brain it probably is, not only a subset of, let's say, cortex, but a dynamically changing subset of cortex that is the actual instantiation of consciousness at any given point in time. Is that more or less what you were trying to say?

Christof: Yes, that's exactly correct. So when I'm conscious of you, there's a particular substrate, but when I think about my daughter far away in Singapore, or I play with my dog, it's going to be a different substrate. They may have certain things in common, like in all cases I see something or I hear something, but that particular set changes on the same timescale as my conscious experience.

Jim: Then quite literally, if IIT is true, only those neurons are the neural correlates of consciousness at that moment. And if we could get to the point where we can determine that, then we could see if IIT is true or not. I presume that's what you're doing in this adversarial collaboration.

Christof: Including, by the way, the neurons that are not active, right? People have this intuition that only neurons that are active, that are actually firing, contribute, but that's not true. Many of my neurons right now are not active. For instance, my fire engine neurons are inactive, my Donald Trump neurons are inactive, et cetera, right? They're all part of the substrate of my conscious experience. It's not just the small set of neurons that are right now active. And everything that is not part of my conscious experience, all that is actively suppressed. So it's a complicated thing. You have to measure it. So ultimately we have to develop devices that can measure it and visualize it, so I can look down at the brain of an animal or a person and see, "Okay, here are the footprints of consciousness at any one particular point in time."

Christof: And how are they distributed, for example? Is there more than one footprint at a time? Is it possible that the main experience is me, but there's also another, smaller, sort of independent entity in, let's say, the front of my brain? Or take the left brain and the right brain: are they always connected, or does it maybe at some point split off into two independent parts? In principle, you can study all of that. What happens if the brain is cut, right? If you cut the corpus callosum, then the theory says, and experiments seem to show, you have two conscious entities. What happens if I take your brain and my brain and start interconnecting them? Will there come a point in time when our minds fuse, when our minds blend into one conscious experience, and when exactly would that happen? So in principle, there's a really interesting number of experiments that one can think about and start doing, certainly in animals, in mice or other animals.

Jim: I was going to come to that, which is brain bridging. Again, it produces an intuitively surprising result, to say the least. For the audience, why don't you describe with some precision this idea that as you continue to connect neurons one at a time, at some point there's a phase change where we go from two consciousnesses to one.

Christof: Yeah. Let's first think a little bit about the opposite, because that happens in clinical practice, rarely, but it does happen. So you take a brain that suffers from terrible epileptic seizures. They originate, let's say, on one side and then spread across this bridge called the corpus callosum, the 200 million fibers that connect the left and the right hemisphere. And so as a last resort, what surgeons do (this was invented 60, 70 years ago and was quite successful) is sometimes cut the complete corpus callosum. If successful, the seizures don't spread anymore, but now, as far as we can tell, there seem to be two conscious minds.

Christof: This was first done by Roger Sperry at Caltech, and he got a Nobel prize for it. There seem to be two conscious minds: the left hemisphere, which is typically the one that speaks, and the right hemisphere, which can hum, sing, and answer simple questions. It's also conscious, but the two hemispheres sort of don't talk to each other anymore. So there are two conscious minds. Now I'm thinking of doing sort of the inverse. So let's take your brain, Jim, and my brain, and let's route some wires between my visual cortex and your visual cortex. So you can begin to see what I see, but you're still you, and I can begin to see what you see, but I'm still me, Christof, okay. But now we add more and more wires: between our auditory cortices, between our frontal cortices, et cetera.

Christof: Now, IIT says... so for IIT, consciousness is always the maximum of integrated information. So as long as the connectivity between our brains is relatively low, there will be one maximum in my brain and another maximum in your brain. So there will be two conscious entities, called Jim and Christof. But if I ramp up this connectivity, at some point, depending on the exact geometry of the connections, the whole, your brain, my brain, and their interconnections, will have a higher phi than the phi within just my brain or just within your brain.

Christof: At that point, abruptly, Jim will disappear, Christof will disappear, and there will be this new entity. Suddenly there will be this new entity called... well, I don't know, some amalgamation of Jim and Christof. It will speak with two tongues. It will have four hemispheres, four arms, four legs, four eyes, et cetera. It will be one conscious entity. And then once I reduce those connections again, suddenly this thing will disappear and you'll find yourself again, you in your body and me in my body. Two rather surprising predictions, very precise in principle. And again, you can test this first in animals, and ultimately you can also do it in people once we have the right technologies.
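[Editor's note: real IIT phi is defined over a system's cause-effect structure and is intractable for anything brain-sized, but the qualitative claim above, that the candidate system with the maximal integration "wins" and that the winner flips abruptly as coupling grows, can be illustrated with a toy surrogate. The sketch below is purely hypothetical: it scores each candidate system by the weight of its cheapest bipartition, a minimum-cut stand-in for integration, not actual phi. Two four-node "brains" each score 3.0 on their own; once the four cross-brain "wires" are strong enough, the merged eight-node system scores higher and becomes the single maximum.]

```python
from itertools import combinations

def clique(nodes, w):
    """All-to-all edges of weight w within `nodes`."""
    return {frozenset(p): w for p in combinations(nodes, 2)}

def min_bipartition_cut(nodes, edges):
    """Crude integration score: total weight of the cheapest way to
    split the system into two parts (a min-cut surrogate, not real phi)."""
    nodes = list(nodes)
    first, rest = nodes[0], nodes[1:]
    best = float("inf")
    # fixing `first` on one side enumerates each bipartition exactly once
    for r in range(len(rest) + 1):
        for side in combinations(rest, r):
            part = set(side) | {first}
            if len(part) == len(nodes):
                continue  # skip the trivial "everything on one side" split
            cut = sum(w for pair, w in edges.items() if len(pair & part) == 1)
            best = min(best, cut)
    return best

def conscious_entities(x, w=1.0):
    """Which candidate system(s) hold the maximum of integration when the
    cross-brain wires have weight x? Returns (winners, all scores)."""
    A, B = list(range(4)), list(range(4, 8))
    edges = {**clique(A, w), **clique(B, w)}
    for a, b in zip(A, B):          # one "wire" per homologous pair
        edges[frozenset({a, b})] = x
    scores = {
        "Jim":      min_bipartition_cut(A, clique(A, w)),
        "Christof": min_bipartition_cut(B, clique(B, w)),
        "merged":   min_bipartition_cut(A + B, edges),
    }
    top = max(scores.values())
    return sorted(k for k, v in scores.items() if v == top), scores

print(conscious_entities(0.5)[0])   # weak wires: two separate entities
print(conscious_entities(1.0)[0])   # strong wires: one merged entity
```

With internal weights of 1.0, the flip happens abruptly once each wire's weight passes 0.75: below that, the bridge between the brains is the cheapest cut of the merged system, so the merged score stays below each brain's own 3.0.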

Jim: It's a very bold claim, right? And one that will eventually be subject to experimental verification, if only in neuromorphic computing, where we could actually run that experiment.

Christof: I think we'll want to do it ourselves. I mean, look, ultimately in the act of lovemaking, right? You're looking at your loved one, but you're still you and she's still her, and sort of you can never meet. But this way, if you make love with your brains interconnected, there is only one. This is really the unio mystica that many strive for, a really complete union. So I think people will want it at some point.

Jim: Well, God knows what the side effects might be, though, right? Oh my goodness. Yes, people will probably want to, but at a minimum we could certainly try it with advanced neuromorphic computing, right? And see if indeed there is this phase change; it just seems so far out that adding one more neuron suddenly takes you from two consciousnesses to one.

Christof: Yeah, but the inverse has to happen during the split brain. So imagine the split-brain surgery, and the surgeon has some technology where he can cut one axon after the other, not all 200 million [inaudible 01:12:54] at once, but one after the other. The claim is that something like this will also happen, in reverse.

Jim: Yeah, it's very intriguing. It is testable and it's a bold claim. I'll be very interested to see how it goes. Now, speaking of which, one of the things that I always try to be a force for out at the Santa Fe Institute is making sure that high theorizing doesn't escape from experiment, right? As you and I both know, some theorists love to sit in their office with the door closed and just theorize forever and never test their ideas. But as you talk about in the book, and as I tracked down elsewhere, you and the folks who put forth a different theory, the Global Workspace Theory, are working on an adversarial collaboration to try to find some solid lab experiments that will distinguish between IIT and Global Workspace Theory. Could you talk a little bit about, first, what items of data you're going to look at and why they would point one way or the other? And then maybe a little bit about the sociology of working together in an adversarial collaboration, which I find fascinating to contemplate.

Christof: Yeah, perfect. Let's start with the top: what's an adversarial collaboration? So here we have two adversaries, not individuals but two theories, that have very different metaphysical assumptions about consciousness, what it is, and about being, which are difficult to test. But they also make some empirical predictions about the neural correlates of consciousness, the NCC, the footprint of consciousness in the brain: where they are, and the relationship between the timing of the NCC and the conscious experience. And so we say, in that sense we disagree, but let's work out in a collegial way some experiments where we agree ahead of time, in writing: if we do this experiment and this is the outcome, that will tend to support your theory, and if it's the other outcome, it'll support the other theory. And we've done that for a number of experiments.

Christof: This is not easy to do, talking about the sociology, because of course people are people, as you can see in politics, and they'll try to game it. They say, "Well, I don't really want to commit myself. I think this is going to be the outcome, but I want an out in case it isn't." So you have to nail people down, and this took roughly a year. We started with a workshop at the Allen Institute for Brain Science in 2018, and then there was a protocol that we submitted to the Open Science Framework, in which we committed ourselves: these are the experiments. This is funded by the Templeton World Charity Foundation out of the Bahamas, and it now involves 12 labs.

Christof: There are a number of very innovative aspects besides the adversarial collaboration itself. Each experiment is done by two separate labs, sort of checking each other. None of the experimental labs is aligned with either Integrated Information Theory or Global Neuronal Workspace. They're just experimentalists, and they say, "To first order, I'm just going to do this experiment. I think it's a cool experiment, and I'm not particularly favorable to one theory or the other." Again, that's also important. And we'll do all the training of classifiers on only the first half of the data. Then we'll show what we come up with, and only then will we extrapolate and look at the second half of the data. All the data will ultimately be available to everyone with internet access.
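[Editor's note: the half-split discipline described here is easy to sketch. The toy below uses simulated data and hypothetical names throughout; it is not the consortium's actual analysis pipeline. It fits a trivial one-parameter "classifier" on the first half of the trials, freezes it, and only then scores it on the untouched second half.]

```python
import random

random.seed(0)

def make_trial():
    """Simulated trial: label 1 ("seen") trials carry a slightly larger
    scalar signal than label 0 ("unseen") trials. Entirely made up."""
    label = random.randint(0, 1)
    signal = random.gauss(1.0 if label else 0.0, 1.0)
    return signal, label

trials = [make_trial() for _ in range(1000)]
half = len(trials) // 2
train, held_out = trials[:half], trials[half:]   # split fixed in advance

# "Training": choose a decision threshold using the first half only
thresh = sum(s for s, _ in train) / len(train)

# The analysis is now frozen; score it once on the untouched second half
correct = sum((s > thresh) == bool(y) for s, y in held_out)
accuracy = correct / len(held_out)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the split is that no choice made while exploring the first half, threshold, features, or anything else, can leak into the confirmatory score on the second half.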

Christof: It's quite innovative, and it's not easy, because it's a large group. So this involves essentially two visual psychophysical experiments using three techniques. First, fMRI, standard functional brain imaging. Then a combination of EEG and MEG, which measure the electric and magnetic fields associated with brain activity; these are done in normal volunteers. And then lastly, experiments in epileptic patients where the surgeon has to implant electrodes, so-called ECoG, electrocorticography, where we are much closer to the brain. So we get higher quality signals than with regular EEG, because the electrodes are implanted beneath the skull rather than outside the scalp like EEG. So it involves the 12 labs, and then of course COVID hit, so we haven't made as much progress as we had hoped. Hopefully, with vaccination, we can continue those experiments now.

Christof: The predictions really revolve around two differential predictions between the two theories. One is that IIT predicts, based on the brain's connectivity, that the neural correlate of consciousness will be in the posterior hot zone, so occipital, temporal, and parietal cortical areas, while Global Neuronal Workspace predicts the critical involvement of frontal cortex, prefrontal cortex. That's absolutely essential, because according to Global Neuronal Workspace, that's where the global workspace resides. The other is a difference in timing. IIT predicts that if I'm conscious of something for two seconds, there will be a substrate that continues to be active for as long as I'm conscious of it, while Global Neuronal Workspace says the activity exists primarily at the onset, when the stimulus enters the global workspace, and at the offset, when it leaves the global workspace; that's when you get a lot of neural activity. So those things can be tested, and that's what we're trying to test for. The experiments should nominally end at the end of next year; we'll see what COVID does to that timing.
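[Editor's note: the timing prediction is the easier one to picture. The sketch below uses purely illustrative, hypothetical square-wave time courses rather than real data: it contrasts a sustained substrate (the IIT-style prediction) with onset/offset ignition (the Global Neuronal Workspace-style prediction) for a percept lasting from 0.2 s to 2.2 s, and computes a crude statistic, the fraction of the conscious interval with above-threshold activity, that separates the two.]

```python
DT = 0.01                                  # 10 ms bins
t = [i * DT for i in range(300)]           # 0 .. 3 s
ON, OFF = 0.2, 2.2                         # percept lasts two seconds

def iit_course(ti):
    """Sustained substrate: active the whole time the percept lasts."""
    return 1.0 if ON <= ti <= OFF else 0.0

def gnw_course(ti):
    """Ignition only when the stimulus enters or leaves the workspace
    (~200 ms bursts); relatively quiet in between."""
    return 1.0 if ON <= ti <= ON + 0.2 or OFF - 0.2 <= ti <= OFF else 0.0

def sustained_fraction(course):
    """Fraction of the conscious interval with above-threshold activity:
    near 1 for a sustained substrate, much lower for onset/offset bursts."""
    inside = [course(ti) for ti in t if ON <= ti <= OFF]
    return sum(1 for a in inside if a > 0.5) / len(inside)

print(f"IIT-style: {sustained_fraction(iit_course):.2f}")
print(f"GNW-style: {sustained_fraction(gnw_course):.2f}")
```

A statistic of this shape, computed on recorded rather than simulated activity, is the kind of quantity the pre-registered protocol can commit to ahead of time: one theory predicts it near 1, the other well below.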

Jim: I greatly commend you guys for doing this; as you say, it's sociologically difficult. And I also love the fact that you took advantage of pre-registration. We had Brian Nosek on a ways back, from the Center for Open Science, and we've talked quite a bit about pre-registration. In fact, I actually provided them some funding to expand their operations around pre-registration. I believe it's an amazing way to clean up the epistemic system of science.

Christof: That's exactly what we're doing with pre-registering; we've worked with Brian. We pre-register everything, exactly for those reasons. Very good. I wish everyone would do it, but for most people it is more difficult, because pre-registering ties your hands. You can't do every possible analysis, including the ones you only thought about later, but it's a much better way to do science.

Jim: Yep. And of course, it's hard to get the journals to agree as well, right? They don't want to publish negative results, things of that sort. But more and more do, so it's a very, very important thing. The sociology of science is hugely important. We spend a lot of resources in our society on science, and if the sociology isn't optimal, the return on investment is not optimal. So I commend you guys for doing science the right way. We're getting kind of late in our call here, so I'm going to skip a few other interesting topics and go to the last one, which is the panpsychist implications of IIT. Do you really believe that?

Christof: Well, personally, yes, I do. So what does IIT say? IIT says any system that has phi different from zero feels like something. It may feel just like the difference between being there and not being there, but it says that in principle consciousness is much more widespread than us, and maybe great apes and other charismatic megafauna, like big killer whales and tigers, right? Maybe even a fly, or maybe even, at the limit... If I look at a single paramecium: there is not a single computer simulation today of all the molecular machinery, the vast cascade of protein-to-protein and receptor interactions, in a single-celled paramecium, because it vastly exceeds our computational capacity, and of course we don't have knowledge of all the binding constants. So it is quite possible that even something as simple as a paramecium feels a little bit like something. Now, it doesn't have a psychology.

Christof: The paramecium won't feel sad or bored, but it may well feel like something to be alive, and when its membrane dissolves and it isn't there anymore, it won't feel like anything anymore. So once you wrap your head around it, it's a little bit like saying, "Well, let's think about temperature." Everything has a temperature. It turns out even deep space, right? It's black-body radiation at about 2.7 kelvin. So although it's unimaginably cold, we couldn't survive in deep space, there is still some residual heat left, right? From the afterglow, from the [inaudible 01:22:07]. And so it may well be that consciousness is much more widespread. But again, you can measure it: in principle, once you develop a phi scope, you can measure, does this system have integrated information?

Christof: And you can measure it at least in systems like a fly or a worm or a beetle or a bee, which has a million neurons, and see how it relates to its behavior and its neural substrate. It's much more precise than panpsychism by sheer intuition, the simple intuition of panpsychism: if you think about the words, pan, everywhere, and psyche, soul, that everything in principle is ensouled. Now IIT says, "No, it's not everything." A glass isn't ensouled; it doesn't have high phi. But every physical system that has phi has some modicum of experience. Yes.

Jim: Extremely bold, and of course it has at least some linkages to some subsets of theories of quantum foundations, which we won't get into today, but it's a very bold conjecture. It'll be really interesting to see how this plays out in the years ahead. I think we're going to have to wrap it there. We've had too much fun here today. I'd like to thank Christof Koch for coming on the show today and sharing his thoughts. And I'd strongly encourage anyone who's interested in the topics we've talked about to take a look at his book, The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed.

Christof: Thank you very much. It’s [inaudible 01:23:39].

Production services and audio editing by Jared Janes Consulting, Music by Tom Muller at