Transcript of Currents 028: Simon DeDeo on Explaining Explanation

The following is a rough transcript which has not been revised by The Jim Rutt Show or by Simon DeDeo. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is Simon DeDeo. Simon is assistant professor at Carnegie Mellon University in the Department of Social and Decision Sciences. There he directs the Laboratory for Social Minds. He is also a member of the external faculty at the Santa Fe Institute. And unless he wins a Nobel Prize, certainly a possibility, perhaps he’ll be best known as the first guest on the Jim Rutt Show. Yep, Simon was EP1. I didn’t know what the hell I was doing, but thanks to Simon, it turned out to be a classic nonetheless. Check it out. Welcome, Simon.

Simon: Thank you, Jim. It’s good to be here again.

Jim: Yeah, this is I think your third appearance, always good to have you. We always have a good conversation. Today, we’re going to start out talking about a paper that Simon wrote with a colleague, titled From Probability to Consilience: How Explanatory Values Implement Bayesian Reasoning. Quite a mouthful, but I’m sure it’ll all be much clearer by the time we’re done. Simon’s coauthor, to make sure he gets due credit, is Zachary Wojtowicz. Is that pretty close?

Simon: Sure.

Jim: All right. So let’s jump in. I mean, the subtitle or the sub subtitle, something, basically you said what your goal here was, was explaining explanation. What do you mean by that? At the highest level.

Simon: Well, so Zach and I, both of us, and I should say Zach’s a graduate student, this is him building his career, it’s very exciting to see. We have both been enchanted by, excited by, certainly interested in machine learning, the use of AI to get things done. Get things done perhaps in a more efficient way, in a fairer way, one hopes. With machine learning computers, the way we reward them, the things we strive for, we want them to be really good at predicting the world.

Simon: And I think we have missed something crucial about the human experience, which is actually, Jim… First of all, we’re terrible at predicting the world, right? You and I. I mean, maybe you’re better, but I’m terrible at it. We’re terrible at predicting the world, and actually most of the time we aren’t even talking about the future. In fact, most of the talk we do is not about prediction, but about explanation, trying to make sense of the things that have already happened. So you and I might say, “Why has social media gone bananas?”

Simon: What we’re doing there, and the kind of conversation we’re going to have, is a conversation that attempts to in some way make the events of the past comprehensible to us. We want to make sense of them. We want to say, for example, why they happened or perhaps how they happened. Only as an ancillary to that do we care if anything we say is useful for predicting what happens next. So one of the things Zach and I did is just focus on this really fundamentally human task. We come at it as cognitive scientists, but we’re also hoping to, in some way, connect to the work that’s being done in AI. If AI is obsessed with prediction and humans are obsessed with explanation, maybe we can tell a story mathematically, in the language of AI, but about human things.

Jim: Very, very interesting. Then you guys talk about the fact that explanatory values appear very early. Kids are always trying to explain things and sometimes the explanations are pretty funny. Talk about that a little bit, how from a human development perspective the idea of explanation starts to come into our lives.

Simon: This is something that was really exciting for me. I had no idea the extent to which something I thought of as a really advanced thing. Let’s explain the electromagnetic field, that’s fine, but looking into the research on child development, it’s amazing how quickly children are interested in building explanations. In fact, we can see some of the same biases, some of the same predilections, in how children explain things that adults also have. One way to think about it: a couple of years ago we, people, got very interested in how our sense of beauty is quite universal and learned quite early, at least some aspects of it. We see a preference for symmetry. We see children look longer at things that adults consider beautiful as well. There’s a sense that children share aesthetic values with adults and these develop very early. In a similar way it seems like, and this happens at the age of three or four, children have similar tastes for what makes a good explanation that adults have. Some of the basic features are already in place. So some aspects of explanation they can maybe learn in college, but a lot of it, a lot of really low level operating system stuff, is baked in really early.

Jim: Sort of makes sense, but it’s also kind of surprising. We haven’t been doing linguistic explanations for all that long as a species, somewhere between 35,000 maybe and a hundred thousand years. To have developed a consistent taste in explanation already is kind of cool. Going a little further into your paper, you talk about two descriptive lenses, or explanatory lenses, for explanation: the lens of description and the lens of power. Why don’t we talk about those a little bit?

Simon: So, I really want to bring this down to ordinary stuff. It’s like, my computer won’t turn on, why? My friend didn’t call me back, why? These are things we do all the time. One of the first points we make in this paper is that you always have two things in play. On the one hand you have what the explanation does for the data you have to handle. So my friend didn’t ring. One piece of data might be, is he chronically forgetful? Another piece of data might be, was it late at night? Another piece of data might be, did it happen on a Saturday as opposed to a Thursday?

Simon: So there’s all these facts that are sitting there and one of the things an explanation does is it makes sense of these facts. We evaluate the explanation in terms of how well it does at predicting, or at least making more or less likely, the facts you have. So why didn’t my friend call? He didn’t call because his phone was broken. That’s an explanation, his phone died. Then you might say, “Susan got a call from him 10 minutes after he was meant to call me.” So all of a sudden this explanation doesn’t look as good, and the reason it doesn’t look as good is it just doesn’t fit the facts. So that’s one thing that’s always in play.
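[Editor’s sketch: scoring an explanation by how likely it makes the facts is Bayes’ rule at work. Here is a minimal sketch of the phone example; every prior and likelihood below is a made-up number for illustration only.]

```python
# Two candidate explanations for "my friend didn't call."
# All probabilities are invented purely for illustration.
priors = {"phone_died": 0.05, "forgetful": 0.20}

# P(facts | explanation), where the facts are:
# "he didn't call me" AND "Susan got a call from him 10 minutes later."
# A dead phone makes Susan's call very unlikely; forgetfulness doesn't.
likelihoods = {"phone_died": 0.01, "forgetful": 0.30}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {e: priors[e] * likelihoods[e] for e in priors}
total = sum(unnormalized.values())
posterior = {e: p / total for e, p in unnormalized.items()}

# "Phone died" fit the first fact on its own, but Susan's call sinks it:
# the posterior now heavily favors "forgetful."
```

The point is the one Simon makes in prose: the moment the Susan fact arrives, the dead-phone story stops fitting the data, and its posterior collapses.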

Jim: And that’s the descriptive lens.

Simon: Exactly. Description is part of that. It’s like, that’s a terrible explanation because you’ve only explained half of it, or it even is contradicted by the other stuff I know. That’s one thing. Another thing is sitting on the other side of this divide, and that’s the theoretical side. If you have the empirical aspects of an explanation, you also have these theoretical values. The theoretical values are about what the explanation itself looks like, how it fits pieces together. Not just facts but other things, things that you might not observe, stories about the world, the ways in which different aspects of things fit together. What potentially causes what, what could cause what? So why didn’t he call? Well, you know what, the guy is just fundamentally a bad guy. Now one part of that explanation that you might value is, “Oh, it fits all these other things I know about him.” Another thing is you may like this kind of hidden common variable story where there are good people and bad people in the world.

Simon: This was the first step that Zach and I took: to tear apart these two pieces and look at them in isolation. Once you realize these two pieces split apart, another thing you see is, maybe these go to war. So maybe there are explanations that don’t fit the facts very well, that get outcompeted by other explanations that fit the facts better. Yet that bad explanation, from the point of view of the facts, may be good for you, perhaps as a fallible human being or a human being with values; that explanation might be good because of its theoretical merits. So you start to see maybe we can explain why somebody will hang on to an explanation that, from your point of view and from the point of view of the facts, is just completely inadequate. Yet potentially this person might be holding that belief because of other epistemic values that are sitting in there. These are these theoretical values: power is one, unification is one, parsimony, simplicity. These are things that are sitting on the other side, that are sitting in a more abstract platonic space if you like.

Jim: Yeah. It’s funny I thought of parsimony and simplicity as somehow different than unification and power. What do you see about those that unifies them?

Simon: I mean what you’ve done there, Jim, and this is something that Zach and I had all the time which is really funny. Obviously this paper is about explaining explanation, so hopefully we could make sense of our own paper in our own framework.

Jim: One would hope.

Simon: One would hope right? Then actually I had a wonderful conversation with a group in the philosophy of science department, University of Pittsburgh, and this was pointed out. Unification is a really fun value. A theory that is unifying says that lots of things in the world, including stuff you haven’t seen yet, lots of things in the world are connected in some occult fashion. I use the word occult in a kind of funny way, because of course the most unifying theories of all time are the ones in the physical sciences.

Simon: The Higgs boson, the story of quantum electrodynamics: out the other side of that theory you get an enormous number of phenomena, all of which are driven by some underlying field, some magical number that is sitting everywhere. That’s wild. Physics is a paradigmatic place where valuing unification has taken us a long way, has done really well for us. When I use the word occult, another way to use it is like crazy. It’s all the nutcases. It’s all of the people who hold on to beliefs. I was going to say astrology, but there’s something to astrology. Beliefs that Gaia is guiding all of human life towards some great apotheosis, that the chemicals in the water are affecting our precious bodily fluids and driving us all crazy.

Simon: This value of unification, it’s a double edged sword for us in many ways. It is, to go back to this original split that Zach and I identified, one of these things that sits over on the theoretical side. Thus, to a certain extent, it’s a little bit in the eye of the beholder. It’s like, if you like unification, and I’m someone who’s prone to it, Jim, I’ll tell you a story if you like about this. But if you like unification you’re going to prefer certain kinds of explanations. You’re going to have an intrinsic drive towards them, for better or for worse.

Jim: Yep, and we talked about that I think in our first episode. I called it Hari Seldon syndrome, right? From the Foundation trilogy, where many of us impressionable 10 year olds, when we read the Foundation trilogy for the first time, said, “Damn, psychohistory, that would be cool. Have one set of equations that could predict all of human history, holy shit.”

Simon: Right exactly.

Jim: Knowing when unification-type theories make sense and when they don’t might be an important second order human skill. As far as we know every electron has exactly the same mass, which is pretty God damn interesting. We can, at least at the moment, take that to the bank. Yet consider other unification theories: all the world governments are controlled by reptilian humanoids from Tau Ceti. It explains everything, right?

Simon: It does, right. Jim, I love this point you make that these are second order questions. I think, I won’t speak for Zach on this, but one thing I think is maybe a question for us as a species going forward: can we get better at these second order questions? Can we start having conversations, not just about explanations but also about the principles upon which we accept and reject them? This is something at the Santa Fe Institute: when I first showed up, this was the cocktail party of every explanatory preference of all time. It’s like the Star Wars bar, the Mos Eisley Cantina, where the physicist shows up. For them, the more a theory unifies the phenomena, the better the theory is.

Simon: I remember a wonderful talk given by David Pines, who’s no longer with us. David was presenting some work he had done on spin glasses, which is this very beautiful, abstruse chunk of advanced metaphysics. He’s going on and on, half this room is just peeing themselves, they love it. This biologist raises his hand. The biologist is David Krakauer, who you know, so maybe it’s wrong to call him just a biologist, but David raises his hand. He says, “Professor Pines, is there anything your theory cannot explain?” The speaker, David Pines, says, “Uh, no, isn’t this wonderful?”

Simon: For him this is a sign of a good explanation, but for many people in the room, certainly biologists, this is the sign you’ve gone off the deep end. Because life is a complicated thing, it’s messy, it’s genealogical, it’s evolved, it has histories, the tape gets run only once. So you know that a theory that is too unified is probably off the walls. It’s probably missing something. They’ve gotten used to this very quickly at the Institute, to saying, “Let’s have a second order debate for a moment.” Let’s put aside whether or not we can explain the collapse of this Aztec civilization. Let’s step back from the collapse and ask: how do you as an anthropologist value these things? How do you as a sociologist value these things, as a mathematical physicist, as an information game theorist? All of these people are bringing different kinds of second order values to the table. That’s great, it’s a lot of fun.

Simon: What I wonder is if we might benefit from having that conversation on a much broader scale, if this is the conversation we might have in the university. Students go to university, they major in a subject, I see this at CMU, great, but at the same time we train students to see the world a certain way. So when I teach a class and I have computer scientists and social scientists and psychologists in the room as undergraduates, you can just see they’ve been trained into a certain set of explanatory values, and the clashes are amazing to see. The ways in which they struggle, often very successfully, to say, “It’s not that this person is not valuing my explanation because it doesn’t fit the facts. It’s because they have a fundamentally different aesthetic for what explanation should look like.”

Jim: You know what that reminds me of? There’s all this debate about what the freshman distribution courses should be. I got a proposal for you. Maybe you should propose it to the head honcho there at Carnegie Mellon. How about a first semester freshman course on how to use many lenses, and have people come in from each discipline and attempt to distill in one hour what lens is the principal lens in their work. Maybe have a reading to go with it for the next class. Maybe it’s two classes per lens: one discourse on the lens by the practitioner, then some readings, and then the practitioner comes back and has the students say how they saw that reading through the lens. Wouldn’t that be interesting?

Simon: So obviously [inaudible 00:18:37] at Carnegie Mellon we’re miles ahead of the game and we’re on this. We run seminars here, freshman seminars. One of the ones that I did not participate in, but would love to have been in, was a historian, if I understand correctly, and an epidemiologist talking about the Black Death, the bubonic plague in 16th and 17th century Europe. The way a historian talks about and attempts to explain that, and the way an epidemiologist talks about and attempts to explain that, they’re complementary. But at the second order, you also start to get a sense of what makes a good historical explanation before you even get to the data, before you even get to the archive. What does a historian value? What does an epidemiologist value? That kind of stuff. You could overdo it, of course; you could make everybody into someone who has second order opinions but no first order abilities.

Simon: You obviously want students to get really good at running a set of equations forward, or understanding system one and system two in psychology. You want them to have the first order explanatory powers. Maybe this happens freshman year, maybe you come back to it senior year; you wrap all the way back around on the tour to reflect upon the last four years, the explanatory values you got really good at and the ones you might have neglected or in fact even been trained out of.

Jim: That’s even more interesting. Then there’s the second, or is it the third, order, which is the discernment to know what lens to use in which situation?

Simon: Well right, wouldn’t we love that. We have eight different selves; we know the right self to pick at any point in time. A big chunk of this is metacognition: not just knowing things, not even just knowing what you know, but having some awareness or some ability to understand the ways in which you came to know something. Knowing here, for Zach and me in this paper, explaining is one part of knowing, but that’s [inaudible 00:20:56]. Being able to say, “Well, what is it about this explanation that gives me the sense of knowing the answer?” Is that a valid, is that a reliable, is that a productive way to go about the knowing task in this particular context?

Jim: And we’ll get a little bit to how to trade those off a little bit later. Let’s dig down just a little bit. If you could make the distinction between unification and power? They’re similar but they’re different, why don’t you distinguish between those two?

Simon: Right, so a powerful explanation is one that can cover a lot of different facts. It’s making predictions about all sorts of different things. So, obviously electromagnetism is a powerful theory, but if we get a little bit lower level, you may have an explanation for your social circle that is really good. It says, you know what, if you say this to Jim, I’ll tell you, he’s going to throw you out of the room. But if you say this to Sam, Sam is going to, you know, meditate on it for weeks and tell you you’re a genius. If you say it to Mary, she’s going to nod like she believes you, but later she tells you what an idiot you are. You can go on and on.

Simon: So a powerful explanation is one that makes a lot of specific claims about the world. Now, maybe they’re accurate, maybe not. If you buy the explanation, maybe you think they’re accurate or going to be accurate. For example, it’s the kind of explanation you want a consultant to have. You want him to say, “Look, I got an account for what’s going on in HR. I’m going to tell you what’s going on in your cashflow.” Maybe this is something you want out of a car mechanic. “Well, I’ll tell you what’s going on here. Your air conditioner is shot, your alternator is about to go, and it looks like the undercarriage is starting to rust.” So, he’s explaining all of the bumps and knocks that are going on, and it’s a powerful explanation in that case. It’s telling him what to do, how to work on your car. Powerful explanations, maybe we like them. To be clear, these are theoretical values, meaning that when I call an explanation powerful, I’m saying it’s not about how well it fits the stuff I have. It’s about the way it looks in isolation from the facts I happen to have.

Simon: Powerful explanations, though, may not be unified. They may not, for example, tie the world together. They may not say this thing that we see is dependent upon that thing. The car mechanic’s explanation for all the things that are going on in your car may be powerful, but probably it’s not going to be unified. Cars have many different things that go wrong. They probably go wrong roughly all at the same time, but the underlying causes are not correlated. What caused the alternator to go bad is the plastic on the insulation started to fragment and fracture. What caused the undercarriage to rust is that you parked it on the coast. So unification rates as a separate value, and in fact these can sometimes compete. So let’s take reptilian lizards from Tau Ceti. That explanation will be very unifying, but when you actually say, “All right, what are the reptilians from Tau Ceti doing tomorrow?” He’ll say, “Well, you know, these guys are pretty unpredictable. I can’t tell you. Maybe they’ll shut down the internet, but maybe they’ll shut down the air travel. Who knows?” So again, you think of these as different axes along which an explanation can be valued or not valued.

Jim: Interesting. Okay. And that’s closely related to the concept of co-explanation. Why don’t you lay that one out and how it fits with both of the ideas of unification and power? How about that?

Simon: Well, this is great, Jim, because you always worry. It’s like you give your [inaudible 00:25:18] speech. It’s like, is anyone actually listening? So it’s great. Thanks for reading the paper. So, if we have unification on the theoretical side, it’s got its partner, its empirical partner. You know, in physics you have the supersymmetric partner. This is the empirical partner. So unification is a property of a theory in isolation. Karl Marx: it’s all the dialectic of history and the war of the proletariat. That’s the theory. The co-explanation is, all right, let’s look at the actual facts that we have right now. Let’s look at the stuff we’ve got sitting there, and the extent to which your explanation says that those particular facts are related to each other. I mean, there are exceptions, but generally a theory that’s high in unification, if it’s correct, will also be high in co-explanation. It will say, “Oh, that came up that way. This other thing came up that way. That’s exactly what we predict. We predict when one is up the other’s down and vice versa.” So co-explanation you can think of as, let’s say, the cash value of unification.

Jim: That makes some good sense, actually.

Simon: It’s funny, one of the things that’s in play for us, and this came out, I didn’t really realize this, but it came out when we presented this to the philosophers. One woman said, “Look what you’ve got here, sort of an Aristotelian theory.” What she meant by that is, there’s a lot of different virtues. There’s a lot of ways in which an explanation can be good. The key here is moderation. You don’t want to be a coward, but neither do you want to be foolhardy. Courage, the virtue of courage, is somewhere in between. You don’t want to be somebody for whom the world is just a buzzing, disconnected fuzz, but you also don’t want to be the person who believes that aliens from Tau Ceti are controlling everything.

Simon: You don’t want to be somebody who forgets about unification, who says it’s not of value. Neither do you want to be a person addicted to unification. You want to be somewhere in the middle; you should value it. And the real challenge here, and again, I’m not an Aristotelian, but I was told I came up with an Aristotelian theory. The virtue here is to value the values correctly, not to overweight, not to go to an extreme. Not, for example, to be too enchanted by something like co-explanation, not to be too obsessed by it, not to be too wowed by it. To value it, but not to go overboard, not to make it the only value, not to make an idol out of it, let’s say.

Jim: Except suppose it really was reptiles from Tau Ceti?

Simon: You know, we do have a line in the paper, “Sometimes conspiracies are true.” And I think that’s the case. If you will never believe, if you have some sort of psychic block or you have some prejudice, you’ve listened too much to teacher. If you will never believe that occasionally a small number of people get together in a room and decide something that ends up affecting billions of people, then you’re missing out, you’re missing out on life. You’re also missing out on the ability to make sense of the world because yes, sometimes a small number of people working in secret do actually make things happen.

Jim: It’s relatively rare. Being someone who actually has worked in the White House, Wall Street and big corporate America, I can tell you, first, those people ain’t that smart. And the old mafia rule, two people can keep a secret as long as one of them’s dead, is also a truism. So while there are the very occasional examples of successful conspiracies, kind of my meta heuristic is those two things: them people ain’t that smart, and they all fucking blab. So the expected value of a conspiracy is low. Hence, you wouldn’t expect to have very many of them. But that’s not to say that the number that exists is zero, which is interesting. So now let’s turn to another area, and I think this is one I probably fall into using too much. I’m a sucker for the parsimonious, concise, simple explanation. Could you, for our audience, tell what you mean by that, and then what both the good and the bad of shaving with Ockham’s razor are when we’re thinking about explanation?

Simon: So, parsimony, and it’s funny. When we wrote this paper, I can’t remember if we actually kept this sentence, but the more Zach and I talked about this killer word, simplicity, the more it started to look like a million different things. So our line, I think it’s in the paper, is “simplicity is complex.” What simplicity is, we know that this is a value. Just roughly speaking: it’s simple, I love it. Suppose you or I were debating something, and I said, you know what, Jim, simply said, X. When I use that phrase “simply said,” what I’m really doing is saying X is a good explanation because it’s simple. If I were to say, “said in a super complicated way, X,” or “to be complicated about it, X,” you would think I was criticizing an explanation. But if I said, “let’s get simple, simply said, X,” you know that I’m trying to promote it.

Simon: So the better question is, what the hell do we mean? What is simplicity? One answer, since we have to give some definition: our definition is how easy it is for the explanation to stick, but now the word easy is doing all the work. Parsimony is an example of simplicity. Parsimony is literally things like count the words. You have more advanced versions, like count the number of bytes in the computer program that produces the explanation. There are all sorts of ways into this problem. But simplicity is up to us. It’s this kind of value that we get closer and closer to. We can touch it on different edges, but we can never really get right to the center. There’s all these different ways that people will look at an explanation and say, simple. For us, we’re still kind of wrestling with exactly what simplicity means.
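[Editor’s sketch: the “count the bytes in the computer program” idea can be crudely cashed out with an off-the-shelf compressor standing in for program length. This is only a rough proxy for the description-length measures Simon alludes to, and the two example “theories” below are invented.]

```python
import zlib

def description_length(text: str) -> int:
    # Compressed size in bytes: a computable (and imperfect) stand-in
    # for "the length of the program that produces the explanation."
    return len(zlib.compress(text.encode("utf-8")))

# A theory that repeats one underlying pattern compresses far better
# than a theory that posits forty unrelated mechanisms.
unified = "epicycle " * 40
baroque = " ".join(f"mechanism_{i}" for i in range(40))
```

On these strings, the repeated pattern compresses to far fewer bytes than the forty distinct terms, which is the intuition behind preferring Copernicus to epicycles upon epicycles.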

Jim: Of course, historically, I think one of the great examples of where the scientific community came to use Ockham’s razor simplicity to make a big leap was the Copernican revolution. At the time, Copernicus’ theory of the earth rotating around the sun didn’t actually fit the data any better than the Neo-Ptolemaic one with its epicycles within epicycles within epicycles, but it was a shit-load simpler. I think people were attracted to the simplicity of the argument, as opposed to, by that time, the very Baroque extensions upon extensions upon extensions of the old Ptolemaic theory to make it continue to predict as well. And then of course, we got better observations, the development of telescopes soon thereafter. Next thing you know, the problem was resolved. But I think that was a very interesting example where simplicity actually was a big thumb on the scale of people’s evolution of thinking about that particular question.

Simon: I think that’s a great example. And sometimes, Jim, the Copernican revolution is our origin story for simplicity. It’s this great moment in intellectual history. Now of course we overblow this a little bit. I could talk to any historian of science, they’ll cancel you in a minute for the way we’re making a cartoon here, but let’s use the cartoon. It’s such a clear case where simplicity is obvious to us and it’s the right call. That’s a great one. In fact, with Zach, that was one of the first ones we debated: how can we quantify the simplicity of the Copernican revolution? Obviously, many people have tried to do that and we can talk about those efforts. That’s a really, let’s call it a simple case. That’s a case where it’s pretty easy for us all to say, “Yeah, I get that.”

Simon: It’s a little bit like, why is a Botticelli painting beautiful? You know, we all agree it’s beautiful. We know that’s something in play. But when we come to the 21st century, things are starting to look a little trickier. It turns out that there are many different ways to quantify simplicity and that they’re sometimes in conflict. One of the ways that Zach and I give as an example, one of the ways people have tried to quantify simplicity, is something called the Akaike information criterion, AIC. So AIC is a little piece of machinery. You dump your theory into the machinery, it grinds it out the other side and gives you a rating. It’s a number that starts at zero and goes up, and the higher it goes, the less simple the damn theory is.

Simon: People love this. In fact, it’s sitting inside most of the contemporary statistical programs. So if you’re an economist or a psychologist or an epidemiologist, let’s say, and you boot up Python or R, one of these programs, and you say, “Hey, look, do a linear regression on some stuff I care about.” It will say, “Okay, look, here’s how well it fits.” Meaning, here is the empirical value of your regression. And it will also say, here’s the AIC. Here is our measure of how simple your linear regression is. And the simplicity of your linear regression is a theoretical value. It tells me nothing about how well it works; it’s just saying, you know what, it’s pretty simple or not so simple.

Simon: That is a way to cash out simplicity in a number and to reliably compare relative simplicity, reliably meaning not correct, but it just gives you the same answer every time. The only question, and I have it and many people do, is: is that the right measure? If all of science just switched over to using AIC, would we have solved the simplicity problem? My sense is no. God forbid we should ever offload our judgments of simplicity to an algorithm. But certainly, as we’ve gone from this intuitive sense that the Copernican revolution was good for us, you get all the way to the other side and end up saying, you know what, maybe the problem is harder than we thought.
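[Editor’s sketch: for concreteness, here is roughly the comparison those statistical packages make under the hood. For a Gaussian least-squares fit, AIC works out to 2k + n·ln(RSS/n) up to an additive constant, where k counts the fitted parameters; the data points below are invented, and the parameter-counting convention (including the noise variance in k) is one common choice among several.]

```python
import math

def aic(n, rss, k):
    # AIC for a least-squares fit with Gaussian errors,
    # up to a constant that cancels when comparing models on the same data.
    return 2 * k + n * math.log(rss / n)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]  # invented data, roughly y = 2x + 1
n = len(xs)

# Model 1: least-squares line y = a + b*x.
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
    (x - xbar) ** 2 for x in xs
)
a = ybar - b * xbar
rss_line = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

# Model 2: constant model y = mean(ys).
rss_mean = sum((y - ybar) ** 2 for y in ys)

# k = intercept + slope + noise variance for the line;
# k = mean + noise variance for the constant model.
aic_line = aic(n, rss_line, k=3)
aic_mean = aic(n, rss_mean, k=2)
# The line pays a penalty for its extra parameter but fits far better,
# so its AIC comes out lower: AIC prefers it.
```

The penalty term 2k is the “simplicity” side; the RSS term is the empirical side. AIC is one fixed recipe for trading them off, which is exactly what makes it reliable and exactly why it may not be the right measure.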

Jim: Yeah. And not all problems are reducible, right? We don’t understand what is driving the complexity. There’s the famous H.L. Mencken quote, which goes the other way: “Every complex problem has a solution, which is simple, direct, plausible, and wrong.” Now he’s gone a little too far the other way, which is to avoid simplicity as being itself wrong. So this kind of gets to our conclusion here. It seems like what you guys are saying is there’s lots of different ways to think about explanation, but perhaps there needs to be some higher order way of thinking about how we think about explanation. Sort of the end of your title, let me pull it back up here. How do you apply Bayesian reasoning to thinking about these various explanations, and which methods to explain explanations are valid? Even how to make sense of all this?

Jim: What can you say as kind of closing comments for our audience? Now that we know that we have lots of different ways of thinking about explanations, how do we think about when and how to use them and how to evaluate whether our using them is sensible? All that kind of stuff.

Simon: There are two things that I think this paper is trying to do. One thing is to give us a better sense of how humans work. How the human brain works, how the human mind works; I say mind, not brain. We want to know, what are we doing when we explain? What are we doing when we accept or reject an explanation? As somebody fascinated by the species, I want to know that. This work here that we’re doing, you might say it’s a little bit like figuring out what a human’s vital signs are. What are the different vital signs of an explanation? What are the ranges in which we think there’s health and illness? So that’s one piece. We want to understand the mind.

Simon: There’s another piece here, which you might call the engineering side, which is, can this give us a path forward to thinking more clearly, in a better, let’s say, second order fashion about how we should be making explanations? So you and I touched upon this a little bit, Jim, in the discussion of education. If the first part of this work is figuring out the vital signs, the second part of the work is, well, how do we make ourselves healthy? What is health? What are good explanations? If variability of heart rate is a sign of health, how do we get people’s heart rates to be more variable? Is it exercise? Is it diet? In the same way, for the question of explanations, we want to ask, not in a top-down way, not in an authoritarian way, that’s not our style: how can having these axes help us improve, help us get better?

Jim: Interesting. I think that’s good. Any final, final thoughts in the explanation of explanation?

Simon: There’s a lot of stuff we’re looking forward to doing, and you and I have touched on this a little bit, Jim, in passing, which is the defects. Which is the aliens-from-Tau-Ceti problem, where from our point of view, the things that are good for us are also bad for us. Right? You know, if you don’t have enough carbohydrates you starve to death, but if you have too many, maybe you have a metabolic syndrome. So one of the big questions we have now, looking forward, is understanding the pathologies better. A philosopher, Quassim Cassam, a great philosopher, gave us the phrase “vice epistemology.” And for us, I think the really interesting social level question is the vices of explanation, not necessarily the virtues, right now.

Jim: That’s a wrap right there. Thank you, Simon, for a wonderful and interesting and hopefully useful episode.

Simon: Thank you, Jim. It’s wonderful to be here. It’s wonderful to be online with you again.

Jim: And we’ll do it again soon.

Production services and audio editing by Jared Janes Consulting. Music by Tom Muller at