Transcript of Episode 31 – Forrest Landry on Building our Future

The following is a rough transcript which has not been revised by The Jim Rutt Show or by Forrest Landry. Please check with us before using any quotations from this transcript. Thank you.

Jim: Howdy. This is Jim Rutt, and this is the Jim Rutt Show. Listeners have asked us to provide pointers to some of the resources we talk about on the show. We now have links to books and articles referenced in recent podcasts available on our website. We also offer full transcripts. Go to our website for those. Today's guest is Forrest Landry, the founder and CEO of Magic Flight, a company that was among the first to introduce the portable vaporizer to the world.

Forrest: It’s great to be here Jim, and I’d love to talk to you.

Jim: Glad to have you. This should be an interesting conversation. In addition to his interest in business, Forrest has many other interests. Under the auspices of the Ronin Institute, he does research into things including the manner and degree by which product and systems design influences culture and ecology, and he explores the nature of the interface between the organic and the inorganic, particularly as realized in the relationship between concept and computation. And he also researches the manner and models by which effective personal and social governance could potentially be achieved. And that's just on his research side. He's also a philosopher with some serious work in both ethics and metaphysics. Big stuff. But we're up for it. We'll go there. Why don't you tell us just a little bit about Magic Flight?

Forrest: Well, as you mentioned, the organic and inorganic. So it’s a woodworking company. And so we try to emphasize the use of materials that are renewable, and experiences that are essentially very much first person, not necessarily defined in terms of some interface or some compute boundary. And to really basically look at what are very, very clear and simple solutions to what would otherwise be difficult problems. So in effect, design has a lot of interesting challenges associated with it.

Forrest: So we try to essentially go beyond just the traditional methods of thinking about design, and look at tried and true, but also to really bring that forward and to learn what is a complete system solution in this space of problem solving. Magic Flight looks at that from a point of view of how can we do this from the industrial design? How can we do this with various kinds of tooling? What is it like to talk about the experience of the employee and the experience of the customer in an integrated way? So we don’t really believe that there are necessary trade offs between those kinds of things. A lot of times we can find design solutions that allow us to get all that we want. And that’s what we strive for.

Jim: That’s great. Beyond your business life, you’re working on some of the biggest issues in the world. What is your motivation? How do you see the current state of the world? What is it that’s motivating you to do the work that you do?

Forrest: Well, in short, I have a great regard for the fact of being alive at all. So in effect, I consider to be able to breathe and to look at the stars and have an experience [inaudible 00:02:55] to be really a very fundamental gift. And I think that in one sense I’m wanting to be in service to have essentially other people be able to experience that gift as well. So we’re thinking about our children in the future and stuff like that. So I guess you could say that in a fundamental way, I’m really in service to nature, I’m in service to the future of humanity, and to thinking about those things in a very strong way. I love the wild, I love to preserve essentially really good healthy experiences, and to essentially preserve the capacity for us to thrive.

Forrest: I think that it's not necessarily the case that we have to have some sort of, again, trade off between human wellbeing and natural wellbeing. I think both can be achieved in a really good way. And that's a lot of what I'm looking at: what motivates me? What is it that I'm really passionate about? How can we create thriving in the long term, not just in the short term, but for the next 1,000 years? How could we make it so that the world is a genuinely beautiful place and a genuinely healthy place to live in, and the people really like being here?

Jim: What do you make of the current status quo, the way the world is currently operating with respect to those goals?

Forrest: Well, I don't think I need to go very far to say it doesn't look very good. Right now a lot of our choice making processes are not grounded in things that actually have a long-term stance. In fact, they're not even optimized very well for a short-term stance. And so in effect, when I look at the choice making that is happening in the world today and the ways in which we're building things, I see a lot of lost opportunities, a lot of lost resources that are essentially squandered. We are currently in a situation where there's tremendous opportunity, and we hardly even know it.

Jim: Yeah. We’re on an [inaudible 00:04:36] seems to me where the future is ours. It could be more glorious than anything most humans have ever imagined or it could be the worst of disasters, and it’s all up to us.

Forrest: Pretty much. And so in effect there is a real choice here. I mean, we can either go into a dystopia or we can make the world a healthier place to be in. So in effect we have to know how to make a choice like that. Like what does it mean to be conscious of the values from which we’re choosing?

Jim: Yeah. We'll get to that in a minute. But before we do, I actually dove a little bit into your work on philosophy, and I thought some of that might be useful to surface as a foundation. Let's start with your thinking about ethics. To the degree that I understood it, and I am by no means a philosopher, you seem to make a distinction between ethics, which is essentially a fundamental process of thinking about the world, and morality, which is more arbitrary and defined within a specific domain. Could you say a little bit more about those?

Forrest: It's a bit like the difference between principles and rules. A principle effectively is a general sort of heuristic that says, okay, well if we're in a world where these kinds of things are the case, then it's probably going to be important for us to think about these kinds of issues. And so in effect, when we're looking at rules, we're looking at things which are very, very specific to a particular situation. So say on an email forum or something like that, a rule might be don't type everything in capital letters because you can't distinguish between common words like Polish and polish. So in effect, there's these background ideas that are very, very general, that get translated into specific ideas that are relevant within a particular context of communication.

Forrest: And when I say communication, I’m thinking of all forms of communication. Physical action, building things, driving down the street, all of those kinds of interactions can be thought of as a communication between self and world in a general way. So if I’m going to talk about choices, first of all I have to say, what is the basis of the choice? Am I just basically following a rule that is specific to a particular world, or have I checked to make sure that that rule actually makes sense within a greater context of values?

Jim: Could you give us an example?

Forrest: Well, as I mentioned with the internet forum, there are certain things which make a lot of sense as far as good behavior and what actually constitutes communication that fulfills the function of communication. We make sense of our lives, we understand choices that we're making a little better, and so on. When we're talking about ethics, what we're basically saying is, okay, well what are the general principles of communication that apply in every forum? So, for example, if I have a medium of communication where if I just say the words 'hang up,' it hangs up the phone, then all of a sudden I now have a limit to what I can actually say in the communication channel, because things that are content are now going to affect the context.

Forrest: So in effect, we identify that there's this principle that if I have a relationship between content and context such that there's a conditionalization on the context based upon the content, then in effect, I limit the capacity of the communication channel to carry information. In effect, there are certain things that happen in the world that basically limit communication, and we find that those are less desirable. We translate that into a specific situation. So for instance, if I am in a conversation with somebody and I pull out a gun and I shoot them, that ends the conversation. I mean, there's a very fundamental change of state there. So there's a sense of recognizing that there are these general principles about communication itself. What does it mean to have reciprocity? What does it mean to have symmetry?

Forrest: Information theoretic kinds of ways of thinking about it, which are very, very abstract, get translated into concrete realities like thou shalt not kill as a commandment; there's a relationship between these two things. And a lot of times as the world changes, we have to rethink the relationship between the principles and the rules. So as everybody knows, the world has gotten much different than it was, say, 200 years ago. The introduction of technology, and the internet, and cars, and planes, and all that, all of this stuff has really drastically changed the world in which we live.

Forrest: And so in effect, what we’re trying to do now is to say, okay, how do we understand what are the choices that we need to make? What are the principles that we need to apply to in effect, even understand what good set of codes would be in a particular domain of action? So in effect, it’s like the conversations that are the basis of policymaking or the conversations that are the basis of judicial process or things like that. In effect, we’re just trying at this point to understand how do we understand the situation well enough so that the choices about how to implement things at the level of policy, at the level of rules actually make sense relative to what our underlying values are.

Jim: Yeah, that does seem to be the challenge of the time. And can we find a firm basis on which to craft a set of operating rules, which I think you call morality, for our time?

Forrest: Yes, that's correct. So in effect, you asked about why the philosophy is important. Well, if we treat the topic of ethics as being the principles of effective choice, right? So now we can say, well, immediately, what are those principles? But we also have to have some notion of what do we mean by effective and what do we mean by choice? One of the great dilemmas of the time, and this is one of the reasons why metaphysics is actually important, is that we really need coherent thinking about the nature of the relationship between choice and causation. At the moment there's a lot of really good thinking about causation. We have science and technology, which are effectively disciplines of how to think about causation in a really good way. We've learned about reason. We've, in effect, developed clear thinking tools in that space. But we don't have similarly good tools for thinking about what do we mean by choice? In fact, if we take science and technology at face value, they would assert something along the lines of maybe there isn't a choice.

Forrest: Maybe the world is deterministic. Maybe there's some sense in which the notion of choice is an illusion. And without really addressing the fundamental basis of choice and how we understand that concept, the concept of ethics itself doesn't end up having a foundation. So one of the things that the metaphysics does is it gives us a clear way of thinking about the notion of choice, and therefore a clear way of thinking about the notion of effective choice as a subclass of choice. And then we can start thinking about what are the principles of effective choice. And all of this is pretty important because if we don't have a clear sense as to what those principles are, then there really isn't any clear thinking about the nature of what an effective choice actually is in any particular context. In effect, we can start with these really abstract things, but ultimately it ends up being about pretty practical issues.

Jim: Yeah. Let's get back to choice soon, but let me take a little dive into metaphysics. People who listen to the show know I'm known to say from time to time that when I hear the word metaphysics, I reach for my pistol. Historically, I've been one of those people who goes, metaphysics, what am I [inaudible 00:11:31]? Just give me epistemology, that's all I need. However, I did read some of your material on your metaphysics, and it was at least intelligible. Let me run back what I thought I heard, and you can tell me whether I was hearing it right or not, and then I can give you my critique on what you had to say. My take was that you were trying to bridge the gap between realism and idealism.

Jim: However, my take was that the realism that you put forth was essentially a straw man version that really isn't relevant to what realists think today, and that it seemed to be only about hardcore physical matter, when practicing realists now take into consideration dynamics of all sorts. I mean, we know things like systems theory, network theory, complexity science, higher levels of cognitive neuroscience, et cetera. They're all about change over time. It's a distinction I've called the distinction between the dancer, which is the matter, and the dance, the dynamics. And if we consider all those to be part of realism, do we actually need metaphysics? And if we do, tell me why.

Forrest: Well, I actually agree with everything that you've been saying. So, first of all, let's be clear about what we mean by metaphysics, because I think the observation you're making is, hey, when we're talking about metaphysics, what are we actually talking about? The whole new age movement has largely made it a topic that's not intellectually coherent. So the first thing to ask is, what are we talking about? Well, metaphysics is the questions of what is and how do we know? It's more or less an academic discipline the way I'm thinking about it. So I have great appreciation for the sentiment. So in effect, it's not so much from my perspective about bridging the relationship between realism and idealism so much as actually creating a foundation for both of those concepts.

Forrest: Instead of saying one or the other of those weightings is true, what we're really trying to do is to say, how do we support these concepts from an even deeper basis so that this doesn't end up being a contest? The other thing is that I actually agree with the notion that it's not just about matter, it's also about the interactions, and the dynamics, and their progression through time. And in addition to that, I would add that there's actually a third category. So there's the stuff in space perspective, then there's the forces in time perspective, and then there are the probabilities and possibilities. So in effect, what we're essentially saying is that it's not just about the dynamics, but it's also about the sequences of possible histories.

Forrest: When we look at quantum mechanics, for example, we look at things which have to do with alternate situations like if this particular measurement happens and this state is determined, then these other states become the possible futures. And so in effect, there’s a many worlds interpretation, and we see ourselves as basically being in one context or in another as dependent upon measurement process. So in effect, we could think of measurement processes, not just being a distinct thing, but also a kind of dynamic that is the interrelationship between observable objective states and virtualized potential states as mediated by say one of the wave equations. I think that the notion of dynamic or the notion of realism is actually a very strong and very coherent point of view, and it’s actually bracketed by these two other frames of reference, the actuality and potentiality.

Forrest: I think that when people look at realism and idealism in the older way, they tend to think about it as either being deterministic in the sense of defined by external phenomena or defined by matter, versus in some sort of subjective sense, as in who's the observer, who's making the measurement, what moment in time is the selection, or which possible worlds we're going to end up in. In effect, it's not so much that the ways in which I'm thinking of these are straw men or caricatures, so much as they're trying to look at what is the fundamental idealized state by which to understand these concepts.

Jim: Okay, that's good. Yeah. I continue to remain a realist, but with the distinctions that I made: I do believe there is a real world out there. There's only one of them, probably not a multiverse, at least not in the quantum mechanical many worlds sense. In fact, I put my flag down on the 'as if' many worlds interpretation, which is: do the math as if many worlds, but by some process we don't understand, only one of them becomes real.

Forrest: I think we actually [inaudible 00:15:58] on all of this. I mean, the way you're describing realism to me sounds very coherent. I think the only adjustment that I would be making is instead of attaching the notion of real directly to the objective, which is what realism would normally do. I mean, idealism in the complementary way would say that the notion of real is attached to the subjective. In this particular orientation we're saying that real is attached to the relationship between the object and the subject. So in other words, the interaction is real. And anything we project as being what that interaction tells us about the objective world, or what that interaction tells us about the subjective, those are epiphenomena of the notion of real, which lives at the interaction level.

Jim: Yeah. I think it's also, and this is yet to be fully determined, but at least my sense is that most likely we're going to discover that the subjective is less spooky than it seems. That once we fully understand how consciousness works, for instance, we're going to be surprised that we ever got ourselves so tangled up in knots. That the subjective is a biological process like any other; biology of course is compounded of first chemistry and below that physics, and it just is what it is. And it's an emergent phenomenon from the way matter and the dynamics of matter are ordered. And particularly in animals it is essentially an emergent dynamic process at the intersection between perception on one side, memory on the other, where memory can be both genetic memory and lifetime memory, and some form of processing in between. And it's probably no more than that. In which case the whole idea of a hard problem just goes away. It just is what it is.

Forrest: Well, there's a lot of subtle things associated with that. We're going to get far afield from thinking about choice and how to do governance and things like that if we go down this particular road. I mean, I personally am not in favor of the many worlds theory myself. I tend to be more on the Copenhagen interpretation side. But when we get to the whole notion of can we derive a first person perspective from a third person mechanical world, or can we think about choice purely in terms of causation, can we actually model consciousness in terms of compute? Based upon, again, a lot of thinking about this kind of thing, I actually think that the notions of compute and consciousness are distinct. And that in effect, we don't really have a coherent notion of consciousness because we're trying to think about it in terms of compute.

Forrest: But that's a whole can of worms, and we could obviously spend days talking about this sort of thing. I guess the thing is that if we're looking at what are the principles of ethics and how the ethics translate into choices about the world and so on and so forth, we can either try to reconcile the relationship between choice and causation at a fundamental level, or we can do pretty much what Western legal systems have done, which is assume that there is consciousness, assume there is choice, assume there is a subjective, and that the subjective has some sort of notion of choice, and then try to figure out what jurisprudence looks like after that. Again, it depends upon what conversation you want to have.

Jim: Yeah. And I do like to dive into these foundations a little bit, and then we'll pop up to a more applied level. But again, when I hear conversations about things like free will, or determinism versus randomness, it strikes me that that all misses the point that what we have is a specific mechanism. Let's call it a human body with a brain in it, a series of resonances between the neurons, and the body, and the stomach, and the perceptions from the senses, et cetera. And stuff comes in and then affordances are triggered, and that's it, I think. To ask, is there free will? That seems to me a question that has no meaning. We know that we do trigger affordances, that is, we do things in response to our genetic and lifetime memories and in response to our perceptions, but is there anything actually useful in trying to call that free will?

Forrest: Well, I don't tend to use the notion free will. I mean, the notion of free and the notion of will both have certain complications associated with them. I use the word choice fairly specifically: there's a range of potentials, some sort of selection, and then there's a consequence. In the way I think about things, there's no such thing as absolutely free without having at least some, maybe multiple, limitations. And there's no notion of limitation that doesn't have some degree of freedom associated with it. And these are implied in the structure of the relationships. In a sense, when we're getting into … Can we model the whole thing just in terms of molecular biology, or can we model the whole thing in terms of some sort of determinism?

Forrest: I've looked at those questions pretty closely, and I think that really the answer is just no. In effect, when we look at reductionism as a fundamental principle, we basically say that everything that is knowable about chemistry could in principle be defined purely in terms of our understanding of quantum mechanics. Or take the general standard model, and try to see if that in combination with quantum mechanics can be used as a basis to derive everything that is knowable about chemistry. Well, as some people have observed who have deep knowledge in both topics, it turns out there are phenomena in chemistry that just in principle can't be explained on the basis of the standard model.

Forrest: So in effect, there are some kinds of emergent phenomena that we just don't have the capacity to derive in any kind of first principled way based upon the notions of causation in the 'prior domain of physics.' And this is actually very surprising, because if we look at the notion of reductionism as a philosophy, as a principle, and we take the exemplar of the strongest likely relationship, the relationship between physics and chemistry, as the quintessential exemplar of a hard science reductionist perspective, it turns out that not even in that case does it work. Therefore, the notion of reductionism itself has to be called into question.

Jim: Yeah. And that's very true. And of course, that's one of the core foundations of complexity science: in fact, the very definition of emergence is that it cannot be predicted from lower level states. Actually, we can get out of having to worry about deterministic worlds even more easily than that, from a practical perspective. Even if we retreat to the physics, some surprisingly simple physical models fall prey to deterministic chaos, where even the tiniest differences in the initial conditions produce wildly different trajectories of even a very simple physical system. Therefore, from any practical perspective, even if you had a whole universe made out of computronium, you still couldn't calculate the trajectories of some surprisingly simple physical systems. The worry about determinism from either a practical or theoretical perspective seems to me not worth pursuing.
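Jim's point about sensitive dependence on initial conditions can be sketched with a minimal example. The logistic map below is a standard textbook illustration chosen for this transcript, not a system either speaker names:

```python
# Deterministic chaos in the logistic map: x_{n+1} = r * x_n * (1 - x_n).
# At r = 4.0 the map is chaotic: two starting points that differ by only
# 1e-12 diverge to order-1 separation within a few dozen steps, so
# long-range prediction fails even though the rule is fully deterministic.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    traj = [x0]
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
        traj.append(x)
    return traj

a = logistic_trajectory(0.4)
b = logistic_trajectory(0.4 + 1e-12)  # the "tiniest difference" in initial conditions

# Track how far apart the two trajectories drift at each step.
gaps = [abs(p - q) for p, q in zip(a, b)]
print(f"initial gap: {gaps[0]:.1e}, largest gap reached: {max(gaps):.3f}")
```

The exact numbers depend on floating point details, but the qualitative divergence is robust: the practical unpredictability Jim describes requires no indeterminism in the underlying rule.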

Forrest: I think we're agreed on this. The interesting thing about what you just described, I call it micro state amplification: things that are on the extreme right side of the decimal point end up becoming macroscopically important. And so in effect, we can look at, well, what is knowable about the micro state of the domain? Well, ultimately, there's an information boundary there. There's a place where we can see that there are actual limits as to what is possible to know at all, even in principle. In effect, when we go into these things, we say, okay, well there is a notion of other than what is knowable; there's essentially not just the known and the unknown, but there is actually the unknowable.

Forrest: So then when we go thinking about the nature of choice and so on, there's now a basis for us to say, okay, well there's more than just the feedback of the physical system that we're looking at. There's also this something else. And we could again skip over a whole lot of intermediate levels, but at a certain point we can say something like: utilitarian ethics is based purely upon feedback mechanisms and objective metrics and criteria and stuff like that. Optimization problems, to some extent, aren't the whole story; there needs to be some notion of a value ethics, a way of thinking about what do we really care about? What really matters here? What are the kinds of things that are fundamentally our basis of choice that don't come from some sort of feedback mechanism?

Jim: Yeah. Let's go there. But before we do it, I want to make one last digression into foundations. And this was something I did not know you were interested in until I started doing my research a couple of days ago, and I was very interested by the little tiny one page paper you wrote called The Epistemic Sandwich, because this is an area I'm very interested in: time. And I am generally aligned with the group of people like Unger and Smolin who believe time is real, and that time may be more real than space. And you made a pretty interesting argument. I wouldn't say it was a lead pipe cinch, but it seemed a good way to get to a non entropic arrow of time, at least in the range of space in which we live. I thought that was quite interesting.

Forrest: Great. Well, I'm glad that you had an interest in that. I tend to find myself in a fairly interesting position. On one hand I can definitely count myself in the camp of saying that time is in a certain sense more fundamental than space, and more fundamental than possibility. So in effect, there's this bisymmetric relationship that effectively comes out of that. But then that's only one way to think about it. It turns out that there is this thing called the incommensuration theorem, which basically says there's a completely complementary perspective of how to think about it, which is the perspective that modern physics largely seems to be taking: a third person orientation, what you've been treating as a classical realist perspective, in which time is an illusion and effectively we can talk about the complete world state of the universe.

Forrest: And so in effect, there's this really deep interrelationship between the symmetry and the discontinuity that we see in the foundations of quantum mechanics and general relativity and the physicalist realist perspective, which is a third person perspective ultimately. And then there's this first person perspective which treats the asymmetry of time as fundamental, and the notion of the continuity of consciousness as fundamental. And then in effect, we can be in one way of thinking about it or in the other way of thinking about it, and those two in effect are mutually [inaudible 00:26:38] ways of conceptualizing the universe. In effect, the whole reason that metaphysics is of interest is because it's the basis by which we can derive this principle of the orthogonality between those two perspectives. It in a sense becomes the ordinating basis by which the concepts themselves are really understood. And that makes it actually pretty profoundly important.

Jim: I have to dig a little further into that, because I would say that I tend to align myself more with the view that the arrow of time is real, more in the Lee Smolin mold. And by the way folks, there is an episode on the show with Lee Smolin, where we go into some considerable detail. I'll have to go back and look again at your argument on the orthogonality, and that there are essentially dual ways of looking at the world, which would be quite interesting. I think we've probably spent as much time on these kinds of foundational questions as it makes sense to do. Let's switch to value ethics. It's certainly highly important when we think about operating in the real world.

Forrest: Okay. What do you want to know?

Jim: Tell me what your theory is of value ethics.

Forrest: Well, there's a couple of specific things. First of all, when we think about values, it's important to contextualize that there's a lot of other concepts that people tie in with this. And again, coming from a sort of abstract perspective, I tend to think in terms of meaningfulness and purposefulness. In other words, the notions of meaning, values and purposes have this distinctness; those three concepts are [inaudible 00:28:06]. They're not interchangeable.

Forrest: Obviously, they don't mean the same thing. And we don't really want to substitute one for the other. And a lot of people do this accidentally. They're thinking about values in the way that they would think about purposes. And unfortunately, that ends up creating a lot of confusion, because the way in which meaning works, the way in which values work, and the way in which purposes work are actually very different. They are truly different modalities of how to think about relationships, and life, and so on.

Jim: Why don’t we start with that? I mean, I will say that there’s been a lot of talk lately about meaningfulness, and a lot of it left me scratching my head. I would love it if you could go through those three, and provide your take on what they are and how they’re distinct from each other.

Forrest: Great. Let's take a very concrete example. So I have a toaster. Okay? And as far as I'm concerned, the purpose of the toaster is to cook toast. I put some bread in, I push the button, and a few minutes later I get some breakfast. So as far as the relationship between myself and the toaster is concerned, the purpose of the toaster is defined by something external to the toaster. In other words, I am not the toaster. The toaster has a purpose assigned to it by something other than the object of the toaster. When we talk about values, we're actually talking about something which is completely different.

Forrest: Say I have children, and maybe I'm a farmer in some [inaudible 00:29:37] context, and I might say my son is to replace me and is to essentially assist with the farm and so on and so forth. Well, the son might have his own desires as to what he wants to do with life, and the value of the son isn't necessarily going to be defined just in terms of the functions that he's going to perform for the family. He may decide that he wants to go to school or do something else, and of course these are somewhat modern interpretations. But the idea here is that when we think about the value of something, we're thinking about it as being innate. In other words, not something imposed from the outside, as from the father to the son, but something that occurs from the inside, that is internal to the son himself.

Forrest: For a person, when we talk about values, we talk about them as coming from inside of ourselves and manifesting in the choices that we make in the world. So there’s a kind of origin of expression versus target of expression aspect of thinking about it. So in effect, if we want to look at what the notion of meaningfulness is, meaningfulness in the sense of whether or not the word dog means a furry creature that has legs and barks, the sound in the air, dog, and the particular associations that we subjectively make internally, and the object reference that it has to a furry creature that happens to be in the room sitting next to me, those associations are not held purely in an external way nor purely in an internal way.

Forrest: They happen to occur in the relationship between the subject and the object; they are inherently transpersonal. So in effect, if we were to sum up, we would say that purposes are defined in such a way that they come from the outside. Values are defined in such a way that they come from the inside. And meaning is essentially something that lives in between, in the relationship between the inside and the outside.

Jim: Okay. Now of course, sociologists would tell us that at least from a human perspective, our values don’t really come from inside us. They are actually very strongly creatures of our community, our family, our teachers, our religions, et cetera.

Forrest: Well, I would think that this is actually a classic case where sociology isn’t really interested in trying to come up with definitions of these things that would be metaphysically coherent in the way that I’m using them. So in effect, we actually want both. We want to have the clarity of the terminology and the real results of the anthropology, sociology, and psychology really integrated well. And I think that to some extent, the things that the sociologists are pointing to have as much to do with meaningfulness, the meaningfulness of relationships in the community, the communications that are occurring and so on and so forth, as they do with values. So obviously, when we talk about values, purpose and meaning, although they may be distinct concepts, they are not separable.

Forrest: Wherever any one of these concepts occurs, the other two will for sure occur, whether we’re conscious of that or not. And so a lot of times, because of the necessity of this co-occurrence, people tend to elide the differences between them, and that may or may not be important depending upon what we’re trying to do. If we’re thinking about ethics, and we’re really trying to have a rigorous way of thinking about this, to really be clear when we’re making choices that are profoundly impactful, if we’re thinking about existential risk, or the possibility of civilization collapse, or economic forces that are of huge magnitude, or whether to set policies that may lead to war, there’s a real importance that we think about the ethics from a principled point of view with deep, deep, deep clarity.

Forrest: In effect, there are some real necessities around this. For instance, if we look at the relationship between man, machine, and nature, and we realize that in the technology that we’ve developed we have created capacities to create and destroy the whole world, nuclear war or biotech or things of that nature, then in effect we now find ourselves in a position of really needing to make very, very good principled choices. And so as a result, the clarity and the quality of the ethics that we’re using has to be of just absolutely paramount quality. I mean, if we have the power of gods, we effectively need to have the kind of ethical coherency that a god would have. Not to really take this into religion, but just to really give an indication of just how important it is to be profoundly clear about some of these concepts.

Jim: That’s perfect. Why don’t we try then to work through a specific example that applies ethics in the context of meaningfulness, values and purpose. Let’s pick a real example. Say I’m making a decision on whether to do some experiment with CRISPR, for instance.

Forrest: Well, there’s a lot of different directions we can go with this. You might be familiar with the work of Dave Snowden, who talked about the difference between complicated, that is, the kinds of things that we can do with computation and simulation and so on, and complex, which is situations that are more like nature, where there’s lots and lots of factors interacting, and we don’t actually know the complete state of the system at any given moment. In one sense we can say, okay, how do we make choices in the space of the complex, given that the only thing that we really can control is the stuff that’s happening in the domain of the complicated? And it turns out that it’s pretty important to really understand the relationship between these two realms of operation, because there are some things that, for example, we can’t predict with any amount of accuracy.

Forrest: When we’re making choices in this particular space, we’re finding that we really need to account for whether or not we can do a safe-to-fail probe. That is, can we interact with the complicated world, gather some information about it, and use that information in some way to at least do a proxy of some sort of vague simulations, so we have at least some indication as to what the outcomes might be? But in that situation, we’re still having to have some notion of value. What outcomes do we consider to be successful? What are we actually looking to have happen, and does that match what we actually desire to have happen? On what basis are those things clarified? So for example, if we’re looking at, okay, can we do a safe-to-fail probe in the sense of CRISPR?

Forrest: Well, already it may be the case that we can’t. For instance, we don’t necessarily want to experiment on biological systems, because if the thing has this replicating capacity, then it might be that once we’ve done the experiment, we’ll never actually get to do another experiment again, because we destroyed the system in which we were performing the experiment. In that case, we see that we already can’t do safe-to-fail probes if the probe itself is too consequential. So obviously you don’t test whether or not global nuclear war is effectively going to create nuclear winter, because that would obviously be bad. You don’t get to do that experiment more than one time. So in effect, there’s a notion here of, first of all, what do we care about?

Forrest: Well, just getting the data from the safe-to-fail probe is really only a proxy for our capacity to predict the future, and the capacity to predict the future is itself relative to some sort of value statement. The purposes of our safe-to-fail probes are effectively connected to the meaningfulness of our values. There’s another level here which is also really important: when we think about whether we can actually do experiments of this particular kind, somewhere along the way we realize that we really just can’t always know what the impacts are going to be, even in principle, and that therefore, we have to think about whether we can get clear as to whether we’re doing the right simulation at all. Are we even asking the right questions?

Forrest: For instance, if I do a whole bunch of simulations about factor X, but it turns out that the thing that I’m really looking for is a solution to problem Y, then to some extent I’m investing lots and lots of energy into the wrong problem. In a lot of ways what we’re really trying to do here is essentially to ask better questions. Can we get to the place where we’re even looking at CRISPR and things like that? When we’re thinking about it from an ethical point of view, it’s like, do we actually know what the criteria of success are? Do we have a real sense as to what is the whole systems approach that’s actually going to be satisficing to a whole range of criteria rather than just the one or two that we happen to be conscious of at this moment? So it’s those kinds of things which would be directions in which we would go with something like this. And there are others, but those are the ones that occur to me just off the top of my head.

Jim: Yeah. When I was talking with Dave Snowden here on the show, one of the things I pointed out then and he agreed is that one of the things you have to keep in mind is that every complicated system is inevitably embedded in at least one complex system. And that there is a flow between the two. Take, for instance, a business. A business is basically a complicated system, and yet it interacts with a complex system called the marketplace. Right?

Forrest: Yeah.

Jim: And a farm is a relatively complicated system that would have some complexity to it, but it’s mostly complicated, and yet a farmer’s embedded in an ecosystem, which is a classic complex system. And so trying to understand the couplings between the complicated and the complex has to always be done in the context that every complicated system is embedded in at least one complex system.

Forrest: Yeah, I completely agree. The main thing is to understand that between the complex and the complicated, the complex is stronger; it is actually, in a certain sense, the foundational basis. When we think about the relationship between purposes and values, for example, in effect, the values have to be stronger than the purposes; the values provide the basis for our choice making as to what we’re actually going to attempt to do in the world. And those values aren’t coming from some sort of feedback mechanism. They’re coming from some sort of deeper basis, I hesitate to use the word, but some sort of transcendent perspective.

Forrest: And again, I’m not necessarily advocating a particular interpretation of the notion of transcendent in this particular case, but I am saying that if you take the values and you treat them as having come purely from the objective world, then they are always going to be subject to some sort of feedback mechanism that effectively debases the values. You effectively get gamed by the system. Having some clarity as to the distinction between inward and outward is important. And then having some clarity as to the distinction between things which are conditionalized in a causal sense and things which are non-conditionalized in the basis of our choices does actually become important.

Jim: Let’s probe on that. I think I just came up with an interesting idea that maybe will illuminate this for me, and hopefully for the audience. I mentioned farming, something I’m involved with. I’m a farmer. My wife and I support local agricultural businesses, et cetera, as well. And farming is sort of complicated. It has some complex elements, but it’s mostly complicated, and yet it’s embedded in the ecosystem. And we make decisions as a civilization, on a way bigger scale than me, on what kind of farming we do, and that has huge implications through its coupling from the complicated to the complex. And my argument would be that industrial farming is doing such damage to the outer complex ecosystem that it’s not sustainable at all, even at the current level, let alone at the level at which it would have to be to provide a Western-level standard of living for 8 billion people. So how would your analysis think about what that should mean for how we do agriculture?

Forrest: Well, first of all, I just want to state for the record that I’m in complete agreement with your point of view, that we really do need to move to a sustainable way of doing things or else that which does not sustain life will not continue to live. How do we apply ethics in this particular case? Well, it’s not just about the ethics, it’s also about what are our values? And our values in this particular sense say, well, we want to be sustainable. We want to be adaptive, right? So for instance, not just sustainable because if we just implement sustainable, it’s a bit like, how do we keep a complicated system going no matter what happens in the complex world? That is we build some sort of thing that ‘lasts forever.’

Forrest: But as we’ve already mentioned, given that the complex is the basis for the complicated, that ultimately that doesn’t work. So we need the capacity for the complex systems to both be sustainable and to be adaptive. That is to evolve, to basically work well with the context in which they exist. So for instance, as the natural world changes, we want to adapt our farming practices so that they continue to be sustainable over the long term. And in order to do that, we have to bring in this third element, which is … I know you’re probably going to hate that I’m going to use this word, but there’s a certain consciousness involved, that is that we’re not just evolving in a blind sense the same way that say evolution would do because although evolution …

Forrest: If you think about the scientific method, for example, which is to perform experiments and to just dispassionately review the results, and then based upon the results to perform new experiments, nature, in implementing the evolutionary model, is effectively the perfect scientist. It tries everything, and it is completely dispassionate about the results. So in effect, what happens is that, from our perspective, some of those experiments obviously fail. We could end up with entire species dying off. We could have entire continents basically becoming deserts. So in effect, we’re wanting to say, okay, if we’re actually going to thrive, if we’re going to move beyond just what can be done through a pure feedback-based methodology, we’re going to have to have a kind of consciousness that transcends just evolutionary process.

Forrest: We’ve forced ourselves as a species into this particular position because of our use of technology. Technology has essentially enabled a very strong top-down way of thinking. And currently it’s kind of like we’re driving the top-down method as if it were based upon the kind of unconsciousness of the market system, the unconsciousness of evolutionary process, and we expect that that is actually going to have a good outcome. It’s effectively a kind of unconsciousness. The market, like evolution, is unconscious in a fundamental way. And so as a result, we end up with the worst of both possible worlds: pure evolution working by itself and pure technology working by itself.

Forrest: So in effect, what we’re needing to do is to introduce a kind of values-based, meaningfulness-based way of thinking about these things, which effectively means that we need to be clear about what is our basis of choice. How are we thinking about choice in a way that is holistic, that is going to account for both evolutionary aspects and sustainability aspects? And how do we reconcile the relationship between sustainability and evolution in a conscious way, which for example is connected to the notions of our values of thriving, the values of life, the values of why we even thought that adaptability and sustainability were important in the first place?

Jim: Yeah, absolutely. Well said. And by no means do I disagree with you. I mean, the evolutionary approach worked great for billions of years in biology, and in the more loosely coupled social evolution in humans, until we passed a certain threshold about 250 years ago, which was when we learned how to harvest fossil fuels. Then the game changed entirely. The scope of what humans could do started to go up exponentially. Couple that with the scientific method, with the ability to build information systems to manage large-scale entities, and human capabilities started to dwarf the resilience of the ecosystem. Prior to 1750, there was really nothing humans could do to seriously damage the ecosystem.

Jim: Now we can utterly destroy the ecosystem, or at least damage it so badly that nothing more complicated than a cockroach will survive. So obviously, once we’ve reached this level of power, as you said, we have the power of gods, we need to have the wisdom of gods, and hopefully not the Greco-Roman gods, who are pretty damn capricious. Right? Remember the [inaudible 00:46:15], all the wacky shit the gods did there. Right? We’ve gotta be better gods than those. We have to realize that we now are in charge, and we damn well have the responsibility to figure out how we can use this unbelievable power responsibly, to not wreck the ecosystem.

Forrest: Yeah, that sounds right to me. I mean, I think that, again, in proportion to the degree that there is an asymmetry of power, we need a corresponding strength. An even better way to state this is that our wisdom is our capacity to really feel through and think through these kinds of issues, and to do both really, really well. Again, making good choices in this space isn’t just going to be dependent upon intellect; it’s also going to be dependent upon a quality of feeling through the issues. And I don’t see those as in any sense in opposition to one another. This goes back to part of the reason why it was so important to distinguish between values, meaning, and purpose in the first place.

Forrest: When we think about values, we can hold many values at once. Values are not defined in any kind of mutually exclusive way. Whereas when we talk about purposes, purposes are mutually exclusive. You can basically do one thing at a time. And so in a sense, there’s a real need for us to have a good holding of the ways in which we move from values into purpose, which is essentially gated by this notion of meaningfulness. In other words, there’s a kind of flow: there is a multiplicity of values, and having a lot of values allows us to identify something which is really meaningful. And when we have a clear sense of many things that are meaningful, then we can begin to think about a single purpose. In effect, there’s a kind of progression here that is really important for us to implement, because if we don’t, as you mentioned, the emotionality of the early depictions of the Greek gods, and the kind of pseudo family dynamics that they had as an exemplar, was not wisdom in the sense that we would need.

Forrest: In fact, if you look at most of the religious idealizations of deity forms, they actually have what would be, if I were to apply the DSM-5, tremendous pathologies: narcissism, magical thinking, all sorts of stuff which would effectively show up on the dark triad of personality disorders. In effect, what you end up with is a need to have a much, much deeper, much, much more coherent notion of what it means to make good choices, one that is based upon something even deeper than just ‘I want’ in some individual sense, but rather ‘we want’ in some sort of collective sense. And to have a genuine capacity for that. This is where we go from the notion of these things as thought about in an individual sense, the United States thinking in terms of life, liberty and the pursuit of happiness, which is a very individual perspective, to how we collectively make choices about things like industrialized farming or resource usage.

Forrest: And whether or not it makes sense to allocate resources in this direction or in that direction. Whether or not finance, defined purely in terms of market forces, actually has the level of intelligence necessary to deal with the kinds of external side effects that are currently occurring. In effect, there’s a need for us to come down to a much, much deeper level of principle to handle the scope of the questions that we as a species are now currently faced with.

Jim: Yeah. We have some giant challenges to go from where we are, where most people are blind to these things, to where we need to be. I mean, first we need some form of sense making, right? So that people can even figure out what’s going on realistically. Then we need to make some choices, and then we need to execute some actions. What are your thoughts on how we move from where we are today? We’re completely muddled, blind, deaf and dumb when it comes to sense making, choice making and action taking.

Forrest: Well, that’s actually the right question. One of the things that I heard when I was first learning how to be a CEO is, when you don’t know what to do, look, see, and tell the truth. So in other words, the first piece is that we need to actually enter into a kind of observational state, and to observe very dispassionately. I mean, if we’re looking to respond effectively, we’re not going to filter the information. If you look at the way emergency first responders show up in a situation, they’re just saying, I see X. And they just broadcast out to the other people that are in the group. And everybody in a sense is sharing information in a pretty much unfiltered way. They’re really trying to create an ecosystem of just informational resources. And from that, to basically begin to have some sense as to what’s going on.

Forrest: In effect, what we’re looking at is, can we create the right kind of information ecology? Well, as you may notice, on the internet for example, we’ve actually done the exact opposite thing. The market forces have created huge incentives for disinformation ecologies. And so in effect, one of the first things that is important is for us to recognize that we need to move from sharing information from a personal benefit perspective, does this help my cause, does this help me as an individual, will this enable me to get laid with this particular chick, to some sort of community process of: I’m sharing this information because collectively, we live or die on the basis of how well this works.

Forrest: In a sense, there’s a deep need for us to first perceive the world as accurately as we can, share that information as transparently as we can, in other words, no filtering, and then to very much have a real [inaudible 00:52:15] as the beginning of the sense making process. Are we asking the right questions? What questions do we need to ask in order to answer the important questions? What are the important questions? In effect, before we even get to the point of doing choice making, we’re asking questions about what is the state of the world. What is actually our current position? And then we’re asking questions as to what is our compass. In order to really guide choices in an effective way, it’s almost as if we need three things: a map, a compass, and our current position.

Forrest: If you subtract any one of those things, something is lost. I would say, well, even if I lose an awareness of the map, I still need to know where I am in a real sense of the world, so that I can tell if there’s a cliff in front of me. But if I absolutely had to have just one thing, I would keep the compass. Obviously, these things by themselves don’t really do as much, but if I have a compass, the compass is saying, you want to go in this direction. I’m looking for true north. I’m looking for the criteria of success in a design sense. What are the things that are essentially going to create a true, holistic, comprehensive solution to a problem such as restoring the Amazon, or creating health in coral reefs, or making sure that every man, woman and child is fed, or making sure that the world has a future in the sense of not destroying itself through some sort of ecological accident?

Forrest: That in effect, what we’re wanting to do is to say, okay, given our compass and the knowledge of our current position, if we go in this particular direction, are we going to run into a cliff? Are we going to go off a cliff? Are we going to go straight into brambles? And then by looking at the map, we can start to think about where are we going and how do we get there, that makes sense relative to where we are now. In a lot of ways, sense making in this particular capacity is … Again, what we’re trying to do is to show what is necessary and sufficient to make good choices in the areas of existential risk, in the areas of civilization collapse, in the areas of what does it mean as a community to solve the kinds of problems which are critically necessary for us to solve, to continue as a species, to continue as an ecosystem given that with all of these highly asymmetric powers developed by technology, that we now find ourselves in both the possession of and the responsibility of making good choices in this particular space.

Forrest: In that particular sense, we’re looking at a relatively nuanced approach. If I look at neurology, for example, and I take a look at an individual neuron, and let’s say that we were to treat that neuron the same way that we currently treat life, liberty and the pursuit of happiness as values, that means that every impulse signal that that neuron is receiving on its dendrites is going to be run through some sort of filtering process as to whether that neuron thinks it is going to be in its own interest to broadcast that signal along its axon, to send it downstream to other neurons. If the system were defined in that particular way, with the neurons operating on their own individual personal benefit, then the level of information in the overall sphere of action ends up being so low, so poor, that you don’t end up with either computation or consciousness.

Forrest: In effect, there’s a real need here for us to recognize that communication is not for our own sake. I mean, there’s an evolutionary capacity that has been created in the human species. We have language, but from my perspective, we don’t know how to communicate, which is a real irony in a very profound way. Because in a sense, we’ve taken this adaptive capacity, which may originally have been for things like assessing the health of a potential mate, thinking about long-term family raising, and creating culture in the child so that the tribe could survive. Then all of a sudden, once we ended up with city-states, [inaudible 00:56:30] capacities, and then the Enlightenment and so on, we took the perspective that communication is really something that is for individual benefit. And unfortunately, we’ve reached a crisis where we’re recognizing that that is no longer the case. That is no longer true.

Forrest: We can’t operate just from the perspective that communication, that what we say and do with one another, is purely for our own individual benefit. We have to go beyond market process and start looking at non-rivalrous dynamics that effectively have the capacity to create the conditions for good sense making to occur, because obviously, the failure to do this is essentially the end of the world. In this particular sense, these are the kinds of things which really come to mind immediately as being absolutely germane and relevant to how we do good sense making as a prelude to doing good choice making, itself a prelude to, say, a non-corruptible, non-capturable implementation process.

Jim: Any thoughts? I mean, it looks like a shit show from where I sit out in the world. If anything, our social sense making has been made worse by the introduction of the internet.

Forrest: Well, it has, and this is part of the thing: when people introduce technologies like the internet and so on and so forth, they have a sense of the potential of it. And for every way in which something is good, there are two ways in which it could be bad. And ironically, for every way in which something is bad, there are two ways in which it could be good. And so I think that as technologists, we have … And again, I’m speaking as someone who has industrial capacity. I’ve started a company. I’ve got resources and tools. I can build stuff. And also, as a long-term software engineer, I’ve got work deployed at the Pentagon, I’ve got work deployed at various three-letter agencies and such.

Forrest: In effect, there’s a real deep sense as to our capacity as engineers. But as engineers, we need to go beyond the stereotype of the Asperger’s perspective of the world, and really develop depth in thinking about the sociological aspects, the anthropological and psychological aspects. And so in effect, when we’re looking at the internet, for example, we need, again, not necessarily from a market perspective, but from a principled basis, to say, we really need to change the orientation of how this actually works. How we’re actually using social media technology, how we’re actually thinking about social media technology as it currently stands, is very much deployed purely in service to corporate interests, to shareholder benefit, and is basically neglectful of community benefit and of user benefit.

Forrest: One of the things that has been of real interest to me is the work of the Center for Humane Technology, which has really been promoting an understanding of social technology as being in service to community as its primary objective, rather than being in service to some sort of surveillance capitalism or attention economy. And I think that we need to make some good inroads toward recognizing broadcasters as broadcasters, moving some policy and legal mechanisms into place that have to do with the ethics of that, and creating moral codes that effectively say: you will be responsible in your relationship to the community, and that is effectively a service and a devotion that is fundamental to the institution, rather than maximizing shareholder value on a quarterly scale.

Forrest: Because obviously, if we’re trying to think about sense making over evolutionary timescales, again in terms of farming practices and adapting to nature and so on, because that is the reality in which we live, then in effect, we’re going to need to be thinking on a much, much longer interval than, say, market ecologies actually can, because they are too compulsively driven by a short-term perspective. The multipolar trap, the rules-for-rulers dynamic, and all that kind of stuff very much essentially mean that we really need to go to a much, much more comprehensive way of thinking about these things. And it starts, again, from an individual perspective of: while I can operate for my own benefit some of the time, I can’t operate for my own benefit all of the time.

Forrest: Which is the supposition that most people have in regards to what they say and do. And more than that, collectively, we need to get much, much better at holding responsibility in association with authority. When a technology company has the authority to define a platform, we say, hey, you need to be responsible for that platform, to actually do the things that are necessary to preserve the wellbeing of the community on that platform. And so I can point to things like Facebook and say, you do actually need to be responsible for the impact of whether or not you have politicians basically spreading disinformation and disabling the information ecology. To some extent, we can really ask the question of whether it is even the case that a representative model of governance is the right one.

Forrest: Do we need some sort of sense making that is effectively broader than that? Some sort of direct democracy process? But again, in a larger context, we’re realizing that at an individual level, we definitely need to move beyond the notion that communication is purely for our own individual life, liberty and pursuit of happiness, to our collective life, liberty and pursuit of happiness, because those two things are not separate from one another. They are deeply entangled. In effect, we at this particular point have a very naive understanding of the nature of choice and value and ethics and all this kind of stuff. So getting educated, getting clear as to what really matters, and talking to one another about how these things actually work is, I think, absolutely essential.

Jim: Yeah. I agree with you about the Center for Humane Technology. I’m a great fan of their work. In fact, Tristan Harris is going to be on the show next month, and we’re going to dig into this quite a bit. It is interesting. I’ve been involved in building what we would now call the internet since 1980. I’ve built some of the earliest things that we would now call social media. And at the time we had no idea of the potential negative implications. We thought we were doing not only a good business, but something great for the civic soul of humanity.

Jim: We figured, how could it be any better than having access to all this information, letting anybody’s voice be heard, et cetera? How naive we turned out to be. And what’s interesting is I think we all know Facebook’s a shit show, Twitter’s worse, but here’s something that’s kind of interesting. There are many of us now who know this, but nobody that I know of has built a decent sense making platform for the 100,000 or million woke people, awake people. I don’t want to use the word woke, it could get me confused with those idiots, but people awake to these issues, for us to start to rally together. I wonder why that is.

Forrest: Well, I don’t necessarily know why that is. I’m not going to speculate about that. I mean, you mentioned naivete earlier, and to some extent there’s a need for us to have, again, both technological experience but also sociological and anthropological experience, and in addition to that, a deep understanding of the ethics and metaphysics under it. In a sense I’m saying, well, okay, we have as a species been largely unconscious. You do actually point to essentially the real thing here, rather than trying to diagnose why this hasn’t happened. I can definitely do some extensive diagnosis as to why what got us here won’t get us there, what is really the ultimate basis of the nature of the problems that have come into the world and so on and so forth, but the point of such analysis isn’t so much to lay blame as it is to provide litmus tests for the success of a solution that we might propose.

Forrest: So in other words, if we were to think about, okay, how are we going to create a platform that is effectively going to be the basis of good sense making and good choice making at a species level, what are the characteristics it’s going to have? Well, immediately we already start asking questions. And the questions are part of the sense making we’re doing with respect to the platform itself as well as the thing that the platform actually does. And one of the questions immediately becomes: why should we have a platform rather than a protocol? Because if I create it as a platform, then effectively I’ve already created the dynamics of capturability, and the market forces, the centralization, that have already been identified as having been somewhat disabling to the kind of choice making processes we need to do, which to some extent really need to be distributed.

Forrest: There’s a recognition that ultimately there’s just so much information that needs to be processed in order to make a good choice that it’s simply not the case that any amount of centralization is going to be able to handle the bandwidth that is required. For example, if I look at institutional design and it’s based upon some sort of top down model, then just from information theoretic constraints, I know that there are certain problems it can’t solve. Things that, for example, are so complex, so complicated, that no single human being in their entire lifetime could learn all that they would need to know to make good choices in that space.

Forrest: Even if we were to have that person surrounded by a team of advisors, or we were to have a team of, say, 12 people trying to make choices with respect to 100,000 people, it can immediately be observed: is there any possible way that a person can have the wisdom of 100,000, or that a person can have the wisdom of the billions of people that their choices actually influence? Or even if we look at small subgroups, we can ask the same question. Can any possible organization of 12 people, with any amount of system structure support, have the wisdom of a billion people? Well, probably not. Just sheer algorithmic understanding of the dynamics of the information flow will simply say, okay, we can get maybe a square root level of reduction in the number of nodes needing to process this amount of information, but we really shouldn’t try for better than that, because otherwise we start losing the critical information necessary to make good choices.
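Forrest’s square-root claim can be made concrete with a rough back-of-envelope sketch. This is purely an illustration added for this transcript, not figures from the conversation; the per-person information value is an arbitrary unit:

```python
import math

# Hypothetical back-of-envelope numbers, just to make the scales visible.
population = 1_000_000_000   # people whose circumstances generate decision-relevant information
committee = 12               # size of a centralized decision-making body
per_person_units = 1.0       # arbitrary units of novel information per person

total_information = population * per_person_units

# Fraction of the population a fixed committee can even attend to one-to-one.
committee_fraction = committee / population

# The "square root level of reduction": processing nodes shrink from
# ~1 billion to ~sqrt(1 billion), but no further, on this argument.
sqrt_nodes = math.isqrt(population)

print(f"committee fraction: {committee_fraction:.1e}")  # vanishingly small
print(f"square-root scale of nodes: {sqrt_nodes}")      # tens of thousands, not 12
```

The point of the arithmetic is only the contrast of scales: a 12-person body versus roughly 31,000 nodes even after a square-root reduction.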

Forrest: So even though we know, in a market sense, that distributed systems are less efficient in terms of energy usage, in terms of time, in terms of responsiveness, they are ultimately necessary in order to do the work that needs to be done, because we’re actually looking at not an efficiency basis but a quality basis. Is the quality of the choices being made by, not the platform, but now the protocol, does the underlying protocol have the characteristics necessary such that the quality of the choices is sufficiently good to be responsive to things like existential risk? So in effect, what we’re really looking at here are not relative thresholds but absolute ones. It not only has to be better than our current solutions, it has to be good enough to actually be satisficing relative to an absolute metric.

Forrest: And in effect, this sets hard limits on what the minimum quality of the outcome of the choices being made, and of their implementation, actually is, which itself sets limits on the minimum quality of sense making, which itself raises questions about the minimum quality of the questions being asked. We’re really looking at: are we asking the right questions? Do we have a way to know, and to really develop a process in that particular area? So in recognition of all of that, what I’ve been doing to respond to all of this, I mean, I spent the early part of my life developing the metaphysics to have the tool set. And now I’m basically anticipating spending much of the rest of my life putting together things like an ephemeral group process, which are ways to enable a much, much higher level of quality of question asking, so that we at least have a chance at doing the sense making necessary to respond to the kinds of issues that humanity is faced with.

Forrest: My hopes and dreams at this particular point, again, speaking back to why I was ultimately doing this: loving nature, loving the wild, basically being thankful for the gift of life. In effect, I’m basically saying, well, it seems, as best as I can tell, that the most effective use of my time is to put it into developing these capacities to do sense making at civilization level scales. At this point I’m trying to gather like minded people to assist in transcription, to assist in editing, to assist in basically providing the time and resources necessary to develop protocols of this kind that could potentially be tested to see whether or not they achieve the level of quality necessary, and to continue to develop in that direction. As a result of all that, that’s why I’m wanting to really give voice to some of these notions.

Jim: Now, [inaudible 01:09:39] been some very harsh critique of the notion of sense making out in the so-called Game B world. You’re familiar with Game B, I believe. You’ve been associated with some of the other Game B players. Interestingly, I did a Google search, and your name and Game B do not appear together anywhere on the internet, which is kind of interesting.

Forrest: Well, I think we’re Design B rather than Game B. If we think about it in terms of games, there are winners and losers, and we really need it to be a win-win solution, or a win-win-win solution, in other words, if we really want to be technically correct. But I haven’t really been that active on the internet. I mean, I’ve been around for a while, so you will find my name connected to a handful of things, mostly just incidental stuff, but I haven’t really taken much of a public perspective. I mostly just put my head down and try to do the work that needs to be done.

Jim: That’s a good thing. Maybe let me get back to this critique about sense making, because there’s been a lot of talk about sense making and some attempts at it. But one of the critiques which does resonate with me to a degree is that from where we stand today, where we’re heading towards these existential risks at an accelerating rate, sense making by itself isn’t even close to enough. Right? Because even if we knew what the right things to do are, and truthfully, probably you and me and three people that we know could sit around a table and come up with a better trajectory than we’re on today by a shit load, where is the mechanism to use the levers that can move the trajectory of society? So sense making alone is just a waste of time, frankly, and basically a wanking exercise, unless there’s the ability to actually change the trajectory of our society. What do you say to that?

Forrest: Well, actually, I agree. I never claimed that sense making by itself would be enough. I mean, without sense making, choice making isn’t enabled. But choice making by itself isn’t enough either. Let’s say you and I got together and we spent a bunch of time, figured out the answer, made some clear choices, but then didn’t have the capacity to implement those choices, didn’t have the capacity to bring them into the world as manifest results. Then to some extent, those choices aren’t really choices. There’s this thing called the principle of identity: that which is indistinguishable must be the same. So in effect, if I can’t distinguish between having made a choice and not having made a choice, for example because there are no consequences, then there’s no real sense of choice making. So I definitely agree that bringing things into manifestation, having the capacity to actually implement, is a crucial thing.

Forrest: But I think that to some extent, what we’re looking at here is not an either/or question. Again, sense making by itself is not enough. Choice making by itself is not enough. Implementation by itself is not enough. Because even if we had the capacity to implement whatever it is that we chose, implementation capacity without any guidance is obviously also worthless. So in effect, there’s a sense here of what’s necessary and what’s sufficient. If we had sense making and choice making and implementation capacity, then maybe we have a chance. Is that sufficient? It might be. At this particular point I think that if we hit the quality of implementation that is needed, if we hit the quality of choice making that is needed, if we hit the quality of sense making that is needed, then, those things being necessary, is that sufficient to actually move ourselves out of crisis and into some sort of comprehensive responsiveness?

Forrest: Well, I think that it is. At this particular point, I’m working on a hard proof of that. But as for the ultimate results here, as I said, we do know that those things are necessary, because if we just look at implementation capacity by itself, well, we actually have that already. Institutions, federal governments, and a lot of businesses and so on already have all the resources necessary to implement enormous change. I mean, obviously, changes have actually happened. The technology has been deployed, social media platforms do exist, railroads and so on.

Forrest: And in effect, what we’re looking at here is saying, okay, given that as individuals we can do great sense making but have terrible implementation capacity, and given that as institutions we have tremendous implementation capacity but terrible sense making capacity, to some extent we need to upgrade the sense making capacity, because without that we can’t have the institutions make choices that actually matter. Is the structural design of institutions as we currently conceive them able to essentially have all three at once? Sense making, information gathering capacity; the quality of choices that are comprehensive enough, responsive enough to complex, not just complicated, design questions; and then finally, can we have those implementations occur in a way that is not corruptible, that actually represents the choices that are made rather than, say, being co-opted by some third party for private benefit?

Forrest: When we look at implementation systems as they exist currently, we notice that corruption is an issue. Again, there are lots of governments in the world where, due to the necessary internal dynamics of those institutions, just to even ensure their own survival, they end up having to make very, very poor choices with respect to the wellbeing of the community. And so in a lot of ways, if we just focus on implementation criteria and look at it from that point of view alone, then we discover that we already need to do sense making and choice making about institutional design, so that we can ensure that the way in which those institutions manifest the things that they do in the world doesn’t end up doing more harm than good. If you look at game theory, Game A particularly, it’s very much the case that people want to create markets.

Forrest: Particularly, if you manage to create a new market, and you create a platform in which that market happens, then you get to essentially ride a Metcalfe’s law growth process and at the same time extract value from that market: charging taxes in one way or another, charging rent, or taking a percentage of profits or something like that. The upside for you as a market developer is enormous, because you’re not just doing an additive increase in wealth from the results of your own efforts, or a multiplicative increase in wealth the way you would as, say, a banker, but now all of a sudden you’re doing an exponential increase in wealth, because you’re effectively creating a tap on an entire ecosystem. The problem is that that process is parasitic. And as you increase the number of parasites in the system, eventually the host dies.
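The three growth regimes Forrest contrasts (wage-like, banker-like, platform-like) can be sketched numerically. The parameter values below are purely illustrative assumptions added for this transcript, not figures from the conversation:

```python
def additive_wealth(periods, gain_per_period=1.0):
    """Wage-like: a fixed return on one's own effort each period."""
    return periods * gain_per_period

def multiplicative_wealth(periods, rate=0.05, principal=1.0):
    """Banker-like: compound growth on capital already held."""
    return principal * (1 + rate) ** periods

def platform_wealth(periods, users_per_period=100, value_per_link=0.01):
    """Platform-like (Metcalfe's law): captured value scales with the
    number of pairwise connections among users, roughly n^2 in users."""
    n = periods * users_per_period
    return value_per_link * n * (n - 1) / 2

# Compare the three after 1, 10, and 100 periods.
for t in (1, 10, 100):
    print(t, additive_wealth(t), multiplicative_wealth(t), platform_wealth(t))
```

With these toy numbers, after 100 periods the additive path yields 100 units, the compounding path roughly 131, and the platform path hundreds of thousands, which is the asymmetry the parasitism point rests on.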

Forrest: In a lot of ways, what we’re looking at here is that we actually need good quality processes for all three. And if we don’t hit sufficient quality thresholds for all three, then for sure this isn’t going to be enough. So yeah, I actually agree with the critique that sense making isn’t enough. But on the other hand, if we don’t at least apply good sense making with respect to institutional design, then whatever it is that we’re doing will probably end up being captured by some hidden party and used for their private benefit, and just end up, as a side effect, externalizing considerable harm to the wellbeing of the commons, to the wellbeing of the community. Personally, when I think about whether I have any real belief that the taxes I pay are actually going to go toward my wellbeing, and my answer to that question is no, then my overall faith in the system is low enough that I’m not interested in increasing its capacities in the implementation sense. I’m interested in increasing its capacities at the choice making and sense making levels.

Jim: And yet, unfortunately, the gap is that the sense making and choice making need to be able, at the least, to essentially wrest control of the implementation from a self serving status quo. That’s the real challenge, right? In many countries it’s just pure corruption. In our country, it’s not quite pure corruption, but it’s the soft corruption of the political process by vested interests.

Forrest: Yeah, I agree. There are some real serious challenges here. This is, in some respects, speaking as an engineer, seriously the single most difficult problem I could even attempt to imagine. There are clearly some real challenges here. You mentioned that one specifically, but there are hosts of others which are also quite gnarly. And again, this is the kind of thing where, quite frankly, I would love to have some qualified help. But in this particular sense, I agree these are real issues and these are real challenges. Obviously, if someone were to come up with some way of addressing these kinds of things, even then they’d have to be very careful about it, because trying to suggest, hey, I’m going to wrest control away, well, first of all, it’s not even a question of control. It’s actually a question of influence.

Forrest: Does any neuron have control over what the whole brain does? Does the brain even have control over itself? I know that in terms of just my own subjective self, I can try to control my state of feeling, I can try to control my behavior and so on and so forth, but the best I’m going to do is learn good skills and become increasingly adaptive, increasingly healthy. It’s probably not going to be some sort of overthrow or some sort of coup. It’s going to be something more like a gradual adoption of better practices. And let’s hope that that happens sufficiently well and sufficiently fast.

Jim: Yeah, that’s what I was going to say. That’s all well and good. We know we’ll gradually make [inaudible 01:19:24] smarter, but do we have time? It all depends on: do we fall into a positive feedback loop around climate? Is there a pandemic? There are lots of ways we could run out of time. And I think that’s a really interesting question. Is the attempt to influence a sufficient tactical strategy, or does there need to be something more decisive?

Forrest: That question is a very good question. Do we have enough time? Well, that’s an unknowable thing. I have no idea. Right? I mean, we can say, hey, these are issues of the feedbacks that are going on in these particular areas. They’re crucial issues. There’s an increasing number of crucial issues that are near term and need addressing, as it becomes more and more obvious to people that, hey, we are actually out of time. We really need to be better at these kinds of things. The problem with reset processes is that you might hit reset, but you’re not necessarily going to end up in a better state. You may have slowed things down a little bit, but you still haven’t developed the capacity to prevent it from happening all over again.

Forrest: So for instance, civilizations have come and gone. If you look at the historical record that we have, there are a few hundred civilizations that have lasted a few hundred years, some longer, maybe a thousand years or so, but very few of them longer than that. In fact, our civilization right now is essentially just one of the longer-lasting examples. But there’s no guarantee that our civilization is going to endure any longer than any of the other examples that have come and gone. In effect, what we’re really concerned with here is: can we upgrade the capacity in some fundamental way? Because regardless of how much time we have, we still haven’t changed anything. So say we do the reset, and we come out on the other side of that.

Forrest: But all of the institutions that we make at that point, all of the new civilization practices that we put in place, coupled with the technology that has been discovered or rediscovered from our civilization, end up creating those same existential risks, those same civilization collapse dynamics, those same fucked up market economies, conflated interests, perverse incentives, and all the rest of that stuff. In effect, we’re just going to end up in this boom and bust cycle. The problem is that given the level of technological development that we have, we’re now entangling the entire ecosystem. So a reset at this particular point means a reset for maybe a billion years, or maybe never. It might be that we’ve damaged the ecology so much that the human species just doesn’t endure.

Forrest: And so in effect, there’s a real question here: we can’t do the boom and bust cycles, given the asymmetric power of the technology. So we have to actually develop new capacities in sense making and choice making and implementation. Even if we were looking at running out of time, we would still need to do this. So in effect, we’re looking at it in the sense of, okay, well, it needs to be done either way, regardless of the amount of time. I can’t control or even know how much time I have, but I do know that, while the current civilization context is in place, I can do better design, better engineering and social process, in the context of our existing civilization, to design new capacities to respond to these kinds of issues using the context we have.

Forrest: In that sense, I would say a reset’s not a good thing, because coming out of a reset, we’re not going to have the capacities to even think about these issues, because we’re basically living hand to mouth in the stone age again. In effect, what we’re therefore saying is, well, I can’t control the time period. I can’t control the necessity. The only thing I can do is use the resources that we have today as best as we possibly can, to respond to this thing as quickly as we possibly can. So as a result, I end up basically asking for support. Can people provide good capacities to try to speed up our capacity to do good sense making, good choice making, good institutional design in this space, and ultimately good community design?

Forrest: Because the crux of it all is that, in effect, we have a better chance of doing it now than we would at any other point, given that we can integrate all the knowledge we’ve collected in science and technology, sociology, anthropology, psychology, philosophy, metaphysics, and just the works. I mean, literally, at this point we have the maximum amount of information and resources that we would ever really have to be able to solve the problem. And we have the necessity to do it now, because we don’t know how long we’re going to have to work on this. So in effect, the only thing I can really do is to say that we really need to do a lot of development in this space, pretty much stat, and to have the process that is creating this be very much a non commercial thing, because we know that the commercial process is messed up.

Forrest: So in effect, we’re asking for donations rather than investments, and we’re asking for time commitments rather than just dollars, because we need some really good way to develop this that integrates the comprehensive field of knowledge that has been the outcome of Western civilization and Eastern philosophies and all that. What we have as a world at this particular point is both the need and the best enablement we could ask for.

Jim: That’s good. We’re getting close to the end of our time here, and unfortunately, I have lots of things I wanted to talk to you about, particularly some of my very favorite things from your thinking about small group practice: your paper on the nature of human assembly. I went and read that carefully, and I have a lot of questions. And we’ve talked before, you and I, offline about your ephemeral group process. Both very interesting, but we don’t have time for that. So my exit question is something I found during my research. You gave a TEDx talk called the Accident of Unconsciousness, where you talked about how exceedingly unlikely it is for us to exist. And that gets very close to a topic that we’ve talked about many times on this show, which is the Fermi paradox. The Fermi paradox comes from Enrico Fermi during the Manhattan Project, where a bunch of young physicists were talking about, oh, there’s got to be 100,000 intelligent species in the galaxy, et cetera.

Jim: And Fermi came by and said, “Okay, where are they?” And that’s where we’re at. We’ve now been searching fairly seriously for other advanced civilizations for 60 years and haven’t found any. And the reason I raise this, because you were actually close to it, and in fact I think you were there, though you didn’t make it explicit, is that it strikes me that this is hugely important, maybe the single biggest question about humanity and its role in the universe: if indeed we are alone, and we may be. When I was a 13-year-old nerdy kid, I would’ve said, just like the young physicists at Los Alamos, “Oh, yeah. There certainly have got to be other intelligent species in the universe.” The more I’ve learned about it, the more I’ve studied it, the more I’ve thought about it, the less sure I am. I’m now to the point where I’m entirely agnostic; maybe we are alone.
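The kind of estimate Jim describes the young physicists making is usually formalized as the Drake equation. Here is a sketch with deliberately optimistic, purely illustrative parameter values; none of these numbers come from the conversation:

```python
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Drake equation: expected number of detectable civilizations in
    the galaxy, as a product of rates and fractions."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Optimistic, illustrative inputs: 1 star formed per year, half with
# planets, 2 habitable planets each, life always arises, 10% reach
# intelligence, 10% of those communicate, civilizations last 10,000 years.
n_civs = drake(r_star=1.0, f_p=0.5, n_e=2, f_l=1.0, f_i=0.1, f_c=0.1,
               lifetime=10_000)
print(n_civs)  # roughly 100
```

The paradox is exactly that the multiplication yields large numbers under optimistic inputs, yet the observed count is zero; lowering any one factor (Kauffman’s f_l, or the civilization lifetime the show keeps returning to) collapses the estimate.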

Jim: And of course there are other folks, like Stuart Kauffman, whom we had on the show not long ago; he believes that life always forms if the situation is even close to right. Maybe he’s right. Maybe the people who say it’s an exceedingly rare event are right. But where I think you were going was: until we know more, it’s probably safe for us to assume that we are unique. And if we are unique in having reached this level of general intelligence, one could say that we have a purpose as a species, which is to bring the universe to life. And if we destroy our life here, or at least destroy our ability to leave the earth and move out into the universe, then we’ll have squandered what is perhaps the actual purpose of our existence.

Forrest: I couldn’t agree more. I think this is absolutely the right direction to be thinking about this. I definitely have spent a lot of time thinking about the Fermi paradox and its implications and relevance, as you mentioned. In fact, that was part of the reason why I even put that section into the TEDx talk, the Accident of Unconsciousness. And so in effect, yes, I do very much agree. And there’s a lot of good thinking in this space. I’m thinking particularly of Anders Sandberg and some of his recent work on this, which essentially uses information theoretic methods to really address that question. I would definitely recommend taking a look at his work, because he’s addressing the question in a unique way, based upon how we might think about questions like this using the tools of information theory and risk modeling and stuff like that.

Forrest: Another direction that I think is particularly important, as you mentioned, is: let’s assume, for example, that we are the unique species, the unique life in the universe. I don’t necessarily need to make that particular claim, because even if there were life all over the universe, in the sense of lots and lots of different manifestations, I could say that just from an information theoretic perspective, there’s still a huge degree of unicity associated with this life. So there’s a gift there. There’s an enormous miraculousness associated with that gift. And to really appreciate that and to understand that means that to some extent, we actually have to get the ethics of it right.

Forrest: One of the connections back to the Fermi paradox is that one of the ideas about why we don’t see other intelligent life in the universe is that they very prudently would be aware of: well, I don’t want to contact another civilization. I want to ensure that I’m absolutely invisible, because without an acknowledgement that there is a sufficiently developed level of ethical thinking and behavior, if I can’t see the basis of choice that that particular species is implementing, then it’s too dangerous to talk to them. In effect, think of it from a first contact perspective. Given the background, and I’m thinking a little bit of the way Nick Bostrom would think of this, we have the possibility of initiating a first contact situation. But let’s assume that both sides, the contactee and the contactor, both have enormous existential technology.

Forrest: In other words, they have the capacity to leverage something like nuclear warheads, and there’s no defense. Let’s say that the technology is so asymmetric in its power that if one side were to initiate an attack on the other, the defending side couldn’t possibly defend. And to really make the point, say that not only is it the case that there are hugely asymmetric technologies that either side could have against the other, but that there are essentially thousands of different kinds of technologies, each of which is asymmetric in this particular way. And so contactor A and contactee B could each have any one of thousands of completely asymmetric technologies, each of which, from the perspective of the other, is unknown. We may have developed a completely asymmetric capacity in nuclear technology.

Forrest: But they may have developed a completely asymmetric capacity in biological technology, and some other party may have developed a completely asymmetric capacity in compute technology. Now, those happen to be ones that we know about. But let’s assume that some other species, some other race out in space somewhere [inaudible 01:31:10], has developed three completely different technologies that we don’t have names for, and that are just as life destroying, as civilization destroying, as planet destroying as those three could be. And so in effect, we would basically say, well, if I’m the contactor and I’m basically saying, hey, I want to talk to this other species, but I don’t know what technologies that other species has developed, then just even the fact of letting them know that we exist is an existential risk to ourselves.

Jim: That’s called the dark forest theory of the Fermi paradox: nobody’s willing to speak up in the dark forest, because we believe the forest is full of predators, or at least we’re not sure. And that could indeed be the explanation. It’s one of a hundred possible explanations. And these are all important. I think the biggest part is what we talked about just as we introduced this, which is that these are reasons why we need to be smart about preserving ourselves, because we might be the only one, in which case, if we blow it, this is huge. And even if we’re not the only one, to your point, we do bring a very high level of uniqueness to the universe. And over time we should be able to seed our perspective through the universe, and to blow that is to blow something huge. And that should be at least one of our motivations to think through these existential risks: to find a way somehow to get the wisdom of good gods, now that we seem to have the power of gods.

Forrest: That’s part of the reason why I mentioned the dark forest thing, because there’s actually an additional reason. If we assume this background that I’ve just set up, the only context in which any species would ever actually talk to another one is if they knew that the ethics implemented by the opposing party was at least sufficient for it to be safe to talk to them. So in effect, it’s a bit like … I mentioned the non relativistic ethics as the second path, or the second aspect, of that TEDx talk. And it’s critically important, because if we as a world have an ethics which is to that standard, to that level, then it becomes at least possible that other species, other worlds, would want to talk to us, and share resources, and actually be in some sort of relationship.

Forrest: Without that level of development, at least at a minimum, we’re just basically too unsafe to talk to, and effectively the predator nature ends up being the dominant factor. And so in effect, we’re essentially saying: if we want to find other intelligent life in the universe, we have to be a fit receptacle for that information. It then becomes incumbent upon us, even in the sense of maybe we’re not to seed the universe, but just to essentially be a participant, a citizen of the universe. We have to be a good citizen, and being a good citizen to some extent means that we need to do enough self work not only to survive, but also to actually participate.

Jim: An even better reason to get our shit straight.

Forrest: I agree.

Jim: All right. On that note, thank you. This has been an amazing conversation. I wish we could go on for two and a half hours, and maybe we’ll have you back on again to talk about some of these other topics which we didn’t get to. So thank you, and this was great.

Forrest: Have a great day. I’ve been appreciating this, and it’s been a wonderful conversation as well. And I look forward to next time.

Jim: Production services and audio editing by Jared Janes Consulting. Music by Tom Muller at