Transcript of EP 152 – Gary Bengier on Hard-Science Futures

The following is a rough transcript which has not been revised by The Jim Rutt Show or Gary Bengier. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is Gary Bengier. After a career in Silicon Valley, including a stint as CFO at eBay in its glory days, Gary has pursued passion projects, including studying astrophysics and philosophy. He spent the last two decades thinking about how to live a balanced, meaningful life in a rapidly evolving technological world. I met Gary on the board at the Santa Fe Institute. He and I are both longtime trustees. I think we came on probably about the same time, sometime in the middle double-aughts as I recall, and I’ve enjoyed getting to know Gary and his family over the last several years. Very, very interesting fellow. Today we’re going to talk about Gary’s recent foray into writing science fiction as we talk about his novel, Unfettered Journey. And at the same time, in parallel, we’ll dip some into a separate book that he published at the same time, Unfettered Journey, the Appendices, which is a true nerd-alert deep dive into some of the philosophical and scientific issues surfaced in the novel. So welcome, Gary.

Gary: Jim, I’m delighted to join you here. Thanks for inviting me to your podcast and looking forward to explaining these ideas to your audience.

Jim: Yeah, it’ll be a lot of fun. I really did enjoy the book a lot. It’s a very heady, rich book. It’s classic sci-fi of the old school, but it also has a lot of serious exploration of philosophical and scientific questions, and that interesting liminal space where philosophy and science come together.

Gary: Yeah. Well, thanks. I like to say the book has lots of layers, like an onion. I think someone famous said that. What was it? Shrek or something, I think? So yeah, it delves down into a view of the future and philosophy, and it’s a hard science view of the future, which I think and I hope will cause people to focus on the important issues in our future so that we can build the future that we really want to have rather than let some craziness evolve.

Jim: Yeah. Letting the future just be whatever the future turns out to be is probably not a good idea, especially not where it is today and the trajectories it’s on.

Gary: Right. Well that overlaps with some of your Game B thing I think. We’ve got to maybe think anew about how we organize our human society.

Jim: Yeah. I put a tweet out yesterday: save the world, play Game B. A good excuse to put another plug in there, make sure you look at the film, GameBfilm.org. Thank you, folks. But yeah, the level at which your world was imagined was really, really, really good. In fact, let’s jump in there. I was kind of scratching my head: when is this? I’m sort of guessing maybe 2045, 2050, something like that. You can address that if you want, or you can say, “None of your business. It’s my imagined world.” But one of the really, really interesting things, a true imaginative leap, is the ecosystem of various robots and AIs that you have envisioned that are part and parcel of the world: PIDAs and PIPAs and NESTs and mechas, MEDFLOW, ARMO I guess it is, medbots, all that sort of stuff. So talk about this world. If you want to, tell me when you think it is.

Gary: Okay, well actually I’ll tell you when I think it is. It’s exactly in the year 2161. So it’s about 140 years in the future and that says something too about my view of the future. This is a hard science view. And so, one of the meta things I try to do is I wanted to reset the conversation on that because I think a lot of science fiction takes us off in crazy directions. It takes certain trends today and it takes them off to the extreme to have a conversation about the extreme. But the problem with that, I think, is that it gives the general population a bad idea about what is highly likely in the future. And so I think science fiction does a disservice because we’re not really focusing on the real problems that we have. So let me pick through some of those and just explore that topic for a while, Jim, if that makes sense.

Gary: So it’s 140 years out. I think that in this century we’re in, the two most important technologies that will affect humanity are bioscience, and AI and robotics. Let me pick through this very quickly, because I don’t actually deal with bioscience too much, even though my view is that bioscience will change our lives dramatically. I think in 140 years we’ll cure cancer. I don’t think we’ll live forever. I’ve talked to a lot of people about that, but we will live a lot longer on average.

Gary: In this book, Unfettered Journey, I make a side comment that my main character, Joe, is 31 years old. He’s lived a quarter of his life, and what has he done so far? But here’s the issue about bioscience. I suggested that in 140 years we won’t notice it, just like my sister-in-law, who as a child worried about getting polio, and it affected who she could play with as a child in the summer. We don’t think about that anymore. And so the diseases that we won’t get, et cetera, we’ll just live with it and we’ll live longer and we’ll live more healthy lives, but we won’t notice.

Jim: That’s actually a very good point. For instance, one of the giant inflection points in world history, which we don’t even notice or even understand anymore, was that until 1900 or 1890, depending on the country, cities were net killers of people. And the only reason cities continued to grow is because people were sick and tired of being on the farm and came to the big city. And it was the invention of public health and clean water and smart things like making your sewer outflows downstream from your water intakes that changed that around. And think about what it would be like today if cities were still net killers, where the death rate was higher than the birth rate, as it was in every major city on earth until about 1890 or 1900. We don’t even think about that anymore. So I think it’s a very interesting insight that the basic improvements in things like medicine, public health, even the genetic magic that’s coming, will retreat into the woodwork. Very clever.

Gary: Yeah. Yeah. And I think that’s true. So then the other major technological change affecting humanity is AI and robotics. And in contrast, I think that, out a ways, that will have a dramatic impact on how we physically live.

Gary: Now, let’s talk about the timing of that. You mentioned you thought maybe this was 2050 or something. So I have a very different view than this typical view about robots. You look at the Boston Dynamics robot that’s doing these amazing flips and things. We had the Toyota robot at the Olympics that was shooting free throws from half court. And everyone has this idea that it’s going to be tomorrow, that they’re all going to be here. In fact, I think it was McKinsey that did a study a decade ago and suggested that by 2050, 70% of the jobs would be done by robots or AI.

Gary: I think that that’s all nonsense. I think that the development of robotics and AI, well, particularly robotics, is more akin to the automobile. Sure, we had Henry Ford in 19-something, we had cars, but it really took a century for autos to get to what we imagine is an automobile today. Because you needed roads and you needed legal infrastructure to deal with what happens when people get injured or killed, and there are lots of other pieces to it.

Gary: And so I think that robots will take a lot longer to totally infiltrate our lives. So that’s the first point. But the related point is I think it’s inevitable that it will happen, because the economics will drive it. It will just make more and more sense, people will make money, and we’ll have robots do more and more things. And so I think that’s inevitable. So now what does that say? That says that in, say, 140 years, it is highly, highly likely, I think inevitable quite honestly, that we will have robots doing all kinds of stuff that we normally have as jobs, and we will not have jobs. So that’s a very different world. And that highly likely future is the first time in human civilization that we will actually have lots of stuff and no jobs.

Jim: Yeah. That’s what some people call fully automated luxury communism. Right?

Gary: Exactly. That was an interesting book. I like that book. I think he goes off a little bit on some things, but yeah, the premise is correct. And in fact, I did a model. I’m a financial guy among other things, and I modeled the GDP of the world and the US, just taking the regular growth rates that we’ve had for the last several decades. And when you get to 2161, without any huge increase in efficiency even, just taking averages, the average person in the world will have something between 12 and 16 times as much stuff per capita as we do today. 12 to 16 times as much stuff per capita. Now, the assumption is that the demographics mean the world population doesn’t go crazy, and most of the demographic trends, if you look at them, show lower fertility rates, particularly as women around the world are more educated. And in fact, we’ve got many countries where the birth rate is below replacement.
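A quick way to sanity-check that multiple is to compound plausible per-capita growth rates over the 140 years. A minimal sketch in Python, assuming real per-capita growth of roughly 1.8 to 2 percent a year; Gary's actual model inputs are not stated in the conversation:

```python
# Hypothetical check of the "12 to 16 times as much stuff" claim.
# The 1.8%-2% growth band is an illustrative assumption, not the
# actual inputs to Gary's model.
years = 2161 - 2021  # 140 years out

for rate in (0.018, 0.020):
    multiple = (1 + rate) ** years
    print(f"{rate:.1%} for {years} years -> {multiple:.1f}x per capita")

# 1.8% compounds to about 12x and 2.0% to about 16x, which brackets
# the 12-16x range mentioned above.
```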

Jim: Now don’t forget about Jockeys. Jockey shorts I’m convinced are a big part of it. Sperm counts are down by half tucking them balls up too close. That’s not good for your sperm, people. So boxers if you want to have some kids.

Gary: So yeah. So think about that now with a lot more stuff. And then there’s another thing about jobs I think is important. So three years ago, one of the things I did, quote unquote, doing research: I attended a workshop at the Santa Fe Institute with the title AI and the Barrier of Meaning. And there were a bunch of experts on AI and we hung out for several days. And in one of the presentations, a gentleman described jobs disappearing as they became automated as sort of a topographic landscape with hills and valleys and a water level rising, the water level being an analogy for jobs disappearing. Some of those jobs in the lower areas are going to disappear first. So the question is, well, what jobs will be gone first? What’s at the top of the hills, the least likely ones to go? And maybe Jim, your job as a podcaster is up at the top; it’s harder to recreate. Lawyer, attorney is actually going to get subsumed earlier than we think, because we can automate that stuff.

Gary: So my thesis is that one of the jobs at the top of those hills is roofer, because it’s a difficult job to automate: a robot has to climb up on a roof with a load of shingles and tack them down. It’s going to be really hard, but when roofers are making $500,000 a year, we’ll eventually automate that too.
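The landscape-and-water metaphor from that workshop is easy to render as a toy model. A minimal sketch, with invented difficulty scores for the jobs named in the conversation:

```python
# Toy version of the "rising water" metaphor: each job has an
# automation difficulty (hill height); as machine capability (the
# water level) rises, jobs below it get automated. All heights here
# are invented for illustration.
jobs = {"data entry": 1, "truck driver": 3, "lawyer": 4,
        "roofer": 8, "podcaster": 9}

for water_level in range(0, 11, 2):
    automated = [job for job, height in jobs.items() if height <= water_level]
    print(f"capability {water_level:2d}: automated -> {automated}")
```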

Gary: And so now imagine that world where this is highly likely to go. We will have robots at about our human scale. In my book, there are these mechas that are a little taller than us and they don’t say anything, and you’ve got these pipabots that are a little shorter than us and we talk to them. And they’re shorter so that they appear less threatening, but they’re going to be organized to fit in at human scale, because why would we re-engineer trillions of dollars of infrastructure? So I really do think at some point they’ll be walking around among us and they’ll be doing many of the jobs we had before.

Jim: Yeah. That’s certainly possible. I will say our Game B hypothesis is that we won’t go to 12 to 16 times as much stuff. In fact, we’ll probably go to more like 30% as much stuff as, say, a 70th percentile Western person has today, because stuff is a bad addiction. We don’t need as much stuff as we have, and it is actually depleting our wellbeing. And we’ll see.

Gary: So I use the 12 to 16 times figure to say what happens if we just have the normal economies going forward. But here’s the thesis, and we’re already seeing it. Robots are showing up in Amazon warehouses and they’re building Teslas, et cetera. We’re seeing that all the time now, more and more of this automation. What’s happening in automation is the jobs are going away. And then the harder-to-automate jobs are left, and eventually robots will build robots, because robots will smelt the metal… They’ll mine the ores, they’ll smelt the metals, they’ll build the factories, and they’ll build the factories that build the robots. So when that happens, you have robots building robots. And in some sense the cost falls to the cost of the raw materials.

Gary: So it’s just this huge mechanistic thing. You could have 20,000 robots going around planting trees. They could pick up all the litter on the streets. I mean, they can do all kinds of stuff at scale if robots build robots. So again, I think yeah, we won’t have that many times more stuff than we have now, but what that says is that the future world is one where we have lots of stuff, enough stuff and we won’t have any… We’ll have far fewer jobs to keep people busy. And who owns the robot factories though?

Jim: Always a good question. Financial plumbing, the concept of ownership even. Is it still the idea of financialized capitalism, or is it more like co-op structures, or is it more like [inaudible 00:14:50] and economics, where land is the principal economic resource? Very interesting set of questions, which you didn’t really surface too much in the book, but you probably had in mind what the ideas around the equivalent of finance and capital, et cetera, were in your imaginary world. Do you want to speak to that at all or do you want to move on?

Gary: I’ll say this. I think you and I have both been capitalists in our careers and everything else and I think… Well, you can speak for yourself. I think that capitalism as a system, with its competitive angle, makes things more efficient, and that efficiency has led to the output per person, if you will, the amount of stuff that we can have. And so we have a lot better lives than we had 300 years ago or 1,000 years ago. That’s a good thing. But what happens when robots make robots? I think that the future with that inevitable system is that you can’t have individuals owning the robot factories. They have to be owned collectively, or we will have such an imbalance. If we think we have an imbalance now, in terms of wealth and [inaudible 00:16:07] and everything, hold onto your hat. This is going to be massively worse. And so, I don’t think that any civil system is sustainable with that.

Gary: So that suggests sort of a world that is non-capitalistic. Here I am, a capitalist, but I think that ultimately, after we get enough stuff that everyone finally has less to fight about, which is not yet here, this system can be replaced. And moreover, think about it, capitalism has been based on supply and demand. So if there’s more demand, then the mechanism creates more supply, by the profit [inaudible 00:16:48]. But what will happen is, we’re going to collect data on not only what people want, what they buy, but what they want to have. We’re already doing that. That’s all the data collected by Amazon and Facebook and everyone else, eBay. And if you fed that data right into the production system, arguably the system can be a lot more efficient at predicting the need for supply and what the demand is. And the factories can just churn that stuff out.

Jim: Yeah. This is the response to the so-called calculation problem. Ludwig von Mises, right? It was his insight in 1922 that central planning with the technology of that day was impossible relative to the emergent price signals that organized capitalist society. And he was correct for 1922. He was correct for 1962. He was correct for 2002. Probably not correct, as you point out, for 2100. I would say 2050. We may well actually be able to answer the calculation problem by some other means. Of course, we don’t have the social, financial, or political structures to do that. Hence Game B, people. Look at Game B to figure out how we could get from here to there. But yes, I think you’re bang on, on that.
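To make that concrete, here is a toy planning problem solved with off-the-shelf linear programming, the kind of computation von Mises could not assume in 1922. Every figure is invented for illustration; real economies have billions of goods, which is the actual crux of the calculation debate:

```python
# Toy "calculation problem": allocate production across two factories
# to meet forecast demand at minimum cost, with no price signals.
# All numbers are made up for illustration.
from scipy.optimize import linprog

cost = [4.0, 5.0]            # cost per unit at factory A and factory B
A_eq = [[1, 1]]              # x_A + x_B must equal total demand
b_eq = [100]                 # forecast demand: 100 units
bounds = [(0, 60), (0, 80)]  # capacity limits for A and B

result = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(result.x)  # -> [60. 40.]: fill the cheaper factory first
```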

Gary: And Jim, by the way, on von Mises: my economist in the book, Mike, mentions von Mises and that problem in the book.

Jim: That’s right. That’s right. I remember that now. Now, another thing that’s interesting is the idea that you assume deeply into the book, with lots of little fiddle bits and plot points, et cetera: the Neural-to-External Systems Transmitter, the NEST. This is the super version of whatever the hell that thing is Elon Musk is working on, Neuralink or something.

Gary: Well, it’s sort of like that. Yeah. And again, I have a conservative view. I look sideways at the Neuralink that Elon did a demo of not so long ago. So here’s my thought, let’s take what we have now. You’ve got your phone, your cell phone, your smartphone, and where’s that going to evolve? Well, it seems pretty natural to me that at some point that’ll just be a chip and you could stick it here behind your left ear, insert it. And then the other piece of this is, you’ve got a little corneal lens or something implanted, and the two can talk to each other. And this is just connected by something like Bluetooth to the net, as I call it at that time. So here it is, we talk to each other.

Gary: You can talk to your Siri. Well, this is like Siri 5.0 or something. You’re walking down the street and you say, “Gee, I feel like a pizza.” And talking to yourself, you say, “Where’s the closest pizza shop?” And it connects to the net and paints a little map on your little corneal implant, and you see a little red line and you can just follow that, walking along, to find the pizza shop. So how far in the future is that? Well, within 140 years, I think that will happen. I mean, Microsoft already has these heads-up displays that they use in factories.

Jim: Yeah. HoloLens, very cool system actually. Though not suitable for consumers, it’s a very interesting real-life use. I’m really glad Microsoft did that, rather than screwing around with shit like Google Glass or even some of the Oculus stuff. Though I have to say the Oculus Quest 2 is fairly cool, but HoloLens is very different. It’s an industrial application of AR and VR, mostly AR, and I think there’ll be a lot that’ll come out of that.

Gary: Yeah. And think where Google is going. I’m making this up. I don’t have any inside information, but if you’re walking down the Champs-Élysées today, you can actually hold up your phone and you can pull up a little map and you can move your phone around. It will actually link into the street you’re on. And now imagine you’re walking down there and you’re going, “Geez, I feel like a croissant.” On a little walking tour the phone says, “Oh, this shop over here’s got the greatest croissants in Paris.” Well, if you walk into that shop, there’s a euro that gets paid to whoever gave you that tip, because the shop pays them. So this is an idea that is worth many billions of dollars. So it will get developed. And that’s my Augmented Reality Map Overlay, my ARMO, which is an idea of something that we’ll have built in.

Gary: So you start with the NEST and I guess I’m suggesting that when you first read this, you’re going, “That sounds…” One of the reviewers said, “Eerily authentic.” But I’m suggesting this is a hard science view of the future. It’s highly likely that something like this will happen. But it’s not so weird. It’s not so weird. The difference is this is not this crazy future where all kinds of brains are uploaded into computers or that all robots are going to be Terminators and try to kill us or any of those ideas. This is just a likely future that in many ways feels kind of like the way it is today.

Jim: As I was reading each of these things, I applied my little skeptical lens to them. I said, “Yeah, these will all happen.” And there’s not a huge leap here. But where I might push back, and I’d love to get your reaction to this, is I suspect that you’ve significantly underestimated the progress in AI that’ll happen in 140 years. Now, of course this is a matter of great controversy. I mean, I have a friend who works in AGI. He, in fact, is the ringleader in one of the more well known AGI efforts on the planet, and he thinks that we will achieve artificial general intelligence, i.e., human-like true general intelligence, in 10 years. I don’t think so. I think the most recent poll, done by the Institute for the Future or one of those other quality think tanks, came out with something like 55 years, with a large standard deviation, but 140 years would be way out there on what the experts think. And in your world, AGI still hasn’t even come close to being realized. It’s certainly not generally deployed.

Gary: Yes. Okay. So you are right. This workshop that I mentioned at Santa Fe Institute with all these experts on it, I was shocked by how pessimistic they were as a group about the development because they had lived through many of these AI winters, where there was a lot of hype and then it fell apart.

Gary: And yeah, I mean the latest, the deep learning algorithms: we actually practiced going through backprop and all the algorithms and got deep into the technology. And yeah, that’s doing great things today, but how far can that get us really? I’ve been playing with GPT-3, actually, with a friend of mine at Stanford. One of the scenes in my book has a barbot: our character Joe goes into a local bar, and the bartender is a barbot, and he has a conversation with her. And so we are recreating the barbot scene with GPT-3 in the background, and you can talk to the barbot. So we’ll see if that comes out soon, but it’s really hard even to sustain a conversation that feels realistic for 10 minutes. So there’s a long way to go, I think.
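For the curious, a barbot loop along the lines Gary describes might look something like the sketch below, using the 2021-era OpenAI completions API. The persona prompt and parameters are invented here; this is a guess at the shape of such a demo, not its actual code:

```python
# Hypothetical barbot chat loop against the 2021-era OpenAI
# completions API. The persona text and settings are invented.
import openai

openai.api_key = "YOUR-API-KEY"

history = ("The following is a conversation with a friendly robot "
           "bartender in the year 2161.\n")

while True:
    user = input("You: ")
    history += f"Human: {user}\nBarbot:"
    resp = openai.Completion.create(
        engine="davinci",
        prompt=history,
        max_tokens=60,
        temperature=0.8,
        stop=["Human:"],  # stop before the model writes our next line
    )
    reply = resp.choices[0].text.strip()
    history += f" {reply}\n"
    print("Barbot:", reply)
```

The hard part Gary points to, staying coherent for 10 minutes, is exactly what a loop like this does not solve: the prompt just grows until the model loses the thread.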

Jim: Yeah. I play with GPT-3 a fair bit. I had a back door into it for a while. Now I’ve got a paid account, and it’s impressive in some ways and utterly unimpressive in others. And personally, I don’t believe that deep learning is actually the road to AGI. It’s a very important piece part, more or less similar to our perceptual system, plus the indexing to episodic memory, but it’s missing a whole, whole lot. And indeed, I don’t think we’re going to make good progress towards AGI until this overconcentration on deep learning starts to dissipate a little bit and there’s an increasing amount of investment into other approaches. Deep learning will have a big part, but I believe the people at DeepMind and OpenAI are wrong that deep learning by itself and its related technologies, like reinforcement learning, et cetera, are actually the road to AGI.

Jim: But that doesn’t mean that AGI is still necessarily all that far off, because there are people working on other approaches, people working on evolutionary approaches, people working on neuro-symbolic approaches, where I think actually the win might come from. And so I think that’s where your prediction of the world could be way conservative, because AGI does change everything. The AIs will figure out how to train the robots to get on the roof to do the roofing. So even a fundamentally difficult problem like that, the AGIs, particularly if they reach superhuman capability relatively quickly, and of course that’s another whole argument, how quick the AI takeoff is, could change everything in a way that would make your world of 2161 kind of laughably old fashioned.

Gary: I guess we will see, maybe. You and I won’t, but someone will see if I’m right or not. In my view of 2161, we have robots walking around that the general population, many people, think are conscious. That’s sort of the conventional wisdom, but my main character, Joe, an AI scientist, knows the truth, that this is all cheap tricks all the way down. So it’s not really true, because they haven’t really broken through that. And that comes from one of the threads behind why I wrote this book. After I finished my career, 30 years in technology, I went back to school and backfilled an astrophysics degree, then I backfilled a philosophy degree, and then I got a master’s in philosophy, focused on theory of mind, and I spent a lot of time thinking about what human consciousness is.

Gary: And I spent a lot of time debating those issues with philosophy professors, and several other philosophers and I have come to this same view, which is fairly cynical about developing robots that can reach AGI, general intelligence, because we don’t even understand what human consciousness itself is, and I think we’ve got a long way to go to sort through that before we can figure out how to create consciousness in AIs or robots. I’m not of the opinion that it’s going to suddenly magically happen and then we’ve got this robot that’s… There’s that fear that the robot will suddenly be smarter than we are, in nanoseconds, because the speed is so fast, and they decide that they’re smarter than us and maybe they kill us or something.

Jim: Let’s hop into this here. I’ve got it in my notes later, which is to carefully tease apart issues around and between artificial general intelligence, artificial consciousness, and consciousness. Deep learning folks in particular don’t believe AGI is related to consciousness. They believe that maybe some consciousness might come along with it, but it might not be anything at all like our consciousness. The philosophers are all over the place. The cognitive science folks, I think, are in the more reasonable middle ground of also saying they can imagine high levels of AGI without consciousness. But this is my view, that consciousness may turn out to be a trick, a hack, that mother nature came up with that actually makes it quicker to get to AGI, and that the first AGIs may actually use a consciousness-like approach. I really like John Searle and his views on consciousness; he untangles so much bullshit in the field.

Jim: He likes to describe consciousness as a biological process, and it has high costs, both energetic and genetic. Our brain is 20% of our energy consumption despite being about 2% of our mass, and probably a comparable share of the genetic code is invested in wiring up the brain and the various chemicals that work inside the brain. So it comes at a high cost, so it must be there for some particular reason, and the consciousness might be 30% of that. So consciousness is very expensive, but it’s intimately interwoven with our biology in every sense. We’ve learned, for instance, that the chemicals in the gut impact the brain in a real way, and so the concept of gut sense, there’s actually some sense to that. And so he says that it’s kind of naive and shallow to say machine consciousness. What does that mean exactly?

Jim: Because it can’t possibly be the same thing as what an animal has in its consciousness. The animal I use to study consciousness is the white-tailed deer, for instance. I’ve been a deer hunter for… Damn, I figured it out the other day: 53 years. And so I have theory of mind on them little buggers, and I use that to build simulated deer and write consciousness-like AIs to control them in Unity virtual reality, and good stuff like that. And so the [inaudible 00:30:24] perspective would be that no, a machine consciousness is not tremendously like a human consciousness, but it could be analogous to it. Searle famously says that consciousness is like digestion. It’s a process. You can’t really point at it. I can’t point at you and say, “Gary, there’s your digestion.” It’s a combination of your throat, your tongue, your teeth, your esophagus, your stomach, your liver, and on down to the output point.

Jim: The Ruttian corollary to Searle’s comparison [inaudible 00:30:51] and sometimes the output of consciousness is very much like the output of digestion, yada, yada, yada.

Jim: But so the next step is that Searle also says he can easily imagine things that are analogous to human consciousness. And I’ve extended his other analogy to say, “Yes, in the same way that there are digestors that are used in the food industry and the chemical industry and the pharmaceutical industry, which typically use yeast or bacteria to modify and change one class of chemical into another class of chemical that has a higher value.”

Jim: And one could argue that those are analogous to human digestion, even though they’re not architected at all in the same way. And so I think when people get out of thinking that a consciousness just like ours will suddenly appear in their Roomba, and instead say that consciousness is a thing that does a thing, that’s expensive, and that gets a return on its investment for biology, some of those ideas may be repurposed into artificial minds operating in robots. And once you make that leap, I think you get rid of an awful lot of the bafflegab in this field.

Gary: Okay. Okay. Well, let me respond with three points. The first is, I love John Searle. And in fact, in my book, I highlight the characters talking about his Chinese Room example. I think that points to a conundrum of getting from machines to what consciousness really is. So if someone’s not familiar with that, it’s a fun little thought experiment.

Gary: Second point is that in philosophy of mind there is a strong view that embodiment has a lot to do with our human consciousness, and in fact with bringing about consciousness in general. We are embodied in our world, and the way our senses relate to the world gives us feedback that actually is fundamental to creating consciousness. And so some of the views on how robots might start to approach consciousness would suggest that you’ve got to have them working with sensors that embody them in the world, working back and forth to gain information about their environment, and that bootstraps up consciousness. So for an AI without a robotic body, it’ll be an interesting challenge to get to anything like whatever consciousness is.

Gary: Third point, you mentioned digestion. So I explore in that third paper, in the Appendices, what mentality is, and digestion is considered not to be mental. It’s just a chemical process, but the mark of mentality is intentionality. And I’m trying to remember who said that definition. So the mark of the mental is intentionality. So digestion is a chemical process, but once you can show that the organism, the thing that is exhibiting mentality, demonstrates intentionality, then you’ve crossed this chasm from digestion to true mentality. And in that third paper, by the way, I talk about the simplest exemplar of the mental, and I used C. elegans, the little nematode that has been studied for many decades. It’s a wonderful exemplar creature because it’s transparent. It has 302 neurons.

Jim: Plus or minus one, right?

Gary: Yeah. By the way, some of those neurons are identical to the ones in humans. Does this thing show a basic mentality? I would argue yes, because this little nematode looks around and tries to see if there’s any food, which is these certain little bacteria. And if there isn’t any, it may actually get a ride on another organism to some environment that has them. And when it finds that there’s food around where it’s been taken to, it gets off and then it eats. And so it has an intentionality, and it demonstrates a very, very early kind of mentality.

Jim: We’ll get to that later. But by the way, I did not intend to make an argument that digestion was functionally like consciousness, but merely that the idea of mapping an analogous function from the biological to the non-biological, you could do that with digestion. And then I would argue, per Searle, that that’s actually how we will see things that are analogous to consciousness, and to expect them to be just like human consciousness is not right. In the same way, the tires on a car are sort of analogous to our legs. They’re the way we move, but the tire is quite different than our legs. And it shouldn’t surprise us if machine consciousness is that different from human consciousness.

Jim: Now the last bit of scene setting: one of the artifacts in your world is the WISE, the World Interstellar Space Exploration orbital base, where you say that significant international scientific project [inaudible 00:36:03] construction base. It orbits the moon and will launch a series of probes towards promising exoplanets. Yeah. Regular listeners to the show know that I am utterly obsessed with the Fermi paradox. I’ve said this to many people, including world famous Nobel Prize winners, scientists, et cetera, that I consider the answer to the Fermi paradox the second biggest question in science. And none of them have been able to refute that. Where do you think your world is in 140 years with respect to getting some handle on the Fermi paradox?

Gary: Okay. Well, if the Vulcans show up tomorrow, then we’ll be surprised, but that will be one answer. Barring that, again, I take a hard science view and I try to deflate some of, I think, the crazy ideas out there. Here’s the thought about the future. I think that we’ve been polluted by ideas of Star Wars and Star Trek. And the next thing you know, we’ll be traveling faster than the speed of light and doing all this exploration.

Gary: And the reality is the following. We’re going to be really disappointed, I think, because as I point out in the book, it’s ridiculously hard to go any distance, because the galaxy itself is so incredibly huge. Quite honestly, I have this WISE orbital base and the idea is they’ll be sending these probes. Even that is hard, because the science suggests you’re not going to go through any wormhole, not going to send anything through a wormhole, and you can’t go faster than the speed of light. So if we’re lucky and there’s a planet worth trying to find out more about within, say, five light years, to send something there, even something tiny, is really hard. You have to accelerate it up to some fraction of the speed of light, I don’t know, a tenth or something. By the way, you have to slow it down before it gets there to be able to get into orbit.

Jim: Yeah. That’s the hard part it turns out. You can get it there, but making it slow down, you need a lot of fuel or you do some very clever gravitational slingshot thing, right?

Gary: Yeah. Yeah. So the result is it takes a century or so. I mean, it’s just these ridiculous timescales. And so, here’s actually an interesting thing. We’ve got the Webb Telescope up there. It’s probably not that one, is my guess. It’s probably another generation or so past that, but by 2050, I predict we will know the answer to one of the most fundamental questions for humanity for the next millennium or two. And that is, is there any habitable planet, exoplanet, within some reasonable distance, let’s say five or 10 light years of earth? We’ll know the answer to that. And if the answer is yes, then there’s some hope that humanity can get there. And if the answer is nah, there’s nothing within 100 light years, then the reality is we will know for the first time in human history that we’re never going to leave this solar system, ever.
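The back-of-envelope behind that "century or so," taking the tenth-of-light-speed cruise Gary mentions; the probe's thrust is an assumption, not a figure from the book:

```python
# Back-of-envelope travel time to a nearby exoplanet. The 0.1c cruise
# speed comes from the conversation; the acceleration is an assumed
# low-thrust figure.
C = 299_792_458.0        # speed of light, m/s
LY = 9.4607e15           # one light-year, m
SEC_PER_YEAR = 3.156e7

dist = 5 * LY            # a planet about five light-years away
v = 0.1 * C              # cruise at a tenth of light speed
a = 0.1                  # assumed thrust: 0.1 m/s^2, about 0.01 g

t_ramp = v / a                      # time to accelerate (same to brake)
d_ramp = v**2 / (2 * a)             # distance covered in each ramp
t_cruise = (dist - 2 * d_ramp) / v  # remainder at cruise speed

years = (2 * t_ramp + t_cruise) / SEC_PER_YEAR
print(f"about {years:.0f} years one way")  # ~60 years; gentler thrust
# or a slower cruise pushes this toward the century Gary describes
```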

Jim: Don’t say never. Not in the next 10,000 years maybe, but 10,000 years is an eye blink in universal time. Keep that in mind. Don’t say never, 10,000 years.

Gary: Okay, okay. So I’m with you. But again, back to this, how does science fiction mislead the general view? There’s this view that, “Oh, sometime soon we will be doing this stuff.” So my book actually tries to point out that even 140 years from now, we’re kind of stuck here. And so the reality of our human existence, I believe, is that we will have lots of stuff, lots of robots running around doing stuff, we’ll have hardly any jobs, and we won’t be going anyplace, and we’ll have to spend time figuring out how we find purpose in that kind of world.

Jim: I think that’s a reasonable, conservative case. Of course, on some of the other things along the Fermi paradox line, we’re getting close; Webb should be able to give us a positive, if there is a positive. We may well be able to determine whether life exists elsewhere. Now, of course, it’s probably only going to be the ability to detect very simple life. For instance, if we find certain concentrations of oxygen and CO2, lots of oxygen, a little bit of CO2, in a planetary atmosphere that we can do a spectrograph on as it’s backgrounded against its star. We’re right on the verge of being able to do that. And if we look at, say, everything within 100 light-years and the answer is nah, then it starts to adjust one of the figures in the Drake equation, which is how often does life start?
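For listeners who have not seen it written out, the Drake equation Jim keeps referring to is just a product of seven factors. A minimal rendering; every value below is an arbitrary placeholder, since what the values should be is the whole debate:

```python
# The Drake equation as a plain product of factors. All inputs below
# are placeholders, not estimates.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=1.0,  # star formation rate, stars per year
    f_p=0.5,     # fraction of stars with planets
    n_e=1.0,     # habitable planets per star with planets
    f_l=0.1,     # fraction where life starts (the term Jim mentions)
    f_i=0.01,    # fraction of those that develop intelligence
    f_c=0.1,     # fraction of those that communicate
    L=1000.0,    # years a communicating species lasts
)
print(N)  # 0.05 with these toy inputs: plausibly nobody home
```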

Jim: I’ve had Eric Smith on the show a couple times and we’ve talked about… One time, we went pretty deep on origins of life. And he’s still a strong believer that there could be lots of life out there. I went from being, say, a 14 year old nerd that assumed [inaudible 00:41:06] were right and there had to be hundreds of thousands of galactic intelligences, to the point where I am now entirely unsure if there are any. And this is why I think this is such a huge question. If we are actually alone, at least in the galaxy and maybe the universe, then the burden we have, nay, the moral obligation we have, not to blow ourselves up suddenly gets huge, right? Truthfully, if we’re one of 100,000 truly intelligent species in the galaxy, well, if we blow ourselves up, oh well.

Jim: But if we are it, and maybe it is our moral obligation to bring the universe to life, then it should fundamentally change how we think about who we are. And we’ll have some information between now and 2161 on that question. We’ll be discovering whether independent life could have evolved in some of the Jovian and Saturnian moons, for instance, in these seas under the ice. If it turns out that there is a unique biological life form there, that’s a big plus on life elsewhere. Of course, there’s a fair chance that we’ll find life but it’ll be the same life as we have; I suspect that’s a possibility.

Jim: So anyway, some of these things will be figured out, but I think in your world, it didn’t seem like we had received the message from the little green men and that we were still in that trying to figure it out stage. That’s a good bet, I’d say. Fair bet.

Gary: Yeah. You asked about the Fermi paradox. There’s a book called Fifty Solutions to the Fermi Paradox, which I think is one of the best ones.

Jim: Now they’ve updated it to Seventy-Five Solutions to the Fermi Paradox, one of my favorite books.

Gary: Oh, okay. Okay. If I’m trying to figure out which one fits with me, it’s either some singularity, where there’s the unknown unknown that we suddenly discover and then we don’t care anymore about sending out signals in the galaxy. Or, you made the point that thousands of years is a fraction of the time.

Gary: And so one answer is that there have been other intelligent species, and they became intelligent and sent out signals into the galaxy, but they didn’t last long enough, because the universe has been around for 13.7 billion years, life on earth probably three or four billion, a long time. And we’ve just had 400 years where we had science, and we have barely had enough knowledge to send out signals of some sort. And will we be around 10,000 years from now, maybe still stuck in this solar system, or will we end in some bad way? So yes, I’m with you that we have a moral imperative to try to preserve our world and be an exemplar of an intelligent species.

Jim: Yeah. And of course, that’s one of the terms of the Drake equation. It’s this long multiplicative expression, and one of the terms is, how long does a communicating species last? We’ve been communicating, say with TV levels of power, since about 1946 or thereabouts. So is it 100 years? Is it 1,000 years? Is it 10,000 years? Is it 10 million years? We don’t know. As you point out, if it’s only 100 years, no surprise ain’t nobody home. They may have come and gone many, many times. Anyway, now let’s dig right into the book. We spent a lot of time on the prelims, but I think we hit a lot of interesting things. Your protagonist, I had to laugh, I didn’t notice the name when I was reading it, but when I created my notes: Joe Denkensmith. Denkensmith’s a pretty funny joke, I have to say, right? That could not be an accident, right?

Gary: Yes. No name or anything in the book is an accident actually.

Jim: Yeah. Yeah. That one I had to laugh about when I saw it this morning when I was typing my notes out. He’d worked at the AI ministry, presumably for this mysterious government and is on a sabbatical at Lone Mountain College, somewhere on the West Coast, I suspect. He’s an expert in AI and robot consciousness and AGI, and he’s a former hacker and all that kind of stuff. But he also seems kind of numb. I mean, he doesn’t seem like a very smart guy, which is weird. I mean, why would he be working in these fields if he’s a numbnuts?

Gary: Well, he’s only a level 42. And so in this society you have levels, one being the top and 99 being the bottom. And he’s only a level 42, if that number rings any bells with anyone.

Jim: Right.

Gary: Well, he was probably a level 50 and he made his way up, because in this society, you can go up and down levels based on merit. And so it’s supposedly some way of organizing society that seems kind of fair to Joe, at least when he first thinks about it. Yeah. So you can make your own way. It’s a meritocracy, supposedly. And yeah, so it’s like [inaudible 00:45:51] days. All the kids are above average. He is too, but he’s just an average guy.

Jim: Yeah. Which is interesting. Today, average guys aren’t working in AI and robotics and AGI. So I guess that’s to your point that under fully automated luxury communism, average [inaudible 00:46:08] will be working on these things, because they don’t need to be working doing accounting in the back office at eBay, right?

Gary: Yeah. Yeah. Well, and in fact, the average academic, even from the tiniest little college, needs at least two PhDs to do anything. Of course, you’ve got 120-some years to live, so yeah, you have plenty of time to get better. But if you want to find new knowledge, you’d better be knowledgeable about more than one field, so that you can find new knowledge at the interstices.

Jim: Interesting. And so, he’s on sabbatical from the government. It’s not at all clear he’s going to go back. And then the story proceeds from there. So let’s go into what you alluded to, which is an important part of this world is the idea of levels.

Gary: Yeah. Every novel needs conflict. That’s my conceit to create some conflict. The question is, if you take that model that I described earlier, where you’ve got robots building robots, you’ve got lots of robots, you have lots of stuff, you don’t have very many jobs, what do people do with their time and how do we organize it? Well, the conceit in the book is that in many countries around the world, as this system evolved, they picked egalitarian answers, maybe Game B kind of answers. But in the United States, because of our history of property rights, the oligarchs who controlled all of the means of production demanded a quid pro quo for letting everyone own the robot factories. And that was that they wanted to put in this set of laws, called the Levels Acts, that set up this precise hierarchy. And supposedly it’s, as I said, meritocratic, but some in this society wonder whether there is some legacy in terms of what level folks have. So that [inaudible 00:48:03] levels leads me to ask you, do we have levels today?

Jim: We don’t have formal levels, but we certainly have economic stratification, at least roughly, social classes, things of that sort. But I would suggest they’re not nearly as clear, because in your system, they’re almost as clear as the caste systems were in India, though they’re different of course and much more permeable. My answer would be, in the sense that you’ve constructed, in the United States, not really. What’s your answer to that?

Gary: Well, I would say that one of the purposes of science fiction is not only to talk about the future, but to make comments on our current society. I mean, I think the reality is that we do have levels in some sense, and I question, in the future, how those things might go away. If one were to imagine some egalitarian future at some point, I don’t know, I think there’s something in human competitiveness that’s part of our species that will always be trying to differentiate one from another. In my society, everything’s free except for the top 10% of stuff.

Jim: [inaudible 00:49:15] whiskey comes up a lot. Right?

Gary: Yeah. Yeah. Because I don’t think that even if you had lots of stuff, people would stop competing over wanting to be different. As Joe says, “Well, everything can’t be equal. There’s always the penthouse at the top of the building that everyone wants.” It has more value than other things. So you can’t get rid of things that are unequal in society, and we won’t ever. So that’s a comment on the future, and I think it’s a comment on what will be the same in any future. And what will be the things that we have to focus on as humankind? I think our fundamental social relationships will continue to be central, and how we think about interacting with each other will be central to how we work things out.

Jim: That’s interesting. The Game B answer to that, of course, is antithetical to the idea of rank-ordered levels, but we do, I think, understand that humans are interested in distinction and excellence, and that not everybody is excellent in everything. And so we imagine a world kind of like Boy Scout merit badges, where there are hundreds of dimensions in which one can excel. And so, I might turn out to be a really excellent marksman, which I am actually. Don’t piss me off. I’m an excellent marksman, a mediocre photographer, and a pretty good video editor, as it turns out I discovered accidentally. So I might have a high ranking with three-chevron merit badges in marksmanship, the equivalent of a C grade in photography, and a B plus in video editing.

Jim: And so my distinctions are part of my social persona. But one of the things that we think is very useful… Actually, this is an idea I stole from James Madison, oddly enough, which is if you have enough different dimensions, no one dimension can dominate the others. And the problem with Game A, our current world, and of course in a hyperized version, this is where you’ve taken current trends and just amplified them in your world of however many levels it is, is that it’s all reified down to one level. In our society, it’s reified down to, guess what? Money. And in your society, it’s not exactly money, but it’s this artificial level system, which is essentially the surrogate for money in some sense, or the replacement. As you say, it was the compromise to the rich people when money was gotten rid of as the principal means of distinction.

Gary: Yeah. Yeah. [inaudible 00:51:56] agree. I think in some sense, money certainly will lose its value, and the way it’s central today, because when everyone has lots of stuff, it’s not going to distinguish you very much. And why would you spend all your time chasing that if there’s plenty of it?

Jim: Exactly. But you still want to chase something. Yeah. I would encourage you to write your second volume and build your vision of one of these egalitarian worlds. That’s actually a lot harder probably, right?

Gary: Oh, it’s hard.

Jim: Frankly, we struggle with it in the Game B world. All right, we can sort of point out a few things, but has anyone actually drawn a realistic and believable example of a different, non-rivalrous future? Yeah, somebody will, but not yet. So anyway, good old Joe goes on sabbatical, and there are some very interesting people here at this college, some really good thinkers. Tell us a little bit about them.

Gary: Well, he runs into Mike, who is an economist. Mike and he have some debates about whether the current economic system and the level system are potentially as benign as Joe seems to think. So there’s that conversation. There’s another professor, Gabe-

Jim: My favorite.

Gary: Oh, okay. Okay. So yeah, Gabe is a little bit of a skeptical philosopher, and Gabe becomes Joe’s mentor to help him think through these questions of consciousness and what the I truly is. And Joe’s trying to sort that out because, as you mentioned, he had concluded that even though the conventional wisdom is that, yeah, the robots walking around are sort of conscious, Joe realizes that that’s not true. He wonders whether it’s even possible to make robots conscious. He first has to ask the question, what is his own consciousness? And so, what is that I that is Jim Rutt? What’s that center of you? For all of our audience here, what is that thing really? So that’s one of Joe’s quests, to find the answer to that. So he has those conversations with Gabe in particular.

Jim: Yeah. I love those conversations. I mean, Gabe to my mind was just the most interesting character, even though he was in a supporting role. He was the truly wise person in some sense.

Gary: Everyone’s talking about the metaverse and stuff. There are a couple things, just to give you another sense of my future world. I don’t think that the metaverse is going to be that big of a thing, quite honestly. I think that we will enjoy so much being in the real world that the metaverse will be tertiary to our existence. Joe talks about, “Oh, I’ve traveled around the world in this sort of net kind of experience,” which would be a virtual tour of going to Venice, for example, because in the future we’re not traveling so much, because we’re trying to be eco and not spend the energy in that way.

Gary: And Gabe is giving a lecture and he’s trying to teach Joe some particularly difficult philosophical concepts. And so he puts on his avatar and they’re in a virtual world together, and they are able to interact with the world around them. And this is an example of using a metaverse to better inform oneself. So yeah, we have this thing going on with the characters. But again, as a part of the real world, it’s only tertiary, even 140 years from now.

Jim: I hope you’re right. I’m with you that we really ought to be using our technologies to enrich our human wellbeing, and I’m not nearly as cynical as my old friend Marc Andreessen, who believes that, “Okay, we can immiserate people in their physical life as long as they got Lamborghinis in the metaverse.” To stuff some words into Marc’s mouth that he probably wouldn’t quite agree with, but he has said some things not too far from that. I hope it’s not true, and that the metaverse is the spice on the meal, not the main course, but we shall see.

Jim: One of the things I did make notes on, and I’d like to jump into, is this concept of the I as the distinguishing factor. You lay out a hierarchy of, I guess I’d call it, complex adaptive systems that eventually lead to consciousness: nociception, which is a word I don’t believe I ever heard before, oddly enough, sentience, and consciousness. In my language, I use consciousness, and a lot of people use consciousness, way down the stack, down to amphibians and maybe fish, but you don’t, I don’t think, right?

Gary: Correct. I do not. No.

Jim: You would call that sentience. And my white-tailed deer, you would argue, are sentient but not conscious. I would argue they’re conscious, but I use a different definition for consciousness.

Gary: That’s probably true, yeah. Yeah. So to distinguish, nociception is sensitivity to chemicals. A sea sponge has that. Right?

Jim: Right. And famously, you know what, for our SFI people, bacteria have that. They’ll follow a glucose gradient and they’ll avoid certain harsh chemicals. You don’t get anything more primitive than a bacteria at least on this earth.

Gary: Exactly. Now then, sentience is the capability to feel pain and to have some basic emotions, fear. You can think of lots of animals that fall into that category. Then at the top, if you want to be human-centered in our ranking of this stuff, we have consciousness as something akin to our idea of consciousness. And then we can have a debate about how far down in the animal kingdom that goes. Many people will say it includes dolphins, probably octopi. How about your favorite pet, your dog? Yeah, maybe. Does it include Blake’s Tyger? Tyger Tyger, burning bright. Is that creature conscious? Well, certainly sentient. He feels pain and hunger and all kinds of other things. We humans make the distinction that any moral code requires consciousness. That’s why I think it would be the general view that if the lion eats you, the tiger eats you, it’s not because it did something immoral. It’s just an animal. He doesn’t share the consciousness. And therefore, without consciousness, one does not have moral culpability.

Jim: In the science of consciousness, many people make some different distinctions. For instance, Gerald Edelman, who spent some time out at SFI, makes the distinction of primary consciousness, which he would say we certainly share with a dog or even a bird or a reptile. It’s essentially that biological and neurological architecture that makes us the first-person experiencer of being in the movie of being alive. There’s no doubt in our mind that a dog has that level of consciousness. He has a personality, he reacts in a consistent fashion, et cetera.

Jim: And then Edelman posits extended consciousness as fully what humans have, whatever that thing is that puts us clearly over the line and he’ll also then backtrack and say, “Yes, other animals have it a little. Great apes have it a little. Dolphins, we’re not really sure but probably have it as much as apes, maybe a little more, maybe a little less. We’re not sure.”

Jim: Some people have looked at the famous theory of mind tests to say whether they’re in extended consciousness or not, but I don’t think that gets us to lawfulness and morality as a model. I would use Terrence Deacon’s line, from his very wonderful and under-known book, The Symbolic Species, where he draws a bright line. I kind of enjoy poking some of the people at SFI with this, because they hate bright lines. And I say, “Here’s a bright line.” Humans can actually create arbitrary, nested symbols. And from that they created language, and that is clearly different. So in my work, I think of the fundamental architecture of human consciousness as not very different at all from chimps in particular, but we have a new kind of conscious content, which the chimp doesn’t, which is the symbol. And once you have the symbol, you can build networks of symbols. And then, by a not very large jump, you can map sounds to symbols and invent language.

Jim: And it’s worth knowing for people that reading, for instance, is the same as oral language. When you read something, it actually goes through your auditory system. The thing that actually recognizes the words is auditory, which is kind of clever and interesting and a good exaptation [inaudible 01:00:42] only came late in the day.

Jim: So I think that’s a very interesting, different way of slicing it. And so does a dog have an I? Well, the dog probably doesn’t know he has an I, or at least doesn’t have symbols to express it and to have a model of the world in which an I is somehow different than a you, or something like that. But he might have an I. I suspect he does. He’s the same dog. He has the same personality. He has episodic memories. He has tastes. He has people he likes and doesn’t like, but he just doesn’t know he’s an I. And so that’s what some people call self-consciousness, which I argue is bootstrapped from symbols, because we have a means to distinguish between being an I and knowing that we have an I and being able to write poetry about our I and somebody else’s I, et cetera.

Gary: Yeah. Okay. That self-consciousness, so one of the tests I’ve heard is, if you have a mirror and the animal looks at itself in the mirror, does it know that it’s them, or… And there are examples where some animal looks at it and wants to immediately attack the thing, because it thinks it’s something else, but-

Jim: Most dogs fail the mirror test.

Gary: Chimpanzees pass it, right?

Jim: And elephants apparently pass it, and a few other animals. And yeah, that’s a classic test. That’s the theory of mind test, essentially. Does the animal understand that it has a mind and other things have minds? And does it understand that another chimp is not the same as itself? Jessica Flack’s work is very interesting in this area: clearly great apes understand the idea of another having a mind, and how to lie, how to trick, et cetera. Primitive, compared to what we can do, but still analogous.

Gary: I’d like your cut with the symbolic, the ability to deal with symbols because that gets to a distinction I make in my book in passing and it’s more in the Appendices, but between syntax in semantics, which is an interesting discussion. [inaudible 01:02:38] John Searle deals with that too. There’s a question, does syntax… Necessary or semantics necessary, which one came first, et cetera? And I have the belief that central to the mind, what is our human mind, is semantics, not syntax. Syntax is putting a bunch of symbols in a row to mean something. Semantics is what is the meaning inherent in it? And I think that mind starts with semantics and then the syntax follows. And that’s maybe different than many other thinkers in this field.

Jim: I would agree with you absolutely. For instance, I believe my white-tailed deer have semantics. They know that a guy in a camo coat carrying a rifle in the fall means run. That's semantics. They don't have any syntax. Clearly we had semantics long before we had syntax. Though in terms of human language, I take the position that you can't disentangle the two. Once you get to language, semantics and syntax are so amazingly interwoven that it's a mistake to think of language as either all syntactical, which has proven to be a total dead end, or totally semantic, which actually makes no sense, because language is implemented as syntax. So I think the actual trick is you've got to look at the two together, and they're kind of all twisted around each other in a very complex pattern.

Gary: Totally agree with you, and I think… But one of the issues with how did this come about is that folks start with human consciousness. And because those things are very commingled there, that's where I think they make the wrong turn of thinking that syntax is important. In my third paper in the Appendices, I used the exemplar of C. elegans, the little nematode, and tried to imagine how primitive life started out and then how it got to some form of consciousness. And I think that if you start from the smallest end of the spectrum of life on earth and then try to move up, you make more progress. And so that's why I started at that end. You start from digestion, which is not mental, as I said earlier, but you can imagine the simple nematode. It's got some outputs, it's embodied. This gets back to the concept of embodiment, which, at least for organisms on earth, is critical.

Gary: It senses outside that there is some food, and then it has to use those 302 plus-or-minus-one neurons to decide to do something, to take an action, and that's the internal system. And then the result of that action is it moves toward food, and then it achieves what it wants to do. So it's gotten its intentionality, and then it eats. And so now we have a very simple system that is based on the semantics of the creature embedded in its environment, and the semantics of that creates that little thought. And your example with white-tailed deer seeing the hunter in his bright orange is exactly the same kind of thing. Yeah.
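
To make that sense-decide-act loop concrete, here is a minimal sketch (an editorial illustration, not from the book or the Appendices; the one-dimensional world, the gradient rule, and all the names are assumptions for the example) of an embodied agent whose "semantics" is nothing more than the coupling between what it senses and what it does about it:

```python
def sense_food(position, food_position):
    """Sense a chemical gradient: the signal is stronger closer to the food."""
    return 1.0 / (1.0 + abs(food_position - position))

def decide(signal_here, signal_ahead):
    """A tiny 'nervous system': step toward whichever direction smells stronger."""
    return 1.0 if signal_ahead > signal_here else -1.0

# A one-dimensional world: the agent starts at 0, the food sits at 10.
position, food = 0.0, 10.0
while abs(food - position) > 0.5:            # loop until it reaches the food
    here = sense_food(position, food)
    ahead = sense_food(position + 1.0, food)
    position += decide(here, ahead)           # act on the decision
print("ate the food at position", position)
```

The point of the sketch is that the meaning of the food signal is exhausted by what the agent does about it; there is no syntax anywhere in the loop.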

Jim: Yeah. I think we’re on the same page there. And as you say, it’s amazing that people are confused about this. If you think about it carefully, it seems like that’s just ridiculous that syntax goes all the way down. Can’t possibly. They don’t have the capacity for it.

Gary: Exactly. Right.

Jim: But semantics obviously had to be there from the very beginning. Well, it's a complex adaptive system, but essentially a pattern of semantics. And it has to be useful semantics if it's going to survive.

Gary: Yeah. And in fact, back to Searle again, his Chinese Room example: the system in that example is all syntax, and he argues that it's not conscious because it's just syntax. There's no semantic meaning [inaudible 01:06:36] created there.
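
As a toy illustration of Searle's point (an editorial sketch, not Searle's own formulation; the rulebook entries are invented), a program can produce sensible-looking replies by pure symbol manipulation while containing nothing that understands what the symbols mean:

```python
# A toy "Chinese Room": replies come from a rulebook of symbol patterns.
# Nothing in this program knows what any of the symbols mean.
rulebook = {
    "你好": "你好！",        # "hello" -> "hello!"
    "你饿吗？": "我不饿。",   # "are you hungry?" -> "I'm not hungry."
}

def room(message: str) -> str:
    # Pure syntax: match the incoming symbol string, emit the paired one.
    return rulebook.get(message, "请再说一遍。")  # "please say that again"

print(room("你好"))  # looks like understanding; it's only table lookup
```

However elaborate the rulebook gets, the lookup never acquires the semantics; that gap is exactly what the Chinese Room argument turns on.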

Jim: It’s a very elegant thing. We’ll put a pointer on the episode page to Searle’s Chinese Room paper. Let’s get back to the book now. We’ve been having so much fun going down rabbit holes here. So Joe goes on sabbatical, hangs out with some very interesting people at this little college, wherever the hell it is, somewhere in Northwest sort of looks like, and the state that they live in, I don’t know if it’s still the United States or not, but it’s some state, pretty vague. That seems like an ugly and severe police state. And fairly often the head of National Security shows up, [Peyaton 01:07:10] and fun ensue.

Gary: Yes. Peyaton. Yeah. So, well, is it a utopian future? Is it a dystopian future? You tell me.

Jim: I’d be in the mountains with my rifles and my Wolverines. If you remember Red Dawn. I would consider it a very ugly repressive dystopia and would definitely be leading a squad of revolutionaries against it.

Gary: Okay. Well, but how do we stop that from happening? I mean, think about this. We’ve got lots of stuff. You don’t have to do anything if you don’t want to. You don’t have to work. You have all of your needs met. You can write poetry all day if you want to. You don’t have to do anything.

Jim: Very seductive. Very seductive.

Gary: Yeah. There is this system where you can try to compete for the various fun jobs there are, like being on an orbital base trying to find exoplanets. There are some interesting jobs. If we look at how data is collected in our society on each of us today, where does that end up? Some of us, me included, would like to see a system where we have much better privacy rights. As research for the book, I went down to DEF CON, the black-hat hackers' conference in Las Vegas. There's lots to worry about. So I don't know how we stop that disappearance of our privacy from continuing, but that's why that's in the book, because it is something to worry about. It is a concern. At the same time, if all that data's out there and if the data is used to actually predict our needs, it makes the economy really efficient.

Jim: That’s the trade off. That’s the seduction. That’s the devil’s offer. Of course, frankly, Google and Facebook is the devil’s offer. We’ll give you this amazing, magical technology for free, as long as you are willing to give us enough data that we know more about you than your spouse does.

Gary: Yeah. Now what this future world imagines is that there aren't those corporate entities anymore. This is all owned communally. Because I think so much of the dystopian literature has these evil corporations, and that's not necessary. But nonetheless, let's just take policing. I think your family was involved in policing.

Jim: Yeah. I come from a long line of Irish Catholic cops and Marines.

Gary: Okay. Okay. I've got Marines in my background too. So yeah. I mean, we all probably agree that there are bad people out there and it's good to catch them. There are pedophiles, there are sociopaths, all kinds of bad actors out there. It's good to catch them. And in fact, if we asked most people, most people would be fine with using almost whatever means, short of torture, et cetera, to collect data to catch those bad people. But where does that line end? That's a big question. I don't know. There's enough knowledge about DNA that almost already, anyone who commits a crime that leaves DNA can be caught, every place in the Western world. You keep reading about the databases. So I can quickly see a future where it's common knowledge that if you kill someone, they're going to catch you with your DNA.

Jim: That would be good if that was the only thing that came from that. Of course, that’s the devil’s temptation. Should we have a national DNA registry? There’s a lot of arguments in favor of that one actually. If we could somehow police the police, it would be okay. But again, this is James Madison, my hero, the only known revolutionary who was a complete cynic about human nature. He was the exact opposite of utopian. He assumed that every capability would be used corruptly sooner or later. He even imagined an idiot and a menace at the level of Trump somehow ending up in the presidency. And so far the Madisonian architecture’s held together just barely, even under an assault of that level. But he would argue it’s a recursive problem. Who polices the police? Who polices the people that police the police? And so probably you don’t want to give the police unlimited power.

Gary: Yeah. And in the book, one of the characters, whom I'll not name at the moment, says, "There's an ocean of data out there and we'll eventually catch everybody." So how do we prevent that from happening, if you think that's not a good thing?

Jim: Yeah. It’s really a tough problem. If we’re going to, frankly, suspect we won’t get by this one, but maybe we will. And we’re going to have to have some societal wide agreement that we’ll give up every murderer and every pedophile. In return, we will own our own data and it’ll be encrypted end to end. And I’d even go further and ban a large group of algorithmic recommenders. It’s like heroin. Again, well, [inaudible 01:12:08] we’ll give you Facebook, we’ll give you Google in return for us being able to manipulate you at unbelievable levels of detail.

Gary: Well, yeah, and on algorithms, there's another scene in my book, without giving away too much, where you have robo-judges. Now, of course, the robo-judge is paired with a human judge, but the robo-judge is supposedly impartial, you could imagine. They're using algorithms and just the facts. The robo-judge with its algorithms comes to a conclusion and then has the human judge verify it or not. So that could be a system. I mean, we're kind of down that path now. We have algorithms that recommend whether we grant-

Jim: Bail. Yeah. And there's some work on sentencing, and frankly, there's a fair bit of scholarly evidence that sentencing is so capricious, it probably should be automated. For instance, I read a very interesting paper that said if you're going to be sentenced by a judge, try to schedule your sentencing hearing for after lunch, because your sentence will be less. If it's at 11:30, you're fucked. If it's at 1:30, and particularly if the guy, or the gal, whoever it is, had a nice steak and potatoes and a martini or two, then you're likely to get a 20% lighter sentence. And that level of capriciousness is pretty scary, but apparently it's true. So that might even be an argument for something like an AI sentencer. Anyway, let's get back to the story here a little bit. So there's a leader of the underground, the activists against The Levels Act: Evie. Again, I'm sure that name is no coincidence. Tell us a little about Evie. Where does she come from and what motivates her?

Gary: Okay, well, Evie is of the lower levels. She's a 76. Again, everything in the book has meaning, and she's fighting for freedom from these oppressive levels, which aren't actually as meritocratic as one would imagine. Because of these levels, if you're more than 20 levels away from someone, you can't marry them. You can't travel if you're in the lowest quartile of the population, et cetera. So there are these restrictions that show up, and she just thinks they're fundamentally flawed. So she wants to get rid of them. Joe comes in contact with her in an interesting set of circumstances, and then the story proceeds from there. Joe, an everyman, as I said, just an average guy, and Evie… This book has many levels. One of them is that it's a retelling of the Adam and Eve story.

Gary: It is a story about humankind starting from the beginning of society and thinking about how we build civilization up from nothing, and what kind of structures we create when we do that. And then it's also a re-imagining of the Adam and Eve story in a slightly different form. Everyone's familiar with that story. God orders certain things to be done, and if humankind disobeys, then there's punishment. And so the alternative story is: what if there's a world where God does not interfere? How is that story retold then?

Jim: God does not interfere, meaning that they ate from the Tree of Knowledge, as well as the Tree of Life.

Gary: Yeah. So let’s talk about that element of this spiritual element of the book. Most of the world’s religions deal with situations where you have God or gods that involved very heavily in human life, but that’s clearly not something that myself as a scientist finds palatable.

Jim: Put it this way, there’s no evidence for it, right?

Gary: There’s no evidence for it. It seems that the universe is a closed physical entity and there’s no evidence that we have found for any interference whatsoever. We can hypothesize with our current physics that there was some singularity, there’s a Big Bang. There was a first point in time and space that everything started. And that from that time we don’t see any evidence that anything interfered. So we don’t see any evidence of any outside creative force there.

Jim: Nothing supernatural. Right?

Gary: Right. But the science absolutely tells us nothing about how that all started. How did that whole thing get into motion, if you will?

Jim: There is some speculation about it, but it’s at the very outer fringes and is not actually scientific yet.

Gary: Yeah. Well and it’s been more than that. Even so many of the theories at the forefront of theoretical physics, everything from string theory to all those theories. They’ve not been tested because most of them are untestable in fact. You need a particle collider that’s as big as the galaxy. So we’re never going to test that, some of those theories.

Jim: Don’t say never. There are a whole series of articles on how one might someday be able to test string theory using, for instance, particles that are accelerated by the black hole at the center of the galaxy, for instance.

Gary: Yeah. Okay. I’ll agree with you to that. And the other part of it is, is that yet, where do those theories come from? Well, they come about because primarily through the mathematics. The theoretical physicists spend their time with the math and the math itself is incredibly elegant. Wigner wrote a famous article called The Unreasonable Effectiveness of Mathematics in the Natural Sciences. And he asks the question, as a very reputable physicist, “Well, how does this happen?” And there isn’t any answer about why that is, but it does seem that the universe is organized around mathematical principles at its core. And so the physicists, theoretical physicists, followed the math and the elegance of it and that’s the basis of many of these posits about how things started. But they don’t have any tests.

Gary: So my point is that we know nothing from the science about what happened before. There's no basis whatsoever for anyone who is fully a scientist, who fully follows the science, to opine upon what started it. You just can't say. So your epistemology is limited. That is, if you're a good philosopher of epistemology, of what we can know and can't know, and a good scientist, you can't opine upon that. We can't say whether there is no God. We don't know what happened before. And so actually I think that folks who are stridently arguing their atheistic beliefs are overstepping the limits of the epistemology.

Jim: Yeah. I always say everyone thinks I'm an atheist, but if pushed on it, I say, "No, I'm a very atheistically inclined agnostic." Because how the fuck could you know? But on the other hand, I put Yahweh down there in the same probability class as the Flying Spaghetti Monster.

Gary: Yes. I’ve been toyed with pescetarianism too. Right?

Jim: Exactly, right. It's a beautiful thought experiment. What is interesting is that there are people who are scientists who do speculate and even talk about possible tests. Like Lee Smolin, who's been on the show. He admits it's just a hypothesis, not yet even close to testable, that maybe our universes emerge out the back end of black holes, and you have a whole evolutionary theory in which the laws of physics are adjusted more or less randomly from the baseline of the previous universe, and universes could then evolve toward those that produce more black holes. And that would explain why we have so many black holes, why our physical [inaudible 01:20:30] are.

Jim: So there is some semi-scientific thinking about things like that, but it's neither here nor there. I think we're both on the same page that there are a lot of things we just can't know. That's one of the things I've enjoyed most about working with the people at the Santa Fe Institute. There's been a fair bit of work there on the fact that there may be a whole lot of things that are just fundamentally unknowable, and you just have to deal with it, dude. Right? The limits of knowledge are really something very, very important. One of the great failure modes of our so-called experts is that they think they know everything. They don't, and it may well be they never will.

Gary: Yeah. And just to put a point on the atheism argument, my reading of it is that what happened is we had these creationists who were trying to misuse science to push certain religious views. They were trying to change the way science was taught, et cetera. And so scientists realized that they needed to do something about it, and they rose up to try to defend the scientific method.

Gary: And much of that then turned into a fight, in which some of the leading spokespersons articulated a kind of atheism in opposition to these creationists. And you can understand why that happened and all that, but it makes it a fight, and it leads some to view the argument as being that either you have a religious belief or you believe in science. The epistemology clearly does not support that distinction. So I'm hoping to open up the argument to suggest that you can be very scientific and yet admit that science doesn't know, and leave some room. But again, this is… Any spirituality that imagines something else that is not interfering, in a closed physical universe, [inaudible 01:22:32] all that work then. So yeah.

Jim: Yeah. That one’s above my pay grade. As I famously say on the show, when I hear the word metaphysics, I reach for my pistol.

Gary: Okay.

Jim: And I refer to spirituality as the S word. The other S word. We don't have a whole lot of time. This has been a long podcast as these things go, but it's been so much fun I could go on for quite a while. Let's see if we can get at least a pencil sketch of the rest of the plot so people can understand what's going on here. So they end up at the Combat Dome, which is where Evie lives. I'm not sure if that's where she grew up or not. I frankly don't remember. It's been a couple of months since I read the book, but I do remember they end up at the Combat Dome. Then they get caught, and they go before some kind of crazed judicial process, obviously. Seemed like a kangaroo court, and they're found guilty.

Jim: And then the prevailing punishment for severe crimes, seemingly because of some international agreement against the death penalty, and it doesn't seem like there are too many scruples amongst the powers that be, is to instead banish people to the Empty Zone, which is a part of America in Nevada. And you actually lay out where it is. I went on Google Earth and looked at it and even checked the rainfall at various places along the way. It's in Nevada. And the theory is that almost everybody sent to the Empty Zone dies, and a three-year sentence is very severe, but they don't die.

Gary: Yeah. Yeah. In fact, you're right. You could hike their whole trip and see exactly what they see, and it's all realistic. So for any backpackers, it's possible to hike this thing. And yeah, I visited the Empty Zone. It's a lot of fun to imagine what this would be like in the future.

Gary: Yeah. This is a retelling of that story. So what happens if you start from nothing and you try to recreate your own Eden? Which is the Eden, the society they left or the society they try to create? So yeah, I think one of the questions is about humankind: if we started all over, could we make it better? That's part of Game B. You're going to have to try to figure out something else.

Gary: Well, one thought is that if you started all over, you would still try to recreate some of the same things we have anyway, because we like the stuff that we have. We like the fact that it makes our lives easier. We like the fact that we have more leisure, that we can do what we want to do. We like the freedom that comes from it. It comes with costs. And so if you rebuild everything, what would be different? And I think one of the things that comes about is that we're all in this together as humankind. So community is important, and there isn't a way to just imagine that we're going to go back and build our own separate tribal lives and be very successful. So that's my view.

Jim: Gotcha. Makes sense. And of course, it's just a classic genre of science fiction. I can't remember how many novels I've read where either people get stranded on human-tolerable planets with just themselves and a bag of tools or something, or they go back in time, one of my favorites, all the way back to A Connecticut Yankee in King Arthur's Court and zillions of other ones. It's nicely done. You actually spend a goodly… What is it? 30% of the book, something like that, on how Evie and Joe make it in the Empty Zone.

Gary: Right, right, right, right. Yeah. The book has many levels, as I said.

Jim: Yeah. It’s very interesting. So anyway, it’s not too big a giveaway, but they survive the Empty Zone and come back to the world. Stuff happens, the good guys win. And it’s funny, we talk about this in the pregame discussion. I would argue that the way they won is through the Star Trek Nomad template, but it turns out you were aware of that.

Gary: That’s right. Well, there’s only so many stories out there. So we try not to repeat them and certainly not intentionally.

Jim: Yeah. I bought some of those books, 101 Plots. That’s probably stretching it a little bit.

Jim: Yeah. So that’s the story in a nutshell. It’s really well written. The world is very richly imagined. I think you also showed a lot more discipline than a lot of first time novelists do, in that you didn’t try to explain everything. For instance, the state government. What is the state like actually? You don’t even get into that, but you don’t need to, which is kind of cool. So I think as a work of literary art, it’s quite good. And for a first time novelist, damn good.

Gary: Well, thank you. Well, it won six awards last year, so I'm pleased with that. We've got lots of copies out. I've got it translated already into six languages, including Brazilian Portuguese for your Portuguese listeners. In German, it's [foreign language 01:27:38]. And we also have the Russian and the Japanese editions coming out in the next month. I'll have it in eight languages very shortly.

Jim: Very cool. And we have lots of German listeners. I'm not quite sure why, but we have almost as many German listeners per capita as in the United States, and we have more per capita in Sweden than in the US. How about that? And we have a fair number in Italy. Oh, by the way, you mentioned Brazilian Portuguese, but we have more listeners per capita in Portugal than we do in the US. So I don't know whether a Portuguese reader can read Brazilian Portuguese or not.

Gary: Oh, it’s very close. Yes, yes, yes. Yeah. And then in Italy, my book has been a small time bestseller in Italy among that audience too. So I’m very happy with that.

Jim: All right. Well, we have much more we could talk about, and as I mentioned up front, there's the second book, the Appendices to Unfettered Journey. And I had a bunch of talking points on that too. It was real nerd-alert stuff. It would've been fun, but at some point us old dudes start to run out of gas, and I've found for me it's about 90 minutes. We've been going at it here for almost 120. So I think we're going to have to wrap it here, Gary. Remember, Unfettered Journey is available at your book provider of choice. And it's been great to have you here on The Jim Rutt Show.

Gary: Great, Jim. Thanks so much. I really enjoyed it.