Transcript of Episode 2: Robin Hanson

The following is a rough transcript which has not been revised by The Jim Rutt Show or by Robin Hanson. Please check with us before using any quotations from this transcript. Thank you.

Jim Rutt: This is the Jim Rutt Show. I’m your host, Jim Rutt. Today’s guest is Robin Hanson.

Jim Rutt: Robin is an associate professor of economics at George Mason University, in Fairfax, Virginia. Welcome, Robin.

Robin Hanson: Great to be here.

Jim Rutt: Could you give us just a few words about your economics department at GMU, and particularly how it might be different from other economics departments?

Robin Hanson: I’m a professor of economics here at George Mason. Our department is a little unusual. We’re, in some sense, a weak alliance of contrarians, which has some interesting features, because, “Do contrarians really have an intrinsic alliance?”, is a good question. We tend to be less focused on high method and more on a range of interesting questions. We also tend to be more publicly engaged, having blogs and books that are widely read, and that sort of thing.

Jim Rutt: That sounds real good. Certainly, you and I have known each other at least a little bit, for a few years. Your range of interests is quite astounding. In fact, when I was looking at your research interests, I was going, “Oh, my goodness. How are we going to cook this down to 90 minutes?”

Robin Hanson: I usually have to apologize for that, which I kind of still should, although there’s this recent book out called Range, by David Epstein, which is celebrating those of us who have excessively wide interests and backgrounds.

Jim Rutt: Frankly, it fits me to a T. I have never done the same job for more than 18 months, in my whole life. I’m a radical changer of focuses.

Robin Hanson: And that says a lot more about someone like you or me than it does about a 20 year old.

Jim Rutt: Yeah, I think that’s probably right. Anyway, let’s hop in.

Jim Rutt: Last night, my wife and I watched the Black Mirror episode Rachel, Jack and Ashley Too. You recently tweeted about that, and I think you mentioned that they’d actually chatted with you a little bit about it, and you weren’t entirely happy with the result.

Robin Hanson: I’ve had no contact with Black Mirror people at all, so as far as I know … I have no indication they are aware of me at all, or any of my writings on the subject.

Robin Hanson: I have this book called The Age of Em, which is a serious attempt to analyze what happens when you do brain emulations. About a third of the Black Mirror episodes have included brain emulations, which I think says that the concept is potent and interesting, but when they use brain emulations in their stories, they don’t do it, at all, in a realistic way.

Robin Hanson: That is, brain emulations would radically change most everything. It would be this enormous disruption of society on the scale of, say, the Industrial Revolution or even the farming revolution. They usually have an emulation there just for one little thing. That is, the world is mostly familiar, except one thing is different because there’s an em. That just doesn’t make much sense.

Robin Hanson: So, in that particular episode, there’s a singer, and in order to get fans more engaged with her, they make a brain emulation of her that they sell to her fans, and you, as a fan, can talk to an emulation of the singer, and chat with it, and it can help you do your hair, and your clothing, and be friends, which, if emulations were possible, isn’t a crazy thing to do. It’s just crazy to have a world where that’s the only thing they do, because basically ems could replace everybody on all the jobs. Once ems are cheap enough to do that, well, they’re cheap enough to do everything.

Jim Rutt: Yeah. I read The Age of Em when it came out, and was quite taken with it. When you agreed to be a guest, I went back and reviewed it. I went through it at high speed, but reminded myself of a number of interesting things. And I must say, as someone who has known a number of futurists, and has read a number of futurist scenarios, it is probably the richest, most integrated futurist scenario I’ve ever seen. As Robin indicated, his assumption at the broadest level is that we develop this technology to upload humans and turn them into robots, essentially, and it very quickly becomes the basis of our whole civilization. Maybe you could speak relatively briefly about the various big things that you see happening after this.

Robin Hanson: Sure. I like to say that it’s like science fiction, except there’s no plot, and there’s no characters, and it all makes sense. A lot of people have told me over the years, a lot of technology people especially, that while it’s possible to envision future technologies, it’s just not possible to see the social consequences. If you read a fair number of futurists, you might tend to agree with that, because futurists often seem to have pretty vague and insubstantial forecasts.

Robin Hanson: So, I wanted to show, in this book, that you really could say a lot about a concrete scenario, if you worked it out carefully. This scenario is one where you can scan an individual human brain, make a computer model of it, and run that on a computer, and that model has the same input/output behavior. That is, it acts the same in the same situations. You could hook it up with artificial eyes, ears, hands, and mouth, and then it could talk back to you. It could do jobs, et cetera.

Robin Hanson: Once you have emulations and they’re cheap, then they are cheaper than human workers, and they can do pretty much anything a human can do, because they are emulations of humans. So, first of all, all the humans lose their jobs, and the ems take the jobs, but a lot more things happen. The ems can be made in factories, as fast as you can crank them out, which is very different from the human population. So, the population of ems can explode. The population gets so large that it drives down wages to near subsistence levels, which is the level at which human wages have typically been in most of human history. For most animals in history, it’s also the kind of life they live.

Robin Hanson: So, ems have to work most of the time, in a very competitive economy, but because the number of ems can grow so fast, the economy can grow much faster. The ability to grow humans is a big limit on the current growth rate of the economy, so I estimate, roughly, that this new em economy could double every month, rather than the doubling every 15 years we have at the moment.
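
Robin’s growth comparison is simple compounding arithmetic; a few illustrative lines of Python (using his rough doubling-time estimates, nothing more precise) make the gap concrete:

```python
# Compare annual growth factors for the two doubling-time estimates:
# doubling every 15 years (current economy) vs. every month (em economy).

def annual_growth_factor(doubling_time_years: float) -> float:
    """Growth factor over one year, given a doubling time in years."""
    return 2 ** (1.0 / doubling_time_years)

current = annual_growth_factor(15.0)    # ~1.047, i.e. under 5% per year
em = annual_growth_factor(1.0 / 12.0)   # 2**12 = 4096x per year

print(f"current economy: x{current:.3f} per year")
print(f"em economy:      x{em:.0f} per year")
```

A monthly doubling compounds to a factor of 2^12, roughly 4000x per year, versus under 5% annual growth for a 15-year doubling time.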

Robin Hanson: Two big features of these ems are that they can make copies of themselves, and they can change their speeds. Since they’re running on a computer, and they’re a very parallel process, if you give them more processors, they can run that process faster. So, subjectively, they can run faster or slower than human speed, if you pay more to go faster. My best estimate is that the typical em runs roughly 1000 times human speed. That’s the speed at which, in the one-month doubling time of the economy, they experience a subjective century, and, roughly after that point, they need to retire, because their minds will become too fragile to be competitive with younger workers. Because it’s easy to make copies of ems, most ems are copies of the few hundred most productive humans. So, even though there might be ten billion humans available to be scanned to become emulations and compete in the em world, the em world really wants the few best, because it can make as many copies as it wants of the few best.
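
The “1000 times human speed” and “subjective century per objective month” figures fit together arithmetically; a quick sketch with Robin’s round numbers:

```python
# Sanity-check the speedup arithmetic: an em running at 1000x human speed
# experiences 1000 subjective months per objective month.

SPEEDUP = 1000            # Robin's rough estimate for a typical em
objective_months = 1      # one doubling time of the em economy

subjective_months = SPEEDUP * objective_months
subjective_years = subjective_months / 12

print(f"{subjective_years:.0f} subjective years per objective month")
# ~83 years, on the order of the "subjective century" Robin describes
```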

Jim Rutt: Very, very interesting. Within the constraints of your assumption, I was amazed at the detail you went into. You talked about politics. You talked about sex. You talked about religion. You talked about the architecture of the cities. You even went into an amazingly detailed argument about the subjective experience of gravity, based on the size of the robot.

Jim Rutt: So, as I said when I introduced it, if someone’s interested in an amazingly integrated, crosschecked scenario based on one key assumption, this is a great book to read. However, I will say that, personally, I am skeptical of the core assumption, which is that emulations will be the first AGI.

Jim Rutt: Let me give you a couple arguments, and then I’d love to hear what you have to say.

Robin Hanson: Let’s get into it.

Jim Rutt: Yeah, let’s do it. First, your ballpark estimate for ems was about 100 years, and you laid out, I thought, some reasonably good analysis on why that was not unreasonable. I guess I would say my estimate is shorter than that, for other approaches, but I’ll confess, my methodology is more seat of the pants and less analytic than yours. But, if I had to put a flag down, I’d say 20 to 80 years, with a somewhat skewed mean of 50. If we got there in 50, before your ems got there in 100, I think there would be no point in doing ems, or very little, and here’s why.

Jim Rutt: Ems aren’t very good. Right? They’re only a bit better than humans. One of the things I like to say about humans … and I used to just say it, because it sounded good … is that humans are, to the first order, the stupidest possible general intelligence. Right? Ma Nature is seldom profligate in evolution, and only does what she needs to do to get an advantage, or a species does. Of course, this is all teleological talk, which we know is not true, but nonetheless, you get the point that humans aren’t that smart.

Jim Rutt: Well, in the last five years, I’ve spent a good amount of my time on cognitive science and cognitive neuroscience, and now I know humans aren’t that smart. For instance, one of the more obvious ones is so-called working memory size, the seven plus or minus two that allows us to relatively easily remember phone numbers. Turns out that’s basically for audio information, and writing is actually processed in our audio system. We actually have another set of short term, very short term, memories for images, and there’s even less there. Three or four.

Jim Rutt: It turns out those two constraints have, I would argue, a giant impact on our cognitive ability, particularly around the ability to process language, and even to read. We can just barely read. We’re constantly shuffling things in and out of our short term memory, and when you actually test people on their reading comprehension, it’s pretty bad. When one thinks about what the ability to read would be, even for just our plain human languages, for an entity that had a working memory size of 100, that’s just staggering. Keep in mind the seven is an average. The village idiot has five. Einstein has nine, plus or minus. Right? Approximately. So, 100 is just qualitatively beyond anything we’ve ever been able to see before. And the ability to see linkages between words, linkages between ideas, I think would be huge.

Jim Rutt: Another of our cognitive limitations is memory fidelity, right? Our memories are famously bad, and it’s getting scary how strong the evidence is on how bad eyewitness testimony is in court cases. Memories get co-mingled. They erode with age, et cetera.

Jim Rutt: Another: when we’re doing higher level thinking … call it Kahneman’s system two type thinking … we’re processing it through our conscious framework. We’re almost perfectly single threaded. We can pay attention to exactly one thing at a time. It’s not at all clear to me that you’d build that into a software AI.

Jim Rutt: Another’s our vision. We know our vision is very synthesized. What we’re really seeing is a very small spot that jumps around very rapidly, and our brain somehow, by means we don’t yet understand, synthesizes it into something that seems like the full frame, but it’s missing things all the time.

Jim Rutt: Those are just a few. So, my argument would be that if software-approach AIs get there before the ems, even from a very early stage, they’ll be better, and they’ll take off faster.

Jim Rutt: Secondly, for the question of how to get there, it strikes me that software approaches or hardware approaches fit in better with our financial and business ecosystems. I would assert … love to hear your thoughts on this one … that ems are mostly all or nothing. Yes, we might be able to hack out the perceptual system, but we’re already doing pretty well with deep learning on perception. To get a full [bore 00:11:22] em, we have to do it for a full bore human and not lose much.

Jim Rutt: All or nothings are famously hard to finance. In my hat as an early stage investor, I often say, “I don’t generally bet on jumping up a cliff”, while software approaches are much more potentially incremental. For instance, if we were to crack one of the biggest problems I see that’s within sight, which is language comprehension … just language comprehension itself, with a bigger working memory size, would be unbelievably valuable in all kinds of ways, from understanding research reports and looking for integrations, perhaps most valuably, to writing novels, writing screenplays, writing textbooks, et cetera. So, there’d be lots of money and lots of organizations focused on these early wins on this more ramped approach, rather than the all or nothing approach.

Jim Rutt: The last one … and I looked very carefully, actually, when I was flipping through the book, on this last one … is that humans only communicate with our pretty crappy languages. Ems look like they would also mostly communicate with human languages. You had about two paragraphs on ems possibly being able to exchange higher level data structures, but that really wasn’t filled out at all. On the other hand, some of the AI projects, including one I’ve been associated with, on and off, called OpenCog, from the very beginning expect there to be very large and complex knowledge structures that could be shared between AIs, so that we could have much higher bandwidth between our AGI elements, using highly connected graphs of probabilistic logic expressions, which are a tiny amount of data and hence have a high information density. And they’re explicitly understandable.
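
As a toy illustration only, and not OpenCog’s actual representation (its real AtomSpace and probabilistic logic are far richer; every name below is invented for the sketch), the kind of shareable graph of probabilistic logic expressions Jim describes might look like:

```python
# Toy sketch of a shareable knowledge structure: nodes and typed links,
# each carrying a probability-like truth value. Illustrative only --
# not OpenCog's actual AtomSpace format.

from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    kind: str              # e.g. "Concept" or "Inheritance"
    name: str
    strength: float        # probability-like truth value in [0, 1]
    confidence: float      # how much evidence backs the strength

cat = Atom("Concept", "cat", 1.0, 0.9)
animal = Atom("Concept", "animal", 1.0, 0.9)
# A link is itself an atom relating two others, so the whole structure
# is a compact graph that two systems could exchange directly.
cat_is_animal = Atom("Inheritance", "cat->animal", 0.98, 0.8)

graph = {cat, animal, cat_is_animal}
print(len(graph), "atoms in the shared structure")
```

The point of the sketch is that such a structure is small, explicit, and machine-readable, so two AGI elements could exchange it at far higher bandwidth than natural language allows.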

Jim Rutt: So, anyway, those are my set of arguments.

Robin Hanson: I think you’ve given us a lot to talk about here. I’m not sure we’re going to get to anything else, but it’ll be fun if we even just go through these.

Jim Rutt: Absolutely.

Robin Hanson: All right. There’s a number of separate questions here. One issue is how far can other things get before ems show up? And the separate issue is, when both of them are around, which wins the competition, or which wins where? Those are somewhat separate issues, although they’re related.

Robin Hanson: With respect to the first question, a key issue is just how hard is AGI, and how hard are emulations? If you think, as many people do, that there’s some essential five-line theory, and once you find that theory, then you’ve got it, and you can do basically everything easily, then you’re just wondering when we’re going to find that theory. But if you think of AGI as just a name for a huge pile of tools and a huge pile of capabilities, and the only way to achieve it is to slowly add to the pile, then the question is, well, how big is the pile, and how fast are we going?

Robin Hanson: I tend to be skeptical about the one big revolution that’s going to change everything. We really haven’t seen that in the last 70 years, say, of computer progress. Computer progress has been lumpy sometimes, but the degree of lumpiness in computer progress is comparable to other fields, and, in fact, we see a relatively universal pattern in the lumpiness of research in terms of citations, actually. So, I would say the straightforward prediction for the future is a continued rate of progress at the rate we’ve seen over the last 70 years. At that rate, it should take a really long time. So, 50 years would be pretty ambitious.

Jim Rutt: I think you said 400 was kind of the center of your guess. Do you still feel comfortable that-

Robin Hanson: I would say one statistic I use is that I’d ask people, “In the last 20 years, how far have we come in your subfield of AI, as a percentage of how far we need to go to get to human level abilities?” The median response I got was about five to ten percent, with no noticeable acceleration. So, if you project that forward, you’ve got two to four centuries.
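
The “two to four centuries” figure follows from a straight linear extrapolation of that survey answer; an illustrative sketch:

```python
# Linear extrapolation behind the "two to four centuries" figure:
# if 20 years of work covered fraction f of the distance to human-level
# ability, the total time at that pace is 20 / f years.

def total_years(years_elapsed: float, fraction_done: float) -> float:
    return years_elapsed / fraction_done

low = total_years(20, 0.10)   # 10% done in 20 years -> 200 years total
high = total_years(20, 0.05)  # 5% done in 20 years  -> 400 years total

print(f"{low:.0f} to {high:.0f} years at the observed pace")
```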

Jim Rutt: Got you. Okay.

Robin Hanson: In principle, if ems took longer than that, then that’s how far out you’d get them, but still, the question is … I agree that ems are all or nothing, and therefore less likely to produce a burst of investment to try to produce them. Of course, AGI really isn’t going to produce a burst of investment to produce AGI either. It’s mostly going to be investment along the path of particular products and services that will be useful well before AGI. Similarly, for ems, most of the work is going to be on the technology that will eventually lead to ems, but not because it will lead to ems. It’ll be for other purposes. Although, it’s worth noting that many people, especially in science fiction, focus on the benefit of ems as immortality. The benefit of ems from that point of view, as a product, is that you could become an emulation, and then you’d be immortal.

Jim Rutt: Yeah, I think that’s actually an interesting argument, and one I probably haven’t considered enough. We know humans are vain, and we also know how much of human history has been driven by, in my view, false promises of immortality in the afterlife. So, it is possible. I will acknowledge that that’s actually a good response with respect to the investment ecosystem: that is, if they get close enough, then there might be a spur of investment around the immortality marketplace.

Robin Hanson: Right. So, at this high level, my two kinds of arguments would be to say, look, AGI’s just really hard, so it’s going to take a long time. There isn’t going to be this moment where we suddenly figure it out, and then it’s all done. It’s just a long slog.

Robin Hanson: Ems are also, in some sense, a long slog, but, in some sense, ems are just scaling up things we kind of know how to do. It’s less fundamental new understanding you need for ems than just increasing scale. You have to have bigger scale scans. You have to have bigger, cheaper computers, and you have to have more brain cell models.

Jim Rutt: Actually, one item came up when I was pondering this … and I think you addressed it in passing, I don’t know in how much detail, but a little bit … which is one of the questions about how doable ems are: at what level do the details matter? Right?

Robin Hanson: Yep.

Jim Rutt: If the details matter at the level of neuron types, plus synapses, plus synapse sizes, plus the connectome, then it’s at one level of difficulty. If it also includes details of the epigenetics of each neuron, if it also includes state information within the synapses, et cetera, it might be a very much bigger and more difficult problem.

Robin Hanson: We believe that it’s not the worst possible case, because the brain is a signal processing system. In general, signal processing system design produces designs where there are some degrees of freedom that represent and move the signals. Then, there are other degrees of freedom that don’t, and you usually try to go out of your way to have some degree of separation between those degrees of freedom. That is, like the wires in an electronic device: the wire itself conducts the message, and the insulator keeps that somewhat independent of other degrees of freedom. That’s, in general, how you do a signal processing system. The brain is such a system, so the brain must, at some level, have been designed to have some key degrees of freedom that represent the signals, and other degrees of freedom, i.e. all the other infrastructure of the cells, that are somewhat separated from that. Which makes you think that you have to find the degrees of freedom that matter, which isn’t going to be everything, but it could be a lot.

Jim Rutt: Yeah, it could be, though Ma Nature, unfortunately, is not an engineer. She’s an evolution machine, and, as we know, does all kinds of crazy, weird things that are highly dependent upon each other between levels. So, I wouldn’t expect us to find a clean degree of separation there.

Robin Hanson: If you look at the organization of the body, the organs themselves also follow this principle: there are different subsystems inside an organism, and those different subsystems have different purposes. In order to best design each subsystem, usually, you try to make the subsystems be somewhat independent of other subsystems, and they roughly are.

Jim Rutt: Yeah, indeed, roughly, but if you change small things in the metabolic network, it all fails, right? So, there’s a lot more cross-linkage than a decent engineer would allow, but I take your point. There’s certainly some.

Jim Rutt: How about the third point that suppose the two arrive at similar times? Let’s talk about the competition a little bit.

Robin Hanson: Right. Imagine that ems come when AI hasn’t taken over everything, [inaudible 00:19:22] better at everything. Then, the key question is, now they can compete on equal terms, in terms of hardware, and investment, and capitalist attention, and it’s more a question of which kind of architecture is better suited to which kinds of problems. This comes to your architectural criticisms. You could say, “Well, look, our architectures aren’t perfect. They have flaws”, but then you can start to improve them some. Obviously, humans have improved their working memory, say, by having external memory: records, videos, audio recordings, things like that, and of course, that’s also a more accurate memory. So, the real question is, among these different architectural systems and approaches, which have the advantages of flexibility, or evolvability, or changeability? And what other advantages do they have, so that some win in some places, and some in others?

Robin Hanson: As long as ems win in some substantial fraction of job tasks … they don’t have to win everywhere … then something like the Age of Em happens. That is, in the jobs where the ems are best, the Age of Em scenario happens in those job tasks. Then, there’s a whole other world of tasks that are done by other kinds of automation. The Age of Em isn’t talking so much about that, but you should imagine that this implicit, huge background of hardware and software doing other tasks would exist in that world, and wouldn’t change the em world enormously. The description would be roughly the same.

Robin Hanson: The alternative is, as many people think, that em brain-organization software just loses at everything. That’s the argument many people have made, that it’s just hopeless. That’s an argument that combines two claims. One is that brain organization, as an approach to software, is just hopeless, and there’s no good way to improve it or modify it to be competitive. The other is that it’s very hard to modify, that it’s stuck, whereas other kinds of software are more robust to modification, easier to improve, easier to generalize.

Jim Rutt: Yeah. I, in general, would support that argument, though I just thought of something that actually may support your side a little bit, which is another area there is increasing investment in, is in neural man machine interfaces.

Robin Hanson: Yeah, I’m pretty skeptical about that mattering much here.

Jim Rutt: Okay.

Robin Hanson: Each task will be done inside some kind of hardware, either brain hardware or artificial hardware. The neural interface basically increases the bandwidth between tasks being done in different places. So, it allows more fine grain interleaving, but it doesn’t really save the ems, unless there’s a task that the ems are best at. There has to be something they’re best at, and then, the higher the bandwidth you have, the more you can integrate that task, at a fine grain time and space scale, with tasks done on artificial hardware.

Jim Rutt: Well, let me throw out a real world example where a loosely coupled man machine interface beats the machine alone, and that’s chess. Right? We all hear about how good the chess programs are these days. They are amazing, but I have also read that there are tournaments now where you take a very good grandmaster, doesn’t even have to be the best, but a very good one, with a state-of-the-art chess playing program designed to be guided by a human, and the combination of the two is considerably stronger than the best pure computer chess programs.

Robin Hanson: That’s what I heard ten years ago. I understand today that’s no longer true.

Jim Rutt: Ah, too bad.

Robin Hanson: Today, the machines are just beating the human machine teams.

Jim Rutt: Yeah, truthfully, I had not heard whether, for instance, the man machine teams could beat AlphaZero or something. Interestingly, that actually argues towards how rapidly regular software evolves, versus anything else.

Jim Rutt: Anyway, this has been a good conversation.

Robin Hanson: I have another point I want to make on this.

Jim Rutt: Okay, go ahead.

Robin Hanson: I have an account I’ve tried to come up with of what the essential difference would be between the software we usually write, including a lot of machine learning software, and the software in our brains. I think you need an account of the fundamental source of that difference, in order to make an intelligent estimate about which wins where, and in a longer competition.

Robin Hanson: I propose that the key difference is that in the eons that evolution was evolving and improving the brain, it never found some key abstractions that we have in software, that we build our computers around. In particular, it mixes up hardware and software. It mixes up processing and memory. It mixes up addresses and content. It mixes learning and doing. All these things that we usually, in software, try to go out of our way to separate, distinguish, the brain doesn’t. It seems to basically mix them all up. In its search for improvements, it was searching for variations that were constrained to mix these things up.

Robin Hanson: That first distinction, between hardware and software, is perhaps the most important. Today, when you sit down in front of a computer to write software, you have limited processing, but you have unlimited memory, basically. That means that you can start with a blank screen, and start to write a new piece of code, and then only connect this new piece of code to the old code you have when you see an advantage from that. You’re usually slow to do that and cautious about that, because you know the more you connect to old stuff, the harder it will be to change and update. That’s the standard issue of modularity. We use modularity as an enormous power to write software, and we also do this in machine learning, because we can, because we have so much extra memory that that’s cheap and easy, but the brain couldn’t do that. That’s the key point.

Robin Hanson: Over the eons, when the brain had an existing brain design, and then it was evolving and trying to add a new functionality, or feature, or capability, it couldn’t start with a blank screen, because hardware and software are mixed up. It would have to add a new section to the brain to do that, and the brain was pretty volume constrained. It would either have to delete some old section in order to add a new section, or it would find a way to reorganize old things to allow the new thing to fit in more easily. It would focus a lot on reorganization, and especially abstraction, which is the key way that you can take multiple things and make a smaller, shorter version of them that leaves more room for doing other things.

Robin Hanson: The key idea here is that, relative to the software we’re familiar with, the brain is this marvel of integration and abstraction, but it isn’t very modular. That’s obvious when you look at the wiring diagram. The software we write is much more modular, therefore we don’t pay as much attention there to organization, reorganization, and abstraction, because it’s not worth the bother when we can so easily start with a new blank screen. But, because we have this standard habit in software, our software rots faster than brain software. This is the overwhelming observation of software in the real world: even though we have decades of people writing software to do so many things, typically a software system rots. With time, it becomes harder to modify, harder to usefully change, harder to understand, and eventually, people just toss it, and they start with a new blank screen, and make whole new systems at enormous expense. We apparently haven’t found ways to inherit most old code that does similar things. Instead, we mostly throw it away and start with something new, because our systems rot.

Robin Hanson: That gives you a perspective on the advantages and disadvantages of ems relative to other software, suggesting that on small … One big disadvantage of the brain is, in order to have the brain do anything, you pretty much have to devote a whole brain to the task. Your brain is a big thing. It’s expensive. So, most of the advantages we’ve gotten from software over the last 70 years are mainly to do small things, with much less software and hardware than the whole brain, which is an enormous savings. We should expect that to continue. So, for tasks that can be done by a small piece of hardware and a small piece of software, that will just not be done by an em or a human-like mind, but as you move up the hierarchy of tasks to more general tasks that require a wider scope and more things to be integrated, in our experience, we actually find it hard. That’s the problem of AGI generality. We find it quite hard to make systems that are general, and broad, and that can integrate a wide range of things well, and still be modifiable and easily updated. That’s usually the limitation.

Robin Hanson: That’s where the human mind seems to excel. Human minds seem to be this marvel of integration, where they can pay attention to a wide range of things and integrate them well, and not rot. So, that suggests this future where there’s, like in a company, an abstraction hierarchy of control, and there are people at the top who have very wide scope and attention, who look at long term things, and people at the bottom who look at very small scope, and very short term local considerations. Those local considerations will obviously be done by automation, and that will be the first thing to automate, but near the top of the control hierarchy, those things seem much more naturally to be done by humans, or human-like minds, and I would predict that ems have a long future there.

Jim Rutt: So, even against software AGIs, which may have a much higher high end than ems, you think there will still be a place for ems in their unique ability to do very broad integration and abstraction?

Robin Hanson: Yeah. For an economist, the question is not, “How smart is a thing?” It’s, “How cost effective is a thing?”

Jim Rutt: Right.

Robin Hanson: In some sense, if you put 100 people together, they’re smarter than one person, and they’re more capable, but they cost 100 times as much. So, when you say, “AGI could be smarter”, well I say, “Smarter than how many humans?”

Jim Rutt: And at what cost, right?

Robin Hanson: Right, et cetera. When we’re thinking about this cost trade-off, then I say, near the high end of the generality-and-flexibility spectrum of tasks, it looks to me like the ems can win there for a long time, especially considering that once ems are possible, we can modify that software. It doesn’t have to stay the way it initially was. Then, the question becomes how fast can that improve, relative to how fast can other things improve? You’d say, “Well, look. We’ve been able to improve other things much faster.” But I’d say, “Yes, but that’s on very narrow tasks, very narrow scopes. You haven’t really pressed into the hard problem of how to be general and broad.”

Jim Rutt: Yeah. That’s an interesting point. How rigid will they have to be, or how opaque is the other question. It is true that the deep learning neural models are almost as opaque as the human brain. Right?

Robin Hanson: Indeed. And they are much more modular than people give them credit for, in the sense that any one neural net that does chess is this big opaque mess that does everything in chess at once, but it only does chess. In the deep learning world, people solve new problems with a blank slate. Basically, they start with a blank net, and they learn a particular area, with a particular dataset. They are not making systems that do everything all at once.

Jim Rutt: Yeah. It’s actually, by the way, why I do not believe that the current deep learning neural net architecture is likely, by itself, to get us to AGI. I know there are a lot of people out there these days who do, but, frankly, I think it’s like going into an orchard where everybody is picking fruit from the ground with a six foot ladder. Now, you can pick a lot of fruit, but eventually you’ve picked it all, all you can get with a six foot ladder. We’re going to need other approaches that integrate probably multiple neural nets and have the ability to generate new neural nets on the fly, under the control of a higher level abstraction, perhaps a more symbolic, more probabilistic system. Then we’ll have a highly complicated system that, by the time it has the capability of a human brain, will probably be almost as complicated.

Jim Rutt: The question will be, to what degree is it modular, and can it out-evolve? Because, in the business world where I come from, the rate of evolution is pretty much everything over the long haul. If you can make your product better, and your doubling time in value is three years while the other guy’s is five years, before long, you’ve eaten his lunch. The same would be true in a race between software-driven AGI versus ems. If we could improve our software-driven AGI at human level, say, equal to ems, faster, then the dominance of ems could be pretty short.

Robin Hanson: The key point to notice is that until we have ems, the human stuff isn’t improving very fast at all. It’s definitely been the observation over the last decades that human workers are not improving as fast as artificial software. We almost never see something that was being done by software being switched to being done by humans, whereas we see a lot of cases where something that was being done by humans is switched to being done by artificial hardware and software. If that trend were to continue, then it’s just obvious that the humans lose.

Robin Hanson: When ems are possible, that game changes, because now the human minds who are now ems can take advantage of the improvements in hardware. They can take advantage of scale economies in factories, et cetera, to make many of them, and make them more cost-effective. They can even take advantage of their brains being modified by efforts to reorganize them and redesign them, and to improve them.

Jim Rutt: That is, of course, interesting that there’s no reason you couldn’t spin up ten million ems to work on the problem of improving ems. So, it’s, again, the singularity foom-type argument, though probably slower.

Robin Hanson: The key robust prediction is the growth rate goes up. Regardless of whether it’s AGI or ems, the rate of change in the economy becomes much faster. If it wasn’t for ems, then we would much more despair of humans being able to keep track and keep control over all that change. Whereas, when ems are possible, then they can be very fast, and they can keep up with change.

Jim Rutt: I think that’s a very good optimistic point about the ems world, which is that it’s probably less scary than a pure software AGI world. We’ll be able to reason about it by analogy to ourselves, at least in the initial phases, and probably it wouldn’t accelerate as fast.

Robin Hanson: Subjectively, in the sense that from the point of view of the fastest ems, the world wouldn’t be changing as fast, because they could speed up with it.

Jim Rutt: Yes. Also, from the human perspective, not as fast as if in the fast take-off, AGI is real, which again, another long point to argue, and I’m unclear on my view on that even, but there are certainly, as you know, scenarios where software AGIs reflect upon themselves and the 1.1 human AGI becomes 1.4 human, becomes two human, becomes nine human, becomes a million human in six hours.

Robin Hanson: I actually spent several years arguing that point with my ex co-blogger, Eliezer Yudkowsky. So, I feel somewhat confident in saying the key assumption behind that is some level of simplicity in the essential innovations that would be possible. That is, at some level, there are some simple ideas that, when you find them, give you these big improvements, and maybe let you find other simple ideas, that basically there are these big lumps of innovation to be found.

Robin Hanson: To the extent that innovation typically isn’t that lumpy, then I’m much more skeptical about that, because in the past we’ve seen a distribution of lumpiness, but most innovation has been relatively small lumps. A few big ones. Because of that, we’ve seen relatively steady progress, and I think that allows us to predict those kinds of rates of progress forward.

Jim Rutt: I love the quality of your analysis. It’s just so deep. Again, I recommend the Age of Em to anybody who wants to see a beautifully constructed scenario about something quite far into the future, that if you accept the one premise, makes a lot of sense.

Robin Hanson: Now, many people are scared by it, and in some sense, rightly so. That is, if you thought of the history of, say, from animals to foraging humans, to farming humans, to industrial humans, each of those transitions was enormous. Each of those transitions, to somebody before the transition, should have been scary, if they had understood the basics of it, because a lot would change in their world and some things would be lost. Some things they value were lost in each of those transitions. So, you should expect the same thing of the next transition, enormous change in most aspects of society. Therefore, some key things you treasure will be lost. That’s not crazy. That’s completely reasonable. But then, that’s pretty much what you should expect in the long run future. If you say, “I like the world I’m in, and I don’t want any more big changes”, you’ve got a really big problem in front of you, because, so far, we’ve had big changes, and how are you going to prevent them?

Jim Rutt: And at an accelerating rate. That’s been the trend since the modern age started in 1625, on February 16th, at 11:00 PM. Right? It’s been accelerating ever since, and at a pretty high rate.

Robin Hanson: Actually, I would say we have seen acceleration over time, but in lumps. So, within each of these eras, say the animal era, the foraging era, the farming era, the industrial era, change has been relatively steady. It has not been accelerating overall. Say, over the last century, I would say change has not been accelerating, if you look at overall rates of change. But, of course, change today versus 1000 years ago, it’s much higher, because of the Industrial Revolution. Then, during the farming era, from, say 5,000 B.C. until a few hundred years ago, change was relatively steady at a much slower rate, but it was faster than 100,000 years ago, because the foraging era was even slower. So, I’d say, roughly, we see steady rates of change until we see a small number of these very big increases in the rates of change, and AI or ems could very well be the sort of thing that triggers a big burst in the rate of change.

Jim Rutt: Oh, that’s interesting. Let’s divert here a little bit and talk about the idea that rates of change are not accelerating, because I would say the common pop view is that they are. Not to say that that is correct. I suppose I’d also want to know what’s your metric, and are we talking first derivatives, or second derivatives, or velocities that are stationary?

Robin Hanson: The key point of that, obviously, is that our society’s enormous. There’s lots of different parts. At any one time, some parts are changing faster than others. So, it’s always possible to look and find parts that are changing very rapidly, but are they representative?

Robin Hanson: Our best way to look at overall society, averaging over all the different kinds of change, is economic measures, because they basically average out how much more can you get of what you want. Economic growth basically is the ability of us all to get more of what we want. When growth happens, it’s because we can do more, and that’s averaged over all the different things we want. Some things we want improve much faster, and other things slower. The way we average those is how important is each one, from the point of view of getting what we want?

Robin Hanson: Economic growth has been relatively steady. In fact, economic growth has, arguably, declined a bit in the advanced countries over the last 40 years. That’s called stagnation or decline, and people have talked about that, although worldwide, because the poorer nations are growing faster, growth hasn’t slowed down. But it’s not sped up either.

Jim Rutt: And that’s in terms of, essentially, productivity, I presume.

Robin Hanson: Productivity is producing the things we want, so it’s basically the ability to get more of what we want has gone up.

Jim Rutt: Yeah, and per capita, like I said. Just have to adjust by population size, to get a real sense of advancement.

Robin Hanson: In some sense, more population is one of the things we want. So, you shouldn’t neglect that either. If we choose to have more population, each getting less, that’s still getting more of what we want, if what we wanted was to have more population, each getting less.

Robin Hanson: The long-term trend, I would say, is largely in terms of the total product the economy had, not per capita. So, through most of the foraging world, through most of the farming world, per capita wealth did not change much. Most of the change was in the total number of people, which is what standard theory predicts. That is, if we can grow people faster than the economy grows, we will, so then most growth will be eaten up by population change, as opposed to per capita income change. It’s only in the last few hundred years that we’ve been able to grow the economy faster than the population, such that now the per capita rate has gone up.

Jim Rutt: And gone up a lot in the last 400 years, something like that. Okay. Well, this is great. I really think this is a very, very interesting conversation. Let’s move on to our next topic, which seems to be becoming a semi-feature here on the Jim Rutt Show, which is the Fermi Paradox.

Jim Rutt: For those listeners who aren’t familiar with this, this goes back to Los Alamos, where they made the atomic bomb during World War II, where the story is told that a group of very smart folks were sitting around a lunch table, talking about how many intelligent races there were in the universe. It must be a bunch, because we have these many stars, these many galaxies, and this makes some assumptions about planets, et cetera. And Enrico Fermi walked over and said, “Okay, guys. That all sounds good, but where are they?” Right?

Jim Rutt: So, where are these other intelligences in space? So far, we’ve been looking for 60 years or so, and have seen no confirmed signals for anything that we would say is the sign of either an existing or previous intelligent race in the universe, other than ourselves.

Jim Rutt: In general, the argument falls into two categories. One, that there aren’t any, that for whatever reason, we’re the only, the first, race, at least, in the universe to have reached the level of general intelligence. Then the other fork is they’re out there, but they either are intentionally not showing themselves, or we don’t have the tools to detect them.

Jim Rutt: With that, I’m going to throw it back to you, Robin. Tell me what you think about the Fermi Paradox.

Robin Hanson: This is a really big question, and it’s an important question, and, when I first started writing about it 20 years ago, I thought neglected. In many areas, what I’d often do is take the data we have and try to be a little more careful theoretically to ask, “What does this data predict? What are the implications of this data? What can we infer from this data?”

Robin Hanson: On this cosmic scale, the key data point is we don’t see anything out there that isn’t dead. Everything we’ve ever seen anywhere, except on Earth, is dead. The question is how to explain that. Now, we have to confront that with some sort of theories. One of the theories we have to compare that to is our theory of our own origin. We’re not dead. Where did we come from? So, we have a set of stories that try to explain where we came from, and they have to do with a path that we came along, that our ancestors came along. The very first kinds of life appeared, then new kinds of life appeared, more advanced kinds of life. Eventually, humans. Eventually, humans grew to this level. That theory of our past is a set of theories that mostly explains improvement through some sort of evolutionary process or competition, which is the sort of theory that predicts increasing capabilities, and increasing range and scope of environments.

Robin Hanson: When you apply those sorts of theories to creatures like us at this stage, we tend to predict more of the same. That is, the most straightforward prediction of our future is that we will continue to become more technically capable, that our economy will continue to grow, and that we will, therefore, be able to survive in a wider range of environments. We will take a wider range of energy sources. We will use a larger set of materials. Therefore, we will continue to grow. The straightforward projection many people see is that that means we take over a larger section of the solar system, and eventually take over larger sections of the galaxy, et cetera.

Robin Hanson: Those sorts of straightforward predictions of our future run into the conflict that if other places out there would follow that same sort of trajectory, they would eventually be visible, and we would see them, but we don’t.

Jim Rutt: Yep, that’s the essence of the paradox. Right?

Robin Hanson: Then the key question is, “Well, how do we interpret all this?” So, the way I’ve tried to interpret it, under the phrase the great filter, is to say we can see this theory of a trajectory of going through these various steps, as a theory of a path you go along, but these paths can have filters along the way. That is, we can imagine that, at some point, a process that goes along this path doesn’t continue. It stops.

Robin Hanson: For example, we can imagine, say, humans blowing themselves up, and doing it so thoroughly that we prevent the rise of something like us a few thousands of years later. It’s actually pretty hard to do that. An ordinary nuclear war doesn’t quite do it, but the point is you can imagine something that’s thorough enough to keep it from going on. Or you can imagine, say, that evolution would have searched for billions of years in the space of possible species, but never come up with a species quite like humans, with our combination of intelligence and tool abilities, and things like that.

Robin Hanson: If we think of this path, but it’s not guaranteed that the system goes along to the next step in the path, then we can think of this path as a process with filters. Then, each step has a filter, which is, what’s the chance that it goes on to the next step? Then, we can think of the great filter as the sum of all those filters, up to the point where something would be visible on the cosmic scale.

Robin Hanson: Then, the claim is, “Well, there is a great filter.” We look out in the universe, and clearly nothing has become really big and visible. That means, starting from any random spot, it’s really hard to go along that path to get to that endpoint. It’s enormously hard. It’s astronomically hard. Literally. If we can see, say, ten to the 22 planets, roughly, out there, and none of them produced a visible civilization, then the total filter is more than a factor of ten to the 22. That’s a puzzle.

Jim Rutt: I like the filter metaphor. You can take the hundreds of arguments about the paradox, and put them, probably, in terms of early filter versus late filter, also.

Robin Hanson: Absolutely. This is the key point, from our point of view, is we realize we’re only partly along this filter. We’re not actually, at the moment, capable of being visible on a cosmic scale, although we envision that in the future we would become that. So, there’s a filter ahead of us, and a filter behind us. The biggest question for us is how far along the filter have we come?

Robin Hanson: The key point is if it’s ten to the 22, and say we’re ten to the 20 along the way, but we still have ten to the two to go, that means we only have a one percent chance of reaching that final point, which is bad news. It means, most likely, we fail, and we fail pretty thoroughly, in the sense that you can imagine various disasters, but if they only temporarily knock some parts of us down, but not all of us thoroughly down, then it wouldn’t take that long for those parts to regrow, and regroup, and continue on.

Robin Hanson: That means, if there’s only a one percent chance of getting there, there’s 99% chance of something in our future so thoroughly knocking us down that we can’t get back up again.
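The arithmetic behind this can be sketched in a few lines. The split between past and future filter below is purely illustrative, using the numbers from the conversation rather than anything measured:

```python
# Back-of-envelope for the "great filter" odds discussed above.
# Both filter sizes are assumptions for illustration.
total_filter = 1e22    # roughly one visible civilization per 10^22 planets
past_filter = 1e20     # assumed portion of the filter already behind us

future_filter = total_filter / past_filter   # what still lies ahead of us
p_visible = 1 / future_filter                # chance we pass the rest

print(future_filter)  # 100.0 -> a factor-of-100 filter still to come
print(p_visible)      # 0.01  -> ~1% chance of becoming cosmically visible
```

Changing the assumed past portion shifts the conclusion directly: the more of the filter we place behind us, the better our odds ahead.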

Jim Rutt: Yep. That’s a very interesting way to look at it. Of course, it does ignore one of the branches of the arguments on the late filters, which is that there are societies out there, but they either live in a way that is not detectable to us, for instance they don’t use broadcast forms of electromagnetism. They may use lasers or something for long-range communication. Or, some of the other later filter arguments are that they turned inward and chose, for whatever reason, to not go out into the world, and maybe have built very powerful social inhibitions against it, or the singularity always happens and we’re always eaten by our AIs, and-

Robin Hanson: Eaten by AI doesn’t work, in the sense that you’d have to explain why the AI doesn’t grow and become visible. If we saw something visible in the universe, it could well be AIs that had eaten their original species, but they would still be visible. It would still be a puzzle if they didn’t become visible. So, you ask yourself, “Okay, given any one civilization like ours, what are the various chances it goes down these different paths of, say, choosing for some reason not to grow, or creating a totalitarian limit that prevents anybody from growing, or not having any interest in growing?” The fraction of the filter you can explain with that depends on how reliable this process is.

Robin Hanson: If 99% of the time, a civilization like our went to this point where it locked itself down and didn’t do anything, then that could explain up to a factor of ten to the two of the filter, not the whole ten to the 22. A only mildly reliable tendency of this sort can only explain a modest fraction of the filter.

Jim Rutt: Though, for the ten to the 20, that one explanation could explain the rest. Though, I suppose, if we were to look at it probabilistically, for such a lockdown, you’d also have to look at what its longevity is. Right? If we look at social evolution, we’ve never seen any lockdown that lasted longer than … what was the Middle Ages, Dark Ages? 1,500 years or something, right? We could say someone locks it down. What are the chances of it lasting for cosmic time spans, which might be pretty damn low at almost any mutation rate?

Robin Hanson: Life on earth has been locked on earth for the whole time, but that’s not because any one part of it set up a law that said, “No, the rest of you can’t do that.” It’s just that no part of the life on earth could really do it.

Robin Hanson: Now, we’re imagining a future where we have the technical capability to expand into the rest of the galaxy, and then some part of the society makes a rule and a policy that prevents other parts from doing that. That’s the kind of story you have to be thinking of. Even the vast majority of the civilization not being interested in going out doesn’t do it. As long as there’s any small part that does want to and is able to, then the puzzle is, why hasn’t that happened?

Jim Rutt: If you were to place your bet on the cosmic opinion marketplace, on Fermi Paradox, where do you come down?

Robin Hanson: Well, if you look at the entire filter, clearly the parts that we understand the least are the earliest. They’re long ago, or their data is very weak. So, the easiest place to put big filter factors that don’t contradict our evidence is way back at the very beginning. So, that’s a convenient place to put it, because then that’s in the past, and that makes our future less problematic.

Robin Hanson: It’s not a terrible scenario there. It’s not terribly depressing to say, “Well, it looks like the most likely place to filter is in the distant past, and therefore that’s probably where it is, and therefore our future’s okay.” The main problem is you just shouldn’t be too confident of that. That’s just a little too fast and easy.

Robin Hanson: You should worry that, yeah, but part of the filter might be in the future, and any small fraction of the filter in the future is a big problem for your descendants and our civilization. You should still worry about it and try to do what you can to prevent it, even if perhaps the most likely scenario is that it’s all in the past, in the very distant past.

Jim Rutt: And, of course, if it turns out it’s not in the past, then it means it’s in the future, right? And we will get some data on how much is in the past relatively soon, as we start to be able to scan distant planets for their atmospheric constituents and things of that ilk.

Robin Hanson: If we see any kind of life anywhere in the universe that’s not at the zero, dead stage, that gives us data about those early steps of the filter. That limits how big those early steps can be. So, the more advanced life we find, and the closer to us we find it, the lower that initial filter can be, and therefore the higher the later filter has to be, which is bad news. It will be bad news to see advanced life away from earth, especially the more plausible it is that it had an independent origin.

Robin Hanson: One plausible story, though, is that life was very unlikely to evolve, but it evolved in the initial cluster of stars that was the nursery for our star. Our star didn’t arise by itself. It arose in a nursery with several hundred stars all together in the same [inaudible 00:50:16] cloud, because that’s typically how these things happen. So, if that [inaudible 00:50:20] cloud was a dense cloud where stuff traveled between different parts of the cloud, then it’s plausible that even after our sun formed and life formed soon after, other parts of that cloud got the life, too. So, the best place to look in our galaxy for other life is the other siblings of our star. They should be actually relatively easy to identify, because they would have the same mix of elements as our star, in terms of the various kinds of elements that would have been in the cloud.

Robin Hanson: We can find our few hundred siblings of our star out there, and if they have life around them, that speaks to the rest of the filter after that point, but not so much to the very first step.

Jim Rutt: Yeah, it doesn’t quite answer the cosmological question, right? Maybe it got lucky within the cluster, on the early filter. Well, I suppose if they independently all developed their own life … The question was is it independent, or is it through some propagation of some early substrate?

Robin Hanson: Right, but if it were from our siblings, it’s almost certain that there was propagation between us. But for other stars out there that don’t have a common origin, then it’s much more plausible that it was farther back in the chain.

Jim Rutt: Of course, that depends where the filter occurs, right? And when? If it turns out it was 4.5 billion years ago, then some form of contagion amongst them might make sense, but if it was 3.5 billion years ago, the distances were getting mighty big at that point, and it might be much less likely, though it could have been some precursors that were shared. There were enough of the advanced hydrocarbons left over from the previous set of supernovas, for instance.

Robin Hanson: The highest level thing to realize is just there are reasons to be worried. That is, you look in this dark night sky, and everything looks dead. That should worry you a bit. It should say something up there kills things, or prevents them ever from becoming alive in the first place. There’s something really fierce, and dark, and dangerous out there.

Jim Rutt: And, of course, one of those scenarios is called the Dark Forest scenario, where there is a regular eruption of life, but there’s some top predator that goes and eats them every time they show themselves.

Robin Hanson: That’s a great entertaining scenario, and it’s worth thinking through, but over the years, I just can’t make it work. I just can’t figure out how there’s a plausible scenario where there’s a dark power that’s hiding and kills off anything that shows up, without that dark power becoming visible itself.

Jim Rutt: It’s interesting. When I was a 12 year old, or 14 year old nerd, I, of course, ran the Drake Equation. Said, “Ah, there’s got to be hundreds of thousands of them”, but the more I have looked at it in the last ten years … I’ve really looked at the literature. I’ve become more and more … put a little bit more weight, quite a lot more weight, on early filters.
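The Drake Equation mentioned here is just a chain of multiplied factors. A toy version, with every input an illustrative assumption rather than a measured value, looks like this:

```python
# Toy Drake Equation: N = R* . fp . ne . fl . fi . fc . L
# Every parameter value below is an assumption chosen for illustration.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Optimistic, 1960s-style inputs (assumed):
N = drake(R_star=10,    # stars formed per year in the galaxy
          f_p=0.5,      # fraction of stars with planets
          n_e=2,        # habitable planets per such system
          f_l=1.0,      # fraction of those where life arises
          f_i=0.1,      # fraction developing intelligence
          f_c=0.1,      # fraction that communicate detectably
          L=10_000)     # years a civilization stays detectable
print(N)  # 1000.0
```

With inputs like these you get "hundreds of thousands" or at least thousands; the great-filter argument amounts to saying one or more of the middle factors must be astronomically smaller than these guesses.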

Jim Rutt: For instance, as you mentioned, our working assumption is that we have an evolutionary ratchet that improves over time, and given enough time, ought to produce pretty amazing results. However, the mathematics of evolution require an error rate below a certain threshold; exceed it, and evolution breaks down. It’s known as the error catastrophe.

Jim Rutt: You look at how information could be duplicated in an early warm pool soup before we have something like our DNA architecture with its pretty heavy error checking and error repair capability. You run the numbers, and you go, “Hmm, the evolutionary power of that pre-error corrected information is pretty damn low.” So, how did a low power ratchet without error correction was it able to generate our high fidelity information ecosystem around DNA and all of its duplication, error detection, and error correction components? That has become my favorite early filter.

Robin Hanson: No doubt there’s a lot earlier stages even before the sort of life we’re familiar with, just large pools of biochemicals selecting each other, and selecting out some forms, and emphasizing other forms. Even if that’s not the life, there could be a lot of processes there, by which there’s a slow, gradual accumulation of some kinds relative to others.

Jim Rutt: The point would be, though, if the error rate in the duplications of information between generations was above the error catastrophe level, the rate of evolution seems mighty slow, and we probably had not had enough time. The question is, how quickly and in what probability do we go from that low power evolutionary machine with non-error corrected data to the high power evolutionary machine with error corrected DNA data? Seems like a possible early filter.
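One standard way to make this quantitative is Eigen’s error-threshold bound, which caps the genome length a replicator can sustain at roughly ln(σ)/μ, for per-site copying error rate μ and selective advantage σ of the best sequence. A sketch, with assumed, illustrative numbers:

```python
import math

# Eigen's error-threshold bound: L_max ~ ln(sigma) / mu.
# mu = per-site copying error rate, sigma = selective advantage
# of the master sequence. All inputs below are assumptions.
def max_genome_length(mu, sigma):
    """Longest genome sustainable against the error catastrophe."""
    return math.log(sigma) / mu

# Non-enzymatic copying, ~1 error per 100 bases (assumed):
print(max_genome_length(mu=0.01, sigma=10))   # ~230 bases
# Replication with proofreading, ~1 error per 10^9 bases (assumed):
print(max_genome_length(mu=1e-9, sigma=10))   # ~2.3 billion bases
```

The gap between those two numbers is the point: without error correction, only tiny genomes can persist, so the jump to a high-fidelity system looks like a plausible early filter.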

Jim Rutt: I then go and say, if we do have a strong early filter, and that’s one to my mind that seems plausible, then that has some very interesting moral implications. This is something I would never have believed when I was a 14 year old nerdy kid, which is that we might actually be the first general intelligence in the universe, or at least in the galaxy. One could argue that has some immense moral implications.

Robin Hanson: The biggest implications come from being the only, as opposed to the first. If we’re just the first, but others are coming, well, if we kill ourselves off, the others will be there. If we’re the only, and likely to be the only, well, now a lot more rides on our shoulders.

Jim Rutt: And I would say that until we know … If we think we’re the only … 13.X billion years have gone by, and if we’re the only at that point, it’s not a bad place to start as a betting proposition that we’ll be the only, at least for some reasonably very long period of time. So, we have a huge moral obligation to preserve this amazing thing, life in the universe. And maybe we can state the mission of humanity is to bring the universe to life, at least until we have some evidence that there is other advanced life in the universe, or life in the universe.

Robin Hanson: Then we get the hard question of, “Okay, how exactly can we do that?” How exactly can we ensure this big growth? Some people’s first reaction is, “Well, we need to make new independent variations, like a colony on Mars or something.” Although it’s probably a lot easier to make independent colonies deep under the earth’s surface than putting them on Mars, so maybe we should be making a lot more independent colonies under the earth’s surface. But they aren’t that independent, in the sense that if there’s a war or something, we know where they are. Then, they’ll be recruited in the war. So, there’s a sense in which, as long as we’re part of this big integrated society, we are all one big variation that will have to continue, because we can all find all the parts of us.

Robin Hanson: Then the question is, “Well, if we’re all one thing, should we try to go faster or slower?” Some people say, if we go too fast, we’re going to run into these problems, and we’re better off to slow down, so we can research and understand these problems better. Another point of view is that there’s just problems that show up at somewhat independent rates, and the faster we can run past all these obstacles, the better off we’ll be. These are hard questions.

Jim Rutt: Yep. Of course, there’s another possibility for bringing life to the universe, which is not to spread intelligent life, but to essentially do a human-engineered panspermia. Create a massive payload of viruses, bacteria, precursor chemicals, et cetera, and dispatch them to stars that, based on analysis of the atmosphere of the planets, look like maybe reasonable precursors, and do that a billion times over the next million years. That might be what we come up to, as our moral answer to our duty, if it turns out we’re the only, and as far as we know, the only for all time.

Robin Hanson: My bet would be that that’s worth doing, but as you can see, there’s a lot of social inertia or feeling on the other side. That is, they feel like if there’s life out there and we send something out that kills it, then this is a murder, that we’re destroying the other life out there.

Robin Hanson: But under the scenario where there’s murder, it’s a pretty optimistic scenario, i.e. there’s a bunch of things out there to kill, i.e. there’s lots of life. The more pessimistic scenario is that there’s almost nothing there, so there’s very few things you could kill, which means then you’d be basically ensuring more variation, which would be good.

Robin Hanson: I guess it’s somewhat related to the METI dispute, where some people want to send signals to aliens, in the hope that those aliens could come help us and be nice. Other people go, “Wait a minute. What if they’re not nice?”

Jim Rutt: Exactly. Where do you think on that one? Where’s your position on METI?

Robin Hanson: Almost surely, we would be one of the youngest, weakest, most primitive civilizations around. If civilizations are communicating on a cosmic scale, they would almost all be much older than we are. So, it’s the question of, “Do you really want to stick your neck out, being the youngest, most naïve, weakest member of this community?”

Jim Rutt: We have a lot to gain, but we have a lot to lose, by being the weakest. My sense is when you’re the new kid in a company, you’re usually best to keep your mouth shut for a while and see what’s going on.

Robin Hanson: If you have the ability to go make a speaker a few hundred light years away somewhere else, and make it be loud, and then watch what happens to it, that would be the safest approach.

Jim Rutt: Yeah. I would be tolerably in favor of that one.

Robin Hanson: Big loud speaker, 100 light years away, and with no way to trace it to us.

Jim Rutt: I like it.

Robin Hanson: But making sure there’s no way to trace it to us would be really hard, because you don’t know all the different detection capabilities they have.

Jim Rutt: It also may turn out that the nearest 200 light years may be close enough.

Robin Hanson: Too close. Yeah, they’ll just kill everything within 300 light years.

Jim Rutt: Exactly. Anyway, this has been a wonderful conversation. Let’s move on to our next topic. This is actually what caused me to reach out to you, and ask you to be a guest on the show, which is I very recently read The Elephant in the Brain, and was quite taken with it. That’s a book that you co-wrote with a guy named Kevin Simler.

Robin Hanson: Yeah, he’s a great co-author. The book is much easier to read, and more accessible because of my co-author. You can compare it to Age of Em, and you can see the difference.

Robin Hanson: For me, this is an attempt near the end of my career to summarize a key point that I wish I had known at the beginning, in the hope that new generations of social scientists won’t make the same mistakes I and others of my generation did, because we didn’t know the one key factor of this book.

Robin Hanson: The key story is that in social science and in policy, we typically analyze something like education, or medicine, or even marriage from the point of view that we basically know what people are trying to do, their basic goal, their motive, with respect to this thing, and then that there’s a lot of complications in how to get that, and that policy can help you figure out how to get this thing that, obviously, is the thing you’re trying to get.

Robin Hanson: In education, we’d say the thing you’re trying to get is to learn more material faster. Then, education policy is all about how to reorganize schools, or school funding, or ratings, such that we can induce more material to be learned faster.

Robin Hanson: Or you would say medicine is, obviously, about taking sick people and getting them well, and medical policy, and medical reform, and medical funding, and institutions are all about figuring out better ways to get more people well or faster, at a lower cost.

Robin Hanson: This is what we do over and over again, through our society. In each area of life, we have this easy assumption of what something is for, and then we build on that to analyze how we can do it better. The claim of the book is that through an awful lot of these areas, the standard key assumption is just wrong. Schools are not about learning more material faster. Medicine is not about making sick people well. That is, at some level, it’s kind of about that. It has some degree of functioning that way, but that’s not really the key strongest motive that’s driving these institutions and behaviors. If you miss the key thing that something is for, you’re just going to be misdirected right from the start.

Robin Hanson: We have a long history, in education research, of finding many powerful ways to help people learn more material faster. We actually know a lot about that, and we found many ways that existing schools could, in fact, teach more material faster, more effectively, at lower cost, et cetera, and we’ve consistently not adopted these reforms. That’s got to be pretty frustrating for education researchers and reformers who’ve known for many decades how we could do things better. Same in medicine. We know lots of ways that medicine can be more effective. People are not very interested in these ways. Again, that’s got to frustrate medical reformers and social scientists.

Robin Hanson: The key claim of the book is the big mistake everybody’s been making is assuming you know what these things are for. Our book starts out trying to make it plausible that you would just be wrong a lot about why you want to do things, that you’re not aware of your motives. So, our book has been classified as psychology, and it had psychology referees and psychology reviewers, and these psychologists have all said, “Yes, this is a standard thing we know in psychology. People are quite often unaware of their motives for doing things.” But they’ve said, “We already knew that, so who cares?”

Robin Hanson: But in all these other policy areas, like education, and medicine, et cetera, these people don’t know this. So, our book is addressed to them, trying to say, “You’ve been missing the point. This thing these psychologists all know hasn’t transferred into our social science and policy worlds, and you aren’t appreciating that, in fact, you’re making mistaken assumptions about the key driving motives in these areas.” We want our book to be read by the social science and policy world, but, unfortunately, because it’s been classified as psychology, they don’t see that it’s something they need to read or respond to.

Jim Rutt: Yeah. I did notice that, actually, most of the piece parts from which you built the theory were things other people had discovered. I think you actually said that in the introduction, that you had gone through a large amount of research and found things that were known. But what I found new was that you integrated it to show that this is a very consistent pattern, and you also teased out at least some quite good arguments, I thought, for why it actually makes sense for humans’ brains to have evolved this way. And a lot of that comes around things like arms races. One of the ones I liked a lot was the social brain hypothesis, that what actually drove the rapid mega growth of the human brain was the ability to be Machiavellian within our own small groups, or as Trivers put it, we deceive ourselves the better to deceive others. Right? I thought that was really where you guys delivered, where you pulled together these various piece parts, and then also found the pieces that explained all the piece parts together.

Robin Hanson: We’re doing some degree of synthesis, but, again, I think a lot of people in this area correctly would say they kind of already understood that synthesis, even if we’re doing an especially good job of explaining it to a wide audience.

Robin Hanson: What’s new, I would say, is that we are trying to tell people in these other policy areas that they have neglected this key insight. There is this basic problem in our academic intellectual world that we assume once something is known by the experts in that area, it’s known by everybody and included by everybody in their analysis, and that’s just not true. There are a great many things that are known in one’s subfield that are completely neglected in lots of other areas where it’s relevant.

Jim Rutt: Yeah. That’s clearly been true. I happened to have done a little dive into the move of ideas from educational psychology to education departments at universities, and it’s like a 50-year lag. It’s pretty amazing.

Robin Hanson: We also suffer a lag, because there’s also lags for things people don’t want to hear. In the history of innovation and intellectual progress, things that people want to hear, or at least that don’t conflict with something they want to believe, are easier to assimilate and spread than the things that people just don’t want to hear, that go against what they want to believe.

Jim Rutt: Yeah. One of those that came up again and again, particularly in your applied chapters, was that so much of human behavior can be thought of in terms of signaling, as opposed to substance.

Robin Hanson: Right. The key general idea is that when we are trying to attribute motives to our behavior, we are mostly acting as a PR person, or press agent, for ourselves. That is, instead of knowing what we actually are doing, we’re more like the President’s Press Secretary, who is trying to make up good explanations for what we’re doing, in order to present a good face to the world. We’re looking for explanations that defend us well against potential accusations. The explanations we come up with do achieve that function. They’re just not really designed to be accurate representations of the real motives that are driving us, because the real motives tend to be less pretty, and more socially dysfunctional, and more open to the accusation that we’re violating social norms and being selfish and self-serving, which we are. Therefore, we look for more pro-social, good-looking motives to attribute to ourselves.

Jim Rutt: Certainly seems to be the case. The section on medicine was particularly interesting. Could you maybe talk about that a little bit?

Robin Hanson: Sure. For many of our readers, perhaps the most surprising chapter … Each chapter in the last two thirds of the book, the ten different areas, what we do is we say, “Here is the topic we’re talking about, and here is the standard motive most people would point to. Then, here are a bunch of puzzles with that, things that don’t fit so well with the standard motive.” Then, we finish with an explanation, a motive that fits better with many of the details of that area.

Robin Hanson: As we said before, the usual story about medicine is that, hey, you can get sick, but there’s these experts out there who could help get you well. These experts are expensive and they are hard to judge who’s good, but, hey, we need a system to help you get their expertise and to help you get well. This basic story is a story we usually tell, and it doesn’t fit that well with a number of actual details of this area. Perhaps the most prominent one is the fact there’s very little correlation between people who get more medicine and people who get healthier. We have not only geographic variation, where we look at some regions or hospital areas, and see the ones that spend more on medicine are not healthier, even after we’ve [inaudible 01:07:39] some other things. We also have a number of randomized experiments, where people were randomly given more access to medicine, such as through a cheaper price, and they weren’t healthier. That’s a big puzzle from the point of view that the whole point of medicine is to spend a lot of money to get healthy.

Robin Hanson: We have a number of other puzzles in this area. For example, people are not very interested in ways to improve their health via, say, sleep, or nutrition, or air quality, or exercise, compared to medicine, even though we see very little connection between medicine and health, and much larger connections between health and these other areas.

Jim Rutt: And they’re cheaper. Right? Getting an extra hour’s sleep costs a hell of a lot less than some expensive course of mental health intervention. Right?

Robin Hanson: People are also remarkably uninterested in information about the quality of medicine. They just won’t bother to pay for it. They won’t look at it. They just want to take a simple strategy of trusting their doctor or surgeon, and not thinking about it.

Jim Rutt: How do you explain those behaviors in terms of your theory?

Robin Hanson: An analogy is Valentine’s chocolates. We have a tradition here on Valentine’s Day of giving your lover chocolates. The idea is that you show that you love them, with the chocolates. Now, when you do that, you don’t ask how hungry they are when you decide how many chocolates to get. The point isn’t how hungry they are. The point is how much do you need to spend to distinguish yourself from somebody who doesn’t care as much as you? You need to spend a lot. When you ask, “What quality chocolates should I get?”, you’re not that interested in what you privately think is the highest quality chocolate, and you’re not that interested as a recipient of the chocolate in what you privately think is the quality of the chocolate. You’re interested in commonly perceived signals. What do most people think is the high quality chocolate?

Robin Hanson: If, on Valentine’s Day, you don’t have a lover to give you chocolate, but you’d like people to believe that somebody out there cared for you, you might buy yourself some chocolate and leave it on the desk at work.

Jim Rutt: Okay. How does that work on an individual basis? I can see how it might work with respect to a parent purchasing medical interventions for a child, but let’s take the case we talked about in passing of a person who could either sleep an extra hour a night, or spend $200 a week talking to their shrink. There is nobody they’re trying to impress, really.

Robin Hanson: First, to make the point, most medical spending is done via third parties. Nations buy medicine for their citizens. Companies buy medicines for their employees. Heads of family buy medicine for the rest of their family.

Jim Rutt: But they pay. They don’t buy. It’s an interesting distinction between the payer and the buyer. Right? It’s always good to find a business where you can decouple the paying and the buying. College textbooks being a perfect example, by the way. But in this case, the actual decision to pull the trigger, to say, “I’m going to go to see a shrink weekly, rather than sleep an hour”, is an actual decision made by an individual agent.

Robin Hanson: With the price greatly reduced because other people are paying for it, as you say. These other people foot the bill. They say, “Don’t worry about it. When you feel like you have a problem, just go to the doctor, and it’ll be paid for by this other source.” That other source is part of the process to show they care. So, they are definitely achieving the benefit of reassuring people that they care about them through that spending, even though they’re not choosing each individual doctor visit.

Robin Hanson: Then the individual person, when they have a problem, they have a choice of whether to go or not, but often they’re getting pushed. So, you may know couples where one member of the couple pushed the other to get a checkup, to get something checked out, et cetera. There’s a lot of that in family areas, where individual people show they care about other people by pushing them to go get care. Then you, as the person who is the subject of these other people subsidizing you and pushing you, you often want to be a good sport, cooperate, and let those people show they care about you by taking care of you. And you want to feel that, so …

Jim Rutt: And so the signaling is then at the macro level. You can argue that society, together, says, “We’re going to show we care by allocating 17% of GDP to medicine”, but an adult typically makes their own decision on what medical treatments to go after.

Robin Hanson: People do initiate whether they have any particular doctor visit, or whether they buy any particular medicine. That’s at their choice, but it’s mainly funded by other people who have enabled that, who have said, “Don’t worry about the cost. Just go.”

Robin Hanson: In addition, we push people. And people seem to want to get, and get, credit for being caring by pushing other people to go. Now, you, as somebody who has people around you, pushing to go to the doctor, you show them that you care about them by letting them show they care about you, by accommodating them and going when they push, and spending the money that they have allocated for you to spend, which then shows them and everybody around that you are loved and cared for by somebody who is willing to pay for this.

Jim Rutt: Do you have a rough estimate on what percentage of medical spending, let’s say in the United States, you might consider to be above and beyond medical usefulness, generated by these signaling behaviors?

Robin Hanson: Most likely more than half.

Jim Rutt: That’s a big damn number. Bigger than almost anything else in the economy, right?

Robin Hanson: The advice to you is if you only wanted to get your health higher, but you didn’t care about these other signaling benefits, then I’d say, “Cut out half of your medicine.” Of course, you might say, “Which half?” And I’d say, “Well, there are some simple ways to do that.”

Robin Hanson: In the health insurance experiments we’ve seen, where people were randomly assigned more medicine, we see they weren’t any healthier when they were given a lower price. So, that method says that you should look at the price and say, “If I had to pay for this out of my pocket, if I had to pay full price for this, would I be willing to pay for it?” Then, if you wouldn’t, don’t bother to get it, because that’s the kind of medicine that in these experiments wasn’t useful.

Robin Hanson: Another way to do it is there are these various studies out there. The Cochrane Reviews are one, where they take different treatments and they rate them according to strength of evidence that they are useful and cost effective. So, you can just drop the lower half of the ratings. That would be another way to drop half of your medicine. I’d say you can safely be about as healthy while spending a lot less money, and time, and trouble by just dropping the lower half of medicine according to either of these two ways to rate it.

Jim Rutt: Very interesting. Let’s go on to another one. Politics.

Robin Hanson: People are especially sensitive about politics. It’s especially emotional. They’ve got a lot invested. Well, actually, some people are. In all the different ten areas we discuss, people vary a lot about how central those things are to their identity.

Robin Hanson: For many of you, medicine is not central to your identity, and you were fine with what I just said in the last few minutes. For others of you, medicine was central to your identity, and you’re very skeptical. It’s really important to you to believe that medicine is effective.

Robin Hanson: Similarly, in politics, some of you don’t care much about politics. You’re kind of cynical or skeptical about it, and what I’m about to say will sound fine. Others of you will be much more skeptical, because politics is really important to you.

Robin Hanson: The usual story about politics is that you get involved with politics. You read about it. You talk about it, because you are helping. You are a Dudley Do-Right. You are making contributions to your society by thinking carefully about the better political positions, the better politicians, et cetera, and then making good choices that will benefit all of us. Good for you.

Robin Hanson: That’s the usual story people want to give to explain their political behavior, and that story just doesn’t fit with a lot of details about people’s political behavior. For one thing, you are just way too emotional and very uninformed about most political opinions you have. People seem quite free to have political opinions they haven’t researched much at all, and they seem very confident of them, even though these things are usually pretty complicated. There are a number of other puzzles that just don’t fit with the usual story.

Robin Hanson: If, for example, you really cared about producing good political outcomes, you would want, say, to pick politicians who were good operatives, who knew how to work the machine behind the scenes. They could craft bills and negotiate deals, and you’d want somebody who was good at all that. You wouldn’t care so much about what public positions they took. You wouldn’t necessarily want them to take extreme positions that would limit their ability to make deals. You’d want them to be good at making deals. But, in fact, people seem to care mostly about the positions politicians take, and they don’t care much about whether they’re good at actually making things happen. They want politicians who share their positions. And, of course, politicians are happy to supply that, but it calls into question the idea that what you really want to do is produce more effective policy.

Robin Hanson: There’s a number of other puzzles we can go into. We’d say the better explanation here is that what you want to do is show your support for your political allies. There’s your team, and you want to say, “Yay, us, team”, and, “I’m with you.” Most of what people do actually does a pretty good job of that. People are very effective at that, and that’s, in some sense, much more plausible about something you would actually care about, in the sense that the people around you do look at you and interpret you through a political lens. They ask, “Whose side are you on?”, and, “Are you loyal?” And that affects whether they want to relate to you. People seem to care a lot about whether their spouses and lovers share their political views, about whether people at school do, whether people on the job do. This matters a lot to people, and it’s a strong selection effect on people, whether they have the political views that the people around them like.

Robin Hanson: That’s all very plausible, but, of course, it means that our political system isn’t designed or necessarily doing a very good job of producing good policies. It’s allowing a lot of people to take sides and to show loyalty to their sides.

Jim Rutt: Now, this is an area I’ve done a little bit of thinking in. I would call my perspective a more institutional one, in that the behaviors you see … In fact, in my own writings on this, I just refer to the two teams as Team Red and Team Blue, to make them seem as much like football teams as anything else, because that’s certainly how the behaviors come about. My argument, though, is that it isn’t necessarily driven by human nature or culture, but rather by the institutional design of our political system. In the case of the United States, we have one of the strongest sets of Constitutional and traditional settings to produce a two-team dynamic. We have single-member election districts with first past the post. Usually plurality wins. Then we also have a congressional system, where control is by one party, typically in one house, which argues strongly for very strong coalitions, or very strong parties to win a majority, so that they can control the house of Congress.

Jim Rutt: I look at that, and I’ve looked at another institutional design, which I think could address that and produce a very different result, which is liquid democracy, also known as delegative democracy, where there aren’t any teams. Essentially, each voter allocates their vote to people who represent them. In my version of liquid democracy, that vote is divided up into something like 25 issue areas, like defense, economy, healthcare, education. Let’s say I’m Mary Voter, and I give my education vote to my sister who’s a teacher, and my defense vote to my uncle who’s a retired Lieutenant Colonel in the Air Force, et cetera, and I’ve taken my vote and pushed it to people who probably are more knowledgeable than I am in each domain. Nowhere does that get aggregated into a team. Rather, each issue that gets floated by a person initiating legislation gets voted on by these proxies without any intermediation. So, we don’t have the institutional dynamic that we see in our system.
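[Editor’s note: the per-issue delegation mechanism described above can be sketched in a few lines of code. This is a minimal sketch, not any deployed liquid democracy system; the issue areas, voter names, and the `Electorate` API are all made up for illustration.]

```python
# A minimal sketch of per-issue delegative ("liquid") democracy.
# Issue areas, voter names, and this API are hypothetical, for illustration only.
from collections import defaultdict

ISSUES = ["education", "defense", "healthcare"]  # stand-ins for the ~25 areas

class Electorate:
    def __init__(self):
        # delegations[issue][voter] = the proxy that voter's share flows to
        self.delegations = {issue: {} for issue in ISSUES}

    def delegate(self, voter, issue, proxy):
        self.delegations[issue][voter] = proxy

    def resolve(self, voter, issue):
        # Follow the delegation chain to its end, guarding against cycles.
        seen = set()
        while voter in self.delegations[issue] and voter not in seen:
            seen.add(voter)
            voter = self.delegations[issue][voter]
        return voter

    def tally(self, issue, ballots):
        # ballots: votes cast by the final proxies, e.g. {"sister": "yes"}
        counts = defaultdict(int)
        voters = set(self.delegations[issue]) | set(ballots)
        for voter in voters:
            proxy = self.resolve(voter, issue)
            if proxy in ballots:
                counts[ballots[proxy]] += 1
        return dict(counts)

e = Electorate()
e.delegate("mary", "education", "sister")  # Mary's education vote -> her sister
e.delegate("mary", "defense", "uncle")     # Mary's defense vote -> her uncle
print(e.tally("education", {"sister": "yes"}))  # → {'yes': 2}
```

Note the key property of the mechanism: the tally aggregates over voters per issue, never over party-like teams. The cycle guard here simply stops a delegation loop at the first repeated voter; a real system would need an explicit rule for votes stranded in a loop.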

Robin Hanson: In our book, in all of these areas, we are first just trying to look at broad averages of human behavior across a wide range of space and times. For example, in medicine, there are some different things about the United States versus the rest of the world, but we’re not focused on that. We’re not trying to explain why the United States is different. We’re trying to say why medicine everywhere, at all times, has certain common characteristics.

Robin Hanson: I’d say the same about politics. We’re mainly trying to look at the common features of politics across a wide range of contexts. For example, in your church, in your firm, in your family there is politics. We talk about office politics, for example. These other political arenas have a lot of the similar sorts of features and effects. So, it’s less trying to focus on the United States today in the year 2019, or even all democracies. There’s going to be variations, based on institutions as you say, and lots of other context, but our first priority is just to try to look at the very basic overall patterns.

Robin Hanson: We’re usually immersed in a variety of contexts, and our politics varies by context. We often try, to some degree, to integrate them, but at a lower cost. For example, in your firm, there’s going to be different factions, and you’ll be aligned with some faction. Then, in a meeting, you might try to promote your faction in that meeting. That would be politics. Often, the main thing you’re trying to do is make sure other people in your faction know that you are still with them in supporting their faction.

Robin Hanson: Now, in many firms, the main division is between the top and the bottom. Then you might be trying to say, “I’m not rebelling against the top. I’m with the top. I’m loyal.” That would be the main thing you were trying to show. It would be less one faction against another, then. There’s the dominant powers, and you’re trying to show your submission to them.

Robin Hanson: Similarly, in your church, there might be factions, or there might be one dominant coalition in your family, but we still have politics in all these contexts, in the sense that we take positions. We express opinions on what would be good policies, who should be in charge, and, again, in a lot of these contexts, we seem much more interested in the public appearance of being loyal to some side than we care about actually making good choices.

Jim Rutt: That makes a lot of sense. Anything else you want to add about the book, before we move on to our last topic?

Robin Hanson: Well, the key message of our book, for people who want to do institution design or improving the world, is to say the problem is a little harder than you might have thought, but there’s a better prospect that if you solve this harder problem, you’ll actually get some traction.

Robin Hanson: Today, for example, if you try to solve the education problem in the form of, “How can we get more students to learn more material, faster?”, you’ll find that we actually have a lot of solutions, and we can identify them, but nobody cares. But if school is really about showing that you are smart, and conscientious, and conformist, and things like that, then you might have a better chance if you found a solution that let people show those things more effectively. Then they would be actually interested, although they’re going to want to continue to pretend to want to show that they’re learning the material. Because we are hypocritical, because in each of these areas we have one thing we say we’re trying to do, and another thing we actually are trying to do, reforms will also have to be hypocritical, in that they will have to continue to let us pretend that we are trying to achieve the thing we are pretending to achieve, but actually give us more of the things we really want, so that we will actually be interested in adopting them.

Robin Hanson: That’s the different perspective that I’m hoping to pass on to social scientists and policy makers, from my lifetime of experience, which is, if you reframe the question that way, you’ll have a better chance of actually producing reforms that people want.

Jim Rutt: Okay. That makes a heck of a lot of sense, actually, though it does bring something pretty close to cynicism into the design of our institutions. The fact that there’s a nominal reason, learn something, and then there’s a real reason, show that you’re smart … Right?

Robin Hanson: Right. And it makes a plausible reason why you can’t be fully honest about why you are proposing something. It means that if other people are doing this, you can accuse them of hypocrisy, which is a terrible accusation usually, and people try to deny it. They can accuse you of hypocrisy, because if this is what you’re doing, you actually are being hypocritical in some sense, because ordinary people are hypocritical and you’re trying to accommodate them.

Jim Rutt: Unfortunately, that seems to give rise to a defense mechanism within the system for inefficiency, right? If actually addressing the reality immediately produces an accusation of hypocrisy from the other side, it then becomes difficult to nudge the system towards reality.

Robin Hanson: The first thing to realize is that all of these different systems are chock full of enormous waste. From the point of view of the thing we say we are trying to achieve, there’s a vast potential for doing better, and a vast unacknowledged and unused potential. It’s far easier to figure out how to produce policies that would make things better than it is to get people to be interested in adopting them. The more fundamental problem in the world isn’t, “How can we have a better world?”, but it’s, “How can we make anybody care about having a better world?”

Jim Rutt: Yeah. It seems to me the split is that what they say they want is different from how they’ll actually behave. Unfortunately, if you try to move the system towards how they’ll actually behave, you’re immediately subject to the charge of hypocrisy because you have deviated from the stated social norm.

Robin Hanson: Which is what happens if you admit your actual reason for making those proposals.

Jim Rutt: Then you have to have another level of strategy, which is here’s the proposal, but here’s the reason, which isn’t the real reason. Right?

Robin Hanson: Right now, most people who are involved in politics, and policy, and social science know that, in fact, this is what actually typically happens. Most people who are making policy proposals do have hidden agendas and aren’t willing to be fully honest about why they’re making those proposals. They know that’s true about pretty much everybody, which they know is true about the other side. They focus on the other side, and they know that those other people are hypocrites, which all the more emboldens them to be outraged and indignant about the other side.

Jim Rutt: Well, it’s always good to know how a system works, but I must say, I don’t take a tremendous amount of optimism about our current political process away from this knowledge, so …

Robin Hanson: But what it means is enormous potential is there for improvement. If the system was really well optimized, it would mean we couldn’t do much better. When the system is really broken, it means there’s huge potential for improvement.

Jim Rutt: That is true. One of the best jobs I ever took was at a really messed up company. People said, “Why’d you do that?” I said, “It’s actually a lot easier to fix a messed up company than it is to build a good one.”

Robin Hanson: Or to improve a good one.

Jim Rutt: Or to improve a good one. All right. Well, let’s move on to our last topic. This has been such a great conversation. That is an article that you put on your blog not too long ago about publishing tax returns.

Robin Hanson: That wasn’t … Just a relatively small thought I had. It wasn’t central to my thinking, but I basically said, “Why not publish tax returns as a way to make it easier to enforce tax law?”

Robin Hanson: If you’re worried that people are cheating on their taxes, the more visible the taxes are, then the more other people who, say, saw their neighbor and how much tax they were paying and could look at the items, would then be able to check that against what they saw and maybe call bullshit when something didn’t look right.

Robin Hanson: In some sense, there’s a trade-off there between privacy, how much you’re going to allow each person to keep their taxes private, and our enforcement of tax law.

Jim Rutt: What about other implications? I read that and saw you had a relatively narrow argument, but it immediately got me thinking what about things like, for instance, how much charity people would do if their tax returns were public? Especially, going back to your idea of signaling, right? If our tax returns were public, we might behave in different ways.

Robin Hanson: Sure. Now, it does come to this basic question of to what extent do you see social pressure and social visibility as good? Overall, humans clearly evolved a system by which we have social visibility and social pressure, in order to induce better behavior. This is the whole concept of social norms.

Robin Hanson: Long ago, we didn’t have formal systems of law. What we had is norms. The rules were, if you saw somebody breaking a norm, breaking a rule, then you were supposed to tell everybody, other people, about it, report on it. Then you were supposed to talk to them and coordinate with them about what to do about it, which included threats of punishment, punishment, and escalations to continue to do something to get it to stop. That is our ancient human heritage, to have norms that we enforce in that way.

Robin Hanson: It’s only in the last ten thousand years or so that we’ve had formal systems of law that did anything other than that. Mostly, formal systems of law are trying to more strongly and accurately enforce these norms that we developed before.

Robin Hanson: If you approve of this general norm-based system, then you like the idea that you could make behavior more visible, which would then make other people be able to shame you or disapprove if you were violating a norm, and then exert more pressure on you to behave better. Many people have come to the opinion lately that norm enforcement is bad, that we should not be having norms enforced, that we should, instead, create a system where people can do things privately in ways that are not visible, and that that’s a better world, where other people are not getting involved in their business.

Robin Hanson: If true, that means that norms have gone really wrong somehow. Somehow, once upon a time, norms were effective and useful, and, now, norms are just broken. Of course, often, people want to make a norm about that. That is, they create and enforce a norm that you shouldn’t have other norms, or that certain norms are inappropriate. That’s part of the modern world discussion, is when it would be okay to have norms.

Jim Rutt: A lot of research, including some mathematical simulations by some folks at our Santa Fe Institute, indicates that it’s really hard to have norms survive without enforcement.

Robin Hanson: Absolutely. You’re not going to have norms be followed if they aren’t enforced. So, the key question is, “Do you want norms being followed?” There have been norms in societies in the past that people have disapproved of. For example, there have been norms against homosexuality, and those norms were enforced. People today have come to the opinion that those were bad norms and it was bad to enforce them. Then, they jump from that to the idea that it would be bad to enforce all norms. It’s related to the issue of blackmail, actually, which is a fun topic that I’ve talked about on the blog a couple times in the last year. Because, arguably, blackmail is a way to enforce norms. That is, if there’s a thing that you wouldn’t like to be known, because people would treat you worse if it were known, blackmail induces the threat of making that be known, and punishes you for behaving in ways that make that threat effective.

Robin Hanson: In some sense, when you allow blackmail, you allow people to enforce the norms against whatever behavior you would do that you wouldn’t like to be known. Then, many people think that’s bad. It’s bad to have blackmail, because it’s bad to enforce those norms, such as, for example, the norms against homosexuality in the past.

Jim Rutt: Got you. Though, then we get to an issue that if we have nihilism about norms, or nihilism about enforcement, we’ve essentially said we’ve abandoned all of our norms.

Robin Hanson: Right. Some people say, “Well, now that we have law, we don’t need norms any more.” Although, then they have norms about which laws we should have, and, of course, norms about which laws should be enforced. For example, the recent “Me Too” discussion is about norms against behavior, some of which is illegal. They want more sorts of behavior like that to be made illegal, and, even when it’s not illegal, to be disapproved. That’s an attempt to use norms to enforce concepts of good behavior and bad behavior.

Jim Rutt: It makes sense, if we agree that that is bad behavior, which I certainly do.

Robin Hanson: An especially important thing is that we seem to have weakened our norms for supporting law. In the past, there usually were strong norms in any given geographic area that if you saw illegal behavior, you should report it and you should assist in reporting, and preventing, and inducing the formal law enforcement apparatus to come into play. More recently, we often have communities where, in the local community’s norms, there’s no particular thing you’re supposed to do when you see illegal behavior. It’s up to the police to find it and catch it, but you’re not supposed to help. There’s no obligation you have to help, and, in fact, often you feel you have an obligation to not help, because you see the police, for example, as an outside occupying force that is not supporting your community. When you don’t have a norm supporting law, law is, of course, much weaker and much less effective. So, then you might not have norms or laws.

Jim Rutt: Presumably you’ll have a much less strong society, right? Your coherence as a society will be substantially weakened.

Robin Hanson: Again, from the usual broader perspective, we say norms and then law were important tools that societies had for discouraging bad behavior and producing better behavior, but from some perspectives, they say, “That may have once been true, but isn’t true now”, and therefore they say norms are, now, just things better off not enforced. Many of them even think laws are things better off not enforced. There’s even a contingent of people who think stronger law enforcement is a bad idea, because we have so many bad laws that enforcing the law is bad. If that’s true, that means we have a pretty desperate, sad situation, in which we no longer have norms or laws available to us as a way to enforce good behavior, because there’s a lot of bad behavior that’s possible, and that might well grow if it isn’t discouraged.

Jim Rutt: Yeah. From an evolutionary game theory perspective, we know that systems are easily invadable, again and again, by free riders and by other kinds of malicious strategies. Work by guys like Sam Bowles and Herb Gintis on some of these simple social exchange games shows that, without the ability to enforce norms, there’s a large tendency for people to retreat to a series of far from optimal strategies.

Robin Hanson: I’ve been especially interested lately in ways to reform criminal law enforcement and punishment, exactly because, if you assume that, on average, criminal law is a good idea, then, on average, it would be good to be more effective at enforcing it, and arguably our current system is pretty awkward and inefficient at doing those sorts of things, and unfair in many ways.

Robin Hanson: That is all based on the idea that it’s better to have criminal law than not to have it. The same is true for norm enforcement. Blackmail is good if the typical norms that would be enforced by blackmail are good, but not otherwise.

Jim Rutt: What are some of the ideas in that criminal justice area?

Robin Hanson: I’ve got this package that’s kind of radical, and I’ve even been thinking of writing a book on it: combine bounties with insured fines, to basically privatize the punishment and enforcement of crime, but still have the same social choice of what is a crime and whether any one act is a crime. So, the bounds of the proposal would leave in place whatever systems we have for deciding which acts are crimes, how severe any one type of act is as a crime, whether there’s an accusation about any one person having committed one of these acts, and the process for deciding whether that accusation’s correct.

Jim Rutt: For instance, let’s say we still have a law against armed robbery. We want a pretty strong sanction on it. We put a bounty on people who apprehend armed robbers.

Robin Hanson: We put two numbers on each crime. Say armed robbery of a certain sort … We could vary armed robbery, say, whether it was violent or non-violent, with a gun or not. You could vary the punishment for different versions of armed robbery, but for whichever version you’re talking about, society would put two numbers on that crime. It would put a bounty, i.e. how hard people should try to catch it, and a fine, i.e. how much it should be punished when it’s caught. Those would be the two key numbers that the society would decide about each crime. Now, there’d be a private system by which, say, a bounty hunter would go look for evidence to accuse any one person, and then go to court and try to convince a court of that. Then, if they convince the court, they will receive a certain bounty, and that person will suffer a certain fine.

Robin Hanson: Now, in the past, we’ve tended not to use fines for big crimes, because most people can’t pay really big fines. So, the innovation here is to make it more like our system for automobile insurance. Today, you’re not allowed to drive on the road unless you have insurance that covers you for an accident. So, the idea would be to do that for everything. Crime insurance for everything, or even lawsuit insurance for everything. You must have an insurer who backs you to a large amount, say, the 90th percentile of awards, such that, if you are found guilty, they will pay this amount on your behalf. If we can do that, now you and your insurer internalize the issues of how to punish, how to monitor, how much freedom you should have, and even how much to help with crime detection. You and the insurer make those choices, and the rest of us don’t have to get involved.

Robin Hanson: Today, in our system of crime, because it’s a centrally run system, we all have to get involved in deciding which kinds of traffic stops are justified, which kind of prison treatment is cruel and unusual, whether torture is okay, whether the government should be able to get your emails under certain subpoenas, et cetera. We all have to make all of these central choices, in a centrally bureaucratic way, which is not very well attuned to individual variation.

Robin Hanson: Now, under this alternative system, once you’ve got an insurance company that agrees to pay for your crimes, you and your insurance company decide what happens to you, if you’re guilty of something. You can agree to torture. You can agree to prison. You can agree to exile. You can also agree, ahead of time, to limit your freedoms. You can agree to curfew. You can agree to ankle bracelets. You can agree to places you’re not allowed to go. You can agree to letting people read your emails. You choose your freedoms and your punishment, and that, of course, will affect your insurance premiums. The insurance company will offer you lower premiums on your crime insurance if you are willing to accept fewer freedoms, or more limits on your behavior, or stronger punishments, should you be caught. But that’s up to you now. The rest of us don’t have to get involved.

Jim Rutt: It’s an insurance mandate, kind of an ACA for crime insurance.

Robin Hanson: Just like we do for cars now, exactly. And like some people are proposing to do for guns, at the moment. People who don’t like guns are saying, “Hey, you should have to get gun insurance, if you buy a gun.”

Jim Rutt: Well, that’s another damn good idea. Well, Robin, this has been amazing. Even better than I thought it would be. But I think the time has come for us to wrap up. Any final words? Any final thoughts?

Robin Hanson: It’s been fun chatting with you, and I’m happy to have been able to talk about all these different subjects with you.

Jim Rutt: Very good. Look forward to talking to you in the future.