The following is a rough transcript which has not been revised by The Jim Rutt Show or by Joshua Epstein. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guest is Josh Epstein. He’s a professor of epidemiology in the NYU School of Global Public Health and a Founding Director of the NYU Agent-Based Modeling Laboratory. Prior to joining NYU, he was a professor of emergency medicine at Johns Hopkins and Director of the Center for Advanced Modeling in the Social, Behavioral and Health Sciences. Before that, he was a senior fellow in Economic Studies at the Brookings Institution and Director of the Center on Social and Economic Dynamics. Welcome, Josh.
Joshua: Thank you, Jim. Pleasure to be here. Thank you for that introduction. Yes.
Jim: Good to reconnect. I mean, you were one of the first scientists I met when I retired from the business world in 2001. I remember we had lunch down at Brookings, I believe.
Joshua: I remember it well. It was early on in the history of the Santa Fe Institute and in this entire line of work that’s come so far since then.
Jim: Yeah, but I remember you guys told me about Sugarscape. I go, “Holy shit.” Amazon existed at that time, because I found my record of when I ordered the book, and I read it and I go, “Wow.”
Joshua: The story of Sugarscape is pretty interesting. Rob and I really invented that on napkins in the Brookings cafeteria as part of this wild sustainability project that the MacArthur Foundation funded. It was a sustainability project involving Brookings, the Santa Fe Institute and the World Resources Institute. At the time I remember going to MacArthur with Murray Gell-Mann and George Cowan and John Steinbruner to try to convince them to take kind of a crude look at the whole, which was Murray’s phrase for: let’s look at everything and think about sustainability out to the year 2050. In their infinite wisdom, they funded that completely ill-defined project. And we had several activities, and Murray appointed me, or invited me, to be the director of the theoretical work. And neither of us had any idea what that meant, but I felt anointed and accepted with great pleasure.
Joshua: At about that time, I gave a talk at Carnegie Mellon, which was the first time I met Herb Simon. He gave a talk and then I gave a talk, and mine was on arms races. After the talk, this person who had been there, a grad student at Carnegie Mellon, called me on the phone and said, “I was at your talk, I can’t replicate your results.” And of course this was Rob Axtell. And I thought, “Wow, this kid has the technical chops to try to replicate the thing and the chutzpah to call me and say he couldn’t do it.” So I said, “Well, come on down to Brookings. Let’s figure it out.” Which we did. And we also had many common ideas and impulses and got along great, the chemistry was terrific.
Joshua: When Rob left, I went to John Steinbruner and said, “Wow, this guy is great.” He said, “Well, hire him. Hire him under the 2050 thing.” That’s how Rob and I got going. And there we were, the theoretical group with no direction at all. And we literally invented the whole thing on napkins in the Brookings cafeteria. We thought, what’s the simplest possible sustainability model we could possibly dream up? And we thought, I don’t know. Some sort of resource landscape, and hunters and gatherers move around the landscape. The first Sugarscape runs were just that. They were really simple hunter-gatherer societies on a renewable landscape of resources that the agents needed to live. Then we made it more and more elaborate and fancy, and at some point discovered that you could actually generate real-world things like wealth distribution, settlement patterns, all sorts of stuff. At that point we figured, “You know what? This could be a legitimate scientific instrument.” And today it really is, with a million applications across infectious disease, conflict, economics, network science, geography.
Joshua: I mean, it’s just absolutely exploded since then. And as you know, you were present at the creation and you’ve watched the whole evolution. We certainly couldn’t have imagined this level of development at the original Sugarscape table.
Jim: Yeah. I remember it and I’ve been watching it and doing it. I’ve written a number of agent-based models over the years. It’s quite a major tool. So let’s start at the very beginning. Let’s keep in mind our audience is smart and generally well-educated, but not experts in these domains. So let’s start with telling the audience what is an agent-based model.
Joshua: Okay. Very good question. An agent-based model is an artificial society of software people. They live typically on some landscape. It could be a landscape of resources they need, it could be an organization where they all work. It could be some sort of setup where they interact with one another and with their environment. These are direct interactions between the agents: they might bump into each other and trade, or bump into each other and fight over a resource, or bump into each other and spread a disease or-
Jim: Or make love.
Joshua: Right. Make love, make offspring. They could segregate into different spatial patterns and all the rest of it. But the idea is you build an artificial society of software people and let them interact with one another. Just hands-off, let them interact, and see if those interactions can generate the social patterns you care about: segregation, unequal wealth, patterns of violence, class structures. And when they do that, we say that we’ve grown the phenomenon from the bottom up, and we talk about it as bottom-up social science. One word for this is generative social science. When we say we’ve explained something like segregation, we mean, were we able to grow it? Were we able to generate it in a population of software people? Now the software people need to be somewhat more realistic than the perfectly informed optimizing agents of economics, for example.
Joshua: Those agents are assumed to have great information and to make highly rational decisions based on that information. In my recent work, I developed a software agent named Agent Zero, who has emotions and crummy information. He makes statistical errors analyzing it. And he is influenced by other agents who are also emotionally driven and statistically crippled, and the activities of those agents generate all sorts of important collective phenomena, from violence to financial panic to vaccine refusal to all sorts of other things. Big picture is: an agent-based model is an artificial society of software people. Today, we’re trying to build software people that are cognitively plausible in that they have fears and bounded deliberative capacity, and are influenced by others in their social networks.
Jim: I would also add that it doesn’t actually have to be people.
Joshua: Right. Exactly.
Jim: It could be cars. For instance, I knew a company that did some very interesting work on traffic flow, on how to design stoplights and where to locate fast food restaurants, et cetera, by simulating cars in an agent-based modeling system. So it’s essentially any system that makes decisions in response to local conditions. People are the canonical example, but the agents could be other things.
Joshua: Absolutely right. I mean, we’ve built models where it’s households, not people. For example, we built a model of Zika transmission for the New York City Department of Health that had seven and a half million human agents and another several million mosquito agents. We knew a lot about how mosquitoes behave. We learned this from people who really spend their lives studying bugs. We knew what their life cycle was like, what their bite rate is, whether the males are different than the females, where they all hang out. We’ve got this fabulous trap data from New York City showing mosquito densities all over the city.
Joshua: So yes, Jim, you’re absolutely right that agents can be all kinds of things. They have to adapt in some way, usually. And the other thing that’s important is that they’re heterogeneous, that people are not all alike, and agent-based modeling is very good at accommodating that diversity, that heterogeneity. In a lot of classical models, everything is pooled. In a classical epidemic model, the susceptibles are all in one compartment of one homogeneous type, where we’re saying, “No, no, no. They can be very different. They can behave differently. They can have different immunocompetence.” And we can accommodate that diversity very naturally in agent models, where it’s much more difficult in other approaches.
Jim: Yeah. Perfect setup. I was going to actually ask you to compare and contrast it to probably the other classic set of tools that’s closely related, which you indirectly referenced, which are mathematical systems of differential equations. That was the other tool, and still is a very common tool, to address these kinds of problems. And maybe expand a little bit more on what you see as the differences and benefits of agent-based models versus your classic systems of differential equations.
Joshua: Okay. Well, differential equations, they’re very different. We actually had a lot of fun figuring out, what is the agentization of a differential equation? Let’s take epidemiology or ecosystems as an example. In epidemiology, the classical differential equation model says there are three states a person can be in: they can be susceptible, they can be infected and they can be removed. Let’s say they die of the disease. And these are perfectly homogeneous pools: susceptibles are the same as one another, infectives are the same as one another and the dead are the same as one another.
Joshua: They mix as if in a bowl. You take the infectives and susceptibles, put them in a bowl, stir them around. Everybody has the same probability of bumping into anybody else. There’s no space, there’s no networks. That’s one difference in agent models, there can be networks and space. In agent models, there can be differences between the susceptibles within the susceptible pool, within the infected pool. And most importantly, they never adapt their behavior in the differential equation model. They just keep mixing no matter how far along the epidemic is. But in human situations, people do adapt their behavior. If there’s a plague in your town, you stop circulating, you isolate. You can wear a mask, you can take a vaccine, you can do all sorts of things. You can adapt your behavior.
Joshua: So the agent model is heterogeneous and the agents are adaptive. In the differential equations, they’re homogeneous and they’re not typically adaptive. In the differential equations, there’s mixing; there are no networks, where in agent models, there’s space and networks, for example. Finally, the differential equation models are deterministic. There’s no randomness. In an agent model, every run is different. A fancy word for this is stochastic: run to run, people bump into different people each time. You bump into different people every day, and we run these models many times with different random realizations. And that’s another big difference: the differential equations are deterministic and agent-based models are stochastic.
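[Editor’s note: the contrast Joshua draws can be made concrete in a few lines of code. Below is a minimal sketch in Python of a deterministic SIR model next to a stochastic, well-mixed agent version. All parameter values (`beta`, `gamma`, population size, step counts) are illustrative choices, not numbers from any model discussed in the episode.]

```python
import random

# Deterministic SIR via simple Euler steps: homogeneous, well-mixed, no adaptation.
# Every run with the same parameters produces exactly the same trajectory.
def sir_ode(beta=0.3, gamma=0.1, s=0.99, i=0.01, r=0.0, dt=0.1, steps=1000):
    for _ in range(steps):
        new_inf = beta * s * i * dt   # susceptibles becoming infected
        new_rec = gamma * i * dt      # infectives being removed
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r

# Stochastic agent version: each agent is an individual, and each run differs
# (heterogeneity, networks, or adaptive behavior could be added per agent).
def sir_agents(n=1000, beta=0.3, gamma=0.1, seed=None, steps=200):
    rng = random.Random(seed)
    state = ["I"] * 10 + ["S"] * (n - 10)   # 10 initial infectives
    for _ in range(steps):
        n_inf = state.count("I")
        for k in range(n):
            if state[k] == "S" and rng.random() < beta * n_inf / n:
                state[k] = "I"              # random mixing: chance of meeting an infective
            elif state[k] == "I" and rng.random() < gamma:
                state[k] = "R"              # recovery / removal
    return state.count("S"), state.count("I"), state.count("R")
```

Running `sir_agents` with different seeds gives different epidemics of roughly similar shape, which is exactly the run-to-run stochasticity described above; `sir_ode` always returns the same curve.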
Joshua: So stochastic, spatial, heterogeneous versus deterministic, non-spatial, homogeneous. Now you get a lot of insight from the differential equation models. They’re the basis for all these very elegant results about R naught and vaccine requirements and all of these things. And they’re an important starting point, just like the perfect spring is an important starting point in studying classical physics. But things get much more complicated and you end up with a statistical physics in some areas and so on. The other contrasting method in the social sciences would be statistical regression analysis. There, you have some macroscopic variable, total consumption in the economy, and another macroscopic variable, total income in the economy. And you ask: if I tax people, which subtracts from income, how does that affect their consumption?
Joshua: On the left-hand side, how much people consume, and on the right, I have other aggregate variables, and regression lets me ask, if I change the aggregate variable on the right, the independent variable, how much does it change the aggregate variable on the left? And that’s useful in predicting how a macroscopic change could affect another macroscopic variable. But it doesn’t tell us anything about how those behaviors bubble up from the individual level of economic activity. And so the regression results are really targets of agent-based models, if that’s a little bit clear. We’re interested in micro to macro, and regression is only macro to macro. And that raises another important distinction, not to babble on indefinitely, but there’s a big distinction I make between predicting and explaining.
Joshua: I’m interested in explaining. We’re interested in generative explanations of social dynamics. We explain earthquakes by saying: why are there earthquakes? Plate tectonics. Can we predict earthquakes? No. Why are there tsunamis? Plate tectonics. Can we predict tsunamis? No. Why are there new species? Evolution. Can we predict next year’s flu strain? No. And on and on and on. So the question is, can we identify generative mechanisms and get some handle on what’s driving these social dynamics? We might be able to predict them and these models might help us do so, but there’s a big difference between predicting and explaining. And a lot of the new machine learning AI methods being used in finance, for example, are purely predictive.
Joshua: It’s a black box that says, if this is the state Monday, I’ll give you a probability distribution for the state Tuesday, but it doesn’t give me any idea why things are changing. And so it doesn’t let me design interventions at the micro level that could change the emergent macroscopic state.
Jim: Yeah. That’s one of the things I liked about agent-based models when I first discovered them back, I think around 2001, which is, at least at that time, most of them were stated in what seemed like plain English. You can say, all right, this person interacts with this number of people with this variability and they shake hands 10% of the time. And so you can actually, as you say, have an explanation that may be actionable in terms of intervention, which you can then model in the agent-based model. Let’s say they don’t shake hands ever again. Right? Then what happens? Oh, contagion rate goes way down. That’s interesting. Right? But if it were a black box and you just derived it from data by brute force machine learning, you wouldn’t have any, shall we say, human visibility into the structure at the micro level. That’s what’s cool about agent-based modeling.
Joshua: Yeah. And it also makes it very useful for policy. Because we had a big crisis after 9/11 on smallpox bio-terror, and there were several competing groups about what to do. Some said, “Vaccinate everybody.” Some said, “Do other approaches.” And we were enlisted to identify a novel approach to smallpox vaccination. We worked with policymakers and with doctors, none of whom are particularly numerate or modelers, but you can state the thing as rules and say, “Here’s the narrative this model is executing,” in English. People wake up, they take their kids to school, the school kids mix at random in their school. Parents go to work. They mix at random in their workplace. When they get sick, they go to the hospital. There’s this many doctors and nurses mixing around. They have some probability of dying. Then they go to the morgue. Otherwise, they survive and go back. And you can enlist experts in the implementation of the model.
Joshua: So we’re saying, okay, somebody’s got smallpox. How long do they even walk around once they have symptoms? These guys are pros. They say, “Once you have symptoms, you’re pretty much down after three days.” Okay. So we’ll take our software agents and knock them down at three days. Then when the thing says something counterintuitive and novel, the policymaker might actually pay attention to it. Whereas if you show him a giant 400-differential-equation model and it says, “Do something novel,” they’re risk averse. They’re not going to do anything counterintuitive or innovative based on a model they can’t understand. With agent-based models, they can see under the hood. They can tell what’s going on and they can understand the fundamental logic and narrative of things. And then they’re much more open to the novel ideas these things present.
Jim: Makes sense. Of course, I should also note that while the agent-based modeling discipline was the first to reduce this kind of thinking to actual computation, there were some precursors in the social sciences. In fact, one of the first papers I read, and I actually implemented it on a computer as my very first agent-based model, was Thomas Schelling’s famous paper on emergent segregation, where, as I understand it, he actually got down on his living room floor with pennies and dimes and laid this thing out and moved them around and all that. So he did his agent-based modeling with actual physical tokens, on his hands and knees. And it was exactly the same kind of thinking. But of course there’s a limit to what you can do with pennies and dimes.
Joshua: Yeah, absolutely right. I knew Tom very well, liked him very much. We were good friends, and he wrote a nice piece in the Handbook of Computational Economics, edited by Ken Judd and Leigh Tesfatsion. It’s called “Some Fun, Thirty-Five Years Ago.” And he recounts exactly how… I think he says it was on an airplane. He was just mucking around with pennies and nickels and noticed this funny, intriguing result. And he pursued it a little bit and published this very interesting, intriguing model. I will say, Rob and I, I guess it’s a testament to our illiteracy, but we’d never heard of Schelling’s model when we built the whole Sugarscape. At the end of it, Rob discovered Schelling’s model in a paper by Van Dellen Harrison on urban segregation. And he said, “Well, this thing looks really cool. Why don’t we implement this too?” And then we did, and included other mechanisms beyond Schelling’s as an appendix. We didn’t know a darn thing about it when we actually built the Sugarscape.
Jim: That sounds like a good transition to my next topic, which is why don’t you give the audience a real simple explanation of Sugarscape, especially the early, the original Sugarscape.
Joshua: The Sugarscape model, and again, there are a million implementations of this online, so if you just go to Google or YouTube and search Sugarscape, you’ll get a million of these things. But the idea is this. Picture a landscape colored yellow. There are two sugar peaks. The sugar peaks are dark yellow at the top and light yellow at the bottom, and the badlands with no sugar are white. The rule for the landscape is simply that each site has some sugar capacity: the heights have sugar capacity four, and then they descend three, two, one, zero into the valley. The rule of behavior for the sites is just: if chomped, grow back at unit rate until you’re at your maximum sugar level. That’s all it is for the landscape. Of course, they do get chomped on their way back by agents. What are those? Picture red dots scattered around this yellow two-peak landscape. And each of them is an agent who comes into the world with a little sugar, just so they don’t die instantly. And they differ from one another, as I’ve been saying.
Joshua: Some have very good vision and can look around a long way for good sugar positions. Others have crummy vision, some have a very high metabolism, meaning they have to eat a lot of sugar to get through the day. Others have low metabolism, meaning they can get by on very little sugar. And we put these agents on this landscape, 500 agents on 2,500 sites. And the only rule for the agents is look around for the best unoccupied sugar site within vision, go there and eat it. At that point, we up their sugar wealth by that new sugar intake and we subtract their metabolic rate. If it’s negative, they die, they couldn’t meet their metabolic needs and they’re out.
Joshua: Otherwise, they just keep playing. So the landscape is just growing back until it gets chomped. There are two sugar peaks. The agents start randomly, and they have a metabolism and a vision. And their rule is just: look around as far as your vision permits, pick the highest free site, go there and chomp the sugar. That’s all that’s going on. It looks like nothing, right? What could possibly happen in a system so trivially simple? The answer is a heck of a lot, to our amazement. So first off, there are a lot of agents who die because they’re born in a crappy position, have no vision and have high metabolism. So there’s an initial die-off. Many of them migrate to the peaks, where they harvest sugar madly. They live on Wall Street, in an incredible frenzy of activity and competition. Out in the badlands, some agents just sit there on these sugar terraces and survive perfectly well and never move anywhere.
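[Editor’s note: the rules just described, unit growback, look-around-and-move agents, death when sugar wealth goes negative, can be sketched roughly as below. This is an illustrative toy, not the book’s model: it uses a random capacity map instead of the two sugar peaks, scans only the four lattice directions, and all parameter ranges are invented for demonstration.]

```python
import random

def sugarscape(size=20, n_agents=100, steps=100, seed=0):
    rng = random.Random(seed)
    capacity = [[rng.randint(0, 4) for _ in range(size)] for _ in range(size)]
    sugar = [[c for c in row] for row in capacity]   # sites start at full capacity

    # Each agent: position, vision, metabolism, initial sugar endowment.
    agents, occupied = [], set()
    while len(agents) < n_agents:
        pos = (rng.randrange(size), rng.randrange(size))
        if pos not in occupied:
            occupied.add(pos)
            agents.append({"pos": pos, "vision": rng.randint(1, 6),
                           "metab": rng.randint(1, 4), "wealth": rng.randint(5, 25)})

    for _ in range(steps):
        rng.shuffle(agents)
        for a in list(agents):
            x, y = a["pos"]
            # Movement rule: scan the 4 lattice directions out to `vision` (torus),
            # pick the best unoccupied sugar site; stay put on ties.
            best, best_sugar = (x, y), sugar[x][y]
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                for d in range(1, a["vision"] + 1):
                    sx, sy = (x + dx * d) % size, (y + dy * d) % size
                    if (sx, sy) not in occupied and sugar[sx][sy] > best_sugar:
                        best, best_sugar = (sx, sy), sugar[sx][sy]
            occupied.discard((x, y))
            occupied.add(best)
            a["pos"] = best
            a["wealth"] += sugar[best[0]][best[1]] - a["metab"]  # eat, then pay metabolism
            sugar[best[0]][best[1]] = 0                          # site is chomped bare
            if a["wealth"] < 0:                                  # couldn't meet metabolic needs
                occupied.discard(best)
                agents.remove(a)
        # Growback rule: each site regrows one unit per step, up to its capacity.
        for i in range(size):
            for j in range(size):
                sugar[i][j] = min(sugar[i][j] + 1, capacity[i][j])
    return agents
```

Even this stripped-down version shows the die-off of low-vision, high-metabolism agents and the accumulation of wealth by the lucky and the well-endowed.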
Joshua: How is that happening? Well, they’re agents who have crummy vision, so they don’t see that there’s a higher sugar terrace, but they also have very forgiving metabolism. So they’re just happily sitting there with their little lemonade stand in Santa Fe, going along perfectly well, while these other crazy agents are competing like mad on Wall Street. So you get that sort of thing. You also get evolution. If we let agents have offspring that breed true, they pass on to their kids their genetic attributes, metabolism and vision. Over time, you see evolution take place on the landscape. The average vision increases, average metabolism falls, and you get the evolution of ever fitter agents under this very simple sugar selection mechanism. Also, the sugar wealth is changing over time as people continue to accumulate, right? If everybody’s accumulating wealth, then there’s a wealth distribution that’s unfolding. And we thought, well, what does that look like?
Joshua: Rob and I knew that real distributions of wealth have a so-called scaling law, they’re Pareto distributed. If you graph them, the X axis is how much wealth you have and the vertical axis is how many people have that level of wealth. The thing is very skewed. It goes from high left to low right, meaning lots of people have very little or a moderate amount and a few people have a great deal. It’s not normally distributed or symmetrical. It’s skewed, as are many, many other phenomena. This is one of the central themes of the Santa Fe Institute, the apparent universality of this kind of pattern. Anyway, we ran the sugar thing thinking, okay, we have no clue what this is going to look like. It could look like anything. We had no real sense. We just thought, let’s just run it.
Joshua: And lo and behold, the thing forms up into a Pareto law of wealth. It was really the first instance where it occurred to us that, you know what, you could use this to grow actual real-world data. And it’s an explanatory scientific instrument. That’s how I think of it. I think of this as a generative explanatory standard, and the agent-based model is a scientific instrument. Then we went on and did many, many other variations and runs, and we wanted to break down all silos and departments. We wanted one run in which all sorts of things were going on: disease transmission, combat over sugar, cultural assimilation, reproduction and selection, trade for sugar, all sorts of things. We called this whole artificial civilization the proto-history, a kind of toy history of civilization.
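[Editor’s note: a standard way to quantify the skew described here in a list of simulated agent wealths is the Gini coefficient. A minimal sketch:]

```python
def gini(wealth):
    """Gini coefficient of a list of nonnegative wealths:
    0 = perfect equality, (n-1)/n = one agent holds everything."""
    w = sorted(wealth)
    n, total = len(w), sum(w)
    # Standard formula on sorted data: G = 2*sum(i*w_i) / (n*total) - (n+1)/n
    weighted = sum(i * x for i, x in enumerate(w, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n
```

A heavily skewed, Pareto-like population gives a Gini well above that of an equal one; plotting the wealth histogram on log-log axes, as described above, is the complementary visual check for a power-law tail.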
Joshua: I showed that at the Santa Fe Institute and asked the audience, “Does this remind anybody of anything real?” A hand shot up. It was George Gumerman, who I didn’t know. He said, “It reminds me of the Anasazi.” I said, “The Ana-what? What is that?” And he explained that they were an ancient civilization that flourished in what is now northeastern Arizona for hundreds of years, and then mysteriously vanished from these lands. I thought, “Well, that’s very interesting. But do you guys… You’re archeologists. You’re just these dusty guys crawling around looking for potsherds and stuff. Do you have any data?” And I assumed the answer would be no, but they did have tremendous data on everything for this entire millennium of civilization in this place called Long House Valley, because they had tree-ring dating, all these great methods, dendroclimatology.
Joshua: They were able to reconstruct household locations, household sizes, and infer demographic patterns. And I thought, “You know what? Let’s try to build an artificial civilization and try to understand the mystery of this disappearance.” Their data begins at about 900 AD and ends at about 1350 AD, at which point they all vanished. So I thought, well, let’s try to grow the Anasazi in an agent-based model. And that became the whole Artificial Anasazi project, and it was really, I think, the dawn of computational archeology, where people do this all the time now. The results ended up in the Proceedings of the National Academy of Sciences, and Jared Diamond wrote a very nice article in Nature, “Life with the Artificial Anasazi.”
Joshua: After the wealth distribution, I would say it was the second serious empirical application that we were involved with. Again, you’re saying, here’s a real empirical target, a real-world thing, so we build an agent-based model that uncovers the real mechanism. And since then, infectious disease modeling, I think, has really been revolutionized, and I think economics is being revolutionized, although most economists don’t know that.
Jim: It’s amazing to me, that economics has taken so long to take up agent-based modeling. They’re way too much in love with their differential equations for my taste, like the cop who comes along and sees the drunk on his hands and knees under a street light. The cop says, “What are you doing on your hands and knees?” “Oh, I’m looking for my car keys.” And the cop says, “Did you lose them here?” He says, “No, I lost them over in the parking lot, but there’s no light there.”
Joshua: Yeah. Right. Exactly. They’re very committed to that, and they’re also very committed to this neoclassical economic agent, homo economicus, who really is a cold optimizer. Herbert Simon and Danny Kahneman, the behavioral economists, lots and lots of experimentalists have shown that human beings do not act like homo economicus. They don’t optimize. They have biases. They rely on all kinds of heuristics and shortcuts. They’re very bad at statistics. There are systematic departures from canonical rationality. I think most economists would say, “We know that. But this is the only agent we have. We have this calculus-based rational actor agent, and unless there’s an alternative, we grant these points, but we’re sticking with this.” The whole thrust of Agent Zero was, okay, let’s have an alternative. Let’s have a real alternative that is also formal and mathematical, but includes more cognitive plausibility: an agent that has emotions, that has bounded deliberation, that has crummy information, that has conformity effects and networks, the same kind of limited individuals.
Joshua: Paul Samuelson, I think, said, “You can only beat a model with another model.” So a pile of gripes about the economic actor doesn’t change practice. You’ve got to give people some alternative, and others are trying to do that in earnest.
Jim: That’s great. We’re going to get into Agent Zero in more detail a little later. Before we go there, though, let’s hop back to the Anasazi model because that’s actually the perfect opening for my next question, which is, you build a model, you take some data. You say, “All right, what model might fit this data?” And you play around with it. It raises the question of what’s usually called model validation, which is, all right, you’ve created a model, how do you know that it really has much to do with the way the Anasazi actually lived, as opposed to being overfitted? You can always use evolutionary techniques and find some model that would replicate the data. But how do we feel comfortable that our model is more or less valid with respect to the target system?
Joshua: It’s a great question. Always a central question, and always fraught with lots of uncertainties and so forth. I think in that case, there was a lot of ethnographic evidence about marriage systems, for example, and it turns out the Anasazi really are the ancestors of the modern Hopi Indians, who you can interview and survey. There were rules for inter-clan marriage and rules for whether the husband moves in with the woman or the woman moves in with the husband. There were all these rules that were conserved over the evolution of this ethnic group. And so we actually had pretty good evidence from their oral tradition for what sort of rules applied to intermarriage and the age at which you’d start new families.
Joshua: Again, it was a Sugarscape-type situation. So they weren’t doing things that were particularly elaborate. They were just looking around for the best sugar site in this landscape and moving through it if they couldn’t sustain their household. I should say that the data they had on the water table, the hydrology, the fertility of the soil, drought severity was unbelievably complete. I mean, we could build an entire serious, validated, good history of the environment, the landscape. They lived on maize, which was an ancient corn. So that’s essentially sugar. So we really had an actual Sugarscape. If you look at one of these runs, you’ll see the evolution of the water table over time, which is closely associated with the fertility of the land. When there’s a lot of sugar, populations can flourish and grow, and when there’s very little sugar, the population declines. We see these fluctuations in the true historical record and in the agent-based model.
Joshua: Again, we try just to generate the demography and the settlement patterns. So we don’t know what their favorite dances were or what their favorite songs were or any of these things. Although the Hopi probably do know. We tried to stick with the archeological and ethnographic anthropological record as data permitted. I think for the limited goals we had, which were just demography and spatial organization, the thing performs very well. There’s always this question, right? I mean, history is one run. You can’t rerun history a million times and see if that run is an outlier or not. But you can run your model a zillion times and see if, in the statistical distribution of model runs, the real-world history you have is centrally located, right?
Joshua: If it is, then you think, okay, the model is capturing this in a strong way. If you run it a million times and the actual history is way the heck out, 27 deviations to the right, then you think, “You know what? I’m not capturing what went on.” So you come at it in different ways. You have the actual output, and you could evolve different generative rules. And there’s a whole initiative that I’m part of called inverse generative social science, where we’re doing precisely that. We’re using evolutionary computing to design different rules that could generate the same thing. And colleagues of mine actually did that with the Anasazi and evolved other rules that also produce the history. The motto is, if you didn’t grow it, you didn’t explain it, but there might be many ways to grow it, and we should be discovering those using evolution, AI, all these other modern techniques. So I’m all for that.
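[Editor’s note: the “is the real history centrally located?” check described here can be sketched as follows. `model_run` is a hypothetical stand-in for one stochastic realization of whatever model is being validated; the idea is to collect a summary statistic from many runs and locate the observed real-world value inside that distribution.]

```python
import random
import statistics

def model_run(rng):
    # Placeholder for one stochastic model realization returning a summary statistic
    # (e.g. peak population, final settlement count).
    return sum(rng.gauss(0, 1) for _ in range(10))

def locate_observation(observed, n_runs=1000, seed=0):
    rng = random.Random(seed)
    stats = sorted(model_run(rng) for _ in range(n_runs))
    rank = sum(s <= observed for s in stats)       # how many runs fall at or below it
    percentile = rank / n_runs
    z = (observed - statistics.mean(stats)) / statistics.stdev(stats)
    return percentile, z
```

A percentile near 0.5 and a small |z| says the one run history gave us is unremarkable among the model’s realizations; a percentile near 0 or 1 (the “27 deviations out” case) says the model is not capturing what went on.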
Jim: Great. Yeah. We're going to get to inverse generative social science in just a minute, but let's wrap up on models, maybe around model validation. I remember when I built definitely the most extensive agent-based model I ever did, I was operating all by myself in my basement lab out in Virginia; I didn't know a thing about it. It had politics, economy, religion, and all this stuff. I was so proud of it. When I went out to [inaudible 00:33:56] in 2002, I showed the people that, and they said, "Well, how many parameters does it have?" I go, "47." They basically laughed and said, "Boy, you got a lot to learn." One of the things I learned in my time at the institute as a researcher was, you should not have too much confidence in a model with 47 parameters.
Joshua: No doubt.
Jim: In fact, their rule of thumb was three is tolerable, two is better, and one is best. It was actually very, very good for me. As it turns out, it makes you think much harder, coming up with a relatively realistic model with three parameters. If you're just allowing yourself to add parameters as you see fit, you can just build this gigantic pile of shit, which doesn't really explain anything.
Joshua: I think Von Neumann has this funny thing. He said, "With four parameters, I can fit you an elephant. With five, I can make him wiggle his trunk." No, you're absolutely right. I mean, there's a distinction between the parameters that you can measure and the ones that are freely adjustable, that you sweep in some way. But yeah, it's very important. That's absolutely right. If you have too many parameters, you can tune it to get anything you want, and you need to prune the models as vigorously as you can. I mean, Agent_Zero, I think, has three or four parameters. I really tried to keep everything very, very minimal, precisely for this reason. But yeah, it's hard to do. It's also true that agent-based modeling, you can learn how to do it pretty quickly. There are these wonderful tools. NetLogo is one. Jim, you might've used that.
Joshua: It's very easy to download NetLogo onto your computer, and they have three very simple tutorials that basically high school kids can do. If you do the three tutorials, it might take you a day or maybe two days, and then you're up and running. You can build simple agent-based models of your own. I always say agent-based modeling is easy to do badly. It's not easy to do differential equations badly, because you still have a lot to learn before you can even abuse the thing.
Jim: Yeah. It's funny. Before we went to that conference you guys had, I did do some looking into what was going on. And a lot of social science agent-based modeling does have 45 parameters and shit like that. And I go, "Goddamn it."
Joshua: Absolutely right. And you give a talk and people say, "Well, why didn't you include this? Why didn't you include that? Doesn't this matter? Blah, blah, blah." And the answer is always: look, they think you're done when you can't add anything more. I think you're done when you can't take anything more out. Einstein said, "The model should be as simple as possible, but no simpler." You're trying to keep these things to the absolute minimum number of moving parts that let you get at the issue you care about. And that's really an art, as you know, Jim.
Jim: Yeah. That’s where you need to be part of a community that will critique your models, right? And particularly be part of a community that critiques your model towards parsimony. That’s really important. It’s funny, you mentioned the Einstein quote. I had it down here in my notes.
Joshua: Oh good. There you go. You’re ahead of me. You’ve been ahead of me on every point so far. So you’re batting a thousand.
Jim: You beat me on that one, actually. Now let's pivot to the next topic, which is inverse generative social science. You guys had a quite interesting meeting back in January. In fact, it was the last scientific meeting I attended in person.
Joshua: And you presented there. Thank you very much.
Jim: Yeah, that was fun. It was a really good event. I think it really took the whole idea of agent-based modeling to the next level. As you started to talk about a little before: if historically agent-based modeling was making assumptions about little people running around and seeing what comes up when they interact, inverse generative social science is essentially the opposite. Can we take data, or known behaviors, I guess you'd have to call that data, and invert the process to create models? Talk more about that. Go into as much depth as you want, because I think this is a tremendously interesting area.
Joshua: Good. So yes, we had this nice founding workshop that you attended back in January. And we just had another big panel at an international conference on modeling and simulation in Milan. Of course, nobody was able to go to Milan, but we had this entire other panel on inverse generative social science. And yes, as you say, in the traditional agent-based model, we handcraft the agents and see if they can generate some target: the history of the Anasazi, a segregation pattern, a wealth distribution. Inverse generative social science stands that on its head. It says, don't design the agents at all. Tell me the target, the segregation pattern, the wealth distribution, and now I'm going to let evolution produce agents, and the fitter agents survive. When I say agents are fitter, I mean they're better at generating the target.
Joshua: So I'm going to start not with agents at all, but with agent constituents, parts of agents, little parts of rules: look around, add two numbers, things like this. Kind of action primitives and concatenation operators. So the primitives, as I say, are just rule constituents. They're little pieces of agents, little modules, little building blocks of agents. Initially there's some random distribution of these agent architectures. Say you let them loose to see if they can climb the Sugarscape. Some of them can't climb the Sugarscape because they don't have the vision rule, or they might have other rules, like imitate the richest agent, or other sorts of things. But the primitives are just things like imitate, search, and so forth.
Joshua: Then these are combined in different ways. Using these operators, you can add them, subtract them, divide one by the other, take a square root, say if this then do that, not this or that. There's a whole soup of combination possibilities. Every time the thing combines these primitives, it creates an agent, and the agent is released on the landscape and he lives or dies, does well or poorly. The ones that do well get to replicate. They meet up with other strings of this kind. The agents could be encoded as strings or trees or in various ways. And you can let them mate and have offspring.
Joshua: So the parents might be all zeros and all ones. Mom might be all zeros, dad might be all ones, and their offspring could be half ones and half zeros, meaning that they take rules from the parents and recombine them. And as this process proceeds, they get fitter and fitter and fitter, until you arrive at these agents that actually generate the target. They're fit, that is to say the model output is very close to the target. That means the micro world is a fit specification. So you're really evolving agents and agent-based models, starting with the data, rather than starting with the models and trying to generate the data. We've done this a lot now, and I think it's a huge watershed for agent-based modeling. We've done inverse Schelling segregation, inverse Anasazi, inverse patterns of drinking behavior, and there are all sorts of other areas I think are very fertile ground for this.
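The loop Joshua describes, a random soup of rule constituents, recombination of mom's and dad's strings, and selection on how well the resulting agent generates the target, can be sketched minimally. This is a hedged illustration: matching a fixed bit pattern stands in for "generating the target macro pattern," and real inverse generative social science evolves rule trees, not bit strings:

```python
import random

rng = random.Random(42)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for the target macro pattern

def fitness(genome):
    # Fitter = better at generating the target (here, simple matching).
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(mom, dad):
    # Offspring take rule-pieces from each parent and recombine them.
    cut = rng.randrange(1, len(mom))
    return mom[:cut] + dad[cut:]

def mutate(genome, rate=0.05):
    # Occasionally flip a rule constituent.
    return [1 - g if rng.random() < rate else g for g in genome]

# Start from a random distribution of agent architectures, not designed agents.
pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]        # the ones that do well get to replicate
    pop = survivors + [mutate(crossover(rng.choice(survivors),
                                        rng.choice(survivors)))
                       for _ in range(20)]

best = max(pop, key=fitness)
```

After a few generations the best genome matches the target closely, without anyone having hand-designed it.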
Joshua: So agent-based models are often criticized for being one-off. You say, "Well Josh, you dreamed up this nice model, but aren't there lots of others that would do the same thing?" And here we're able to say, yeah, there might be an entire family, and here's a family that does the same thing. Another critique is: well, you came up with that, but what if I change it a little? Do the results stand up? Is it robust to small changes in the rules? Again, if you have some neighborhood of agents within which the pattern always emerges, you could say, "Yeah look, there's a cluster of architectures, all of which generate this thing," which confers some robustness. It allows us to address a couple of important questions about agent modeling, and mainly it comes up with weird, cool, interesting solutions that you might not have thought of, just the way AlphaZero comes up with all kinds of moves that human chess players and other computer chess players just don't think of.
Joshua: So we're using AI in a totally different way here. I mean, most artificial intelligence is about displacing humans at work, augmenting humans as diagnosticians, or defeating them in chess and other areas, but not really about explaining them. And we're saying, let's use it to explain them, to understand how humans behave, rather than just displace humans as truck drivers. So it's a whole new move.
Jim: Yeah. It's a really cool thing. On the other hand, my own home scientific field is evolutionary computing. I've been using genetic algorithms, genetic programming, simulated annealing, et cetera, all that stuff for a long time. One of the things we talked about at the meeting, and there weren't any solid answers, but people are now thinking about it seriously, is that when you're using these kinds of inverse evolutionary techniques, questions about model validation become harder and more important. If you use genetic programming, one of the techniques I'm most familiar with, you can get some really convoluted stuff. That's like, what the hell? What's going on here? Right? It's not like a nice hand-built model where you can say, "Okay, you shake hands once every 10 times you meet somebody." This convoluted bit of code may be very, very difficult to interpret. And so interpretability, and then model validation, asking does this make any kind of logical sense, are much more difficult.
Joshua: Well, that's certainly true. You're absolutely right. I'm sure you remember, and others might find it interesting to go back to, John Koza, who was one of the fathers of genetic programming. The expressions you get are these very, very complicated nested Lisp expressions that no human could possibly understand, so you get a very good fit from a model that you just can't interpret at all. And in our work on drinking, we have a whole trade-off curve, where you get some expressions that are very powerful but very convoluted, with a million nested operations you don't understand. So high fit, low comprehensibility at one end, and at the other end high comprehensibility and great parsimony, but not quite as powerful predictors of what happens.
Joshua: So there's a kind of trade-off between comprehensibility and predictive strength, if you will, which I also think is very interesting. If you're talking about communicating your results and convincing people to believe your model and all of these sorts of things, you have to be able to explain what's going on. Evolutionary programs don't give a hoot whether humans understand what they're putting out. They just care that they're putting out fit agents, and the fit ones may be really very impenetrable to the human. So yeah, this is a big problem.
Joshua: Again, if you're interested only in prediction and you're just asking what will generate the thing, I may not care. But I think when we're talking about really explaining human behavior, really trying to get our teeth into why things are happening, you have to be able to understand and articulate clearly: what's their narrative? How are they getting through the day? What information are they taking in? What are their interactions with others? How does all that shape their actual observed behavior? So I agree, Jim. I think we're at the dawn of all that. I think it's one of the principal things this emerging field has to sort out.
Jim: There's a brute-force tool, which we use in genetic programming: putting a parsimony cost on the output, right? You just put it in the fitness function. All right, we're going to deduct from your fitness based on the length of your code, right? And that forces the system, by brute force, no smarts at all, to produce shorter code, and presumably, though with no guarantee, particularly in languages like Lisp, which are hard enough to understand by themselves, code that's more amenable to humans. Let's just say it's a work in progress, but it's going to be an area that I'm going to follow carefully: how do we use this really neat, powerful inverse generative social science idea and get models that we can have any confidence in and have explainability? Very cool. This is one of the most interesting new things I've run across recently.
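The parsimony penalty Jim describes is just an extra term in the fitness function: with equal predictive error, the shorter program wins. A minimal sketch, where the weight `parsimony` is an arbitrary illustrative constant:

```python
def penalized_fitness(error, code_length, parsimony=0.01):
    """Higher is better: deduct from fitness based on prediction error
    and on the length of the evolved code."""
    return -error - parsimony * code_length

# Two candidate evolved rules with identical predictive error:
short_rule = {"error": 0.10, "length": 12}
long_rule = {"error": 0.10, "length": 480}

better = max((short_rule, long_rule),
             key=lambda r: penalized_fitness(r["error"], r["length"]))
```

Selection under this fitness prefers `short_rule`, which is the whole point: evolution is pushed, with no smarts at all, toward expressions a human has some chance of reading.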
Jim: Anyone who's interested in the intersection of social science and computational methods of this sort should really look into inverse generative social science. Let's go on to our next topic, which is your good buddy Agent_Zero. You've talked about it a few times. I read the book that you were so kind to give me, with a nice inscription in the front. One of the things I pulled out of it was, you said that Tolstoy was the first agent-based modeler. Tell us about that a little bit.
Joshua: Well, I think Tolstoy, if you read War and Peace, there are long sections where he talks about how a king is history's slave, performing for the swarm life. The individual's behavior is a product of his social milieu and these social influences. He hated the Great Man theory of history, right? Spengler and these other historians had argued that history is shaped by great men of great ambition and vision, and Tolstoy is saying, "No, it's not. They're just little..." He called individuals the differentials of history. So what you really see is this vast accumulation of influences between individuals that basically are just performing for the swarm life; they're influenced by others. And this is one of the ideas that is embedded in Agent_Zero. The title of the book is Agent_Zero: Toward Neurocognitive Foundations for Generative Social Science.
Joshua: So again, it's pursuant to this idea of generative explanation. And I'm saying, to explain the phenomenon, you've got to grow it in a population of cognitively plausible agents. What are cognitively plausible agents? And I'm saying, again, keep it as simple as you possibly can. The minimum requirements are that the agents have emotions. I used fear because we know a lot about fear, and we even have simple equations of how fear conditioning happens. So let's give them a really simple, stripped-down fear module that has one parameter. Yes, they do deliberate, and they have a reason module. If you say reason is a slave to the passions, the passion module is the emotional piece, the fear piece. And the reason part is a deliberative module where, yes, they get some information from their world, but it's imperfect and limited and typically local.
Joshua: And they do poor statistics on it. They make all the mistakes that Kahneman and Simon both won Nobel prizes for establishing, and that the huge literature of behavioral economics and experimental psychology has now documented. People neglect base rates, they overweight losses. There are framing effects: if you tell a person, "There's a 10% chance you'll die from the surgery," they'll make one sort of decision, and if you tell the same person, "There's a 90% chance that you'll survive," they make a different one. They're completely equivalent, but the way it's framed matters a lot. So there's an entire menagerie of these departures from rationality, and Agent_Zero exhibits a couple of the most salient ones. But again, just a few that matter a lot.
Joshua: And they're influenced by others in their network. So they have weights. They put weights on other agents that are also emotionally driven and statistically hobbled. And when you put all of them together, they can do all sorts of fascinating things. Some of the agents live in a really violent part of the world, and their decision is whether to blow up sites around them, based on how afraid they are and their statistical estimate of the likelihood that a random site will blow up in your face, an ambush or something. But another agent lives in the South, where there are no explosions. It's completely peaceful. He never encounters any dangerous sites or aversive stimuli at all. In the simple model, there are just three agents, because that's the smallest number that would permit me to study majorities.
Joshua: So again, simple as possible. And these two agents in the North are experiencing all this violence, and they blow up their sites, and through the contagion of their dispositions, their fears, their appraisals, and so forth, the Southern agent, who has never had a bad experience with these sites, also blows up his village. He joins the lynch mob, even though he's never had a bad experience with black people. And in the more disturbing run, he even leads the lynch mob. The person who had no bad experience leads the lynch mob. Enter Tolstoy: is that because he's a leader, or just susceptible to dispositional contagion? And I say, with Tolstoy, he's not a leader. He's just susceptible to the dispositions of others. But in that framework, he can still go first.
Joshua: That doesn't make him a leader. He's a follower, but he's the first actor. And in the more extreme cases, there's a jury trial. I call them all computational parables, just to be very humble about what I'm doing here. It's computational impressionism. But in any event, in the jury case, every single person would have acquitted if they were alone, but in the jury chamber, where they can amplify each other's suspicions and dispositions and fears, they unanimously convict.
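The arithmetic behind these parables can be sketched roughly in the spirit of the book: each agent's total disposition is its own affect plus its own probability estimate plus a weighted sum of the others' dispositions, and it acts when that total crosses a threshold. The numbers and weights below are made up for illustration, not taken from the book; the point they show is that the third agent, with zero direct experience, can end up with the highest disposition:

```python
def total_disposition(i, V, P, W):
    """Solo disposition (affect V + probability estimate P) plus a
    weighted sum of the other agents' solo dispositions."""
    solo = V[i] + P[i]
    social = sum(W[i][j] * (V[j] + P[j]) for j in range(len(V)) if j != i)
    return solo + social

V = [0.40, 0.35, 0.0]   # fear: agent 2 has never had a bad experience
P = [0.30, 0.30, 0.0]   # local statistical estimates of danger
W = [[0.0, 0.2, 0.1],   # illustrative network weights
     [0.2, 0.0, 0.1],
     [0.9, 0.9, 0.0]]   # agent 2 weights the others heavily
tau = 0.5               # action threshold

D = [total_disposition(i, V, P, W) for i in range(3)]
acts = [d > tau for d in D]
```

With these invented numbers, all three agents act, and the agent with no aversive experience of his own has the largest total disposition, so he can be the first actor without being a leader in any meaningful sense: pure disposition contagion.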
Joshua: So it's a form of universal self-betrayal that you see in that case. These are the kind of, I think, important, interesting, counterintuitive results you see in these very small Agent_Zero experiments. So yes, it's Tolstoy in the sense that you can lead the mob because you're performing for the swarm life, not because you're a great man of history. And since then, we've done a lot of work, again, trying to replicate real psychology experiments, and scaling up to try to grow other things like vaccine refusal, other behaviors, financial panic. I've got a big project with the OECD, with a group called New Approaches to Economic Challenges, NAEC, that tries to use Agent_Zero to study financial panic, contagious fears, and cascading economic phenomena. So there's a lot of work on economics I'm doing with Agent_Zero.
Joshua: There's other work on conflict. I'm also developing a variant called Addict_Zero, where instead of the fear module, there's a simple model of the neurobiology of addiction. We're building a kind of toy initial conceptual model where agents become addicted. Of course, they can deliberate about the costs and benefits of being addicted. And again, they can be influenced by others, who might be suppressive or promotive of that behavior. So it's very Agent_Zero-like; it's just that it's addiction, not fear. And we have some very simple models of that. So build the general model, and then we have separate groups on addiction to nicotine, addiction to drinking, and addiction to opioids. But the idea is to make Agent_Zero a kind of unified framework for the study of addictions of various types.
Joshua: So there's a lot going on. And again, that would be a good example for inverse generative social science also. Given the pattern, which is what we've been doing in the drinking version, try to grow the agents themselves and see if we can get some insight, and maybe it'll suggest novel interventions or timing of interventions. I mean, early in your addiction, you might be swayed by arguments and interventions by others. But at a certain point in your addiction, you might not. Those deliberative and social channels might not matter anymore. And so there could be important issues around the timing of interventions and their content. That's a long, rambling answer to what's going on with Agent_Zero, but that's some of what's going on anyway.
Jim: Yeah. One other item in the book was that not only do you have the nice three dimensions of emotion or affect, cognition or reason, and the social, but you point out that there can be entanglements between those dimensions. Classically, affect and cognition, or affect and the social, can influence each other. Why don't you talk about that a little bit.
Joshua: Absolutely. Right. Yeah. Again, I did a variant where they can be entangled, and it's well known that if you're under tremendous stress, or you're in a terrible fear state or a state of terror and anxiety, your reasoning capability is compromised. You're driven mostly by fear. If I throw a snake in your lap, you don't hang around and say, "Now let's see, is that a garden snake or a black mamba?" You just freeze, because you're wired to just plain freeze. And in that state, you're not capable of deliberating, so the heightened affective state compromises the deliberative module. I have a variant in which that's exactly how it works: your appraisal of the situation is heavily determined by the pure emotion of the situation. And in others, the emotional valence is low and the reasoning component can dominate, but there's always an interplay between the two. And again, this is another very rich area, but there's a lot of experimental work that establishes this point.
Joshua: So again, a cognitively plausible agent should be able to exhibit that in some crude way. And in the book, I say all the time: look, the game here is not to get these modules finished. It's to get the synthesis started. And I'm encouraging, and playing myself with, all kinds of different modules and all different kinds of functional relationships among them. So in the original they're just additive, but in the entanglement variants, they're non-linearly tangled up with each other. I think all of this is wide open, because the combinations are so numerous. Yeah, evolutionary programming is a great way to start searching for the agent architectures that generate the results we see in experiments and in the world.
Jim: Let me reach back to something we talked about earlier, when we were discussing what distinguishes agent-based modeling from, say, coupled systems of differential equations. And that's learning. You did a quite interesting thing in Agent_Zero: at least as I read it, you have a simple but quite powerful classical conditioning learning algorithm, the Rescorla-Wagner model. So learning is quite pervasive in the system, perhaps more so than in many agent-based models running around out there.
Joshua: Yeah. This was the idea that you acquire fear by a process of associative conditioning. So just to give a conditioning 101: I'm not a Skinnerian or any of this; it's just a simple way of incorporating this idea. And again, in the book, I talk about the neuroscience of fear and even cite several fMRI and other studies about fear acquisition. And yes, there's a simple equation that has all the limits of other simple equations. Box said, "All models are wrong, but some are useful." And this is an idealization, a useful idealization. Anyway, the idea is the amygdala is centrally implicated in all of this, and we can do imaging registering the recruitment of the amygdala in different settings. If I throw the snake in your lap, your amygdala is immediately recruited, and it entrains an entire retinue of endocrinological responses: adrenaline, all sorts of other things happen, freezing, lots of other signs of the fear state.
Joshua: If I put a shock cuff on you and I just shock you out of nowhere, your amygdala lights up. There's an aversive event you have to deal with, and it gets your attention. It needs to be surprising and salient to get your attention strongly. By the same token, if I just show you a blue light, your amygdala doesn't care at all, and you don't notice it. It's not a big deal. So yeah, no response at all. But now if we start pairing the two, blue light, shock, blue light, shock, if I do that enough and then show you the blue light alone, you get the same amygdala response as if you were shocked. You have associated the blue light with the shock. And for the Agent_Zero creatures here, they are in this violent part of the world.
Joshua: There are calm, benign indigenous sites, but there are also violent ambush sites. At a certain point, the agents come to identify this region with the violence, and they begin to fear not only the ambushers, who are the shock in this analogy, but also the surrounding village, which is just the blue light. And so you get the Mỹ Lai massacre, for those of you old enough to remember that. Jim, you are, we are. But in any event, they do things to the innocent sites because those are associated with the violent sites. And this is because of fear learning. That impulse can be transmitted between agents. We also have fMRI studies and other very interesting literature on the transmission of fear from one person to another, and the role of network effects in the transmission of fear and other emotions. So the neuroscience of fear is really enlisted in the design of this agent.
Jim: I guess I'll point out that this would be an area for experimentation: you could put other learning rules in there and see what happens.
Joshua: In the book and in subsequent publications, I've said, "Look, the Rescorla-Wagner model is one learning model. But there's temporal difference learning." You could put a neural net in there; there's a million ways to swap out modules. I proposed that we all get together and develop a kind of agent construction kit where you could choose modules that could swap in and swap out, or that evolution could swap in and swap out. So yeah, absolutely. Everything's up for grabs, as far as I'm concerned. What I was going to say is that fear acquisition is important, but fear extinction is also very important, and it's the same mechanism. If I stop the pairings, eventually you stop being afraid of the blue light.
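The Rescorla-Wagner update itself is one line: on each trial, the associative strength V moves a fraction alpha of the way toward the maximum lambda when the cue is paired with the shock, and back toward zero when it is not, which gives both acquisition and extinction from the same mechanism. The learning-rate value below is an illustrative choice, not the book's calibration:

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Track associative strength V over a sequence of trials.
    trials: list of booleans, True = cue paired with shock, False = cue alone."""
    V = 0.0
    history = []
    for shocked in trials:
        # Delta-V = alpha * (lambda_t - V); lambda_t is lam on shock trials, else 0.
        V += alpha * ((lam if shocked else 0.0) - V)
        history.append(V)
    return history

acquisition = rescorla_wagner([True] * 10)                  # fear builds with pairings
extinction = rescorla_wagner([True] * 10 + [False] * 10)    # then fades without them
```

Acquisition climbs toward 1 with repeated pairings; stop the pairings and the same equation drives V back toward zero, which is the blue-light extinction described above.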
Joshua: This is also true in the case of acquired human fears, and it's very important in a case like COVID. So for example, I have some modeling in epidemiology that tries to look at these fear dynamics. It's called coupled contagion modeling, in which there's an epidemic of disease, but there's also an epidemic of fear about the disease. The epidemic starts up and gets hot. People get afraid of the epidemic. That fear spreads, and it produces social distancing, self-protection. People go into isolation and so forth. They adapt their behavior, as we were talking about before, in ways that suppress the epidemic. Okay. But now the infection goes down and people stop being afraid, and they come out of self-isolation prematurely. Their fear extinguishes, and it produces premature re-openings of the economy and the like, which just pours susceptible people back onto the infected embers.
Joshua: It turns out, even if there are only a few infectives around, you can reignite the thing. So the fear helps you in producing distancing, but its evaporation produces this complacency, and you get second waves. And this is just what we saw in 1918 in the great pandemic of that time. And it's what we're seeing now with COVID, with these premature relaxations of distancing, mask avoidance, reopening schools and so forth. This is the downside of the fear system: it can also evaporate in ways that encourage complacency when vigilance is still required.
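A toy version of this coupled-contagion dynamic can be written as an SIR model whose contact rate is damped by prevalent fear, with fear rising as infection rises and extinguishing as it falls. All parameter values here are invented for illustration, not calibrated to Epstein's published coupled-contagion models:

```python
def coupled_contagion(days=300, beta0=0.3, gamma=0.1,
                      fear_gain=20.0, fear_decay=0.1, damping=0.9):
    """Toy discrete-time SIR with a fear level F that suppresses transmission."""
    S, I, R, F = 0.999, 0.001, 0.0, 0.0
    trajectory = []
    for _ in range(days):
        # Fear rises with current infection, extinguishes without it.
        F += fear_gain * I * (1 - F) - fear_decay * F
        F = min(max(F, 0.0), 1.0)
        beta = beta0 * (1 - damping * F)   # fearful people distance
        new_infections = beta * S * I
        recoveries = gamma * I
        S -= new_infections
        I += new_infections - recoveries
        R += recoveries
        trajectory.append(I)
    return trajectory

with_fear = coupled_contagion()
no_fear = coupled_contagion(fear_gain=0.0)   # plain SIR for comparison
```

Fear clamps the epidemic peak well below the plain SIR peak, but because fear decays once infection falls, transmission rebounds while susceptibles remain, which is the mechanism behind premature reopening and second waves described above.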
Jim: Yeah. One thing, though, this is a little bit of a nerdy, inside-baseball thing: one of the limitations of the Rescorla-Wagner model is that it does not support rapid relearning of what had been previously learned. That may actually be relevant in a case like COVID. We're actually even seeing it: Arkansas starts getting bad, and then they switch back to the previous mode, which they had already learned. Probably too nerdy for our audience, but I just wanted to point that one out.
Joshua: All these things have to be taken into account for sure.
Jim: I've intentionally avoided too much talk about the pandemic, but you've done a lot of work on it, so let's go there. We've kind of covered this fear cycle and its coupling with the disease itself, but you may have some more things to say. You wrote a very interesting article for Politico back on March 31st, 2020. It's quite interesting to read it. In the article, you quote Fauci as saying, worst case, 200,000 people will die. Okay, we didn't know that much then, but anyway, maybe go into a little more detail about knowledge, fear, policy, how those things interact, and how government ought to be at least playing a role in managing our responses to these things.
Joshua: So, in the immediate term, the only real tools we have to mitigate this pandemic are social distancing, mask wearing, and personal hygiene. I think the administration has done an absolutely abysmal job of containing this disease, because it has abjured these approaches; people are mocked for wearing masks. There's all this premature abandonment of social distancing, and you're seeing the continuation of this disease. In the Politico piece, I say, look, if you're interested in reopening the economy, you actually have to control COVID, and the administration really hasn't done that. It hasn't shown any real respect for the Faucis and serious scientists on the coronavirus task force, and it's now got people with no credentials in infectious disease giving advice and so on. So I mean, I think in the immediate term, these are the only measures we have to really clamp the thing down, and they're not being pursued in a uniform, rigorous way with the support of the federal government.
Joshua: I think that's why we're continuing in this state. Again, on the Agent_Zero level, there's this dynamic: yes, where those measures are applied, they suppress the disease, but then the disease gets down, people get sloppy again, and the thing comes back. So you need to keep these measures on until the disease is really tamped down to a point where any resurgence can be contained. Now, here again, you need testing to know when a resurgence is even going on. This has also been shortchanged. I mean, there's been this lunacy that testing produces more cases, or that there'd be fewer cases if there were less testing. That's just insane.
Jim: Yeah. I mean, that’s like not logic at all. That’s just total bullshit. Right?
Joshua: Absolutely. It's just crazy, and that's harmful as well. I mean, we've done a miserable job testing, which makes it hard to determine things like the fatality rate per case, stuff like this. So the measures that are available have not been promoted; they've been undermined. The people who should have been in charge have been sidelined. The people in charge are not experts in the field. And there's just this massive denial that these failures have produced hundreds of thousands of deaths. This is the truth. But as I say, back to Agent_Zero, yes, it's been abetted by these cycles of vigilance and premature complacency. Another area where this is going to be important is when a vaccine is actually fielded.
Joshua: I mean, here there's another contagion of fear, which is fear of the vaccine. I think Trump is again magnifying that problem by insisting that it's going to be developed at warp speed and we're going to circumvent all the testing and clinical trials that are required. I don't care if it's safe or not; we're just going to get it out there before election day. I mean, that too is going to just produce a huge reaction against the vaccine when it is produced. So all of these things are very, very problematic and serve to prolong the entire crisis. That's my appraisal of the situation.
Jim: That’s interesting. I’m going to allude back to something that you talked about long ago in the episode, which was being consulted on the smallpox possibility after 9/11. Interestingly, I was too, but this was before I got involved with Santa Fe, and so it just happened to be some connections that I had in the intel community from my days at Network Solutions, where we were part of the critical infrastructure of the United States and all that. Someone said, “Oh, there’s a smart lad. Let’s ask him.” I sat down and sketched possible solutions. I said, “Here’s your cheapest possible solution,” which is to give every household in America 30 days’ worth of food and water. I calculated it would cost about $30 billion. And then if they hit us with smallpox, absolute fucking lockdown for 30 days, National Guard patrolling the streets. You’re seen on the streets, you’ll be shot on sight.
Jim: Some people will die from heart attacks that don’t get treated, et cetera, but we’ll be done. Actually, it’s 21 days for smallpox, but I said, to be safe, let’s have a 30-day supply and a literally shot-on-sight level of lockdown. We could have done it in 28 days with this pandemic. It would require actual leadership. It’s the kind of thing Winston Churchill might’ve done, right? But not the kind of clowns we have, frankly, in both political parties today.
Joshua: We’d be done now. I mean, the thing would be under control as it is in some other countries who were just rigorous about it and said, “Look, this is going to really suck, but we’re just going to suck it up and just close everything for a month or two.”
Jim: It only takes a month if you do it rigorously, right? But no country did it rigorously. Even countries that did it fairly rigorously still had 30% of their people going to work. I’m talking about nobody going out except maybe the people that run the water supply and the electrical system, that’s it.
Joshua: We’re going on a year, and we’re going to be well into another year before this thing is even close to under control. We could have been much farther out of the woods had we just had leadership and stuck to the knowns. The crazy part is that the original guidelines for the administration were to keep everything closed down until you have 14 days of virtually no disease activity. And they abandoned that. That was a good guideline. The administration said, “Don’t do that; actually, liberate your state instead.”
Jim: And actually no state did it, even where they had… I don’t think a single state followed that guideline. It’s quite interesting that even in states that were not Trumpian, that weren’t influenced by that wave, it’s just damn hard to do. We just do not have good leadership. My own personal theory is that the selection process for political leadership today does not attract good people. Would you be willing to run for public office? I know that I sure as shit wouldn’t in the current ridiculous political system we have today. We’ve made the politics so toxic that no good people will come. So we have incompetence, right? Or people of relatively low ability.
Jim: I know a fair number of politicians and they’re nice enough people, but shit, you and I both know dozens of people that are smarter and more qualified than these people.
Joshua: Yeah. The zero order of the politician agent would be pretty crude. I have to admit.
Jim: Yeah, exactly. How do I get my votes? How do I get money? I get contributions and votes, right?
Joshua: Power urge with an applause meter for a brain. That would be my politician zero picture.
Jim: I love it. You should do that. That would be hilarious. Do politician zero. All right, final topic before we wrap it up, since we’re coming up on our time: have you been involved with any other thinking about epidemiological modeling? For instance, in a podcast I recorded yesterday with Melanie Mitchell and Jessica Flack, who are both Santa Fe Institute folks, as you know, we were talking about how simple models can be wrong. The fact that we now know, and didn’t before, is that the contagion in COVID is not well described by R naught; rather, it’s a few super spreaders and a lot of people who aren’t contagious at all, or damn close to it.
Jim: Have you seen or worked on or thought about epidemiological models, goddammit, of COVID that are richer and bring in some of these fat-tail kind of ideas?
Joshua: Yeah, for sure. I know several of the authors of this very nice paper making this point. But yes, R naught was one of the most abused parameters in this entire debate. People thought of R naught as inherent to the disease, which it’s not. It’s not a biological property of the disease. And they thought of it as the kind of exponent for a [inaudible 01:11:48] multiplying type of model: I give it to two people, and then they each give it to two more, so there’s four. Now they give it to two more, there’s eight, and now there’s 16. It’s two to the N sick people after N generations, with two standing in for this R naught number. But that’s not at all how it works, because in the course of the disease you run out of susceptibles and the thing bends over, just like you can’t keep multiplying rabbits. If you have a hundred rabbits, you run out of rabbits at a certain point.
Jim: Run out of grass, if nothing else. Right?
Joshua: Run out of grass. You run out of something. But the point is that R naught is not that. All it means is: if I take one infected person and put him in a pool of susceptibles, how many of the susceptibles does he directly infect? No further transmissions from one person to another, just that first generation of transmission. And this allows you to give some crude estimate of the early exponential phase of the epidemic, but nothing more than that. And even that is subtle: as in these agent models, when we try to produce the R naught in a computational model, we run that experiment again and again and again. We put the guy in a pool of susceptibles, and sometimes he infects 10, but other times he infects five, other times he infects 13, and so forth.
Joshua: You get a distribution. R naught is just the average of that distribution over these many, many experiments. And what’s interesting about this new work is that the distribution is very skewed. You get some people, super spreaders, who have a very high individual R naught, as it were, and others who transmit very, very little. That’s very important when we think about controlling these dynamics or, for example, allocating vaccine. I mean, in our work on smallpox, we also noted that there’s a distribution, that there are some super spreaders and some lesser spreaders, and the whole trick would be to vaccinate the people who are super spreaders.
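[Editor’s note: the repeated experiment Epstein describes can be sketched in a few lines of code. This is a minimal illustration, not the guests’ actual model; the negative binomial offspring distribution with a small dispersion parameter is my assumption, a common stand-in for superspreading skew in the epidemiology literature.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Place one infected agent in a pool of susceptibles, count how many
# people that agent directly infects, and repeat the experiment many
# times. Each draw is one run of the experiment.
R0 = 3.0  # desired average number of secondary cases
k = 0.2   # dispersion: small k gives the heavy skew discussed here

# numpy's NB(n, p) has mean n*(1-p)/p, so p = k/(k+R0) yields mean R0
draws = rng.negative_binomial(k, k / (k + R0), size=100_000)

print("empirical R naught (mean of the distribution):", draws.mean())
print("fraction who infect nobody at all:", (draws == 0).mean())
print("largest single experiment:", draws.max())
```

The mean comes out close to 3.0, but the distribution is wildly skewed: most runs infect nobody, while a few runs infect dozens, which is exactly the superspreader picture.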
Joshua: Why are they super spreaders? Is it because of their biology or their immunocompetence? Or is it because of their position in a social network, their contact dynamics, their movement patterns, all sorts of things? So again, that they’re super spreaders is also not purely biology. It depends on who they bump into. And this couples biology and social dynamics. This is a very important insight. It’s not entirely new… I think most people knew there were super spreaders for a long time. That they’ve identified that it’s a Pareto distribution is very interesting, but the general phenomenon is well understood and has been for a long time. And to the extent that this work helps us identify those individuals for early vaccination or isolation or measures that get them out of contact one way or another, then it’s very useful.
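[Editor’s note: the vaccination point can be made concrete with a toy calculation. This is a hedged sketch assuming a negative binomial stand-in for the Pareto-like transmission skew, not the guests’ model: immunizing the top 10% of spreaders before they transmit removes far more onward transmission than immunizing 10% at random.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical skewed distribution of individual transmission counts
# (negative binomial, dispersion k = 0.2, mean R0 = 3).
k, R0 = 0.2, 3.0
cases = rng.negative_binomial(k, k / (k + R0), size=100_000)

total = cases.sum()
order = np.argsort(cases)
m = len(cases) // 10  # immunize 10% of individuals

# Share of onward transmission eliminated under each strategy.
targeted = cases[order[-m:]].sum() / total          # top spreaders
rand_idx = rng.choice(len(cases), size=m, replace=False)
random_frac = cases[rand_idx].sum() / total         # random 10%

print(f"targeted 10% removes {targeted:.0%} of transmission")
print(f"random   10% removes {random_frac:.0%} of transmission")
```

Random vaccination removes roughly 10% of transmission, while targeting the top decile removes the large majority of it, which is why identifying super spreaders early matters so much.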
Jim: Indeed. I think we’re going to end it right there. Josh, I really want to thank you for an amazingly interesting and very clear conversation. I think our audience has learned a lot about agent-based modeling: approaches to it, what it’s good for, open issues, et cetera. So thanks again, Josh, for a fine episode.
Joshua: Thank you, Jim. Always a pleasure to be with you.
Jim: It was fun.
Production services and audio editing by Jared Janes Consulting, Music by Tom Muller at modernspacemusic.com.