Transcript of Currents 015: Jessica Flack & Melanie Mitchell on Complexity

The following is a rough transcript which has not been revised by The Jim Rutt Show or by Jessica Flack & Melanie Mitchell. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guests are Melanie Mitchell and Jessica Flack.

Jim: Melanie Mitchell is Professor of Computer Science at Portland State University, and External Professor and Co-Chair of the Science Board at the Santa Fe Institute. Melanie has also held positions at the University of Michigan, Los Alamos National Laboratory, and the OGI School of Science and Engineering. She is the author or editor of seven books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems, including her latest, Artificial Intelligence: A Guide for Thinking Humans.

Jim: Jessica Flack is a professor at the Santa Fe Institute. There she directs SFI’s Collective Computation Group, also known as C4. Jessica was formerly founding director of the Center for Complexity and Collective Computation at the Wisconsin Institute for Discovery at the University of Wisconsin. She was a graduate student at Emory University and was associated with the famous Yerkes National Primate Research Center. She’s been my go-to person for questions about primates and monkeys. I guess monkeys are primates, aren’t they? Yeah.

Jim: So, both of them are returning guests. Melanie was on EP 33, where we talked about her book, Artificial Intelligence. Jessica has been on twice, once where we talked about complex systems dynamics, and she did a COVID Extra, where we talked about what opportunities COVID-19 and the changes it will cause might open up for our society. So, it's great to have you both back.

Melanie: Yeah, good to be back.

Jessica: Yeah, it’s great to be back, Jim. Thank you for having us.

Jim: Yeah, today we're doing a Currents episode, which are shorter form and less highly structured. Essentially, our starting point is a very interesting essay, which caught my eye when it was published, I don't know, a month or so ago, called Uncertain Times, published on the Aeon website, that's A-E-O-N.co. If you Google "Uncertain times Flack Mitchell" or "Aeon.co Uncertain times," you'll find it. It's really quite good. I call it a practical guide to how to think in terms of complexity about the problems of our society. I would strongly recommend that people actually go and read the article. It's well worth your time.

Jim: So, let’s start off with the opening paragraph or so from the paper or the essay and then get your guys thoughts on that. You start off by saying, “We’re at a unique moment in the 200,000 years or so that Homo sapiens have walked the Earth. For the first time in that long history, humans are capable of coordinating on a global scale, using fine-grained data on individual behavior, to design robust and adaptable social systems. The pandemic of 2019-20 has brought home this potential.” Potential, that’s a good word.

Jim: “Never before has there been a collective, empirically informed response of the magnitude that COVID-19 has demanded. Yes, the response has been ambivalent, uneven and chaotic – we are fumbling in low light, but it’s the low light of dawn.” So, what do you see that’s hopeful in humanity’s response to COVID-19?

Jessica: I think for me, the thing that stands out and I keep returning to is this observation of a convergence. The convergence is of incredible microscopic data, actually at all levels of organization in science, molecular all the way up to human societies, that we just have not had before, coupled to the development of machine learning and AI, really great pattern detection at least, and the flourishing science of micro to macro that's been coming out of the Santa Fe Institute and other places. It's this convergence of these three factors.

Jessica: Plus, the fact that we can communicate over vast scales at light speed now, because of the internet. Together, those have made possible, or have given us the capacity for, designing empirically informed responses to perturbations like COVID over global scales. Now, I say capacity and that's a very important word, because as a number of people have pointed out to me in response to the essay, we're still not very good at this, but the pieces are all there for us to put together. I think it's the first time in history that we've had this capacity. So, for me, it's extremely exciting. I want to see us move in the right set of directions.

Jim: Melanie, your thoughts on this?

Melanie: Well, I agree with Jess, of course. I think that in a way, this whole pandemic and the response to it have hit home for a lot of people and made them realize that complex systems are really important to understand. It's made us realize how complex a world we're living in and how little prepared we are to make predictions and to do the optimal thing at all times. That's really a lesson of complex systems in general. So, I think it's a change of mindset as much as a change in our capabilities to do the kind of science we need to understand and make the kinds of decisions that we need to make in these uncertain times, as the title of our article says.

Jim: Indeed. You guys talk about the fact that the default human mind is fairly linear, doing simple cause and effect reasoning. That kind of reasoning will get you into trouble when you're dealing with complex systems. Maybe talk a little bit about how some of the manifestations of COVID, since it's an example we're all still familiar with, exhibited what we might think of as nonlinear or non-simple cause and effect relationships.

Jessica: One of the best examples of that is still going on, but it was particularly problematic early on, say in March, when we were in the early days of COVID modeling. Models would make predictions about the number of deaths or so forth by week X, and then that number would not be met. What are the reasons for that? Well, one reason is that the model isn't very good. Another is that the input into the model isn't very good. The third reason is that our interventions change the outcomes. That's about feedback. That point, even though saying it in this conversation makes it seem so obvious, is very easily forgotten. I think that's an example of how our minds default to linear reasoning: we forget to factor in these feedbacks.

Melanie: Yeah, let me add one thing to that. I think one really good example of the effects of linear thinking is this thing that people talk about in terms of how infective a virus is and that’s that R(0), which is the number of people you’re likely to infect if you yourself are infected. So, people think about R(0) and they talk about R(0) across a country or across a state or across some region, but R(0), of course, is an average, which is a very linear quantity.

Melanie: You add up all the parts and get an average. But it turns out that infectivity is really very much more clustered than that: we have these superspreaders who will infect a huge number of people, while most people are not infecting anyone at all. So, it's not really a linear system in which you can talk about these averages in any meaningful way. I think a lot of people got misled by focusing on these averages like R(0) in terms of making models. That's one reason why models didn't really reflect reality.

Jim: That's a wonderful example, because I did pick that up actually from the data: wait a minute here, it's not as if we're all clustered around some R(0) mean, right? As you say, there's a few people that are spreading to 25 or 50 people, while lots of people spread to zero or at most one, often a person in their household. If you model it assuming it's all clustered around R(0), you'll get a very different configuration of spread than if you assume that it's a relatively small number of superspreaders. So, a really good point.
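
To make the superspreading point concrete, here is a minimal branching-process sketch, an illustration rather than anything from the episode: two epidemics with the same mean R(0), one with everyone near the average (Poisson) and one dominated by superspreaders (negative binomial). The specific numbers (R0 = 2.5, dispersion k = 0.1) are assumptions chosen for illustration.

```python
# A hedged sketch: same mean R0, different offspring distributions.
import numpy as np

rng = np.random.default_rng(42)
R0 = 2.5   # assumed mean secondary infections, not a figure from the episode
K = 0.1    # assumed negative-binomial dispersion; small K = strong superspreading
GENERATIONS, TRIALS = 6, 2000

def outbreak_size(offspring_sampler):
    """Total infections from one index case over a few generations."""
    active, total = 1, 1
    for _ in range(GENERATIONS):
        if active == 0:
            break
        new = int(offspring_sampler(active).sum())
        total += new
        active = new
    return total

# Homogeneous spread: everyone infects close to the average.
poisson = lambda n: rng.poisson(R0, size=n)
# Superspreading: most infect nobody, a few infect very many (same mean R0).
negbin = lambda n: rng.negative_binomial(K, K / (K + R0), size=n)

for name, sampler in [("Poisson (near the mean)", poisson),
                      ("Neg. binomial (superspreaders)", negbin)]:
    sizes = np.array([outbreak_size(sampler) for _ in range(TRIALS)])
    print(f"{name}: chains dying out early {np.mean(sizes <= 3):.0%}, "
          f"99th-percentile outbreak size {np.percentile(sizes, 99):.0f}")
# Typically the superspreading version mostly fizzles, but its rare outbreaks
# are far larger, even though both have exactly the same average R0.
```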

Jim: Let's see, another topic you guys talked about, and another way a complexity perspective helps one find solutions in a non-obvious way: you talk about how, oddly enough, noise can actually help organization. You gave a very interesting example about schooling in fish. Maybe talk about that example a little bit.

Jessica: So, I mean, I think it's well known in the theory literature that noise has surprising effects, but most of us think noise is something to be eliminated; we want to maximize the signal and minimize the noise. That's understandable. But in the fish study in particular, it's a type of cichlid fish, these researchers found that the fish copy each other to decide which direction to move in. The copying behavior is often thought to be the driver of these collective states like schooling. But in the case of these fish, actually, small noise in their copying behavior feeds back on itself.

Jessica: When that effect gets big enough, it actually induces a transition to schooling, to an ordered state. So, the noise in the copying behavior is really important for producing this organized collective state. I think that's very surprising to a lot of people. Of course, there are other examples. The canonical example of the importance of noise comes from stochastic resonance, which is a phenomenon where a signal that is normally too weak to be detected by, say, a sensor or an agent in a system is boosted by adding white noise to the signal.
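
Here is a minimal sketch of stochastic resonance as just described, purely an illustration: the signal amplitude, detection threshold, and noise levels are all assumed values, not parameters from any study mentioned.

```python
# A hedged sketch: a subthreshold signal becomes detectable at moderate noise.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 4000)
signal = 0.4 * np.sin(2 * np.pi * 0.25 * t)  # weak signal, below the threshold
THRESHOLD = 1.0                              # detector only "fires" above this

def detection_quality(noise_std):
    """Correlation between the hidden signal and the thresholded output."""
    noisy = signal + rng.normal(0, noise_std, size=t.size)
    fired = (noisy > THRESHOLD).astype(float)
    if fired.std() == 0:
        return 0.0  # detector never fires: no information gets through
    return float(np.corrcoef(signal, fired)[0, 1])

for sigma in [0.05, 0.3, 0.6, 1.5, 4.0]:
    print(f"noise std {sigma:>4}: signal/output correlation {detection_quality(sigma):.2f}")
# Typically near zero with almost no noise (the signal never crosses the
# threshold), peaks at intermediate noise, then degrades as noise swamps it.
```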

Jessica: I guess the point is, there are all sorts of ways that noise plays a role in generating the collective states we see. Sometimes those states are good and sometimes they're not, but we want to be aware of that. One example in the COVID case, where I think it's important to be aware of these things, is, say, the enthusiasm for contact tracing apps, where you trace the contacts that an infected individual has had. The thinking there is that by knowing where the infected individuals are and avoiding them, things will be safer.

Jessica: When you couple the noise point to a general understanding of synchronization and collective behavior, you know that you need to be wary of that conclusion, because often surprising collective states can be generated. So, we might actually end up in a situation with those contact tracing apps where we are putting more infected or asymptomatic people next to each other than we thought we were going to be doing, right? So, you get these counterintuitive results. Only by thinking about noise and complex systems and synchronization do you realize that this is a potential concern.

Melanie: Yeah, this is one of the things that I personally have found most fascinating about biology versus human engineering: how much nature has decided that noise, which I think of as synonymous with randomness, is unavoidable. It's just something that cannot be engineered out. So instead, what biological evolution at least seems to do is to embrace randomness as a way to deal with uncertainty. These systems, as Jessica pointed out for instance with the fish schooling, all of these biological systems, are just embedded in noise; noise is ubiquitous. They embrace it and they use it, because the environment is so uncertain.

Melanie: That just contrasts with our normal thinking about how to engineer behavior, how to engineer control, which is to get rid of all the noise as much as possible and try to make our predictions as precise as possible. But that's not the way nature does it. It seems like in our complex world, that's not the way we should do it either.

Jim: Interesting. There's another good example about noise. In fact, it was involved with the research I was involved with when I was a researcher at the Santa Fe Institute with Doyne Farmer's group, which is, surprisingly enough, that you can think of stock markets as having a significant amount of noise in them. It turns out that the noise is what allows the system to function the way it functions.

Jim: In our case, we define noise traders as people who had some specific reason to buy or sell for reasons that have nothing to do with the market. So, they needed to sell some stock to pay for their kids’ college education or they were buying a long-term position for their retirement account. They believe in the buy and forget strategy. So, they bought and sold for reasons not particularly determined by the current action in the market.

Jim: While many of the biggest market participants are essentially strategic investors on various timeframes, we’ll talk about varying timeframes here in a minute. If you don’t have noise investors to essentially dampen the system, the system can get out of control in a hurry as the strategies interoperate with each other in ways that are very, very, very hard to predict. So, we would actually have considerably less stable financial markets if there weren’t noise traders. Isn’t that interesting?

Jessica: Well, actually, as a complement to that, Jim, there's the idea that in March, with all that volatility in the stock market, the market became more inefficient than it had been, and hence offered an edge to savvy traders and long-term investors. The reason for this that's been proposed, and I don't think it's well studied yet, but I've seen it proposed in a variety of sources, is that essentially more speculators entered the market. These are also noise traders. They were coming from the Reddit community, from the Robinhood app that enabled the everyday person to trade more easily with zero fees, and also from the sports bettors who had nothing to do, no sports to bet on.

Jessica: So, these three communities entered the markets and, because they were making possibly poor decisions or were motivated by other reasons, potentially introduced a lot of inefficiency to the markets. So then, noise not only, as in your example, dampens things or keeps things under control by balancing out strategies, but can also create opportunities for others to exploit.

Jim: Exactly. And the cool thing about stock markets: you can actually think about them as food sources, right? A dumb investor is a food source for a smart investor. However, a smart investor, when he takes advantage of what's called statistical arbitrage, is actually polluting his own environment, so that the amount of food that's available goes down. It's actually very, very interesting. The noise trader is a very important part of the ecosystem. You're right, you get enough noise traders in and it can indeed produce changes in the underlying name of the game.

Jim: I love the fact that you guys quoted in your article this theory that part of it was sports bettors with nothing better to do. Definitely possible. Again, actually, that's a very interesting example of what we'll talk about a little bit later, the coupling of systems. Who would have thought that because there was a virus running around loose, sports bettors would jump into the stock market and change some of the fundamental dynamics of the marketplace? Talk about a peculiar linkage that would have been pretty hard to anticipate in advance.

Jessica: Yes, a nice example of a second order effect or maybe even third order.

Jim: At least second, I would say, maybe easily third, right? Now, let's talk about another one of our favorite topics at the Santa Fe Institute, which is that complex systems seem to be especially vulnerable to events that don't follow normal or bell curve type distributions, what we call fat-tailed distributions. We can argue endlessly about whether they're power laws or just approximately so; it doesn't really matter. What it means is that large scale outcomes can happen more often than the equivalent of calculating the results from flipping coins, what's the probability of five heads in a row, etc. You also talked a little bit about that, about how complex systems seem to have a special affinity, shall we say, for fat-tailed distributions and what that means for planning.

Melanie: Well, the example I gave earlier about R(0), that's an example of a fat-tailed distribution if you ask, "Who is spreading the virus?" Most people who have the virus are not spreading it, but a small number of people who have it are spreading it to a huge number of other people. So, we might call those people hubs in the network of spreaders. If you plot the frequency of spreading, how many people you spread it to, you get this kind of fat-tailed distribution. So, this is a signature of a complex network of interactions. I think that it affects the kinds of models you want to use to predict what's going to happen and to understand the dynamics of the system.

Jessica: Yeah, I mean, I might add to that that one of the reasons I find them so interesting, these heavy-tailed distributions, is because in certain cases in some systems, when you get these rare events, which are hard to predict in the heavy-tailed distribution, you then get second order events in the tail generated by those first ones. So, earthquake aftershocks are an example, and the subsequent gyrations in the stock market after a crash are another. What we think happens, maybe in the case of markets, is that once there's a really precipitous drop, the rules that you were playing by change. And when the rules change that second time, that makes even lower probability events more likely.

Jessica: So, what is initially almost impossible to calculate or forecast now becomes more likely. That in and of itself is fascinating to think about, but what it says to me is that we want to move a little bit away from forecasting and more towards scenario planning. That's an old idea, it's been around a long time, but it's really important in complex systems, where you consider, using both the past and your imagination, principled imagination, what possible scenarios could unfold. You don't spend so much time trying to assign probabilities to them, because the probabilities are probably really low. And then you hopefully can design some general mechanisms or general solutions that will work over a variety of those low probability scenarios.

Jim: Yeah, exactly. What I recommend to people trying to do that kind of planning is to create various models and then generate a large ensemble of trajectories. As you say, don’t try to get too smart about picking which trajectories are likely or which ones aren’t. But look at the aggregate statistics over the ensemble. That will often show you that these fat-tailed distributions are likely to be more common than you think.
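
A minimal sketch of that workflow, with toy dynamics and assumed parameters rather than any model from the episode: generate a large ensemble of trajectories and read the planning-relevant numbers off the ensemble's tails.

```python
# A hedged sketch of ensemble-based scenario analysis with fat-ish tails.
import numpy as np

rng = np.random.default_rng(7)
N_PATHS, STEPS = 10_000, 52  # assumed: one year of weekly steps

# Toy dynamics: small multiplicative shocks plus occasional large downside events.
shocks = rng.normal(0.01, 0.05, size=(N_PATHS, STEPS))
rare = rng.random((N_PATHS, STEPS)) < 0.01          # ~1% chance of a big event
shocks[rare] -= rng.exponential(0.3, size=rare.sum())

trajectories = 100 * np.exp(np.cumsum(shocks, axis=1))  # all paths start at 100
finals = trajectories[:, -1]

print(f"median outcome:    {np.median(finals):8.1f}")
print(f"mean outcome:      {finals.mean():8.1f}")
print(f"5th percentile:    {np.percentile(finals, 5):8.1f}")
print(f"1st percentile:    {np.percentile(finals, 1):8.1f}")
# The planning signal lives in the ensemble's tails, not in any single
# "most likely" trajectory.
```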

Jessica: Yeah.

Melanie: Yeah. So, my main field is artificial intelligence, rather than epidemiology or any of the things that we’ve been talking about. But there’s a great analogy in the field of artificial intelligence, which is that right now, when we’re talking about deploying these AI systems, like for instance, self-driving cars, what people try and do is they try and plan for every possible thing that might happen by either writing rules or trying to learn from data. So, you can have all these really unlikely situations that might happen in driving, like a firetruck stopped in the middle of the highway. Okay, or you’re on a road and there’s a tumbleweed that crashes into your windshield or all these different unlikely things.

Melanie: But the way that we humans actually drive is we don't plan for all these individual events. We have more of a general, what we call common sense, approach, where we have a general strategy for dealing with lots of different low probability events. I think that's what the goal of this emergent engineering idea really is: to not try and precisely predict everything that's going to happen, because that's impossible, but to have strategies that will be adaptable and workable across many different situations.

Jessica: To build on Melanie there, I mean, emergent engineering stresses process over specific outcomes, designing for robustness and adaptability, finding solutions to unanticipated challenges through collective computation. What I work on, collective intelligence, is very closely related. Before I say a little bit about the three levels of emergent engineering as I've been thinking about it, I want to hammer home an important point. That is, emergent engineering in its strong form is quite provocative. In a truly EE-capable system, I would argue, there's no commitment to any particular outcome. That could be quite frightening to people.

Jessica: So, the objective would be to let good solutions to the challenges that a system faces, be they policies, social structures, or organizational designs, emerge through a collective intelligence. This is a fundamentally different strategy and philosophy than we embrace now. A somewhat weaker version would be to build EE-capable systems but constrain them so that they respect certain principles or values we collectively deem important. So, while not guaranteeing, for example, a particular form of governance, which we might let emerge as appropriate given some environmental challenge, we would guarantee a high minimum quality of life. That would be a weaker form than the one I started with.

Jessica: And then the weakest form, but still non-trivial to design, and which draws heavily from ideas in biology, as Melanie mentioned earlier, is again the idea that we find designs that stress process over outcome, robustness and adaptability, scenario planning. But I would say one of the biggest pushes in the weakest form is that it proposes building tuning mechanisms into systems that control how fluid and responsive a system is. So, you add levers that allow for adjustment, I know we're going to get to this, Jim, of timescales, to control timescale separation, and adjustment of, say, mechanisms that allow you to tune the distance from a tipping point. That's a novel concept. Some of this may sound familiar; these ideas have been long discussed at SFI.

Jessica: We have some progress on levers already implemented in applied settings like markets or financial systems, the Fed's adjustment of interest rates, the circuit breaker and so forth. But the idea that we can optimize for robustness and adaptability by tuning timescale separation or distance from a critical point doesn't have many design precedents. That's really a new design idea.

Jim: Yeah, it really is.

Jessica: I'll make it a little bit more concrete by returning to the fish school, and then let's talk about it. So, some fish do school. By schooling, I mean they form up: they swim all aligned, and they can move fast to escape a predator, right? That's essentially what schooling is. Most of the time, these fish that school are in what's called a shoal. In a shoaling state, they're loosely aligned, aligned just enough so that if one of the fish sees a predator, that information can spread across the group and they can rapidly shift into the schooling state. So, in the shoaling state, they're essentially sitting at a critical point. This is still under investigation in the biological literature, but we'll simplify a little bit in this conversation. So, they're essentially sitting near a critical point.

Jessica: By sitting near that critical point, they can make this rapid change. Notice one thing about this already: this is obviously beneficial for the fish to be able to do. Often in the popular literature, tipping points and critical points are presented negatively. So, that's something just to keep in mind for future discussion. A couple things to keep in mind about this example: the fish have two states, or two scenarios rather, to come back to our scenario planning, the predator scenario and the foraging scenario. They have strategies for both, shoaling while foraging and schooling when the predator's around, right?

Jessica: The critical point, the level of alignment, to simplify, essentially allows them to shift states fast. There's uncertainty in the system, because they don't know when that predator is going to appear, but they do know what to do when it does. So, that comes back to our earlier points about scenario planning: you might not be able to predict the probability of some event, but you know that if it happens, this is a good strategy for dealing with it. Of course, the last thing I want to point out is that there's a robustness trade-off.

Jessica: So, if one of the fish incorrectly detects a predator, that fish might transmit this information to the rest of the fish, and they might inappropriately change state, which has a cost, because they should be foraging and instead they're switching to schooling. So, tuning how sensitive the system is, how sensitive the fish are to the likely presence of a predator, would be an extremely beneficial strategy. We're not sure the fish can do it. We do have evidence from a primate system and some other areas of biology that these tuning mechanisms exist, but this is new. It's something we could exploit in designing EE-capable systems.
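
For readers who want to see the kind of model behind this picture, here is a minimal Vicsek-style alignment sketch. It is a generic illustration of sitting near an ordering transition, not the cichlid study itself, and every parameter value is an assumption chosen to make the transition visible.

```python
# A hedged sketch: agents copy neighbors' headings with noise in the copying;
# sweeping the noise moves the group across the disordered/ordered transition.
import numpy as np

def alignment(noise, n=300, steps=200, L=10.0, r=1.0, v0=0.5, seed=1):
    """Final group alignment (0 = disordered shoal-like, 1 = fully schooled)."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2)) * L
    theta = rng.uniform(-np.pi, np.pi, n)
    for _ in range(steps):
        # Each agent looks at neighbors within radius r (periodic boundaries).
        d = pos[:, None, :] - pos[None, :, :]
        d -= L * np.round(d / L)
        neigh = (d ** 2).sum(-1) < r ** 2
        # Copy the neighbors' mean heading, plus noise in the copying.
        mean_sin = (neigh * np.sin(theta)).sum(1)
        mean_cos = (neigh * np.cos(theta)).sum(1)
        theta = np.arctan2(mean_sin, mean_cos) + rng.uniform(-noise / 2, noise / 2, n)
        pos = (pos + v0 * np.column_stack([np.cos(theta), np.sin(theta)])) % L
    return float(np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))

for eta in [0.5, 2.0, 3.5, 5.0]:
    print(f"copying noise {eta}: alignment {alignment(eta):.2f}")
# Near the critical noise level, a small change in noise or alignment strength
# flips the group between states, which is the "sitting near a critical point"
# picture described above.
```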

Jim: Very good. When I was reading about the tipping points, I had a thought which I was going to run by you: it might be useful to distinguish between easily reversible tipping points, like switching in and out of schooling, and tipping points of the sort that you alluded to in climate discussions. People who listen to my show regularly know I often talk about the distinction between homeostasis and hysteresis, systems that can return to their previous points and those that can't. Putting both of those in the same bag may indeed be misleading. I know you were trying to wean people off that framing: don't think about the climate tipping point when we're thinking about switchable tipping points. So, maybe we need some new terminology, so that tipping points that are easily reversible are distinguished from those that aren't.
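
One way to see that distinction is with toy dynamics, an illustration under assumed equations rather than anything from the conversation: sweep a forcing parameter up and back down for a monostable (homeostatic) system and a bistable (hysteretic) one, and check whether the state retraces its path.

```python
# A hedged sketch: reversible vs. hysteretic "tipping" in two toy systems.
import numpy as np

def settle(dxdt, x0, steps=2000, dt=0.01):
    """Relax state x under dx/dt = f(x) until approximately settled."""
    x = x0
    for _ in range(steps):
        x += dt * dxdt(x)
    return x

def sweep(f):
    """Ramp forcing h up then back down, tracking the settled state."""
    hs = np.concatenate([np.linspace(-1, 1, 21), np.linspace(1, -1, 21)])
    x, states = -1.0, []
    for h in hs:
        x = settle(lambda s: f(s, h), x)
        states.append(x)
    return hs, np.array(states)

monostable = lambda x, h: -x + h        # single well: homeostatic, reversible
bistable = lambda x, h: x - x ** 3 + h  # double well: hysteretic, one-way-ish

for name, f in [("monostable", monostable), ("bistable", bistable)]:
    hs, xs = sweep(f)
    up, down = xs[10], xs[31]  # the state at h = 0 going up, then coming down
    print(f"{name}: at h=0 going up x={up:+.2f}, coming down x={down:+.2f}")
# The monostable system retraces its path; the bistable one jumps at a fold
# and stays on the new branch, the signature of a hard-to-reverse tipping.
```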

Jessica: So, that's absolutely an active body of research, and a totally critical point, Jim, you're right. I want to emphasize, though, that in the fish case they're still different states; it's just that in the examples you're giving, we're going from an organized to a disorganized state, while in the fish case, we're going from a loosely aligned to a more organized state. These are all active areas of investigation. But even in the case where you're going to the disorganized state, we should not place a judgment on that a priori.

Jessica: Sometimes if the distribution of the environment, environmental states, if it’s fundamentally changing, then the strategies that we have at present will be fit or overfit to the past, right? They’re very unlikely to work under this new distribution, this new environment. So, in that case, we may actually want to be in a system where the tipping point is not so to speak reversible, where we move to for a short period of time at least a disordered state, so that we can explore new solutions.

Jim: Frankly, that's my view of what's needed in our social, economic, and political operating systems. These things were not designed to operate in a world of hard limits that we're approaching very rapidly. So, it strikes me we have to set off into the unknown and make some transition, probably through disorder, until we find what's on the other side. So, very good point. There are times when we actually do want one-way tipping points, and sometimes we may not survive if we don't find them.

Jessica: That's exactly the goal of emergent engineering in a way, but maybe to do it somewhat more safely, with some safety mechanisms, to design EE-capable systems. That's the second form I mentioned, not the strong form where you're free to explore any new outcomes, but this weaker form where you might build some constraints on the state space that the system can explore, the governance systems and so forth. You might build something, but then within that constrained space, you leave it open.

Jim: Yeah, yeah. Another way to address that, which is one of the things that we're dealing with in our GameB socio-economic and political design work, is to operate at different scales. For instance, if you have a theory about money or governance, try it at the level of a village before you inflict it on the whole world, right? I think I would add that to your EE toolkit. Can you find small-scale, low-risk experiments that nonetheless will provide confidence in the core principles? Not with certainty, of course, because as we know, more is different. But at least that's one way to get some experience without large scale costs from unpredictable EE type experiments.

Jessica: There's a very interesting idea in biology called… David Krakauer worked on it long ago. I forget who the main guy is, I'll think of his name in a second. … called arena selection. The idea of arena selection is that the body, for example, will try out certain competitive strategies in an arena within the organism. And then the strategy that wins in this artificial arena is sent out to the world.

Melanie: Yeah. There's a great example of that in immunology, I know, which is where your immune system cells, like your B cells, are grown in the bone marrow and tested to see if they recognize the self or not. If they do recognize the self, they're destroyed, because you don't want your immune system attacking yourself. If they don't recognize the self, they're released. So, that's a great example of an arena where the body's trying something out before it releases it.

Jessica: In fact, I think that idea was… Melanie, that’s a great point. … originally developed in that context. In fact, David might even have written about it.

Melanie: Okay.

Jim: Yeah, there's another example too, which is close at least: the pandemonium theory of cognition, right? Marvin Minsky, and then it was later extended a little bit by Daniel Dennett and some others.

Melanie: Selfridge.

Jessica: Yeah.

Jim: Selfridge originally, you're right, absolutely. There are competitions in the brain among unconscious thoughts that are essentially competing to become conscious and to turn into perceptual objects or affordances, etc. So, maybe that's a general pattern that we should be looking at.

Jessica: Yes, absolutely.

Jim: Yup. Okay, another topic that we alluded to, which we can go into in a little more detail here, is timescales. You gave a pretty good example; I've sharpened it up a little bit. I would restate it by saying: with respect to COVID-19, pandemics happen on relatively short timescales, weeks to months, while building hospitals normally takes years. As you guys suggested, gaining the planned ability to build hospitals in weeks would be a significant game changer in how we adapt to pandemics.

Melanie: Yeah, absolutely, not just hospitals, but all of the kinds of infrastructure that we’re lacking right now, all the PPE. When we finally get a vaccine, all the things that we need to distribute the vaccine, the glass vials and whatnot. All of that infrastructure takes so long. That’s really the bottleneck. So, if we had some way to be more nimble, to be more ready for many different possible scenarios, as Jessica said, we could really alleviate some of these problems.

Jim: Of course, this gets to the fundamental problem of robustness in a capitalist economy: who pays for it? Take, for instance, having extra capacity to build glass vials for distributing vaccines. Normally, no vaccine company or vial company has any economic incentive or receives any economic signal to have extra capacity. It's just a dead weight investment, which in any timeframe that a CEO's worrying about his bonus, i.e., no more than three years, they're not going to make. So, again, as I was reading the paper, I three or four times wrote down, "Cost, who pays for it?" Very seldom can you get robustness for free. Any ideas on how we should think, as a society, about setting up a higher order set of signals such that robustness does emerge?

Jessica: So, biology has the same problem. Your point about incentivizing investment in robustness mechanisms is a huge area of research in biology, how nature does that, because the same logic applies in two ways. One, often the perturbations are hard to predict or you haven't experienced them before, and it's hard to build a strategy when that's the case. The second is that there is a maintenance problem. So, even if you have a duplicate of a gene as a backup for when the original is destroyed or damaged, maintaining that duplicate when there's no perturbation, given the costs, is hard. Again, this is an active area of research and there's a lot of controversy around it.

Jessica: But one idea is that the way nature solves, or genes solve that problem is to have genes with multiple functions. So, there’s partial overlap in a network of genes instead of just two that are duplicates of each other. So, the pressure to diverge is maybe less or the cost that you pay is maybe less. So, we have to think of strategies like that. I would also add, and we talked about this last time, Jim, maybe in certain communities, this isn’t a very novel idea.

Jessica: But I don’t understand why this kind of equipment in the human case, ventilator parts, vials and so forth can’t be 3D printed, so we can develop technology that makes for a more fluid production system. Of course, we don’t want to generate a lot of garbage. So, we want the materials we use in this more fluid production system to be recyclable or biodegradable or something. But that seems like an obvious area in which to invest. As Melanie pointed out earlier, we have these general purpose solutions that apply to a wide variety of scenarios, more fluid production mechanisms seems to be one of those.

Jim: That certainly could help. Though I always point out, I was involved with 3D printing from the very beginning. Unfortunately, there are material science problems at the intersection of 3D printing and lots of products. I always tell people, as an experiment, next time you go to Lowe's or Home Depot, look at your basket when you're checking out and ask, "How many of these things could have been printed on a 3D printer?" The answer is usually not very many of them, because the materials you can actually manipulate in today's 3D printers, at least, don't cover an awful lot of products. It's not clear to me, for instance, how you'd print a mask.

Jim: Though, in the future, they can probably design printers that can do that. So, it's certainly an area that would be worth researching, investigating, planning, and maybe investing in. Again, social investing, that's maybe what it comes down to in our capitalist society: we have to, at a higher level, say, "Socially, we are going to allocate $10 billion a year to building out the robustness in our systems, because no individual business person is going to do it. So, socially, we have to make the decision to do it." Of course, if we're going to do that, we have to have damn good judgment about where to invest that money.

Jessica: Or again, maximize that general purpose.

Jim: Yeah, flexible response.

Jessica: Flexible response.

Jim: Yeah, absolutely. Well, let's get to our final topic here; we're getting a little long on time. When I think about this, and when I read what you all write about it and hear other people talk about it, I go, "Damn, thinking about these complex systems is really hard as an individual human." Our cognitive ability really was not evolved to do this. We lived in a world that had a certain amount of complexity, but it was relatively stable, with occasional switches, as you pointed out, Jessica. But we're not individually very good at this.

Jim: If we’re going to solve these problems, we got to get good at what you talked about in the paper, which is collective intelligence. What can you both say about what we know about collective intelligence and what we need to learn about collective intelligence, if we’re going to manage this transition into a high complexity world that doesn’t self-destruct?

Jessica: Well, let's see, I think the first point I want to make is that I suspect a lot of people think we have a better understanding of collective intelligence than we do. I think, personally, that our understanding is nascent. One of the things that I do feel we have gotten our heads around is that there are two parts to collective intelligence. There are two phases, if you like. There's the information accumulation phase, by agents or individuals in the system, when they're looking at the environment and trying to figure out what the regularities are, and then they have opinions about that. And then there's the aggregation phase, when those opinions are pooled or that information is shared and some collective output is produced. Hopefully, it's useful given the environment.

Jessica: When it's useful given the environment is when we say, broadly, that it's intelligent. But what we don't understand is what the best way to do that aggregation is. The current problems we have with our voting systems, the electoral college and the popular vote, are a good example of the challenges of information aggregation. In many places, information aggregation is treated just like that, basically a black box. It's a black box in markets, how that happens, how price is discovered and so forth. So, it's a totally open set of questions. I think, though, we can focus our questions a bit.

Jessica: For example, if we know that individuals have certain biases in the way they collect information, we might be able to compensate for those by designing algorithms that compensate for those weaknesses. Correspondingly, if it is very hard for a system, and this comes up maybe more in biological systems, to evolve or discover a particular algorithm that might be ideal from an optimization perspective, we might be able to compensate by building smarter components, so that the information those components extract from the environment is better.

Jessica: So, I guess the point I'm making is that we can trade off costs. We can either invest more in the agents, so they are smarter and gather better information, or so we maximize the differences in the information they gather, that's the diversity point that you sometimes see made in the collective intelligence literature. Or we can invest in the algorithms for sharing and pooling that information, and potentially play those off against each other. So, there's a lot to do.
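
A minimal sketch of those two phases, an illustration of the general point rather than any model of Jessica's: agents accumulate noisy, partly biased estimates of a hidden quantity, then different aggregation rules pool them with different success. All the numbers are assumptions.

```python
# A hedged sketch: information accumulation by agents, then aggregation.
import numpy as np

rng = np.random.default_rng(3)
TRUTH = 10.0     # the hidden environmental quantity agents try to estimate
N_AGENTS = 101

# Phase 1: accumulation. Each agent sees the truth with private noise, and a
# subset shares a common bias (correlated errors, the hard case for crowds).
private_noise = rng.normal(0, 1.0, N_AGENTS)
shared_bias = np.where(rng.random(N_AGENTS) < 0.3, 5.0, 0.0)
estimates = TRUTH + private_noise + shared_bias

# Phase 2: aggregation. Even very simple pooling rules differ in robustness.
print(f"truth:  {TRUTH}")
print(f"mean:   {estimates.mean():.2f}  (dragged upward by the biased subgroup)")
print(f"median: {np.median(estimates):.2f}  (more robust to correlated bias)")
# Investing in better agents (smaller noise/bias) and investing in better
# aggregation rules are the two levers being traded off in the discussion above.
```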

Jessica: The last point I'll make is that we now have a new journal on this topic, on collective intelligence, that both Melanie and I are involved in. It's in progress now; the first papers will be accepted probably in January. It is dedicated to the empirical and theoretical foundations of collective intelligence, from adaptive matter and physics approaches all the way through to human organizations and hybrid AI-human systems. So, it's really transdisciplinary, dedicated exactly to these kinds of questions, so we can improve both understanding in basic science and application.

Jim: Melanie?

Melanie: Wow, that’s hard to follow. I mean, that was a theoretical description of collective intelligence. But I think I’ll just say two things. From more of a personal standpoint, I think you’re right that we don’t easily think in complex systems ways. We’re much more linear thinking, even those of us who work in the field, but it’s possible to train oneself to think in a slightly different way, to approach these things in our own lives in a slightly different way. I think that’s possible. I try to do it, I don’t always succeed.

Melanie: But I also think that one of the biggest problems we have today for all of these systems is that people who aren't scientists often don't trust science. They don't understand science. They don't know what it is. They don't know how it works. They don't trust the recommendations that scientists give. They don't even know how to judge the reliability of what scientists say. We see this with the whole debate about wearing masks. That's a huge problem that I hope complex systems research will address: how to get people to better understand what science is and when and how to trust it, because I think that is one of the biggest roadblocks we have to solving some of these problems.

Jim: Yeah. That’s getting to be a damn big problem, just the crazy shit you see on the internet. 50% of Americans are now skeptical about the COVID-19 vaccine. Do you know how many Americans have died from vaccine problems since 1950? Take a guess.

Melanie: I can’t guess.

Jim: A hundred.

Melanie: Yeah, I mean, there’s a lot of examples of this mistrust of science that are really impeding our health and our well-being and our environment. I see that as one of the biggest problems that scientists themselves have to face.

Jessica: Now, let me make two remarks on that, and take the other side just a little bit, not anti-science, but to add some nuance. So, the first is that I've been wondering how much greater fake news and mistrust are today than in the past. I've seen some reports indicating, for example, that the use of propaganda in ancient Rome was astronomical. I'd love to see a study that addresses this question of whether there has been an increase, because of the internet or other reasons, in misinformation or even disinformation. That's the first point, which we should hit on a little bit.

Jessica: And then the second is that there are many issues around vaccines in addition to deaths and so forth. One is just whether they work, and how much money goes into producing a vaccine possibly based on faulty science. One of the things I do worry about with COVID is that there's so much pressure to produce this vaccine, and so much financial incentive to be the company that does so. We already know that biomedical science suffers from a lot of replication problems and effect size issues. We do have to worry about sloppy vaccine production. So, what can we do as a society to incentivize thoroughness there?

Jim: Yeah, I think that's clearly a good point, and not only thoroughness, but we need to document the thoroughness and make sure that people understand that the protocols have been followed. That a certain asshole who shall remain nameless didn't jam the thing through a week before election day.

Jessica: Exactly.

Jim: Yeah. We need not only good processes, but transparency about the processes, so that people have good cause to be confident.

Jessica: I think that would help.

Jim: Yeah, that would help.

Jessica: To Melanie’s point about trust.

Jim: Yup. A good question, and an interesting one: are we actually in a worse world for fake news and misinformation than we used to be? I hadn't really thought about that. But think back to the 19th century, literally the era of the snake oil salesman. A guy would come into town on a wagon selling his nostrum that would cure a long list of diseases. Literally, it was snake venom and probably some alcohol, right? People bought it. The press in the 19th century was mostly partisan. We didn't even attempt to have non-partisan newspapers. Typically, newspapers were associated with one political party or the other and were amazingly vicious.

Jim: Go back to the election of 1800, Thomas Jefferson against John Adams. The personal attacks were almost as bad as what we're seeing today, maybe sometimes worse compared to the norms of the time. So, it's an interesting question. I don't know how you'd get the data on whether our memetic space is more polluted than in the past.

Jessica: Yeah.

Jim: Well, maybe it doesn't matter whether it was more polluted in the past or not. Maybe, because of what we were talking about, we've now emerged into a world where global complexity is the issue that has to be managed, so that even if we had shitty memetic hygiene in the past, we may be at a point now where we can't survive it. Maybe that's the real argument.

Jessica: Jim, the other element of this debate that's so important is humility and analytical skill. These points apply as much to someone's grandmother as they do to some of our very esteemed colleagues. It's amazing to me how much certainty people have about their positions, or the solutions they suggest, or even the workings of their models, whether they're scientists or people reading about these things. To make a personal point, I personally cannot get my head around it.

Jessica: I think one of the things we need to do is, not just, as everyone's been arguing, teach more probability in school and the importance of informed reasoning and so forth; we also need to teach people that science is fundamentally about uncertainty. We need to teach people, and this is a little bit related to our essay, to embrace it and to have a little bit more humility. It's amazing that you can go through your coursework all the way through the PhD, through the postdoc, be a professor for years, and you still don't have this humility. You can hear Feynman quotes every day and somehow they still go in one ear and out the other.

Jim: I'd say that's particularly true when you're dealing with complex systems. The phrase I use a lot is epistemic modesty. Our ability to really predict a complex system, especially if we perturb it in a major way, is not all that strong. When people claim that they can make solid predictions, one should be skeptical. As you say, that needs to be somehow part of our baseline education: to realize that, particularly with complex systems, we need to not over-claim and not over-believe.

Jessica: Absolutely.

Jim: Melanie, any final thoughts?

Melanie: Oh, well, yeah, I totally agree about the humility part. I think that's really important. We have to know when we don't know something, but we also have to have confidence in some of the things that we do know. I think that it's the wrong message to tell people that all science is uncertain. Different amounts of it, or different types of it, are more certain than others. We have to let people know what they can trust and what they can't. I think that right now, a lot of people think you can't trust anything.

Jim: The problem is we don't have any authorities that are respected across the board. It used to be that if Walter Cronkite said it, it was true; in 1965, that probably held for most Americans. There is no source of authority like that anymore.

Melanie: Yeah, each bubble has its own sources of authority.

Jessica: I mean, it's also, though, that, as I always tell my students, every paper has errors and can be improved. Of course, you want to do your best not to make those errors, but that's just part of science. Where trust comes in, or where more authority comes in, is over time. It's a collective consensus that develops over time about some results or set of questions. That's subtle, but it's that time that gives the confidence. Sometimes we have to speed it up, admittedly, like with COVID. But that point seems to be lost on a lot of people: every paper has errors.

Jim: If we're living in an epoch of exponentially accelerating change, we may not have the time. Do we have the time to convince people that anthropogenic climate change is real? We might not, at least not on the typical Thomas Kuhnian timescale of a couple of generations. I don't think we've got that kind of time. So, we've got to find some other way to reach collective intelligence that results in reasonable assumptions about what's going on. A difficult challenge.

Jessica: Yeah.

Melanie: Absolutely.

Jim: Well, I think on that note, I’m going to wrap it up. I think this has been a very interesting conversation. Thanks, Melanie Mitchell and Jessica Flack.

Jessica: Thank you, Jim.

Melanie: Thanks, Jim.

Production services and audio editing by Jared Janes Consulting, Music by Tom Muller at modernspacemusic.com.