The following is a rough transcript which has not been revised by The Jim Rutt Show or by Daniel Schmachtenberger. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guest is Daniel Schmachtenberger. Daniel’s central interest is long-term civilization design: developing better collective capacities for sense-making and meaning-making to inform higher quality choice-making towards a world commensurate with our higher values and potentials.
Jim: Daniel was an early guest on the show back in EP7 where we explored a wide range of Daniel’s thinking, all kinds of interesting topics. If you’re stimulated by this conversation, go check out EP number seven. Today we’re going to be a bit more focused talking about sense-making, and we won’t forget that sense-making is principally important due to its relationship to decision-making and action-taking. Daniel, welcome back.
Daniel: It’s really good to be back, Jim. Thank you for having me. Looking forward to the discussion today.
Jim: Yeah, I think this is going to be great, and needless to say, it’s highly timely. Why is sense-making so important right now?
Daniel: Why I think sense-making is always important is that we can look at all of the object-level problems in the world, whether we’re talking about environmental problems, war, pathways to war, or infrastructure issues, as coordination issues. Why are humans choosing the things that they do? And particularly the large-scale behavior type issues.
Daniel: If we want to solve climate change or wealth inequality or whatever, we have to have humanity coordinating quite differently. The basis of large-scale or collective choice-making is collective sense-making and our ability to communicate effectively.
Daniel: I think that, for instance, if we’re trying to solve climate change and we’re not looking at the attendant issues that are associated, say the economic issues or the geopolitical issues, we come up with a solution where, say, if the U.S. and Europe were to implement a particular kind of carbon tax and China wasn’t and there was no enforcement mechanism, it would have geopolitical ramifications that are also very problematic.
Daniel: As long as the solutions we’re coming up with to problems are based on that kind of partial sense-making, you’ll have lots of people who care about the thing that is being harmed by the very partial solution that is trying to benefit one thing, and they’ll actively resist it. Even if it’s not that, even if we had a fairly good solution but we didn’t have high-quality sense-making generally, where a lot of people didn’t understand the issues and were against it, it’s very hard to do anything at scale when you’re fighting the very high friction of a lot of people disagreeing and not wanting to implement a particular solution.
Daniel: Right now, I would say we’re kind of at peak bad sense-making, where on almost every consequential issue, either people have no idea what’s true or they’re very fervent about what’s true in total polar opposition to other people who think nearly opposite things. We saw that in the COVID time with mask-wearing and hydroxychloroquine and lockdowns, where these got to the level of massive civil tensions in the U.S. and in other places in the world.
Daniel: The question of, is COVID actually exaggerated, mostly a hoax, or is it a very serious pandemic? Is lockdown the appropriate solution, or does that cause problems that are much worse? Is systemic racism the biggest issue in America, or really a kind of non-issue? What are Chinese intentions with regard to the world, and how should the U.S. relate to them? Pretty much across the spectrum of all the most critical issues, there is nothing like a shared coherent sense of what is true and what should happen, and so not only can we not solve any of the problems, but the inherent polarization of the different views on them becomes its own source of civil breakdown.
Daniel: So, we’re watching a very rapid civil breakdown right now. If we can’t do a better job of getting people to be able to make sense of the world in a more coherent way and communicate in a more effective way, then I think we see civil breakdown continue to unravel.
Jim: Yeah. I’m afraid you’re right. A couple of points I’ll toss in. One is, you mentioned scale, and that’s something I think is fundamental when we’re thinking about the problem: the world is now finally coupled on a global basis at high density.
Jim: That’s really only been true since maybe the 1980s. If you look at the growth of, for instance, trade. World trade previously peaked in 1914, and didn’t recover to 1914 levels as a percentage of GDP until the ’80s, kind of a surprise, but true. That of course has continued to grow since, and of course the globalization of finance, et cetera, and the fact that we are apparently at or beyond the carrying capacity of the earth for our civilization has made the earth itself a global player, right?
Jim: Those things make the game much more difficult. Hard enough to coordinate in your family, right? What are we going to watch on TV tonight? Can you imagine having to coordinate at a global scale? The other is, as you alluded to, the high dimensionality, or we could even call it the complexity, of the problems that humanity is dealing with. These aren’t like, “How do we make iron into steel more effectively?” and we come up with the Bessemer furnace. That was kind of a narrow, limited shot. A very big innovation that made a big difference for humanity, but it’s pretty much one or two dimensions, right?
Jim: How to deal with climate change or inequality, or how pandemics spread on a globally, highly interconnected network, are problems of high complexity. And problems of high complexity typically are not resolvable by formal analytical methods because the dimensionality of the space of possibilities is too high, right?
Jim: We have to proceed by experiment, by heuristics, et cetera, because there are no closed-form solutions. There is no Bessemer furnace for solving climate change, for instance. Particularly not if we have to address the various domains that you mentioned.
Jim: The other, and I think this is somewhat underappreciated by many people, but clearly some of the experts are starting to understand it, is that we have stumbled into a communications ecosystem that was not called for, not designed; it just emerged, right? I’m old enough to remember when there were three TV networks, and the typical American household watched one of those three at any given hour in the evening, and something like 60% or 70% of Americans got their news from those three TV networks. What’s replaced that has been a cacophony of voices, none of whom have high status with any high percentage of the population.
Jim: The other day we were having a chat. I looked up online that Breitbart, for all of its influence, reaches about five million unique people directly per month. In the old days that would have been considered a fringe network, but because of its ability to propagate on these social networks and email and talk radio and all of these coupled, highly fractionated networks, we’re living in an unprecedented information ecosystem.
Jim: I think this is a theme I’m going to come back to, and I’d be interested just to get your thoughts on it: this communications ecosystem is just what you’d think any ecosystem would be. It’s a platform for evolution, and stuff’s evolving very rapidly. That’s why I said it was not called for, it was not designed, it’s just evolving.
Jim: No one directed from on high for Zuckerberg to design Facebook, and God knows what exact reason he actually did it for. Maybe it was just to get laid. Who knows? But by good execution and by being in the right place at the right time, it’s ended up creating an unprecedented way for people to communicate on a global basis, which we have no experience in dealing with.
Jim: So those are the themes I’d be interested in getting your thoughts on: global-scale complexity, and the fact that we have an evolved and radically new communications platform in which to try to do sense-making.
Daniel: Yeah. Okay. We’ll come back to the complexity. I’ll start with the communications platform. So, an interesting thing about human communication is that even if we’re trying to share true information, it’s hard to decouple it from agentic interest.
Daniel: If I’m sharing something with you, you’re taking it as something that could inform your better choice-making, but I’m also taking it as an opportunity to influence your choice-making in a direction that’s possibly beneficial for me. Lying is not a new thing. People have lied, or at least created spin or marketing, since forever. You can read Sun Tzu and see that the whole idea of narrative warfare and info warfare is a thousands-of-years-old idea, and that people actually really crafted how to get deception right to win wars and run kingdoms and things like that.
Daniel: Then as media technology changed, the ability to do effective influence increased. Obviously with the written word you could scale a message much further, and then with the Gutenberg press, and then with various forms of broadcast. Then whoever controlled broadcast had an unprecedented influence over the whole population.
Daniel: There’s so much power in that, that those who are seeking to win at the game of power seek to capture and influence the broadcast. And so we can see, when we look at broadcast stations being consolidated financially right now by Murdoch or the Koch brothers or whatever, that that’s also not new, right? You see Henry Luce setting up his media empire a century ago, seeking to do a similar thing: “Let’s own all the magazines.”
Daniel: Even when you had that broadcast scenario, there was manipulation involved, but from a social coherency point of view, as you mentioned, when there were three channels, it’s pretty likely that even if what everyone is seeing has some distortion, they’re at least seeing the same thing. And so they can agree or they can disagree, and they did, fervently, but they had some basis of shared information about the world to agree or disagree on, because most of us have no direct information about most of the topics that we are passionate about.
Daniel: We weren’t in Beirut, we weren’t in Wuhan, we weren’t in D.C. or Portland when the protest happened. All of our beliefs about it are being mediated to us by media. In the current information environment, I can find a Trump supporter and a Biden supporter in two different parts of the country who can scroll their Facebook feed for 10 hours, and not necessarily see a single piece of news in common, even though they’re seeing news the whole time.
Daniel: There’s basically no shared reality for them even to be in reaction to. And democracy just can’t work in an environment where people can’t have effective conversations and don’t have any shared base reality. And so we can see that in the U.S. there is no real sense of who our countrymen are, no fealty across, for instance, the left/right divide at this moment, of the kind that creates national unity and the ability to have something like a democracy or a republic or participatory governance.
Daniel: We have instead so much internal enmity and in-group/out-group dynamics, for the most part more enmity than we have towards any external forces, that that’s of course driving the breakdown and lack of coherence of that system as a whole. Then we look at the movement from broadcast to internet, and one of the things we see is… You were there, you remember well.
Daniel: There was this kind of libertarian idea that that would be awesome, because it would remove the ability of a few people who held the monopoly on broadcast to control the message. Everyone could put out their own stuff through YouTube, and maybe the best ideas would emerge to the top, which was a naive, hopeful thing. Because so much content gets produced that there are a billion search results for any topic, and no one could even… It would take your whole life just to go through the search results for one topic.
Daniel: Then of course, how we create algorithms that curate all of this content becomes the central topic. And so we see that with the YouTube recommendation algorithm and the Facebook newsfeed algorithm: massive machine learning, indexing technologies that can actually sort through that. Then the question is: what is their basis for sorting through the information and putting it in front of you?
Daniel: When we had broadcast, the TV couldn’t get a lot of information about me directly. It had to have a very generalized kind of marketing. It could tell more ratings or fewer ratings. But when I’m on my newsfeed, it can pay attention to what I click on, what I share, how long my mouse hovers over things. By the time I have liked 300 things on Facebook, the algorithm can predict what I’m going to like better than my spouse can, and then develop a psychographic model of me that is actually better than any kind of psychologist’s model of me could be.
Daniel: Then it curates all the content to my interest, in a way. But my interest in a very specific sense, because these platforms are businesses that have their own agentic interest, and their agentic interest is that they have a business model based on selling advertising, and they sell more advertising by maximizing user engagement, mostly meaning time on site, and also sharing and things like that.
Daniel: Most people don’t plan, “I’d like to spend six hours on Facebook today,” and their life would be better if they got off and went and did something else. Time on site is usually maximized by getting people to not be in their prefrontal cortex remembering what their plans are, but to get hooked somehow. How to put the content in front of me that will maximally hook me ends up being a combination of appealing to emotional triggers and cognitive biases.
Daniel: When there’s so much information, and I don’t know most of it, my relevance filters are going to pay attention to the things I already know are important, which is going to have me double down on bias. And the things that scare me or anger me are worth more of my time for kind of evolutionary, protective reasons. Since there’s so much content and there’s micro-targeted psychographic information about me, and it’s curating that feed uniquely to me, not even to a demographic profile, but an n-of-one optimization to maximize time on site for everyone, then everyone gets more biased and everyone gets more emotionally hijacked. So you end up in a world where you have not just two competing points of view, but increasingly more fractured narrative camps.
Daniel: All of whom have less shared Venn diagram overlap with each other, and everyone is both more certain and more outraged while simultaneously being more wrong, because the complexity is such that no one can really do epistemology. People believe ‘yes’ or ‘no’ on climate change or whatever the topic is, and almost nobody has tried to read all of the data and model it themselves.
Daniel: There’s a kind of epistemic nihilism of, “I can’t possibly really make sense of it.” So, if I can’t make sense of it, there’s almost more of an unconscious move to a tribalism of, “Who are the leaders that I want to be led by and who are the in-groups that I want to identify with?”
Daniel: This environment, this media technology in particular, I think is destructive to shared sense-making and to individual sense-making in a way, and at a scale, that is totally unprecedented in the history of the world. And it’s not that Facebook is trying to make people more radically left or more radically right. That’s a second-order effect. That’s an externality of it trying to maximize time on site, but using a profoundly powerful AI to do so that’s really effective at it, with very tight, empirical feedback loops.
Daniel: But as it’s doing that, and everyone is kind of getting more certain and more biased, it becomes easier for other actors who want to move people in a direction to just push them further in the direction they’re already trending. And so it becomes very easy for state actors, or non-state actors that have interests, to come and make the right become more right and the left become more left, by feeding them more of the stuff they’re already oriented to, going into the chat rooms and exacerbating certain types of dynamics. And so, the narrative and info warfare to turn the enemy against itself becomes radically easy in that information ecology.
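The engagement-optimization loop described above can be sketched as a toy ranking function. This is purely illustrative, not any platform’s actual code; every name, signal, and weight here is a hypothetical assumption. The point it makes is structural: the objective rewards per-user affinity (confirmation of existing interests) and emotional charge, and contains no term for truthfulness.

```python
# Toy sketch of an engagement-maximizing feed ranker (illustrative only).
# All fields and weights are hypothetical, chosen to show the incentive
# structure: predicted engagement, not accuracy, orders the feed.

from dataclasses import dataclass, field

@dataclass
class Post:
    topic: str
    outrage_score: float   # 0..1, how emotionally charged the item is
    novelty: float         # 0..1

@dataclass
class UserModel:
    # Built from behavioral signals: clicks, shares, hover time, likes.
    topic_affinity: dict = field(default_factory=dict)  # topic -> 0..1

def predicted_engagement(user: UserModel, post: Post) -> float:
    """Predicted time-on-site contribution for this user (n-of-one model).
    Affinity and outrage both raise the score; truth appears nowhere."""
    affinity = user.topic_affinity.get(post.topic, 0.1)
    return 0.5 * affinity + 0.4 * post.outrage_score + 0.1 * post.novelty

def rank_feed(user: UserModel, candidates: list) -> list:
    # Each user gets a unique ordering, curated to their own biases.
    return sorted(candidates,
                  key=lambda p: predicted_engagement(user, p),
                  reverse=True)
```

Under this toy objective, a high-outrage post on a topic the user already cares about outranks everything else, which is the "double down on bias plus emotional hijack" dynamic in miniature.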
Jim: Indeed. Indeed. Yeah. Let me respond to several different points that you made, which are good. One, you’re right. I’ve been doing this since 1980. I worked for The Source, the very first consumer online system. And I can tell you, we all thought we were doing great work for democracy: in the future, people would have access to all the information they need, and democracy would be better than it had ever been before. How naive we were.
Jim: On the other hand, I would suggest that there was a phase change that occurred in the economics of online that actually caused it to trend towards the direction that you so vividly outlined. Prior to 2004, 2005, most information sources on the internet that were of any quality or depth were paid. There was an alignment between the person that operated the service and the user: get you the most value as quickly as possible and get you the hell off, because they didn’t want to pay for your time online, which was also more expensive in those days.
Jim: Around 2004, 2005, the economics changed such that platforms and bandwidth became cheap enough that you could actually fully fund a service at scale by advertising alone. And suddenly the dynamic that you described came into being, where the business model was no longer in alignment with the customer, providing them as much value for as little time online as possible, but quite the opposite.
Jim: The revenue source was tightly coupled to time online, so now the platforms, think of Google, Facebook, Twitter, et cetera, have an overriding business incentive to maximize your time online. And as you say, you add in the new capability to do group-of-one micro-targeting on what hooks you the most, what hijacks your attention to keep you online for as long as possible.
Jim: Then to loop back with, I guess you’d call it dopamine hijacking, which is to provide the most vivid, the most insane. There’s been a lot of work showing, for instance, that fake news, which is intentionally designed to be high-impact, will spread five or six times farther than true news on a similar topic. And we all know how easy it is to get sucked into violent arguments on the internet rather than a thoughtful discussion about the pros and cons of a piece of literary fiction.
Jim: Again, these were not designed in the sense that someone said, “I think we should hijack people’s dopamine to make them feel bad about themselves and their neighbors, so that we can have a great business.” It went step-by-step in kind of a natural evolutionary way from the introduction of fully advertising-based products in the 2004 and 2005 timeframe.
Jim: And now we get to the next step, which I haven’t seen that much about yet. If we have a system based on dopamine hijacking… If you know anything at all about drug addiction, or not so much addiction but the adverse effects of certain illegal drugs, there’s the concept of dopamine exhaustion. Why do people who inject amphetamines day after day eventually go insane and see their lives fall apart? It’s because of dopamine exhaustion.
Jim: Dopamine no longer sends a meaningful signal. You build up a tolerance, and you end up in despair, essentially in depression. And if you wonder why things like suicides are on the uptick, and why drugs that speak to that kind of despair, like heroin and other depressive drugs, are on an exponential rise, it’s because one of the results of this kind of dopamine-hijacking platform is dopamine exhaustion and the related despair.
Jim: I thought I’d put those things in. And then again, just to point out and maybe name something: the fact that non-state actors and people who are operating in bad faith can use the same tools that evolved for advertising. I think about that as the affordances of the platforms. And again, if we think of the platforms as evolutionary contexts, one of the things that starts to shape what thrives in an ecosystem is the affordances, i.e., what can be done.
Jim: Once you have micro-targeting that was designed for rather mundane purposes, trying to sell you a stereo that you don’t need, or a slightly fancier version of ketchup or something, those same affordances can now be used in ways that were probably not anticipated originally, by non-state actors or state actors or political actors of any sort. Again, step back a little bit and think about the ecosystem and the evolutionary context that it provides.
Daniel: Yeah. It’s actually worth just taking a moment on this topic of hypernormal stimuli and dopamine, to simplify radically. Dopamine is a molecule associated with motivational networks, among other things, pattern matching and the things we would have motivations for: kind of “feels good, do it again” type dynamics.
Daniel: And so the things that give dopamine hits evolutionarily are things that also would confer selective advantage. In a natural hunter-gatherer environment, pre-agriculture, which is most of human evolutionary time, it was pretty hard to get salt, pretty hard to get a lot of fat, pretty hard to get sugar, because you had small amounts of berries and lean meat. So salt, fat, and sugar conferred evolutionary advantage because of caloric density and electrolytes to make it through famine, so there was an increased dopamine hit on those over, say, leafy stuff that was quite abundant but didn’t have the same kind of caloric density.
Daniel: Obviously salt doesn’t have caloric density, but the other ones do. Those people who worked hardest to get those things would have made it through more often, so there would also have been a selection for a kind of strong dopaminergic response to them. But that was evolutionarily useful in a time when it was very hard to get those things.
Daniel: Then we move forward to the modern environment, where we can make fast food that is just combinations of salt, fat, and sugar, run the experiments for maximum palatability, and make it stuff that doesn’t require chewing too much. And we get obesity epidemics, because the dopaminergic dynamics haven’t changed, but they’re no longer evolutionarily adaptive. Now they’re actually anti-adaptive.
Daniel: So, we basically extracted the dopamine part from the associated evolutionary context and the nutrient part, and now that creates its own kind of catastrophic risk. And what fast food is to food and nutrition, porn is to sexuality and relationship.
Daniel: Take something that has a normal dopaminergic process associated with something that has evolutionary advantage for raising kids and bonding, et cetera, and just extract the hypernormal stimuli parts, devoid of any of the things that would actually be relevant to human life. The same is true for what social media relationships are to social dynamics and relationships in general.
Daniel: And we can see, from a kind of marketing perspective or from a business perspective, that every business owner wants to maximize the lifetime value of the customer and maximize the number of customers. Addiction is a really good way to maximize the lifetime value of a customer. And so, from the supply side, figuring out how to do marketing that manufactures demand, even to the point of manufacturing addiction, is a straightforwardly profitable thing.
Daniel: We can see that a McDonald’s or a Hostess or a Philip Morris or whatever could do that well in the domain of chemistry. But now we have dopaminergic hits that are just as powerful as that, that are photon-mediated rather than chemically mediated, can be sent out to everyone, and can be personalized to each person based on empirical feedback. And we can start giving them to kids with the screens that they’re looking at and touching as children, without controls of the kind where you have to be 18 for a cigarette or 21 for alcohol or whatever else.
Daniel: I think understanding our susceptibility to hypernormal stimuli, and the perverse incentives to optimize hypernormal stimuli, matters. Each company on its own optimizes hypernormal stimuli, but then a company that is a curator of other things, whose motive is to maximize your engagement with all the things that are maximally sticky for you, maximizes your engagement in total. And really get that the kind of machine learning AI that’s running the Facebook algorithm or the Google-YouTube algorithm is a more powerful AI than the one that beat Kasparov at chess, and that it is maximizing for hypernormal stimuli across every axis it can in the platform.
Daniel: It’s an important piece of context to understand, and for us to really think seriously about: how do we, as individuals and parents and whatever, stay cognizant of that and create some resilience and protection against it, factoring in how profound it is? And as a society, how do we start to remove the perverse incentive for hypernormal stimuli? Because when we talk about what a healthy civilization is, what the right indices are, we know it’s not just GDP per capita. I would say one of the inverse indices is addiction. The more addiction a society has, the less healthy it is writ large. And so this is a meaningful part of the conversation.
Jim: Yeah. I think that’s a very deep insight. I don’t think I’ve ever heard it described quite that way, but a measure of addictive behavior across all modalities is actually kind of a measure of anti-sovereignty.
Daniel: Yeah, exactly.
Jim: That’s actually deep. I’m actually going to think about that for a second, at least in the background. But while I’m processing it in the background, I also want to go in a slightly different direction. Which is, one of the results of all this is fragmentation and destruction of all authority, or at least any consensus about authority, and its replacement with nothing, so far at least. But it’s also worth remembering that the good old days weren’t necessarily so good. Remember there was a book called The Best and the Brightest, about the early smart people that worked in the Kennedy and Johnson administrations.
Jim: Of course, the title was meant to be ironic, because those best and brightest led us into the idiotic quagmire of the Vietnam War. One of the problems with too much agreement is not enough checks and balances in our sense-making, and people who think they are the best and the brightest can march us right off a cliff. I think that’s worth keeping in mind too. So, how to deal with this new emergent world where there is no authority and it’s been replaced by nothing? We probably don’t want to replace it with a single definitive authority; the status quo that was so powerful in the heyday of broadcast, say 1965, seems as close as any to that high-water mark.
Jim: Let’s see, what other comment do I want to make on that? Oh, the other is that, of course, this network world that we’ve created, or that has emerged, and I can continue to say that nobody planned this thing, it just emerged, co-evolved in a game-theoretic sense with a bunch of players that were operating around maximizing money-on-money return in the context of what is technologically and economically possible, has resulted in a lot of good things. We now have high-dimensional exploration of the design space of possible alternative civilizations.
Jim: People I have on my show: the peer-to-peer network people. In fact, I have Michel Bauwens coming on again tomorrow for the second time. Regenerative ecology people, political meta-modernism, like Hanzi Freinacht, who I’ve had on three times. Actually, we did the third episode last week and it’ll be out in a couple of weeks. Even our Game B project, which both of us have been involved with at various times. So we should also always keep in mind that this wide-open information ecosystem provides a substrate for good as well. So when we’re thinking about interventions to down-regulate the bad, we also have to make sure that we don’t eliminate the good at the same time.
Daniel: Okay. Yeah, this is very interesting. I’ll address the authority part first, and then the things getting better and worse and how we’d address that. The reason we don’t want a single monolithic authority on what is true is the same reason we don’t want a single top-down world government: because we don’t trust anybody with that much power, and we shouldn’t.
Daniel: And so, one of our best ways of dealing with the corrupting nature of power is to at least keep it in check with other powers. This is one of the reasons why free speech issues are so tricky. Should speech be absolutely free, meaning we don’t bind any kind of speech? We know that libel and slander and yelling “Fire” in a crowded building and fake bomb threats are problematic enough that we actually want to regulate them in some way. But what about the environment in which the original considerations around free speech were formed?

Daniel: In this country, that was a time when at most, if I was going to say something that was not true and dangerous, I might be able to have a couple of hundred people hear it in a town hall, and somebody else could get up and talk. I couldn’t have it scale to millions or billions of people through kind of viral-type dynamics.
Daniel: And so, in a situation where the things that actually appeal to bias and emotional hijacking get up-regulated in a way that they never could have been before, based on the algorithms and the platforms, free speech has such a different consequentiality around wrong information. So it’d be very tempting to say, “Okay, well, we should have fact checkers,” or something like that.
Daniel: But then the problem becomes, who is the arbiter of what is actually true and what is not true? And is there anyone that we would actually trust with that power?
Daniel: A way that I think about it, going back in time as you were mentioning, to previous administrations where a government authority was the arbiter of truth in an area, or academia, or further back when it was the church, is that whenever something becomes the legitimate authority on truth for a topic, it’s extraordinarily powerful. Because what everyone else thinks is real, which is the basis of how they’re going to behave, is actually at the bottom of the stack of power.
Daniel: And so, even if a legitimate authority emerges rightly, because it’s actually earnest and doing better empiricism and whatever, as soon as it starts to get that power, there will be maximum incentive for all of the power players to try to corrupt it and influence it in various ways. Which they usually can, because which science gets funded is based on someone who has funds putting money into something that will continue to support or advance their having funds.
Daniel: And so, even if, say, a piece of science is technically accurate, it’s not wrong, it might be that only certain topics within a domain that have more ROI associated get funded, and other ones don’t. For instance, patentable small molecules in pharma compared to peptides or biologics or plant-based things. And so the preponderance of research doesn’t actually map to the overall space well. So even things that are true can still be misrepresentative or misleading. This problem of the legitimate authority within an economic game-theoretic environment will always get captured or influenced to various degrees. And so, how do we address that?
Jim: Yep. I think that’s getting damn close to the center of the problem. As people who listen to the podcast know, I consider myself a Madisonian. James Madison, the somewhat neglected, but in my mind, deepest of all the Founding Fathers, who set up our system of checks and balances. Probably not quite the optimal system for the 21st century, but his fundamental belief was, sooner or later, you’re going to have bad people capture levers of power and you better build that into your design or you’ll regret it.
Jim: In the same way, when you talk about one-world government, a number of my friends think we should have a world government. And I say, “Ah, when we have five planets, then I’ll be okay with having a world government. So if one of them fucks up, there’ll be four others to recover and to provide alternative models for each other.” Again, very, very, very important design characteristics.
Daniel: I want to just say one thing on the one-world government: we need to have governance. I’m going to separate governance as a process from government as an established top-down enforcement of rule of law with a monopoly on violence.
Daniel: We need to have governance at the level that we’re having effects. Meaning, we have to be able to actually make sense of the effects we’re having and factor that into the choices that we’re making. When we have planetary effects on the atmosphere and the ocean, et cetera, but we don’t have planetary governance, then we just get multipolar traps. Where, okay, we don’t want to fish all the fish out, but if we can’t make an agreement that China or somebody else is also going to follow, and they’re going to get ahead economically, and the ocean is still going to get ruined, and they’re going to use that economic benefit to damage us, then not only will we not make an agreement to manage the commons, we actually have to race to fish all the fish out faster than they do. Or make AI weapons faster than they do, or whatever else it is. It’s exploitive.
Daniel: And so, not having global governance when we’re having global effect leads to catastrophe, but having global government of the types that have only ever become corrupted, and then as a result broke down, also leads to catastrophe. And so, we need something that is different than either of those things that have been imagined so far.
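The multipolar trap Daniel describes has the structure of a classic two-player commons game. The following is a minimal sketch of that structure; the payoff numbers are illustrative assumptions, not anything from the conversation.

```python
# A toy model of the fishing-commons multipolar trap. Payoffs are
# (row player, column player); the specific numbers are hypothetical.
# "restrain" = honor the agreement; "exploit" = race to fish the commons out.
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),  # commons preserved, shared benefit
    ("restrain", "exploit"):  (0, 4),  # exploiter gets ahead, commons ruined
    ("exploit",  "restrain"): (4, 0),
    ("exploit",  "exploit"):  (1, 1),  # race to the bottom
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximizes my payoff given the opponent's move."""
    return max(("restrain", "exploit"),
               key=lambda my: PAYOFFS[(my, opponent_move)][0])

# Without an enforceable agreement, exploiting dominates no matter what
# the other party does. The trap comes from the structure of the game,
# not from the malice of the players.
print(best_response("restrain"))  # exploit
print(best_response("exploit"))   # exploit
```

The point of the sketch is that both parties end up at the (1, 1) outcome even though (3, 3) is better for everyone, which is exactly why governance at the level of the effect is needed.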
Jim: I think that’s pretty much bang on. Or we need to be very clever in creating emergent networks that essentially produce coordination without actually having mechanisms of governance. That’s a fairly subtle job. But as an example, I think you actually mentioned it somewhere earlier. Suppose China doesn’t conform to carbon neutrality and the rest of the world does, the rest of the world could use massive tariffs on implicit carbon to stifle trade from China until they do come along. So there’s a way to essentially set up a signaling system in which we get the coherent behavior without actually having formal institutions or mechanisms of governance. Again, that’s at least one area to think about a little bit.
Jim: We talked about free speech. This is something we really do have to think about. Again, as a Madisonian, there’s a reason the First Amendment is the first one in the U.S. constitution. I do believe that free speech is hugely important, and that giving large powers the ability to suppress speech is dangerous.
Jim: But nonetheless, I think there’s at least one line that I think most of us can agree on. Unfortunately, this is what’s propagating most rapidly now on the nets. And that is speech that is made in bad faith. I actually remembered something that you said about what is good faith, and then I’m going to read this back to you and maybe you can react to it. That there is a correspondence between the signal that you’re communicating to me and what you believe is true.
Jim: That strikes me as a good definition of good faith, and yet we have people like the Russians meddling in our elections or the oil companies putting out false narratives about climate change, which seem pretty clearly to be bad faith discourse. It strikes me that bad faith discourse is to the memetic sphere what pollution is to the ecosystem, and that if there were a way to detect bad faith discourse, that would be the first line to draw.
Daniel: Yeah. I mean, yes. And it’s tricky. I would say that the information ecology is polluted, both intentionally, which is what you call bad faith discourse, where someone is sharing something they know isn’t true, and unintentionally, where people are sharing something they think is true that’s just really wrong. Both of those end up damaging the information and epistemic commons, but who’s going to then be the arbiter of what is actually true, and how do I know what someone else’s actual intent was?
Daniel: Maybe if I find a whole paper trail that shows that that person knew the thing wasn’t true and was setting the thing up as a psyop, sure. But oftentimes, it’s very hard to prove intent, especially when the actors are hidden behind several layers of sock puppets. So now we have to say, “Okay, well, is it about a judge or some kind of centralized authority that should be able to tell if something’s bad faith or not, or is it about increasing the collective intelligence that is aware of and sensitive to these things where everyone is doing it and those things simply don’t trend in the same way?”
Daniel: This starts to get into the kind of answer and theory of change that I’m particularly focused on these days, which is: we have this system, a republic or a democratic-type system of participatory governance, that wants everyone to be able to participate in governance rather than just be ruled by one or a few people. If we look from the beginning of what we call civilization, and we exclude Dunbar-number tribes because those are a very different phenomenon, very few civilizations from the beginning of civilization till now have been republics or democracies. They’ve almost all been feudalistic or autocratic of some kind.
Daniel: That actually makes sense, because one guy ruling everything or some very small number of people that can talk, coming up with a consensus and then being able to rule, is way easier than getting a huge number of people, mostly who are anonymous to each other, to all actually be able to make sense of the world and coordinate. A large number of people coordinating is actually a very expensive, tricky thing. And from the way I see it throughout history, the few times democracies emerged, they emerged following cultural enlightenments that had a few things in common.
Daniel: When we look at the Athenian democracy coming out of the Greek Enlightenment, stoicism plus the Aristotelian school, a few things had caught on to the place where they had a cultural value around education. That everyone would learn formal logic, and everyone would learn rhetoric and history and those types of things. Plus the kind of stoic culture, where they’re all learning emotional regulation and the Socratic method, where they’re learning how to take each other’s perspectives, debate any side of a conversation.
Daniel: Well, if I have a bunch of people who are trained to be able to assess base reality clearly on their own, they have emotional regulation so they are not as susceptible to emotional and cognitive hijacks and group think and have the courage to disagree and things like that, and they can take each other’s perspective and they’re actively seeking to, well, those are the prerequisites to something like a democratic system being able to emerge. Because those people can have a good quality conversation about shared sense-making, recognize that some compromises to agree are better than warfare with each other and come up with solutions. So collective choice-making emerges out of the collective sense-making, meaning-making and conversation ability.
Daniel: Our country, similarly coming out of the post-European Renaissance Enlightenment phase, was the idea of Renaissance men, Renaissance people who could have expertise across a lot of topics, not just be specialists. Because specialists across different domains have a hard time being able to communicate really effectively towards governance that requires looking at a lot of those things, but the idea that we could become Renaissance people, that we could all have empirical capacity, the scientific method and the Hegelian dialectic, that we could hear a view and then seek the antithesis to that thesis and then seek a synthesis, some higher order reconciliation, that again, gives rise to the possibility of participatory governance.
Daniel: You can see when you read the documents and the letters of the Founding Fathers, and of course, there was a lot wrong with the Athenian democracy and a lot wrong with our country that had genocide and slavery as parts of its origin. But the whole world had genocide and slavery as parts of what were going on at the time, and it was at least moving in the direction of increasing participatory-style governance. The thing that the Founding Fathers talk about so much is the need for very high quality universal public education and very high quality fourth estate that’s independent. Or news as the prerequisites of democracy.
Daniel: George Washington said, I’m not going to quote exactly, but it’s something to the effect that the single most important goal of this government should be the comprehensive education of every single citizen in what he called the science of government. I think the science of government is such an interesting phrase, because we’ve separated science and the humanities so formally, and the science of government would be history and game theory and political theory and the things that people need to know, the shenanigans that happened, so that they can prevent them.
Daniel: Franklin said, “If I could have a government without news or news without a government, I would take news. Because if the people really know what’s going on, they can self-organize and overthrow a government. If the people don’t know what’s going on, they can only be captured.”
Daniel: Our public education, or education of any kind here, in the kinds of civics that people would need to really understand what’s happening in government and understand regulatory capture and be able to bound it, is obviously close to non-existent. And the news has been mostly captured by economic and political interests. So there is no chance for a bunch of people who identify as being in almost tribal warfare with each other, who aren’t sense-making reality well, who don’t understand government, who don’t really understand the markets, who only have pejorative straw men of each other and don’t seek each other’s opinion. Those people can’t do a republic or a democracy. So it will simply devolve back to an autocracy, which we see happening.
Daniel: The way I think of it is like, if there’s a bodybuilder who has a huge amount of muscle, the moment they stop working out, they start losing it because it’s very expensive metabolically to keep that much muscle. They have to kind of keep working at it.
Daniel: It’s very expensive to keep an entire population comprehensively educated and actively engaged. Once it seems like the government’s working well and a generation passes and the kids didn’t go through the pain of the revolutionary war and the grandkids didn’t, it becomes very easy to just get engaged in your own stuff and not keep participating in the governance. Then, you stop having a government of, for and by the people and you start having a class of people that do government.
Daniel: When the people stop checking the state, the state stops being able to check the predatory aspects of the market, which is what it was really intended to do, and then the market ends up capturing it. You get regulatory capture and then, rather than a liberal democracy, you get a kleptocracy that eventually becomes an autocracy.
Daniel: That’s what I see that we have right now. And so, do I trust any particular authority to arbitrate truth or good faith or whatever right now? No. Do I trust a collective intelligence that’s actually increasing in its authentic intelligence? I would trust that much more. It’s an expensive, hard proposition, but I don’t see any other good choices.
Jim: Yeah. I think I agree with you. A key part of my own thinking that’s been formulating over the last year or two is that, as we talked about earlier, we live in an era of scale and complexity which is completely unprecedented in human affairs. I think it’s reasonable to assert that the combination of scale and complexity has made the ability of an individual person to make sense, no matter how smart and educated they are, inadequate to the task at hand.
Jim: I read a lot. I know a lot. I’ve been through a lot, but I don’t feel myself competent at all to make certain kinds of decisions. Nor, in this era of informational nihilism, is it easy to find who the right people are to make these decisions. Though that’s less true: I mean, if you do some work, you can find people who actually know more than you do.
Jim: And so, I’m wondering if the Founding Fathers’ enlightenment view that we can bring everybody up to the level to be able to make good sense in high complexity and high scale is realistic, or whether we ought to be thinking about new institutional structures.
Jim: I’ve been informed quite a bit in the last year by the writings of Hanzi Freinacht in his political metamodernism, two very good books, The Listening Society and The Nordic Ideology. He makes the distinction between hierarchical complexity, you can think of that as kind of mental capacity, and code, which is our institutions and operating systems. He’ll admit, when you push him, and I’ve pushed him some in interviews, that hierarchical complexity can only be moved a little bit in any one generation. While code can be moved quite a bit very substantially. We have things like the French Revolution or the Russian Revolution, where totally new codes were developed. Not necessarily always for the better, but code is much more malleable than hierarchical complexity.
Jim: And so, maybe in addition, because I do agree we absolutely have to upgrade the education and thinking tools for people, might we not also have to think carefully about new institutions that take into consideration the fact that we’re dealing with unprecedented complexity and scale?
Jim: What I’ve talked about a fair amount, and I continue to believe might be an answer, is so-called liquid democracy. Where, instead of myself having to become a good enough expert on 30 different domains to make some form of decision about it, I can proxy my vote to someone who I believe knows more than I do and is aligned with me on values. And then, under liquid democracy, that person can re-proxy the same way.
Jim: And so, the preponderance of our decision-making power, at least in the political realm, gets concentrated towards more informed people, and yet people who are still aligned with us on values. Of course, the other nice thing about liquid democracy is, to the degree you believe that one of the people holding your proxy is no longer doing a good job for you, you can change your proxy at any time. Or you can reclaim it for just a single issue, if you want. While I may proxy my defense vote to my uncle who was an Air Force colonel, I may choose to hold for myself the question of whether we should go to war in Iraq or not. I’m thinking that code may be more malleable than actual individual human hierarchical complexity.
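The liquid-democracy mechanism Jim describes can be sketched as a small delegation protocol: per-topic proxies that can be re-delegated, revoked, or overridden on a single issue. The class, method names, and data model below are all illustrative assumptions, not any existing system.

```python
# A toy sketch of liquid democracy: topic-level proxies with revocation,
# transitive re-delegation, and per-issue direct-vote override.
class LiquidDemocracy:
    def __init__(self):
        self.proxies = {}        # (voter, topic) -> delegate
        self.direct_votes = {}   # (voter, issue) -> choice

    def delegate(self, voter, topic, delegate):
        self.proxies[(voter, topic)] = delegate

    def revoke(self, voter, topic):
        self.proxies.pop((voter, topic), None)

    def vote(self, voter, issue, choice):
        # A direct vote on a single issue overrides any standing proxy.
        self.direct_votes[(voter, issue)] = choice

    def resolve(self, voter, topic, issue, seen=None):
        """Follow the delegation chain until a direct vote is found."""
        seen = seen or set()
        if voter in seen:        # delegation cycle: treat as abstention
            return None
        seen.add(voter)
        if (voter, issue) in self.direct_votes:
            return self.direct_votes[(voter, issue)]
        delegate = self.proxies.get((voter, topic))
        if delegate is None:     # no proxy and no vote: abstention
            return None
        return self.resolve(delegate, topic, issue, seen)

ld = LiquidDemocracy()
ld.delegate("jim", "defense", "uncle")   # proxy defense votes to the colonel
ld.vote("uncle", "budget-2024", "yes")
ld.vote("jim", "iraq-war", "no")         # but hold the war question personally
print(ld.resolve("jim", "defense", "budget-2024"))  # yes
print(ld.resolve("jim", "defense", "iraq-war"))     # no
```

The `seen` set matters in practice: since anyone can re-proxy, delegation chains can form cycles, and a real system needs a rule (here, abstention) for that case.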
Daniel: In terms of exploring: how much can we change at the level of the individual, how much at the level of the communication protocols for collective intelligence, and how much at the level of the actual types of shared choice-making systems, which might be different than just current representative democracy?
Daniel: What we’re talking about here as the context, because we’ve set the context of the damage to the information ecology, but we’re also talking now about the increased complexity of the issues. I think we should actually just build out the understanding of that problem a tiny bit more, because it affects the way I think about the solution, if that’s all right?
Daniel: Because you brought this up at the beginning and I wanted to come back to it. So partly, it has to do with scale, for sure. Which is the Founding Fathers thing here. There was an idea that everybody could go to the town hall and discuss the issues that were mostly geographically close to them, that everyone could sense base reality on their own. And you could have a small number of people in the town hall that, for the most part, the people who had something to say could actually say something about it.
Daniel: When we moved to a place of, most of the issues are globalized, whether we’re talking about finance or supply chains or environmental issues or geopolitical ones, obviously, that’s a level of complexity where people can’t depend on their own base sense-making and they can’t process that much information in second and third order effects and confounding effects as well.
Daniel: One topic I’ll just enter here, because we haven’t discussed it yet, is the concept of hyper objects. Which is connected to, but a little bit distinct from, just raw complexity. That we evolved to be able to apprehend and understand objects that were available to our senses.
Daniel: When I’m talking about something like climate change, I can’t see climate change. I can’t taste it. I can’t hear it. I can only conceptualize it. I can see a drought. I can see a fire. There’ve always been droughts and fires, but to understand climate change, I have to think about some kind of statistics and complex mathematical models on the droughts and the fires. I also can’t see world hunger. I can see a hungry person somewhere, but I can’t actually directly apprehend world hunger. I can only do it conceptually, which also means I don’t have the same felt visceral experience of things. And the same is true with AI risk or biotech risk or nanotech risk or the nature of markets; I can’t actually directly see or apprehend them. And so, we have a world where the most influential things are mostly not apprehendable to the senses. And so, we have not just the issue of whether we can have a better felt sense, intuitions that are more right, a kind of sensibility of what’s likely true as well as formal analytic thinking, so that we can apprehend hyper objects. But then, can we get the connection of lots and lots of hyper objects: markets interacting with social media environments, interacting with climate, and things like that?
Daniel: And I think it’s just important to state that, to have a sense of how different the problem space of the things that we need to understand and think about is now, compared to the evolutionary problem space most people had to think about. And how different the kinds of collective sense-making and choice-making need to be. And then, I’ll also step back and say, even in environments where we weren’t dealing with as much in the way of complexity and hyper objects, our sense-making was usually still adequate to solve local problems in ways that very often caused other problems somewhere else or down the road. And an important part of understanding this is the model that we can look at all of our problems as a result of either conflict theory or mistake theory. Conflict theory, meaning we know we’re going to cause harm somewhere, and we’re doing it anyway for game-theoretic purposes. Mistake theory, as you were mentioning with Facebook: we didn’t know this was going to happen, and it was simply an unintended externality.
Daniel: So we have to deal with both moving forward. How do we remove the basis of conflicts so no one is motivated to do things that will knowingly harm something else? And how do we also deal with the mistake theory side, being able to anticipate externalities and internalize them into the design processes better? On the mistake theory side, you can go back to: we didn’t necessarily know that the development of stone tools would lead to us becoming apex predators, that we were increasing our predatory capacity faster than the environment could become resilient, which led to us starting to drive species extinct at scale, and then become an apex predator in every environment and start the Anthropocene.
Daniel: Or you can come to a closer one and say, when we were making the internal combustion engine to solve the problem of the difficulty of horse husbandry pulling buggies, we didn’t anticipate that in solving the horse husbandry problem, that we’d be causing climate change and oil spills and wars over oil and the U.S. petrol dollar. And so typically, if we’re going to make a solution that solves a problem, the solution has to be somehow bigger than the problem, faster, whatever, to be able to scale and overtake it to really solve the problem.
Daniel: But if we define the problem in a narrow way, in terms of one or two or some small number of metrics, and the solution we’re trying to create has a first order effect on a small number of known metrics but might have second and third order effects on a very large number of unknown metrics, we can see that the safety analysis is actually harder in kind than the solution analysis.
Daniel: And this is something that we actually have to start factoring is, I don’t just need to understand the problem I’m trying to solve. I have to understand the problem embedding landscape of the adjacent problems and the adjacent topics and the adjacent meaningful things. And I have to start thinking through second and third order effects better, whether I’m designing a proposition or designing a piece of technology or designing a company to solve a problem, to say if that solution’s effective, what other things is it likely to do and what harms could that cause to complex environments? And then, how do we actually factor that into the design process?
Jim: Yeah. All true. But how the hell do you get a human to actually be able to process at that level? And again, the humans aren’t going to get smart enough to do that. None of us can do that. Right? And so, the answer has got to be, as you keep pointing to, some form of collective sense-making and collective decision-making that’s high fidelity and in congruence with reality, I think. I mean, I don’t feel any sense that education plus a better child rearing environment at home is going to get anybody to be smart enough to deal with that constellation of issues.
Daniel: Well, so notice, as you said, that you and I are probably more oriented to try to sense-make the world than the average population, and have better educational resources, but still can’t make sense of everything. But you also notice that when we talk, I know that you know things that are important that I want to know and don’t, and vice versa, so I’m actively seeking to try to understand your opinion. And so, there’s an interesting balance that is important to understand; that is, one of the dispositions beyond empiricism that orients towards the capacity for collective sense-making to emerge. I’m neither oriented toward agreeability nor disagreeability with you, because I’m actually oriented toward respect relationally, but then also respect for reality objectively. So there’s an intersubjective and an objective respect. My objective respect for reality means I’m not going to agree with you if you’re wrong, but I’m also going to listen to you where you might be right.
Daniel: And so, I want to listen in good faith, seek to understand what you understand, push back where that’s relevant, not because I have a disagreeable personality and I want to push back or have an agreeable personality and I want you to agree, but because I care about what is true enough, and I’m in good faith with you enough, that we can seek better understanding together. So the enlightenments that I’m talking about are not just people being able to have better objective sense-making, but also better intersubjective capacity that leads to higher quality conversation so that collective intelligence can start to emerge.
Daniel: So you were mentioning being able to find somebody that knows… And first, let me just say, before we discuss a perfected system, we can just discuss how to stop some of the most egregious things about the current system. Current system, people are radically certain about things that they have no basis to be certain about. And it’s actually their false certainty that causes most of the problems. If they simply acknowledge that it was too complex and they didn’t know, they at least wouldn’t be going to war over dumb stuff. So to simply be able to have people be like, “I don’t know yet, but I’m interested. And I’m going to try to seek to know better,” would actually slow the rate of breakdown tremendously.
Jim: Yeah. I believe that the phrase “I don’t know” is one of the most important phrases in human collaboration. It’s interesting, when you read scientific papers, the phrase “it is not yet known” appears at high frequency, and the better the science, the more frequently it appears.
Daniel: Right. This is one of the things that is a value I really want to have come across: people having a mature relationship to the topic of certainty. And I think it’s easy to get it wrong in both directions. A disposition to want to be more certain, because I think the certainty gives me security or makes me seem smart or something like that, just makes us bad sense-makers and makes us dangerous. And so, the fear of uncertainty and the desire for excessive certainty, excessive meaning a desire for certainty that has us jump to it before we’ve done the right epistemic process, is dangerous. But the compensation, a kind of postmodern trend that treats any certainty as probably some kind of imperialism and holds only uncertainty as a virtue, also doesn’t give a basis to act.
Daniel: So what I want to say is, I’m comfortable being certain, and my certainty will never be 100%, but it might be 99.9%, depending upon how much experimentation I’ve done relative to the thing. The fact that it’s never 100% means that I’m always open to the possibility that there’s data I didn’t factor that’s important. But I also need enough certainty to act where inaction is also consequential. So factoring that action and inaction both have consequence, and I have an ethical binding both to the quality of my action and the results of my inaction, then I want to say, well, how do I get enough confidence to make the right choice for this particular thing, factoring the consequentiality of the thing? So I want to be comfortable with uncertainty, and I want to be comfortable with relatively higher certainty through the right epistemic process that informs right action.
Jim: Yep, absolutely. And I would also say that the very important one that you mentioned, that most people do not build into their sense-making and decision-making, is the cost of inaction. I had a long and interesting and fun business career, and I think I was more successful than most. And one of the reasons was, I very carefully calibrated about that. How much information and how much certainty did I need to pull the trigger? If this was something where, if you don’t decide you’re fucked, then you decide when you’re at 40%, because the alternative is worse. There were many times when you’re at 52% you should pull the trigger because, frankly, the downside isn’t that bad. Maybe you’ll lose some money. So what? If the stakes were higher, maybe at 60%.
Jim: Anyway, calibrating how much certainty you need to pull the trigger is one of the great skills of decision-making, which is not taught by anybody that I’ve ever seen. I fortunately was able to pick it up by getting my MBA with real bullets, so to speak, on the real business battlefields. But if we made that part of our education and epistemology, that by itself would be huge, right? And of course, it would totally attack the fundamentals of American politics, in which it’s always the right thing to do to kick the can down the road and not confront it, no matter what the costs of kicking the can down the road are.
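The calibration Jim describes can be written down as a simple expected-value comparison: the confidence you need before pulling the trigger depends on the payoff if you’re right, the loss if you’re wrong, and the cost of doing nothing. The function and all the numbers below are illustrative assumptions, not a formula from the conversation.

```python
# A toy decision rule: act when the expected value of acting beats the
# (negative) value of inaction. All payoffs are in arbitrary units.
def should_act(confidence, gain_if_right, loss_if_wrong, cost_of_inaction):
    """Compare expected value of acting now vs. the cost of not deciding."""
    ev_act = confidence * gain_if_right - (1 - confidence) * loss_if_wrong
    return ev_act > -cost_of_inaction

# Low stakes: at 52% confidence "the downside isn't that bad," so act.
print(should_act(0.52, gain_if_right=10, loss_if_wrong=8, cost_of_inaction=0))   # True

# "If you don't decide you're fucked": even 40% confidence beats costly inaction.
print(should_act(0.40, gain_if_right=10, loss_if_wrong=10, cost_of_inaction=5))  # True

# High consequence, low reversibility: even 60% confidence is nowhere near enough.
print(should_act(0.60, gain_if_right=10, loss_if_wrong=1000, cost_of_inaction=0))  # False
```

The third case is the shape of the AI-deployment question: when the loss term is enormous and irreversible, the confidence threshold implied by the same rule climbs far above anything a quarterly incentive structure will wait for.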
Daniel: Yeah, no. This is a tricky thing: there’s almost a negative gradient problem here, where the people that do less good thinking come to certainty faster and then act faster. So if someone doesn’t want to make the wrong kinds of choices, so they want to actually do deeper sense-making first, and they take that too far, they’re just ceding control of the world to the people with the worst sense-making and the most action bias. And yet, we don’t want a multipolar trap where everyone just says, “Okay, I’m pretty sure I’m right, so I’m going to go for it.” So this is why that balance matters: there is responsibility for inaction as well as action, and different things have different consequentiality. Like, what confidence do I need that this is going to be a better recipe to experiment with how much spice I put in the food? Not that much, because at worst, I ruin the meal. But how much confidence do I need that a particular application of AI is actually a good one for the world?
Daniel: Well, it should be a fuck ton higher, confidence based on much better processes of looking at second and third order effects, because the consequence is so high and the reversibility might be so low. And this is where we’re in a situation where there are market incentives to be first mover, which means to move as fast as possible, in some of the areas of highest consequence, where we should actually be moving slowly and doing really good safety analysis. And yeah, that’s a problem.
Jim: Yeah, and unfortunately, it’s baked into the current operating system, which is hill-climbing incrementalism, right? Every player in the AI world has lots of incentives to make a move every quarter, at least, that shows they’re making progress, to drive their market cap or the world’s perception of their importance, et cetera. And usually, the moves aren’t that big. GPT-3, which is the buzz in AI at the moment, is a pretty good-size move, but it’s not huge; it’s incremental. And nobody stops to really think what a whole series of incremental moves over 10 years is going to result in. There is no meta-system to evaluate these kinds of risks and to keep people from playing the hill-climbing, game-theoretic game of incremental moves.
Daniel: Okay. This is interesting. Let’s look at the ascendancy, or return to power, depending upon the perspective you want to take, that China’s going through, and the kind of increasing fragmentation and descendancy you can see in the U.S. A descendancy in coherence, at least. Term limits make long-term planning quite difficult. And we know why we like term limits: we didn’t want people to get too much power and corruption and start thinking dynastically, rather than government of, for, and by the people rotating. But it also creates an incentive for people to do stuff, whether for their reelection, or simply for their new economic opportunities on getting out, or the way that they’re judged, where they’re only going to be oriented to do stuff that will create positive return within the time of their term limit. Obviously, it’s an even shorter time limit for directors of corporations that have quarterly profit cycles.
Daniel: And so, a dynastic monarchy that is thinking not only about the person’s whole life, but about their kids, can actually do better long-term planning. And if I have left-versus-right differences, where whenever anyone does something for four years, the next four years somebody will undo it, and the internal coordination costs of the disagreement are so high, it’s also very hard to unify enough to do stuff. Which is why the U.S. has had a hard time fixing its infrastructure since the ’50s, and China has built high-speed trains all around the world. And so, for a lot of reasons, we can say, wow, dynastic monarchy is just a better system, because getting a lot of people to coordinate is really, really hard. And having someone that can do long-term planning and actually implement with some coherence, because there’s not much internal dissent, seems like a better system. And that system can check its own AI. That system can do a lot of things.
Daniel: Whereas the system where all the companies are competing against each other in a quarterly way, and the parties are competing against each other, seems to be stuck in a situation governed by multipolar traps. I actually know a lot of people who’ve thought about multipolar traps a lot whose only answer is that we return to monarchy, and most of them think a global monarchy run by a benevolent AGI. And so, I think right now it’s pretty easy to see that if the U.S. continues with the kind of left-right and all the other divides that it has, its shaping of the world continues to decrease in relevance. Probably China’s continues to grow, and we see that not just China, but Turkey and Venezuela and Brazil and a lot of places have moved to less participatory, more autocratic styles of governance. I think we lose the 21st century to that, because it’s more effective in a bunch of ways.
Daniel: And I think the only way to be able to have something like a republic or a participatory governance be more effective, is if the many coordinating with each other actually produces higher quality results than just a few being able to control the thing. And that’s only going to happen if it can harvest the collective intelligence. And there actually really is some high collective intelligence.
Daniel: And now this comes back to your metamodern question. Do I think that we can get the hierarchical complexity, the ability of people to process information, up? Yes. But even before that, can I start getting them a memetic immune system, so they aren’t just cognitively and emotionally hijacked? Right now, it’s mostly not even a question of whether they can do good epistemology; it’s that they’re just captured by narrative warfare. Can I simply get them a memetic immune system, where people start to notice how Russell conjugation and Lakoff framing happen? Yes, that scientific article said something, but the news article put spin on it. Can they notice how the spin’s occurring? Yes, that’s a true statistic, but it’s cherry-picked. When you look at all the other statistics, it doesn’t look like that same picture at all.
Daniel: And if people start noticing those kinds of info and narrative weapons and become inoculated enough that it’s not absolute lowest-common-denominator collective intelligence, that will make a huge shift. So we have to factor in that kind of memetic, emotional, and cognitive immunity. Then, with better epistemology for individuals and better orientation towards Socratic dialogue and Hegelian dialectic, seeking shared understanding, because people understand that coordinating, even imperfectly, is less bad than not coordinating at all, I think we start to get emergent collective intelligence, where more sovereign and intelligent people and better conversations start to produce systems of coordination through a bottom-up effect, and those systems of coordination produce a top-down effect that continues to incentivize that bottom-up effect better. And you get a recursive process between better systems of collective sense-making and choice-making and better development of individuals and their capacity to communicate with each other.
Jim: Hmm. Yeah. That’s a beautiful picture, but it’s got to fight hypernormal stimulation, right? How does it escape that trap?
Daniel: Well, that’s one of the immune systems that I’m talking about. How do we fight the trap of group identity? That’s a fucking hard one. There’s a courage required to not agree with the thing where my friends are saying silence is complicity, and you’re either with us or against us, to actually be willing to be honest and say, “I’m not sure. I haven’t looked at the statistics, I haven’t made good enough sense of this,” requires a kind of profound courage and epistemic commitment. The ability to mistrust my own certainty, this is one of the insights that really got me into this, was I was certain about things where I later realized I was wrong enough times that I became very dubious of my own certainty. And then I recognized, fuck, I’m clear where I think a lot of other people are wrong. And I’m clear where almost everyone historically believes stuff fervently that I’m pretty sure they were wrong about; flat Earth or which God is true or whatever.
Daniel: And I’m clear where I was wrong in the past, but there isn’t a single belief that I hold now that I can point to and say I’m probably wrong about, even though statistically, I’m probably wrong about most of them. And when I go forward into the future, there’ll probably be some I recognize. That asymmetry is really problematic. And this is what got me to start wanting to calculate what my basis for belief is and what the confidence margin is, and where I believe things more fervently because it’s a good story and I get to sound smart, or I get to be part of a group, or because it gives me security. And how can I start to have a psychology that is more independent of those things, so my belief system is not bolstering me identity-wise or existentially? So that’s a kind of psychological, emotional, memetic, social immune process that has to be part of the enlightenment we’re talking about.
Daniel: And so, basically, I’m saying participatory governance is hard. Autocracy is just easier for a bunch of reasons. Either we should start designing the monarchies we want to be part of, or recognize that participatory governance only ever emerged out of, and was successful as long as there was, a cultural enlightenment that made it possible. So we either have to re-drive a cultural enlightenment to be able to revivify participatory governance, or we should start steering the kind of autocracy we want. And the enlightenment we want to drive is not just a cognitive enlightenment. It includes reducing the susceptibility to hypernormal stimuli. It includes a value system around respectful engagement with other people and seeking their points of view. It’s all those things together.
Jim: Yep. Let me hit on a couple of topics and then I’m going to turn it over to you, because I know you’ve thought about this more than most: what are the next moves that need to be made? But first let’s hit on two other topics.
Jim: One, we talked earlier about bad faith discourse, and then you alluded to the fact that not only is there bad faith discourse, but there’s good faith discourse that’s just wrong. And I will say that our current evolved information infrastructure seems to have produced, in an evolutionary context, some very virulent forms of being wrong. Anti-vax being an interesting one, right? It’s like, what the fuck? A hundred people have died from bad vaccines in the United States since 1950. Millions have been saved. How could this even be an issue? And yet, evolutionarily, a memeplex has been created that has attracted millions and millions of people.
Jim: QAnon’s another interesting one, a classic example of an evolved artifact that, by any objective measure, just seems fucking insane. And yet, not only do millions of people believe parts of it, hundreds of thousands, at least, seem to have it as a main hobby in their life: trying to do the research, figure out the clues, et cetera. And so, something I guess I would point out, compared to 1965: you were not going to see either anti-vaxxers or QAnon on the CBS Evening News with Walter Cronkite. The gatekeepers of that era, if nothing else, were good at keeping absolute nonsense out of widespread public circulation.
Jim: Then the second point, which you just hit on, is tribalism seems to be what we fall back on when we’re so overwhelmed by an evolved information platform, which is driven by bad incentives plus complexity plus scale. And we now throw up our hands and say, “Fuck it. I can’t figure it out. So I’m team blue. Whatever team blue believes, I’ll believe.” “I’m team red. Whatever team red believes, I’ll believe.”
Jim: We’ve all seen those big posters with all the logical fallacies on them. I’ve come to believe that this epoch of tribalism has made one logical fallacy dominant over all the rest, and that’s confirmation bias. Everything they hear, and I would say, not me, I’m the one exception, but I say that laughing here, ironically, the mass of people who are in the tribes compare every fact or item they hear against the tribal take on it, rather than trying to look at something objectively and decide whether they should believe it or not. So, Daniel, what can and should be done starting tomorrow morning? What do we do?
Daniel: Should we start at the level of what individuals can do on their own? Or what we could do to try to attack these issues project-wise?
Jim: I would suggest a bit of both, and of course, best is where project-wise cycles back to the individual and then back to the project, but go with what you think are the priorities.
Daniel: Well, I want to just say one thing first, and this is a funny thing to say, but if we talk about anti-vax or QAnon or any particular view like that, I can really empathize with those. And I think it’s important that everyone can, because I think most people sense that there are a lot of things wrong with the world, moving towards catastrophically worse, that those who are in most power, both politically and financially, don’t seem to be doing a good job with, though they do seem to be increasing their own personal wealth or power or whatever else. So there’s a sense of corruption that almost everyone senses. It’s just that they want to fit it into very simplistic narratives, where there are good guys and bad guys, and they can be on team good. That kind of thing.
Daniel: So the right all think that the left’s news is fake, and the left thinks that the right’s news is fake. They each think that the other one’s politicians are corrupt. So the idea that politicians could be corrupt or news could be fake, well, that’s actually true. It’s just not true that it falls along those partisan lines. It’s more that you’ve got to understand that for anyone to ascend the stack of power, they had to do well at the game of power-
Daniel: For anyone to ascend the stack of power, they had to do well at the game of power, and there are a lot of things that doing well at the game of power involves. And so I think the idea that there are these institutional authorities and we should just totally trust them has pretty much broken. You were mentioning 1965. In 1965, the people who were alive paying attention to the news directly remembered World War II in their life experience, right? And that had a unifying basis, a patriotic and unifying basis, and an empirical basis: we had to really double down on science and technology to be able to win the war. But then there starts to be institutional decay, where very large numbers of the people alive and paying attention to the news now weren’t around in World War II. And they don’t have any real embodied sense of what breakdown in the developed world could look like.
Daniel: And so similarly, they have much more sense of the enemy that is near than the enemy that is far and those types of things. And then so we look at vaccines, for instance, it’s actually a bit of a complex topic, because if you’re making something that is a drug that’s intended to have long-term effects, as opposed to most small molecule drugs that stop acting after they’ve left your body, could they have long-term consequences? And if we’ve done mostly the studies on individual vaccines, and yet they’re given in schedules with lots of them together, do we really know what the collective effect of that much immune modulation long-term on lots of them together is? And if at the same time that they’ve been going up, polio and whatever’s been going down, but hygiene and antibiotics have also been happening during that time, and then auto-immune disease and other things like that seem to be going up, is the auto-immune disease and the infertility going up because of the vaccines? Or is it going up because of the glyphosate or ubiquitous environmental toxins or stress?
Daniel: Those are actually complex topics. And so it’s very easy to say, “All vaccines are bad,” or, “All vaccines are good, anti-vaxxers are stupid,” and neither of those is complex enough to actually make good sense of it. But the need to take a very strong position, because the other view is so bad and dangerous that we have to fight it, ends up being the tribalism and confirmation bias. And then people only have a pejorative straw man, or even worse, a villainized version of the other. They can’t even imagine how anyone could be stupid or bad enough to believe that thing. And of course, you can’t have participatory governance when that’s how you think about the other people you’re supposedly part of a republic with, or part of a civilization at all with. So where people start to doubt the institutional authority, there’s something good in that, but then oftentimes what happens is, rather than really learning the epistemology needed to make sense of it, they just jump to a new authority, and usually the new authority is whoever is telling them to doubt the other one.
Daniel: And so that’s inadequate. They’re not actually moving up vertically into more hierarchical complexity, they’re just shifting authorities. I just wanted to say that as a preface.
Jim: Yeah, I think that’s a good preface. And I will confess that I am a real knee-jerker on anti-vax. Have I done enough research to be absolutely sure about it? No. And so I will plead to a little bit of tribalism on that, when in fact it’s the only topic on which, at least when I was running the show over in Rally Point Alpha, I banned discourse. So make [inaudible 01:18:30] continue.
Daniel: And this is tricky, because let’s say a new person is coming into a topic that has been very well addressed, but they don’t know it. Do the people who’ve well addressed it have to go through it again every time? That’s actually a tricky topic, right, from the good-use-of-time point of view. So in terms of collective intelligence, if a group has done really earnest dialectic to come to a certain confidence margin on something, the path there, not just the conclusion, but the path there, should be left intact in some way that other people can walk. And then if someone has legitimate object-level critiques of the actual path there, or the conclusions, we should listen. And that’s, I think, how science can continue to evolve: where we can continue to critique our best understanding and have dissent around it, without having to repeat the things that have actually been done well many times.
Daniel: Okay. So your question, what do we do tomorrow at the level of the individual? Forgive me, I’m going to make a religious reference. The commandment in the Bible not to have any false idols: the way that I take that is very similar to the first verse of the Tao Te Ching. I think a lot of cultures had something like this, wisdom-wise, and maybe I’m just reading it the way I want to read it. So, no false idols: any model I make of reality isn’t reality. It’s me trying to info-compress the complexity of reality into some small number of variables and operators on those variables, where I can model the outcome with hopefully enough numerical accuracy that it’s useful. This is “all models are wrong, some models are useful.” The moment I make a model truth, I’m not in direct relationship with reality anymore; I’m in relationship with the simulacrum.
Daniel: And so, as for the idol of the models that I have of reality: I want to hold them as useful and see which ones are more useful, while never having any identity attached to them, never having any sense of sacredness attached to them, because I know that better ones will come along and I want to be seeking those. So if people can deepen their commitment to a direct relationship with reality, which binds them to a certain amount of uncertainty, and as a result be dubious of being over-devoted to any models of reality, that’s very helpful. It’s almost like a sacred oath at the foundation of it. And if that’s the case, that’s what will motivate them to be dubious of their own cognitive biases and emotional traps.
Daniel: And when they feel very outraged, and when they feel very certain, and when they feel a very clear repulsion to a group that seems obviously bad, or a very clear identity with an in-group, they should just be dubious of all of that, step back, and say, “What’s the chance, and what are the possibilities, that I’m being emotionally or cognitively hijacked and I’m not seeing the whole thing?” If people can just start to actually have a bias checker in themselves, as a value, as a discipline, because they want to be in better connection with reality and truth, and they know that, more than anything else, it’s their own misassessment of it they need to check, that’s very helpful. Then they can start to learn tools that will help them: “Okay, let me actually learn some of how narrative warfare works. Let me understand how Russell conjugation is used by the left and the right.”
Daniel: “Or how Lakoff framing is used in marketing and on all sides. Let me see how cherry picking of data works and how the funding of science works in a way that makes the preponderance of data not necessarily fit the whole picture.” As I start to learn that more, I start to be less manipulated by what occurs in the media environment. And also simultaneously, I want to seek dissenting views that seem earnest on everything. So when you were mentioning that you will seek people that know more than you about a topic, I will try to seek the people that I think know the most and who disagree. But who seem in good faith about their disagreement, because the dialectic between them, I have more faith in than I do in either of the positions singularly held.
Daniel: And so if I can see where the IPCC has a standard model on climate change, but both James Lovelock on one side and Freeman Dyson on the other side have done quite thoughtful analysis and come to different conclusions, I want to say, “Do I understand why each of them came to the conclusions they do well? Can I facilitate a dialogue between those perspectives to try to maybe have more epistemic flexibility and come to better understanding?” So these are just examples of a few things individuals can do on their own. One of the things that looks like is just obviously remove the social media apps from your phone so you don’t have a continuous drip of micro-targeting. And if you keep it on your computer at all, curate your own feed so that you’re using it as a tool more and it’s using you less. So unfollow all of the things that you’re automatically following that are not actually enhancing your sense-making, and then intentionally follow things that represent different views on every topic.
Daniel: So there are far-left, moderate-left, moderate-right, far-right, conspiracy-theorist, and other countries’ views on every topic I follow, because I want to get the parallax on all those views. One, because I want to understand why huge amounts of humanity are thinking those things, and two, I want to make sure I’m not missing anything where there’s partial signal in what they’re saying.
Jim: Those are good. Let me respond here a little bit. Let’s start with social media; you missed one, which I think is key. I had Renee DiResta on the podcast yesterday. She’s a researcher at Stanford on social media exploits, mostly by bad actors. And she pointed out the one thing that we could all do is ask ourselves three times before we share something, “Is this actually good for the world to share?” Right? Because what we read is important, but on social media we are also actors in the formation of the epistemic commons. And when we share something, or on a platform like Reddit we upvote or like something, we are shaping the commons. So at that point, think three times before you act at all. I would suggest that’s useful.
Jim: And now, here’s where we pivot to more institutional things. I would also like to comment that, okay, yeah, I understand that if we really want to think through climate change fully and understand built-in biases, where the error bars are, et cetera, it’s very good to start with Lovelock on one side, Dyson on the other, and a bunch of guys in between, process all that data, and come to some conclusion on what might be true and what the error bars actually are. But let me tell you, that’s a job over the pay grade of almost every single human being. Maybe if I did nothing else for two years, I might be able to do that. Realistically, I don’t see very many people being able to process high-scale, high-complexity issues in that kind of way, which pivots me to: what can be done institutionally, or as artifacts, or in groups, to accomplish the same thing, so that people can consume the result of a process like that?
Daniel: Well as you know, I’m part of a group that’s working on building a project to try to do some of this. And so I can tell you the thing that we’re trying to do that compared to the scope of the entire information ecology and the algorithmic bias on these major platforms, what we’re doing is actually pretty small and humble compared to everything that needs to happen. But it’s the highest leverage, easiest to achieve things I’m aware of as a starting place to then hopefully be able to have more stuff come. So for instance, let’s take that particular topic, the climate change one. There’s both, how do we make sense of the realities of climate change? Then there’s how do we make sense of the right choice for climate change factoring the adjacent issues, like U.S. China issues and other geopolitical issues, and market issues and things like that?
Daniel: So one of the things that we’re doing in this project, which is trying to upgrade and fix the media ecology and remove the effectiveness of the narrative and info weapons by pointing them out, because as we make the covert things overt, they have more ability to be resisted, is hosting not debates, but dialectic conversations between the best thinkers on topics who are earnest, who are willing to engage in a non-rhetorical, authentic sense-making process together, and who disagree, where the facilitator is both skilled in dialectical facilitation and also understands the topic well enough to be able to facilitate it between these people who understand it exceptionally well and represent the variance in views. And if someone is oriented to just want to win a debate or be rhetorical, we don’t invite them on. They have to agree to the dialectic process.
Daniel: And then we do things like: what do we all agree on? What is known, what is unknown? Do we agree on what’s known? Do we agree on what things are unknown? And where we have different weightings on the things that we think are known or unknown but have some probability, why do we have those weightings that lead to our different senses of things? And so can the listener who’s watching this see how the best thinkers think about it, see where the differences are, see where they should have some certainty and where there’s uncertainty, and get up to the best general sense of where the topic is at? We’re doing one of those right now with different experts on immunity, looking at what is likely to occur for COVID immunity long-term. So for topics that are both consequential and debated, we can facilitate a process like that. That’s one example.
Daniel: And so this does, like you said, make it where, if it’s beyond the pay grade of a person, we can make an institution that is doing that, which doesn’t have any other institutional incentive, doesn’t have economic or political or whatever kind of bias, is really seeking simply to be a steward of the information commons, and is making its process transparent enough that it can be held to that. So another thing that we’re working on is for any topic that is highly polarized but also trending and consequential, where it’s not just that we can’t solve it if we don’t have a shared understanding, but where the disagreement itself becomes a source of open violence in the streets, which has obviously gotten worse recently than it has been in recent history, and can get much worse moving forward if we’re not very effective at changing that trajectory.
Daniel: So one of the things is we pick a topic and do this meta-news: an assessment of the landscape of narratives. Sometimes there are really just two narratives, the left and right ones, that are dominant. But sometimes there are seven narratives that are really trending, that are really fundamentally different from each other. So we do an assessment: here are the primary narratives in the landscape, here are a number of different publications or news sources that are clustered together pushing each of these. And then let’s actually steelman each one. Let’s make the best argument for it, so that people who don’t believe that narrative can see why some other people who do believe it are compelled by it, and have a version of other people that is not just villainizing, and where people start to learn to seek other people’s perspectives and learn to steelman them.
Daniel: And then an analytic process. So that first process is intersubjective; the next one is objective. Can we break the narrative into individual propositions? And can we try to see what evidence was put forward to support the propositions? And then can we look at all the rest of the evidence we can find that either supports or refutes them? What we then do is find where there is identifiable signal in all of the positions, and then where there are clearly falsifiable things, and then where there is a bunch of stuff that is currently neither falsifiable nor verifiable, that is just conjecture for which we should have no confidence one way or the other. And so this process, as people go through it, helps them do things like learn propositional logic, learn how to break down a narrative and not reject or accept the whole thing, but be able to see there’s probably some signal, probably some noise, and a lot of conjecture, and learn how to calibrate their confidence margins.
Daniel: And then how to step back and say, “Based on the things that we can identify that are true across these narratives, what can we probably say about the space as a whole?” Which is a synthesizing process, where the earlier one was an analysis process. So there’s an intersubjective narrative intelligence, then an analytic empirical intelligence, then a synthesizing intelligence, all of which are needed to make sense. So if we can do that with people who get highly trained in it, and it’s their full-time job to do, but then also walk people through the steps, it will help them make sense of what’s going on, understand why the conflict is there, and defuse the conflict. And then also show where each of the different sources is employing, whether intentionally or not, narrative and info weapons. Show where they’re putting spin on things: what the original true thing was, and then the way they framed it.
Daniel: Show where they’re doing statistical cherry-picking, and how, if they looked at these other statistics, it would change the overall picture, so that people start noticing the narrative and info weapons and start to look for them. And they look for them across the political and ideological spectrum, rather than just in a tribal way. If we can do that for people, but also walk them through the steps of how we’re doing it so they learn to do it on their own, then if we stopped being able to do it, they could keep doing it. These are examples of processes that I think can help upgrade the info ecology quite a bit. Also, when we’re doing the news itself, just base sense-making on a topic before we look at the meta-news of how everybody else [inaudible 00:18:05]. When we say what we’re able to assess, we’ll also show our process: what data are we factoring in, what epistemic models are we including? So if someone wants to see how we came to that, how we calculated our confidence, they can see it, and then they can learn the epistemic models.
Daniel: So can we make it to where… You and I both know that understanding Tainter helps the way we understand a lot of things in the world, or understanding McLuhan or Girard or [inaudible 01:33:36] analysis, things that people didn’t get, and they’re not going to go read all the books to get them. Can we compress the epistemic models into the shortest essays, then also show how to apply them and give them sense-making power, and then link them where they’re useful, so that people can start to increase… It’s almost like an optimized public education in how to make sense of the world, one that will empower civic engagement. So these are some examples of, basically, how do we start to do a broadcast where people aren’t trusting it because of some authority; they’re trusting their own ability to make sense of reality better, because we’re showing the transparent process and we’re increasing their epistemic capacity, not just for making sense of base reality, but also for making sense of the way it’s being represented in the narrative landscape.
Daniel: I think this has the ability, if we’re successful, to start disarming the effectiveness of the narrative and info weapons and the tribalism and the outrage and certainty, and increasing people’s own epistemic capabilities. Then, if that becomes a strange attractor for a group of people who are attracted to it, who start to also engage in higher-quality conversation, you start to get a cultural attractor that could become very meaningful.
Jim: Ah, it sounds really good. I really hope you’re able to get this going. And I think then there’s a second-order and third-order effect, which is if these artifacts that are created are compelling and have sufficient production values that they can spread virally and memetically, then they can compete in the idea marketplace and perhaps push back some less well-formed ideas.
Daniel: Yeah. And I would be even more excited by something beyond viral sharing, because viral sharing is mostly not thinking, right? It’s just being memetic repeaters. And I know you and Jordan have talked about this, simulated thinking, but for the most part, people, especially in, say, the Facebook environments, aren’t actually thinking; they’re just looking at the thing and saying, “Does this conform to my current thinking enough that I should repeat it, or not?” And if it does, then they hit share or whatever. So most people are just acting as memetic filters and propagators, where thinking would be, “Is this true? What are the other opinions? How do I know? What deeper process do I do?” Research is just the vetting: “Is it true?” Thinking is also not just looking externally, but looking internally at what is meaningful about this, and how I would go about making sense of it.
Daniel: And so while viral replication has some value, I would actually hope that a success of what we do would eventually be the end of viral replication in that way. Instead, say some people start to get much better at facilitating high-quality dialogue, at facilitating dialectical thinking and steelmanning different positions as well as empiricism, and those people become parts of everybody’s friend group and just start to facilitate, not just “we’re sharing this piece of content,” but authentically better conversation in all these other environments.
Jim: Yeah, I think that’s good. But I also think it’s a little bit wishful thinking, I suspect, because of the issues of the distributions of hierarchical complexity. The number of people who can really deeply think about these large scale complex issues is small. But there’ll be some number that can engage in it and having more engaged in real thinking and modeling real thinking as opposed to simulated thinking will also help up-regulate the credibility of the artifacts for people who are going to be consumers rather than thinkers for themselves. But this sounds like really exciting stuff. I look forward to watching you guys progress. And I think on that note, we should wrap it up. We’re over our normal timeline. But this has been such a fascinating and interesting discussion as it always is with Daniel. So thanks for coming on board and sharing your thoughts.
Daniel: This was fun. Thank you for this sense-making platform that you have shared so much of with everybody, and for having me on.
Production services and audio editing by Jared Janes Consulting, Music by Tom Muller at modernspacemusic.com.