Transcript of Episode 107 – Tristan Harris on Our Social Dilemma

The following is a rough transcript which has not been revised by The Jim Rutt Show or by Tristan Harris. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today’s guest is Tristan Harris. Tristan is the president and co-founder of the Center for Humane Technology. Prior to that, he worked as a design ethicist at Google, and before that he was a tech entrepreneur. Welcome, Tristan.

Tristan: Thanks for having me, Jim.

Jim: Yeah, it’s great to have you back. Tristan was on the show about a year ago, in EP38, to talk more generally about the work of the Center for Humane Technology, and we talked about a broad list of issues at the intersection of technology and society. But today we’re going to focus on issues raised in the recent Netflix film, The Social Dilemma, in which Tristan was the lead narrator and, I happen to know, significantly involved in the planning of the whole film. So Tristan, how did The Social Dilemma do?

Tristan: Yeah. Well, The Social Dilemma actually broke all records for Netflix in terms of documentary films. It’s now been seen by an estimated 100,000,000 people in about 190 countries and in 30 languages. And what makes me excited about that is that it reached everyone, frankly, people who are probably not as tuned in as your podcast listeners and haven’t been following these trends. People feel so confused about what is going on in the world, this kind of breakdown of common ground and the inability to even have a shared conversation. And I think what The Social Dilemma does is create new common ground about the breakdown of common ground and how we lost it, going through topics like addiction and polarization. This has now become part of the standard agenda of the news, talking about these issues on a daily basis.

Tristan: It’s incredible how much case-making we had to do over the last five years for things that now everyone just takes as obvious and given. But the thing that distinguishes the film is that it has all of these tech insiders, from the inventor of the like button, Justin Rosenstein, to the guy who brought the advertising business model to Facebook, Tim Kendall, who said famously in the film, “What I’m worried about is that it causes civil war, and that this thing does cause people to kill other people and kill themselves.” And it’s amazing how much that felt like too extreme a statement at the beginning. I remember there was actually a moment in the editing process of the film when someone said, “That sounds like too extreme a statement. Civil war? Do you really want to say that?”

Tristan: And there was a big debate about cutting it out, and eventually they kept it in. And now so many people looked at the events of January 6th and said, “This was perfectly predicted by The Social Dilemma.” So there’s a lot more to say about it, but I think the film has become a common basis of understanding from which we can now proceed and say, okay, how do we unwind the collective psyche from this mind-warping process that we’re now 10 years into? And people have to see that, yes, of course we’ve always had partisan media. We’ve always had partisan radio, extreme radio, extreme television views. But we haven’t ever had these self-reinforcing cycles of hyper-personalized information, or affirmation, I should say, of what you would want to believe. You really have to see that we’re 10 years into that mind-warping process, and it leaves us with less and less common understanding of reality.

Tristan: It really makes it almost impossible to do anything. And that’s the broader problem I’m concerned about. Think of a society’s problem-solving capacity as kind of a societal IQ; IQ is really less an intelligence quotient and more a quotient of your capacity to solve problems. So you can think of IQ as an individual thing, but if you think of it as a collective thing, our society’s problem-solving capacity, then essentially social media drops that to zero. And that’s really my biggest concern, especially when you consider the geopolitical race between the West and China. We’re just falling into constant division and outrage and conflict, and the inability to ever take the charitable part of what someone is saying, try to agree with it, and build on top of it; instead we just fall into division. And I think that’s metastasized into culture, so it’s not even just technology driving it now. You could take away the technology and we’d all still be running malware, divisive malware, in our own minds.

Jim: Well, I do like to not be too pessimistic, because yes, there’s a lot of bad that comes from the networks and particularly the social platforms, but let’s also remember there’s lots of very interesting organizing and ideas being created, and people finding each other, on social media. While the temptation is there to hit the delete button, there’s still a lot of good. Though I’m with you guys that somewhere around 2005 or 2006, when advertiser-only business models became practical with the advance of Moore’s law, the balance between the good and the bad started swinging toward the bad. And we’re probably fairly far into that now. But let’s not forget there’s a lot of good too.

Tristan: Well, let’s actually just stop on that for a second, because I think your listeners won’t really trust our conversation if we don’t steel-man what is good about social media, and ask whether we could actually draw a line between what we like about it and what might be damaging, or let’s say unpredictably risky and unsafe. Having the ability to look someone up in an online address book and find high school friends of yours from 20 years ago and reconnect with them, or old sweethearts or crushes or whatever; or finding blood donors for rare diseases, or conversation groups. All of those things can exist. It’s really this advertising business model, the virality of user-generated content, and the business model that’s based on having this asymmetric AI pointed at your brain, playing chess against your mind, knowing your next move before you know your next move, and then using hyper-personalization and confirmation bias to deliver that perfectly tuned, lab-created political red meat in front of you.

Tristan: That’s really where the problem arises. And so we can actually separate out Facebook as kind of an address book, a directory of basic services and conversations in small groups like it was in 2005 or 2006, from this runaway engagement machine, this Frankenstein that can’t be controlled. And I think what The Social Dilemma was really trying to say is that this business model, which powers an unsafe, unchecked virality model where anything that gets the most clicks, likes and shares goes viral, is what’s unsafe.

Tristan: Now, obviously there’s good that comes in that space, too. We get to hear about niche scientists who are doing independent research on COVID and might find something that later gets contributed into the commons. That’s all great. The problem is we don’t have a good way to gate those kinds of things coming in, and lots of bullshit essentially wins. And I think the surface area of bullshit is much bigger than the surface area of these edge-case wins. That doesn’t mean I want to take it away; we just need a safer way for this to be negotiated so we don’t end up with crazy town as the new public square and culture.

Jim: And again, in terms of absolute aggregate time people spend on social media, it may well be that they spend much more time in the good than the bad, but the impact of the bad is unfortunately getting very bad. As you know, I’ve been involved with the creation of the online world since 1980, when I went to work for The Source, which was the world’s first consumer online service, where by 1982 we had most of what’s on the web today, minus porn. We even had an early precursor of something like social media. But a lot of it was based around hobbies: guys that owned Packards, or people that were interested in a computer language. I remember even on The Source there was a very active sub-community about intentional communities, way back yonder. And that has persisted through the next generation; CompuServe and AOL had a lot of that ability to concentrate narrow interests from far afield so that they were strong enough to have a community.

Jim: And there’s still a tremendous amount of that out there on the web and on social media, but it’s this other stuff where the shit storms arise. And in fact, as we’ll talk about, they’re kindled by the algorithms. Not because the algorithms want to kindle a shit storm, but because that’s what they do. Before we go on further, I’d like to say that in my own view it’s a damn enjoyable film, worth seeing if you haven’t. And as for the impact, I can certainly confirm that from my limited perspective. I watched it on the second night it was out, and about a week after it came out I happened to be talking to a 75-year-old grandmother I know, a good friend of my wife’s and mine.

Jim: And she said that she and her husband had watched it the night before, and they were all stirred up about it. It was great. So in this small town in Virginia, by day four it was penetrating to the 75-year-old grandmas. Hi, Betty. Let’s start first with the harms before we get to the mechanisms. They’re obviously related, but it’d probably be useful to talk about them separately. So what are some of the things you guys surfaced in the film about the harms of social media?

Tristan: Yeah. Well, we often talk about the harms of social media as kind of a climate change of culture. When you look at the news and they talk about the various harms of social media, you might see an article one day on addiction or the mental health of kids. Another article another day on shortening attention spans, multitasking, the inability to do things that are productive because we’re just constantly bombarded. You might see another article about polarization, another about far-right or far-left extremism on the rise because of social media. You might see other articles, as just happened, showing that during the January 6th insurrection, Facebook was actually showing ads to insurrection groups saying, what is it, “give violence a chance,” letting you buy t-shirts and gear, or military-grade tactical gear, things like this.

Tristan: So when you look at all of these examples, they can sound like a bunch of different things. But what we like to do is say, well, imagine the time before we had the unifying greenhouse-gas model of climate change, before we understood the emissions connection to coral reefs and deforestation and all of these things. Those would be seen as separate events: you’d have some people studying the coral reef, some people studying the melting glaciers, some people studying Amazon deforestation, some people studying dead zones in the ocean, and there wasn’t a unified model for why all these things are happening. And we like to think of what’s happening with social media, this business model of attention extraction, this race to the bottom of the brainstem to deliver more and more predictable dead slabs of human behavior, as producing a set of effects that we can predictably say will continue in society.

Tristan: So in other words, a person in the attention economy is worth more to these platforms if they have a shorter attention span, if they’re addicted as opposed to not addicted, if they’re isolated versus connected with their friends, if they’re polarized, outraged, falling into extremism and disinformed, because that means the business model was successful at producing lots and lots of engagement. And so at the end of the film, Justin Rosenstein, who invented the like button, says, borrowing a line really from our friend Daniel Schmachtenberger, who you’ve had on this podcast several times, and I think it’s so eloquent: so long as, in this economic model, a whale is worth more dead than alive and a tree is worth more as two-by-fours than as a living tree, now we are the tree, we are the whale.

Tristan: We are worth more when we are addicted, outraged, polarized, disinformed and so on than if we’re a thriving citizen who is critically examining his or her own choices and trying to make do in the world. And that’s the thing we call the climate change of culture: those effects will only continue, because as this gets more aggressive, AI is going to know more about us rather than less. It’s going to know our weaknesses better than we know ourselves, with more accuracy, not less. It’s going to have more data on us in the future, not less. It’s going to be able to addict us more to getting attention from other people, not less. It will be better at virtualizing our experience, giving us virtual mates, virtual chat bots, virtual friends, virtual worlds that will out-compete a physical reality whose difficulties and challenges simply can’t compete with a virtual mate that is right there next to you.

Tristan: Now I’m veering into the diagnosis territory, which is that the thing connecting all of these harms is that technology is hacking more and more of our human weaknesses. The singularity folks got it wrong. We were all watching out for the moment when technology would take off, get smarter than humans, take our jobs, and exceed our IQ. That was always thought to be 20 or 30 years away, but we missed this much earlier point, when technology doesn’t overwhelm our strengths but undermines our weaknesses. And that’s how you can look at all these harms: they all come from a progressive hacking or undermining of human weaknesses. Take distraction and information overload. Why are we seeing that? Because technology hacked or overwhelmed our seven-plus-or-minus-two-item short-term memory and the limits of our executive control system trying to focus on what it’s trying to do.

Tristan: And that was kind of like the Marshall Islands, the first big jump in the climate change of culture: we undermined the limits of the prefrontal cortex in our minds. Then it starts to overwhelm and undermine us by showing us things that stimulate our emotions and our amygdala: hate speech, polarizing political red meat, things like this. And that drives up political polarization. You can keep going through them, but I think that’s the core diagnosis of the film, and it really follows from our work at the Center for Humane Technology.

Jim: Another couple of harms that people don’t seem to bundle in, and that you guys haven’t historically, but that I think are related: we’ve also been hacking the midbrain, not necessarily the amygdala but other parts of the midbrain, with deep sexual hooks. Even back in the beginning of the web, I think in 1994, I saw numbers that 25% of the traffic on the internet was porn. And then there are really cynical exploitations of deep human behavioral hooks with things like Tinder. But that’s just [inaudible 00:13:59] that we’d have this society that operates on something like Tinder, and yet it’s an obvious exploit. Anyone that’s hung around young people knows that sex and sexual anxiety and all of that is just an off-the-chart lever to pull, and Tinder is essentially methamphetamine working that particular line of human anxiety and libido.

Jim: And those are other domains where they just get better and better. I just love your framing of exploiting human weakness. And again, think about that: our society has had to deal with this before. A well-known human weakness is alcohol, and for a long time alcohol was unregulated in the United States. In colonial America, on the frontiers, they say that the average adult male drank close to a quart of whiskey a day. And then first we had Prohibition, which was a bit of a shit show as we know; it turned out to be a catalytic surface for organized crime on a vast scale, and political corruption, everything else. But on the other side of it, we did end up with pretty stringent regulation of alcohol.

Jim: Addictive drugs like heroin, again, amazingly were legal until about 1910. Then we realized the harms they caused were extreme. And even things like marijuana, where we’ve realized the harms aren’t as bad as people claimed, we still don’t want people under 21 using it, at least in theory. So we have dealt with physical phenomena, drugs, alcohol, cigarettes, et cetera, that hack our weaknesses (again, I love that phraseology), and we have chosen as a society to do something about them. So maybe it’s time to do something about these particular forms of exploitation of our human weaknesses. Gambling is another good example. It was certainly legal in the 19th century; anybody who watches a Western sees the poker table in the corner as a standard part of it, and that’s more or less true to life. But we realized that 10% or 15% of Americans are just incapable of stopping their gambling addiction, and so we regulate it, probably not enough. So I think it’s useful to make the point that exploitation of human weakness via business models is not new with the online world.

Tristan: And just to make sure we’re really steel-manning where the cynicism of some of your listeners might come in here: we obviously know that it’s not a new thing for there to be features of society, or whole industries, that are all about hacking human weaknesses, whether it’s porn or alcohol or gambling. So I want to acknowledge that those are almost the basic ingredients, the 19th- and 20th-century versions of something that’s existed for a long time. But imagine you point AI at this, the same way we have AI for drug discovery. Can we actually use AI to come up with brand new drugs that will solve problems we couldn’t solve on our own? Suddenly it’s going to come up with whole new cocktails, whole new combinations of things, that will actually help us heal things we would have seen as unsolvable before.

Tristan: This time we have the exact adversarial opposite of that. And I think this is all very obvious [inaudible 00:17:13] to many of your listeners at this point; it used to feel novel, but hopefully it’s easy for everyone to grasp now. The book Salt Sugar Fat, written in 2012 by Michael Moss of the New York Times, really got this for me, because he talks about the evolution of big food, these industrial food companies that used to just come up with some basic formula, someone just made a salty, cheesy cracker. But of course what happens over time is people don’t realize the amount of testing, the hyper-precision engineering of mouthfeel, and a whole bunch of other features whose names I’ve forgotten, that they cover in the book.

Tristan: And they come up with these new cocktails, these new persuasive ingredients. It’s almost like AI drug discovery, but it’s taste discovery: can we deliver brand new addictive tastes? And we’re dealing with a finite stomach share. So one of the insights of the attention economy is that you have a finite pool to draw from. You can’t get more attention than there is out there. And just as with stomachs, the big food industry looks at the finite number of stomachs out there and says, “You’re only going to eat or drink so many calories per day; we can stretch that a little bit by getting everyone obese.” And we did that successfully, but there are limits, and it becomes hyper-competitive, a race to the bottom.

Tristan: The same thing is true for attention. Much like getting everyone obese with food, if we get everyone multitasking, paying attention to two or three things at once, looking at your computer screen and your phone, or, while you’re listening to this podcast, maybe reading a webpage at the same time, we have doubled or tripled the size of the attention economy. But again, you can’t multitask forever and infinitely grow the size of the attention economy. You’re going to run out, and by doing so, you’re also debasing the quality of attention inside of it. We’re debasing the culture, the quality of presence we can give to each other. When you’re reading that news article while you’re listening to this, do you think you’re picking it up as well as if you weren’t doing two things at once?

Tristan: So when we shallow out the attention economy, we are starting to frack for attention. And just like fracking debases and degrades the otherwise regenerative capacity of the environment, that’s what this is doing to our minds. And so that’s why we look at this as saying: if there was ever a thing to cause us to wake up to extractive economic logics, where you have an infinite-growth paradigm running on a finite substrate, in this case human attention, or in the case of the economy, the finite substrate of the environment and its regenerative capacity.

Tristan: This would be it, because if I tell you about climate change or something like that, you can say, “Well, that’s a problem that’s 30 years away, it’s going to take too long to get here,” and I can convince myself it’s only a problem my kids or grandkids are going to have to deal with. But suddenly I can see that my finger’s scrolling is the exact same extractive process, and now it’s pointed at me. I’m the tree, I’m the whale. And I think that’s one thing that resonated deeply with people about The Social Dilemma: showing how that economic logic is at the root of the same problems we’re seeing in technology.

Jim: I’m really glad you mentioned race to the bottom, because I was going to mention it if you didn’t. My good friend Bret Weinstein talks about this all the time: if there are multiple players in an industry, none of them can retreat from bad behavior. If one of them innovates with behavior that’s bad for humans but economically efficacious, the others have to respond. And so there has to be an outside coordination function that basically says, “All right people, we’re all going to agree that high-fructose corn syrup is not going to go in kids’ drinks,” because otherwise, it’s cheaper than cane sugar and indistinguishable by your average nine-year-old, so of course they’ll use it, unless there’s some external coordination. For now, the only real external coordination we have is regulation.

Tristan: Jim, this was actually important for me when I was starting to work on this in 2012, 2013. I remember reading in Salt Sugar Fat that the industry actually did convene all of the industry leaders, along with their food scientists, and started giving presentations on the growth of diabetes and obesity in the United States, and then tried to get a game-theoretic agreement: could we restrict how much salt, sugar, and fat we use, collectively as an industry? What limits are we going to put on high-fructose corn syrup? They actually discussed these things. And then, famously in the book, they couldn’t get an agreement. At one point a new CEO came into General Mills, I believe, her name was Betsy Morgan or something like this, around 2004, and decided, you know what, I’m done with this, I’m a mother, I’m going to put caps on the amount of salt, sugar, and fat in our foods.

Tristan: They did that, and they did it unfortunately at a time when the rest of the industry was growing really fast. So they had to answer to Wall Street, and Wall Street said, why are your sales not growing as fast as the rest of the industry’s? They fired the CEO, removed the limits, and went back to business as usual. And I think, Jim, this speaks to where we are now. You and I have been talking the last couple of days about the events of January 6th, the massive deplatforming of Trump and many other accounts, and the actions the platforms are unilaterally taking to try to stop the dangerous violence that, in their view, might occur if they don’t do something. This is time for almost a constitutional convention for the digital world, because we are now a digital society.

Tristan: And I’ve been using this metaphor recently: when I hold a phone in my hand, I’m no longer just a human being. I’m actually a cyborg, because my brain is augmented. My daily use of attention, my emotions, my anxiety are now pulled into this new reinforcement loop that involves my phone. My phone is now part of my mental sense-making and choice-making process. And everyone feels this. If you take, as you advocate Jim, a day away or a weekend away or a week away from technology, you really feel the difference, because you’re removing the cyborg-like quality from the way our brains work. And I think we’re surprised, when we all do that, at just how different our brains feel.

Tristan: So we become cyborgs on an individual level, but what I want to argue is that we’ve actually become a cyborg democracy. We are no longer just a society or a culture that’s dealing with media. We now have technology that is upstream from culture and screwing with the cultural feedback loops, such that if we are a cyborg democracy with a brain implant in our society, that brain implant needs to be democratic. There are two ways to fail here. One is for tech platforms to have no articles of impeachment, to just have lawlessness online. You could have accounts that reach 20 million people simply calling for violence while you don’t do anything; that would be one way to fail, digital lawlessness, the kind of hyper-libertarian vision of everybody on their own. The second way to fail is autocratic decision-making, two or three tech CEOs making a decision on behalf of everybody else without asking them.

Tristan: And so we’re failing on both counts right now. Censorship is not a good solution, but neither is doing nothing when there’s real violence getting amplified in self-reinforcing feedback loops. So if there was ever a time for this kind of industry agreement, and for it to be democratic, not just a private closed meeting between the tech companies but some kind of open, public, democratic process involving legal scholars and sociologists and academics and multipartisan, multi-political-party representation, that time is now, because we are witnessing the birth of kind of a digital nation. And given the similarities people are drawing between January 6th and 1776, which I think is how the people in that siege were thinking of it, we are certainly in a constitutional crisis of some kind. I think we need some kind of new digital constitutional convention to align ourselves in the new 21st century.

Jim: Well, we had that conversation about a week ago, and I’ve been thinking about it since. I came up with an analogy that I’d love to float and see what you think. The analogy at least suggests we may not even need a constitutional convention. Prior to about 1908, there were no traffic laws in the United States, at least to speak of. There was some common law: if you ran over somebody with your horse, you were liable, et cetera. But there were no police, no traffic tickets, no stoplights. Roads had essentially grown up organically from trails, and while they may have been government-owned, in many cases they weren’t; they were common-law rights of way across land owned by other people. And there was literally no regulation of road space until about 1908, when some of the urban areas, actually some of the smaller urban areas, first started the beginnings of rules and speed limits.

Jim: The first speed limit came somewhere around 1908, and gradually our democratic sphere passed laws and regulations. Now, of course, the public highway is a very highly regulated place, with mandates for insurance and annual inspections; you’ve got to have your car not only mechanically inspected but pollution inspected. There are manufacturing rules on a car: got to have seat belts, got to have those crash balloons, whatever the hell they call them. And I think the most recent innovation is that this year all cars have to have backup cameras. So there’s an interesting analog. Cyberspace, and I’ve been doing the cyberspace thing since the beginning, prior to the advertising business model was not sufficiently pernicious or large for the world to feel it needed regulation, just like the roads and trails of the 19th-century United States, which nobody even thought about regulating because they didn’t seem to need it. But once automobiles showed up, with ever-increasing size, weight, speed, and noise, suddenly we realized that this was a domain that had to come under democratic supervision.

Tristan: Yep. And much as with the beginning of airplanes, where we needed an FAA to manage the aviation commons, we need something to help us manage the attention commons and create the game-theoretic standards and limits, not just in our country but in the world. I think finance is an interesting place to look, because there’s a regulatory framework in finance that crosses international borders. And one of the challenges we face here is that you have private companies mostly situated in the United States, so let’s say we get a national agreement here in the U.S. not to do dopamine loops, self-reinforcing personalization funnels, rabbit holes, things like this. Well, TikTok, or some next TikTok version coming out of China or the UAE or Saudi Arabia, will just not follow those limits.

Tristan: And so we actually need a kind of global paradigm for how we want this to work. That’s what’s tricky about this: it feels like it’ll take forever to get agreement globally. And of course there will be regional differences in what norms people want, but I think we need to be able to establish some principles. It’s not the first time; we had airplanes coming out of the U.S., and I don’t know the history of this, but I’m sure we had to come up with some of the regulations, rules, and norms we wanted to exist here first, and then they passed on to the rest of the world. Right now we’re just letting the whole thing run wild. And again, I think the tech companies don’t want this recent set of events, the total deplatforming that some people view as a power grab by the left, to be seen as a political action, which, knowing the people in the tech industry that I do, I actually don’t think is what’s intended here at all.

Tristan: I think for them to prove that this is not something that’s going to just further escalate conflict and polarization, they have to turn this into a democratic process, if we want this to be fair and democratic. Institutional power with inconsistently applied principles is equivalent to tyranny. That’s what we say about cops who apply their enforcement inconsistently, discriminatorily over-policing certain neighborhoods more than others: that’s a form of tyranny. But if we have Facebook or Amazon web hosting expressing institutional power and being inconsistent in how they apply their principles, deplatforming Trump for inciting violence but not deplatforming many other world leaders with similar follower counts, that’s going to be a problem.

Tristan: So we need equal enforcement. We need articles of impeachment for the virtual world. We need an emergency broadcasting system; you can imagine essential health information being a kind of emergency broadcasting system when you have a pandemic. We have that for television; we don’t have it for the online world. We need preemptive penalties, so that we can disincentivize people from contributing harmful things to the attention commons, because they know the penalty will be worse than whatever they gained by contributing at all. An example: if something you share is later found to be completely debunked, imagine that we tell you the fact-check correction will be spread to three times as many people as originally saw your falsely spread information.

Tristan: And that will disincentivize you from sharing things you haven’t confirmed yourself. So there’s a whole long list of things like this that we would want to put on the agenda of a constitutional convention to help manage the attention commons. And like you said, we’ve done this before. We always start with a period of lawlessness and confusion and car crashes, or horse-and-carriage crashes, and then we figure it out after the crash. This time the cost is too high not to do it immediately.
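[Editor's note: a minimal sketch, in Python, of the penalty mechanism Tristan describes here, where a fact-check correction is pushed to a multiple of the original post's audience. The data structures, names, and numbers are illustrative assumptions for this transcript, not any real platform's API.]

```python
from dataclasses import dataclass, field

CORRECTION_MULTIPLIER = 3  # the correction reaches 3x the original audience

@dataclass
class Post:
    author: str
    text: str
    viewers: set = field(default_factory=set)  # users who saw the post

def correction_audience(post, all_users):
    """Pick who should see the fact-check: everyone who saw the false post,
    plus enough additional users to hit the 3x reach target."""
    target = CORRECTION_MULTIPLIER * len(post.viewers)
    audience = list(post.viewers)
    extras = [u for u in all_users if u not in post.viewers]
    audience += extras[: max(0, target - len(audience))]
    return audience

# Example: a debunked post seen by 2 of 10 users triggers a correction
# shown to 6 users (the 2 original viewers plus 4 others).
users = [f"user{i}" for i in range(10)]
post = Post("user0", "debunked claim", viewers={"user1", "user2"})
print(len(correction_audience(post, users)))  # 6
```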

Jim: Yep, I agree. And people who know me know that I have been a “let’s keep the internet out of the hands of government” kind of guy for a long time. In fact, I was a supporter of John Perry Barlow and his famous Declaration of the Independence of Cyberspace. I think we were naive, however, at the time, and as we’ll talk about in a little bit, I do think that going to the advertising-only business model, coupled with machine learning, was something we just couldn’t have envisioned in 1991 or 1992. From that point of view, and at that time, it seemed reasonable. At this point I’m with you, and I believe that people on both sides of the political spectrum should join up. For instance, I find it quite surprising that a lot of Progressives are cheering on these deplatformings.

Jim: Obviously the reason they’re doing it is because it supports their team, but normally Progressives would not be in favor of giving essentially unlimited power to five oligarchs, five of the richest people on earth, who have almost absolute control over the platform companies making these decisions. Would your typical Bernie voter, in principle, want to give the power of virtual life and death to five billionaires, all white males? I don’t think so. And the people on the right, who I always remind that the boot will change feet at some point, feel like they are the target of excess enforcement at this moment, and they don’t feel right about it either. So I think this is a great opportunity: if we pop up to a higher level of abstraction and get away from today’s scuffling, both sides can agree that the time has come to extend the democratic rule of law.

Jim: I think that’s the important thing. For instance, one could easily imagine saying that any social media platform with more than, say, 5 million uniques a month, which is pretty small but not tiny, has an obligation of free speech, but with regulation: a specific set of legal standards on what the limits of free speech are, and a definition of due process that all such providers must have. And we can work out the right balance; that’s what democracy is for, that’s what deliberation is for. It’ll change over time; there’ll be amendments to whatever law gets passed, and eventually the law of 2021 will be superseded by the network constitution or the network regulation of 2028. So I’m ready to bite the bullet as a former net libertarian. The alternative is having a handful of oligarchs hold absolute power, and the fact is they don’t even think about it; they’re just driven by business necessity and ad-hocery, and nobody’s thinking about this in a systematic, democratic way. The time has come.

Tristan: And I think Jack Dorsey actually acknowledged this in his statement, which came out yesterday I think, on whether it was the correct decision for Twitter to deplatform Trump. He said, “I don’t think this is right for the world, but it was right for Twitter,” in the sense that he acknowledges this is not how decisions should be made. We don’t want every really big action to essentially be a big emergency brake: break the glass, pull the big lever, and take the most extreme, least surgical kind of action. But I think what that revealed is that these things have become Frankensteins, where the only actions they can take are essentially large, ambiguous, unspecific ones, because it just shows how messy the whole thing has become.

Tristan: I mean, their stated goal at Twitter is to provide a platform for healthy global conversations. And Jack, I think, added onto that a [inaudible 00:34:13] to help humanity address its existential threats, because I think he’s actually been mentally in the space that some of us have been in as well. So I think these tech companies would actually agree that this is how it should work. Even Mark Zuckerberg has said, “I shouldn’t be the one making these judgment calls on behalf of 2 billion people.” The problem, of course, is that if they don’t make any choice, the self-driving car of their algorithms drives all of their societies off a cliff. And so we have to do something. One of the reasons I think this metaphor of the birth of a digital nation calls for a constitutional convention for the 21st century is that we also have to update our fundamental way of regulating, the clock rate of regulating and governing the issues that are going to arise from tech.

Tristan: Because at the Center for Humane Technology we always take the problem statement of E.O. Wilson, that the fundamental problem of humanity is we have paleolithic emotions, medieval institutions, and accelerating God-like technology. And those three things operate on different clock rates. Our paleolithic emotions are baked in; negative emotions like hatred and fear and uncertainty are very powerful, and they’re not going to change. The best you can do is try to express them in healthier ways or have better awareness of them, but they’re not going to go away. You’re trapped in the mind-body meat suit, and it’s going to express itself the way it does. Medieval institutions: it’s funny, because what we really mean is 18th-century philosophy, 18th-century democratically decided processes for how we meet in Senate chambers and make decisions about huge complex systems. That’s simply not operating fast enough to deal with all the issues we face.

Tristan: And then third, we have this accelerating God-like technology, and it’s going to create more and more issues, faster than we have the medieval institutions to govern them. What happens when your steering wheel lags behind the accelerator of your car, which goes faster and faster every year? You’re going to get into a car crash. So I think the reason we also need a digital constitutional convention is really to update to a 21st-century paradigm of governance. The question is what that actually looks like. But one of my favorite things to point out here is that currently the tech platforms have been defending themselves with 20th-century ideas of philosophy and economics, saying, “The user is in control, they’re the one who chose to use our service, they can always switch to another service at any time.”

Tristan: That’s a classic economic rationale, but meanwhile they design their products with behavioral economics. In other words, they defend their decisions against regulation with 20th-century economics and choice-making, but they actually design their products with a 21st-century, beneath-the-brainstem kind of hacking of the human mind. We can’t have it both ways. So part of what we need in this 21st-century constitutional convention, to really acknowledge that we are a new kind of cyborg democracy in a new kind of digital infrastructure world, is a new language for what we know about the brain. We have to be able to make a distinction between speech and cognition. We have to stop talking about censorship and free speech as these unilateral, black-and-white concepts, because if you just don’t censor, what you really have is not free speech, it’s crazy-making-ship: a system that is automatically steering people toward crazy town.

Tristan: So we really need new philosophical concepts here to govern what we’re really after, because one of the things I think we’d get wrong if we did the constitutional convention and just talked about what laws we want is that we’d only be governing the cancer, the fact that we have a cultural-cancer-creating process in this perverse business model. What we also need to do is reverse the cancer. We’ve now had the collective psyche of our society immersed for 10 years in this cult factory, which I think we saw on display; the epitome is the guy with the horns at the Capitol building. And now it’s very visible: that’s kind of the most visible representation of the YouTube trolls and comment threads having taken over the Capitol.

Tristan: It feels like the direct analog of the internet, now visibly expressed in the real world. So we also need to reverse this cultural cancer, and that’s going to be the harder one. Frankly, I think we probably need a lot of national broadcasting and promotion of programming about cults. I think the whole country and the whole world need to see lots and lots of documentaries about people who are in different kinds of cults, many different kinds, and how you didn’t know at the time, how very smart people got caught up in them, and how powerful the groupthink is. We need a kind of new literacy about how each of us is trapped inside different self-reinforcing cults and filter bubbles. If we don’t have that literacy, I don’t think we can get out of this.

Jim: Yep, that’s certainly going to be hard. And again, we have to be honest and say this is a phenomenon on both sides; this strange new woke phenomenon has an awful lot of the same attributes as the MAGA crowd. It’s a self-organizing network tribe, and it’s very heterogeneous: it’s got some extremists, and it’s got a lot of people who are much more reasonable. Unfortunately, the 1% with a can of gas and a match can cause all kinds of problems, and when people assemble as a mob, the 1% can ignite the 5%, who can ignite the 15%. So this is not just about January 6th. This phenomenon is all over our society, and there are echo chambers and filter bubbles of all sorts. If something isn’t done about it, there are going to be a lot more of them.

Tristan: That’s right. Yeah, and I really worry about where this goes without dramatic intervention. A lot of people saw The Social Dilemma as being eerily predictive of things we’re seeing just now, and the whole point of the film is that we don’t want the world to go this way; we can have the collective self-awareness to realize the train we’re on, that we are now 10 years into this mind-warping process. And it’s really hard to get that. When we look at the other side, no matter which side you’re on, you think you understand them, but you don’t. As Justin Rosenstein says in the film, you look at the other side and think to yourself, “How can they be so stupid? Aren’t they seeing the same information that I’m seeing?”

Tristan: And then he says in the film, “That’s because they’re not seeing the same information that I’m seeing.” And it goes so deep. Also, just to steel-man another cynical point your listeners might be making right now: they might say, “Well, hold on a second. We’ve had partisan radio and partisan television, Fox News and MSNBC, for a very long time. Isn’t this just a new level of that?” And the answer is partially yes, but I would also challenge listeners to think about where the editors and curators of those radio and TV stations are getting their news. They’re sitting on Twitter, which means they’re sitting inside the self-reinforcing newsfeed ranking algorithms that this one company is steering, and that largely influences the media and journalism ecosystem.

Tristan: So we really do have this media ecology in which social media and technology are upstream from the ways that all of us, including the media production complex, broadcast information. I think we need even more literacy about this. People think they understand it, but I don’t think the mechanics of how this drives up more conflict and polarization are truly understood.

Jim: Yeah. And let’s take a turn now back to the movie, because that’s a perfect transition. While you could argue that MSNBC or Fox are the polarizing engines of the previous cycle, they’re fundamentally different in that they don’t have the feedback loop to self-tune to your own behavior. And in the film, I thought one of the most wonderful pieces, a running theme, was these three gremlins, I think played by the same actor, who represented the machine learning algorithms inside a social media platform that sure looked to me a hell of a lot like Facebook.

Jim: They were constantly manipulating a teenage boy based on his own behavior, or even his lack of behavior, putting things in front of him, sending him little nudges, et cetera, to get him to be more available to consume advertising. And these gremlins were making these manipulations based on his own data, specifically to manipulate him, which is something that neither Fox News nor MSNBC is capable of doing. Maybe you can take that idea a little further and give a bit more detail on how these mega machine-learning algorithms are now in play against us.

Tristan: Yeah. Well, I love that you’re bringing this up. In the film, for those who haven’t seen it (hopefully you’ll take a look), there are these three different AIs, artificial intelligence agents. One is called the Growth AI, and that AI’s job is to figure out: how can I increase growth and get you to invite other people to use the service? How can I get you tagging other people in photos and comments and posts, because that gets them to come back? So that’s the Growth AI, calculating and trying to predict what would cause you to do things that grow use of the service. An example of that in practice at Facebook: when you joined Facebook early on, the right-hand sidebar would say, here are other friends you should add.

Tristan: And they can actually strategically recommend people who, for example, may not have used the service in a while, so that they’re recommending people who will then end up resurrecting those dormant accounts. So there’s a predictive and manipulative purpose there in the Growth AI. The second AI is called the Engagement AI, and this is the AI trying to predict what will keep you on there longer. Which video could I show you next? If, as in the film, you’re a skateboarding addict, can I show you epic skateboarding fails and have that be the thing that gets you? Or if it looks like you’re headed to an event called the January 6th Insurrection, maybe you might like these posts about guns and these inspiring videos of people storming other capitols in history, examples that’ll work for you, that’ll increase engagement time.

Tristan: Ironically, the film covers these kinds of gun-based examples in a way that was strangely predictive of what’s happening now. And then the third AI is the Advertising AI. That AI is trying to predict which advertising will be most successful at getting you to buy things. To make that example real: on January 6th, as I mentioned earlier, Facebook was found to have advertised a t-shirt saying “give violence a chance,” and also military tactical gear, to people who were in the groups that went to the January 6th event, because those were the ads that would have been most successful according to the AI’s predictions. It doesn’t know what’s good; the algorithms are amoral, they just know what works.

Jim: Let me break in here a little bit, AI being my field, one of my fields. The point I really want to make is that those machine learning algorithms didn’t know anything about guns or violence or January 6th. Today’s machine learning algorithms (not necessarily what we’ll have to confront in the future, which is even scarier) are basically really, really fancy and really, really good statistical engines that look at correlations. They have no idea that the ads are about guns. They just know that ads of this sort, with these kinds of words in them, and even sometimes with these kinds of sounds, deep male voices versus high-pitched female voices, work on people who have the following attributes and behavioral profiles, and who are like other people these ads have worked on. So when people say, “Oh my God, how could Facebook allow that to happen?”, the truth is these machines are black boxes, opaque. You have no idea what’s going on inside, and they don’t deal with semantics at all, merely statistics. Facebook really has no obvious way to stop something like that from happening once they let these machine learning algorithms loose.
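[Editor's note: a toy illustration of Jim's point that these systems are statistical correlation engines with no semantic understanding. The features and weights below are invented for illustration; real ranking models learn millions of weights from click data, but the principle is the same: the model scores correlated features, and nothing in it represents the concept of "guns."]

```python
from math import exp

# Invented correlation weights: the model only "knows" that certain word
# and user features predicted clicks in past data. It has no concept of
# what "tactical" or "gear" mean.
WEIGHTS = {
    "word:tactical": 1.2,
    "word:gear": 0.8,
    "word:discount": 0.3,
    "user:in_militaria_groups": 1.5,
    "user:age_18_34": 0.4,
}

def predicted_click_prob(ad_words, user_attrs):
    """Logistic score over whatever features happen to be present."""
    score = sum(WEIGHTS.get("word:" + w, 0.0) for w in ad_words)
    score += sum(WEIGHTS.get("user:" + a, 0.0) for a in user_attrs)
    return 1.0 / (1.0 + exp(-score))

ads = {
    "tactical gear discount": ["tactical", "gear", "discount"],
    "garden tools discount": ["garden", "tools", "discount"],
}
user = ["in_militaria_groups", "age_18_34"]

# The "best" ad is just the highest-scoring bag of correlated features.
print(max(ads, key=lambda name: predicted_click_prob(ads[name], user)))
```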

Tristan: That’s exactly right. And I think what people need to get about this is that there’s a reason we say the business model is the problem. When I say the phrase “the business model is the problem,” many people might think, “Oh, he’s saying the advertisements are the problem, because isn’t the business model advertising?” But no. The reason Facebook doesn’t know the difference between guns and something else that might be violence-inciting in a niche language in India (keep in mind, there are 22 official languages in India, I believe) comes down to scale. How many engineers at Facebook do you think there are, or how many AI classifiers did they build, to detect violence-inciting speech in those 22 different languages? And think about all the different countries Facebook operates in, all the different sub-languages they’d have to know about, and all the different dog whistles and symbols and terminology for things that might be violence-inciting.

Tristan: Do you think they have human judgment, human moderators, human editors looking at all of that, the way television or radio has a five-second delay for bleeping out the handful of short words we don’t want going over the airwaves to children? We don’t have that for the hundreds of languages and hundreds of countries that any of these companies, TikTok, YouTube, Facebook, et cetera, are reaching. And so it’s an information theory problem: the scale and dimensionality of possible risk and harm is greater than the scale and dimensionality of safety enforcement. I think of it like this: if you have a plane with, let’s say, 60 passenger seats, you probably want to have 60 parachutes or life vests in case that plane crashes.

Tristan: But if you had a plane with only two life vests or two parachutes for 60 people, you have created a system in which the capacity for harm and risk is far greater than the capacity for safety you’ve engineered into the system. The reason for this, again, is that the business model profits by having automated machines look at content, as opposed to paying for human editors and human judgment, because that’s very expensive. And so really the dilemma we’re stuck in is that the business model depends on this being some kind of unconscious Frankenstein, because they can’t afford to be as conscious as we need them to be of the possible harm that can emerge from the system.

Jim: And, on the same kind of theme, I made a note while you were talking earlier: there are also some exogenous things that have happened that weren’t Facebook’s doing or Twitter’s doing or TikTok’s doing. And that is, again, Moore’s law marching along. Communication is essentially free now. The cost for Facebook to have you make a post and put it on their server, I don’t know what it is, but it’s got to be some tiny fraction of a cent. Not that long ago it was pretty expensive, just the computation, the network, and the disk. So the fact that communication has essentially become free takes all restraint off the demand side.

Jim: Unfortunately, as you pointed out earlier, our human attention is not significantly greater than it was. And frankly, to the degree that it is greater than it was 10 years ago, that’s probably a bad thing, because it means we’re being forced into less productive multitasking. Then you add the network effect, where if you have the most users you win from a business perspective, and you hold them on as long as possible. All the players in the platform game are forced into encouraging unlimited communication into a domain with limited attention.

Tristan: That’s right. And say one platform tried to change that. Let’s say Twitter tomorrow created a rule that you’re only allowed to post once per day, or even introduced congestion pricing: based on the intensity of everyone else’s posting, posting becomes more expensive, not necessarily in money, but in some kind of scarcity credit for your ability to contribute to the attention commons. And maybe the price of being wrong, of sharing something that later turned out to be false or was more assertion than fact, goes up, so there’s a congestion-pricing mechanism there too. Well, the platform that introduces that isn’t going to do as well as the platforms that don’t introduce those kinds of scarcity limits, because the ones that don’t are the ones where people can post more often and share more salacious stuff.
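[Editor's note: a minimal sketch of the congestion-pricing idea Tristan floats here, assuming a hypothetical daily budget of "attention credits." The class names, capacity, and pricing curve are illustrative assumptions, not a description of any real system.]

```python
DAILY_CREDITS = 10.0   # each user's assumed daily budget of attention credits
BASE_COST = 1.0        # cost of one post when the commons is quiet

def post_cost(posts_last_hour, capacity=1000):
    """Posting gets more expensive as the commons gets congested,
    like surge pricing: up to 4x the base cost at full capacity."""
    load = min(posts_last_hour / capacity, 1.0)
    return BASE_COST * (1 + 3 * load)

class Poster:
    def __init__(self):
        self.credits = DAILY_CREDITS

    def try_post(self, posts_last_hour):
        cost = post_cost(posts_last_hour)
        if self.credits < cost:
            return False  # out of credits until tomorrow
        self.credits -= cost
        return True

# At low congestion a user gets about 8 posts a day; at peak, only 2.
quiet, busy = Poster(), Poster()
print(sum(quiet.try_post(50) for _ in range(20)))    # 8
print(sum(busy.try_post(1000) for _ in range(20)))   # 2
```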

Tristan: And that’s why, on our podcast, Your Undivided Attention, where we interview many different experts on this topic, we have one episode called The Bully’s Pulpit, interviewing Fadi Quran from Avaaz, who really studies how the business model favors bullies, those who use the platform to say the most extreme things about minority groups, and to say them as often and as powerfully as possible. I mean, think about the number of times per day that Trump tweeted: the election is rigged, the election is stolen, the election is rigged. Imagine instead that the scale of the audience you reached dictated how often you could communicate, or what sort of responsibility you had for your communication. As it is, you can simply, without any cost, tweet 30 times a day, 50 times a day.

Tristan: And if you already have 10 million followers, or a hundred million, or whatever he had, suddenly the real problem is that we have decoupled the level of power an agent has to influence the psychological commons from the responsibility for the possible harm they could cause. And to borrow another line from Daniel Schmachtenberger, or Barbara Marx Hubbard, where I think he got the line: you cannot have the power of gods without the wisdom, love, and prudence of gods. If you have the power of Zeus in a psychological capacity, to lightning-bolt half the communication commons and the psychological commons, then when you bump your elbow, you don’t know what you’re doing. You’re causing an exponential set of psychological consequences for others without having the consciousness of what that might do.

Tristan: And so what I really worry about here is that we’ve scaled up these God-like, Zeus-like powers of psychological impact for each of us in dangerous ways, without giving us the capacity to know what kind of harm we’ll cause. One of the simpler and subtler examples of this is one of the failure modes of social media: context collapse. You’ll say one thing in one context where it feels resonant and true, and because you’re broadcasting to exponential numbers of people, it will be heard by people for whom it is not true, or resonates very differently. So when the left says it’s all about race, or that we have to make sure we prioritize people of color, poor white people who live in the most rural areas and are economically suffering hear that and think, why would we prioritize that?

Tristan: And so, again, the context collapse of what would be a good-faith truth in one context but is not true for other contexts, that’s what leads to more conflict, because those other people, those poor whites, will then respond with more aggression, more hatred. And in a subtle, fractal way this is happening everywhere: everything that you say gets misheard and mischaracterized by other people, and then it drives more conflict and escalation. Until we recognize the cycle of that system, of how context collapse produces conflict escalation, it’s not going to look good. And again, this is where I think we need a constitutional convention, and even some kind of truth and reconciliation for this warping effect that technology has had on our collective ability to cohere. We’ve got to actually reckon with what’s happened, and it’s much deeper than I think people tend to see.

Jim: And it’s interesting when you talk about this context collapse. Again, back in the day, on the CompuServes and the AOLs and the Sources, most of the stuff was in what you’d now call groups on Facebook, say, where people were talking about things together, and there wasn’t a hell of a lot of overlap. The Packard car people weren’t arguing with the antique Cadillac people. But then you added the social dimension, this undefined open space that doesn’t belong to any group, and Twitter takes it to an even further extreme. At least Facebook has limits on how many friends you can have, and you have to be a real person, et cetera. Twitter’s the ragged edge of craziness, where there’s no concept of groups. Everything is in this highly fluid, ill-defined space, you can have as many followers as you want, and no wonder: if we overlap natural human variation in a high-dimensional space, of course you’re going to have conflict constantly. That’s essentially what’s been engineered into Twitter for sure, and Facebook to only a slightly lesser degree.

Tristan: Totally. You’re making me think of something totally different that I’ve never thought of before. One of the things that causes people to react with such negativity is being in ego debt, this sort of ego-debt ratio of positive to negative feedback: what’s the ratio of positive feedback we get to negative feedback we get? With social media, we get lots and lots of negative feedback, because the platforms make it easy for people not just to say negative things, but then to have other people pile on or say, “I like that.” And then you see people who you thought were friends piling onto negative things. So we all come from such a shrunken place, we feel really hurt and small, because on top of what I just said, our minds tend to loop on and remember the negative things that are said about us.

Tristan: Jim, if you posted a photo and you got 99 positive comments and one negative comment, where does your attention go? Your brain has seen 99-to-one evidence of positivity, but it tends to focus on the negative, because that’s what’s evolutionarily useful. Now imagine teenagers under that dynamic. We constantly feel there’s a lot at stake and we have to lash out. That’s what I think is causing some of this negativity to emerge, when hate kind of becomes a habit and then we all start getting into that habit together. And I don’t know why I’m thinking about this, but the thought came to mind: what if there were a practice where you log into social media, and the first thing it asks you, before you participate in Twitter and post what you want to post, is, “Who are three people you’re grateful for, who you just want to send some love to? Who would you want to celebrate or feel grateful for?”

Tristan: And imagine you woke up and everybody did that action. You’d be saying something positive to someone else and praising them, but you’d also suddenly have a lot of people praising you, because eventually I think all our turns would come around. Just that simple change. Examples like that are about starting with an understanding of human frailties and psychological weaknesses and trying to design around them in a positive way. And that mirrors other forms of human wisdom, where gratitude practices are one of the simplest things that people can do. When you start your day, open a journal and make a list of things that you’re grateful for, or just meditate on that. It’s a very powerful, simple thing that we can do.
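
As a design thought experiment only, the gratitude prompt Tristan imagines could look something like the sketch below. The decorator, prompts, and function names are invented for illustration; nothing here goes beyond what he describes in the conversation.

    # A minimal sketch of a gratitude gate: before posting, the interface
    # asks who you're grateful for. Purely hypothetical.

    def gratitude_gate(compose):
        """Wrap a posting action so it first asks for three gratitudes."""
        def wrapped(*args, **kwargs):
            recipients = []
            while len(recipients) < 3:
                name = input(f"Person {len(recipients) + 1} you're grateful for: ")
                if name.strip():
                    recipients.append(name.strip())
            for name in recipients:
                # A real system might notify them; here we just print.
                print(f"(a note of appreciation goes out to {name})")
            return compose(*args, **kwargs)
        return wrapped

    @gratitude_gate
    def post_to_feed(text: str) -> str:
        return f"POSTED: {text}"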

Tristan: And I’m bringing this up as an example of what we at the Center for Humane Technology mean when we talk about humane technology and what it looks like. The word humane in our work comes from my co-founder Aza Raskin’s father, Jef Raskin, who was the father of the Macintosh project at Apple. He said that an interface is humane if it is responsive to human needs and considerate of human frailties, because the whole point is that you have to actually understand human frailties and design around them to make technology work for us. In general, human wisdom like a gratitude practice or a meditation tends to be based on a core insight about something that our mind or brain doesn’t do automatically, but that would really help once you understand that frailty.

Tristan: I’ll give you a second example: the serenity prayer. What is it? “God grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference.” Well, that’s because our brains are agency blind, meaning our attention is decoupled from what we have agency over. On the savanna, everything you could point your attention to, that rock over there, that lion over there, you had some agency over. You could run away from that lion; you could grab that rock and do something with it. So what we put our attention on was coupled with what we had agency over. But when you go into modernity, and into social media, what we put our attention on, especially these big societal problems, is completely decoupled from our agency.

Tristan: And so, knowing this about ourselves, the serenity prayer is sort of a humane technology, in the form of something we can remind ourselves of, to recognize that which we cannot change or make a difference on, and that which we can. Imagine if humane technology helped point our attention at the things that we actually can change. There’s a positive feedback loop there, a virtuous cycle, where the more we start taking action on the things we can change, the more we see that life gets better in our local environments, whether that’s in our own lives or in the little habits and practices that we pick up at the start of a new year.

Tristan: Or in the form of changing our communities, because instead of complaining about climate change and feeling completely overwhelmed by it, we really focus on passing one law in our local county or our local state. That gives you a higher sense of agency. I want to give this as an example because I don’t want people to be burned out by all the negativity of all the problems we have with social media. I think we need a positive vision for what humane technology can look like, and in our work it’s based on starting with insights about human frailties.

Jim: Yeah, I love it. And I would add, as you mentioned early on, that I’m a great believer in personal practice to manage these things, in the same way we manage our alcohol consumption. I dearly love potato chips, but I also have a weight problem, so I don’t sit down and eat a bag of potato chips every day, even though I’d probably like to. I don’t drink every day; I have limits on how much I drink most of the time. We should encourage people similarly. Frankly, my father gave me a good list of rules for not running into problems with alcohol. There were a fair number of alcoholics in his family; he was not one of them, fortunately. But he had seen it at close enough hand to be able to say: never drink to offset a hangover, never drink before noon.

Jim: He had a whole list of these things, and why can’t we have those same kinds of practices? As you know, Tristan, I personally take a long leave from social media, six months a year. I’ve been doing it on the WELL, which was my first real intense online community, for 15 years. I’ve been doing it on Facebook for four years: six months a year, from July through January 1st, no Facebook. And this year I threw Twitter in too, because I developed a little bit of a Twitter jones with the launch of my podcast. I really hadn’t done much Twitter until I started the podcast, but you sort of have to do Twitter if you’re a broadcaster. So I just say, six months a year, I ain’t doing that shit.

Jim: Now, that may be more than most people are capable of doing or want to do, but it works for me. The other thing I do, which interestingly I did not do while I was on my social media sabbatical, is cyber-free Sundays, where I’ll just not touch any cyber device except the Kindle. Just like a good [inaudible 01:00:17] I’ll think through the rules of how I play the game, and I’ll say no technology except the Kindle, which to my mind is more like a book than a technological device. I kind of waffle on whether I’ll use voice telephony on my cell phone or just use the house phone, but otherwise, no tech one day a week. In the same way, we all have our family rules, soft rules, recommendations for how to deal with alcohol and junk food and gambling and smoking and all these sorts of things. There’s no reason we can’t develop family, community, and maybe even religion-based conventions on how to modulate our use, and borderline abuse, of these technologies.

Tristan: Completely. And I think that wisdom traditions in general give us different forms of practices. It’s interesting that so many religions include a Sabbath or some kind of day of rest in how they construct their religion. That says something probably more universal about what would be helpful for us. I think this is also what people like Jordan Peterson are kind of harvesting when they say, “Hey, we can’t throw out the baby with the bathwater.” Maybe these religions were wrong, but there are core moral archetypes, core moral ways of being, or practices. Even his famous recommendation to make your bed, just this simple act of keeping your own internal house in order and taking responsibility versus not taking responsibility. These are very tiny, simple things, but generally speaking, different wisdom traditions tend to emphasize very similar things, because they’re based on insights about how to work well with our intrinsic physiology.

Tristan: I’d love to give just a couple more examples, but one thing about what you’re sharing: what would be inhumane is to force the burden of responsibility for a systemic problem onto individual behavioral practices. I want to name a both/and here. Of course, if each of us can buy a Tesla or double-pane our windows or do the things that we can for climate change, that’s fine. But when BP says, here’s a carbon calculator so you can calculate your own footprint, they’re promoting individual behavioral practice for a systemic problem, when we know that the top 100 companies are responsible for some 71% of our climate problem, as the Guardian reported a couple of years ago. We have to make sure we’re actually changing the system systemically as well as individually.

Tristan: So what I hear you saying, Jim, is: on the one hand, let’s figure out cultural practices that help us, whether it’s gratitude, or a Sabbath, or some pretty basic things like the serenity prayer, but let’s also make sure we change things at a systemic level. And to change things at a systemic level, we have to identify what in the system incentivizes paving over human frailties or weaknesses. I wanted to give a couple more examples of where something has gone wrong. Another one is that heightened polarization plays on a frailty of the mind around frame control: your mind has this thing called mutual inhibition. George Lakoff describes this really eloquently: when you see things in terms of one cognitive frame, it suppresses the other interpretation of that frame. Political polarization is based on playing on that.

Tristan: An example is that the same words can have multiple meanings. So “defund the police” can be perceived to mean completely abolishing the police on one side, and can otherwise be perceived as a conscious political strategy to reallocate funding toward community safety on the other. When we get into arguments, we’re often referring to the same symbols, but through different frames of reference for what people believe about that thing. Take Black Lives Matter: some people have in their minds images of the completely unjust moment with George Floyd, and that’s what Black Lives Matter and the protests over the summer refer to for them. And then the people who see Black Lives Matter in a negative way are holding a completely different set of representations in their minds: people tearing down statues of people who weren’t slaveholders, or crazy fires and riots and destroyed black-owned businesses, and things like this.

Tristan: And again, if we don’t recognize that we are inside of different frames, different groundings for the things that we’re referring to, and we get into arguments about those different realities without recognizing that our minds are tuned to different things, we’re going to simply get into more conflict. You can imagine humane technology helping us identify these multi-meaning frames, helping us pause before we talk about them, and sort of naming that. I don’t know if you’ve seen it, Jim, but there’s another example of this that I saw during the middle of those protests: a four-circle Venn diagram that says you can be here in the middle. For example, one of the circles is: George Floyd’s death was murder, and the cop responsible should be in jail. The second is: the police system is structurally corrupt and regularly refuses to prosecute cops.

Tristan: The third is: looting and burning businesses is immoral and counterproductive, and those who do it should go to jail. And the fourth is: mass protests and civic disruptions are legitimate and warranted actions. The whole point is that to make these nuanced points, you have to almost explicitly outline the edges of where you’re standing, because if you just say, “Yeah, it’s okay for these protests to happen,” while not also saying, “Looting and burning businesses is immoral and counterproductive,” people will hear you using one frame and not hear the other. This gets back to the context collapse issue we were talking about. But if we don’t know about our own minds, that we are perceiving different things when we argue with each other, then we’re not even arguing about the same thing. I think so much of what’s going wrong in our culture, in our social media conflict escalation machine, is due to dynamics like this.

Jim: I love that. I use that example. In fact, it’s been amazing: I have been having an online discussion today comparing Black Lives Matter and the MAGA crowd, both as self-organizing network tribes, and immediately people go, “How can you compare these two, blah, blah, blah?” I go, wait a minute, I’m talking about them as a class of entity. We’re looking at a much higher-dimensional analysis than deciding who’s right and who’s wrong. For the purposes of this analysis, it doesn’t matter who’s right and who’s wrong.

Jim: They have attributes; they’re similar in some ways, different in others. And we want to understand the phenomenon, the class of self-organizing network tribes. It was quite interesting. The rhetoric I used there was to say, what you have to do is look through a high-dimensional lens. You can’t just look through a one-dimensional lens of my enemy, my friend. Rather, let’s try to frame this intellectual concept of a self-organizing network tribe: what attributes do they have in common, and how do they differ? But some of these people were amazingly vehement that it was somehow a horrible infraction to even attempt this analysis. And it was clear to me that it was fundamentally a failure of dimensionality of lens.

Tristan: Yeah. I would even consider steelmanning someone’s argument a humane technology, because it’s a way of pre-understanding the human frailties on both sides, the human frailties in understanding. You start with: let me actually make the point that I think you’re trying to make, stronger than you would have made it yourself. These are Robert [inaudible 01:07:31] rules of persuasion in conversation, I think, as well, because that’s how we get somewhere, and we can show that we’re actually trying to get somewhere new. The negative perception of that is steelmanning an argument that people think is illegitimate. For example, even giving credence to people who are against Black Lives Matter, people on the left would come after you for that: how could you even do that? And so, Daniel Schmachtenberger mentions three kinds of education that we need: Aristotelian education, about virtue, ethics, and philosophy; Socratic education, which is argumentation and logic; and Stoic education, which is about how to wrangle our own emotions.

Tristan: And I feel like we’ve lost our Stoic education, because we let our own emotions jump so quickly to conflict before really trying to understand each other. Social media isn’t making this easy, because I think short strands of text are just pre-built for conflict. I would want people to think about Twitter as a conflict-driven medium, where you can only communicate something in a flat perspective. I think we’d do better to send videos around that contain the full nuance of our perspectives. And when we hit record, it would remind us to actually steelman the other perspectives we’re trying to talk about and incorporate, so that we don’t accidentally fall into context collapse. Again, that’s the humane technology approach: starting with an understanding of the frailties of how we interpret information when it’s taken out of context.

Tristan: It would encourage us to communicate in ways that address context collapse up front. Another good example of that, by the way, I don’t know if you’ve seen it, is that the Guardian newspaper adds a yellow label to older articles saying, this article is seven months old, to prevent re-sharing of things that are old or shorn of their original context. It does that in the article itself on the Guardian website, and I believe when you share the article to Facebook, it puts a little yellow bar in the header photo itself saying this article is seven months old. That, again, is about building into the system something that augments our contextual understanding appropriately, versus getting confused about when something occurred. You see this all the time, even with smart people, Jim, that you and I know in the Game B group or wherever, they’ll post these things-

Jim: No, I’ve done it, posted an old article accidentally.

Tristan: Yeah. And we’ve all done it. And the point is, again, you’re a really smart guy, you were on the board of SFI, I respect you a lot, and we’re all in this very smart community, but that doesn’t change the fact that our human weaknesses are a different dimension of how we work. This is what I learned from magic as a magician: it’s not about whether your audience member or spectator has a PhD. If they’re a nuclear engineer or they build 747s, it’s not as if the magic trick fails because they’re too smart and know how to build 747s. The whole point is that there are weaknesses in our minds that very few people know about in themselves. And those weaknesses, where they are universal, are where we need to point our attention for the broader reform of technology.
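
The Guardian-style age label Tristan described a couple of turns back is simple enough to make concrete. The sketch below is a guess at the behavior from his description, not the Guardian’s actual code; the function and names are hypothetical.

    # A minimal sketch of a staleness label like the Guardian's, shown
    # when an old article is re-shared. Names are hypothetical.

    from datetime import date
    from typing import Optional

    def staleness_label(published: date,
                        today: Optional[date] = None) -> Optional[str]:
        """Return a warning label for articles at least a month old."""
        today = today or date.today()
        months_old = ((today.year - published.year) * 12
                      + (today.month - published.month))
        if months_old >= 1:
            unit = "month" if months_old == 1 else "months"
            return f"This article is {months_old} {unit} old"
        return None  # fresh enough, no label needed

    # Example: sharing a June 2020 article in January 2021.
    print(staleness_label(date(2020, 6, 3), date(2021, 1, 10)))
    # -> This article is 7 months old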

Jim: Absolutely. And there’s a point I’d been looking for the right time to bring in; I’ll bring it in now because it fits perfectly. One very well-known, very powerful, and very pernicious cognitive flaw, confirmation bias, is actually positively correlated with education. The more educated you are, the more likely you are to rule out evidence that contradicts your current holdings, which is kind of interesting. People say, “Oh, we need a better-educated America.” Well, actually, if you want confirmation bias to go up, give people more of the kind of education we have now. Per Daniel Schmachtenberger and my friend Zach Stein, there are other kinds of education we could be doing that would help people avoid confirmation bias as they became better educated, but today’s educational system actually increases confirmation bias, amazingly enough.

Tristan: Yeah, completely. And one of the other aspects of intelligence is that the smarter you are, the better you can justify anything: your self-justifications for your belief system get so complex and so self-reinforcing that it’s harder for others to poke a hole in them, because your ideas are so elaborate and brilliant. What we really need is a self-critical, humble stance, what you’ve called before epistemic humility. We need to operate with a culture of humility and learning and growth. To do that, we also need to increase that feedback ratio, so that we’re not all operating from such ego debt. Social media produces this feeling of constantly being attacked, or anticipating attack, so we all get very defensive from the start.

Tristan: Especially in public environments where you’re attacked for having some view, it’s not as if you’re immediately going to abandon that view. In fact, I think there are studies showing that a public apology for a mistake actually causes net worse damage, because people are not forgiving online. There’s a great article by someone in the humane technology community named Nick Punt, who I think is at the Stanford Business School, about what Twitter would look like if it were designed for forgiveness and mistakes. Imagine there’s an “I made a mistake” button, and a way we can actually clap for, or positively reinforce, someone owning up to a mistake. He goes through an analysis of why it might not work, all the pros and cons, but I think that’s the kind of exercise we need to go through.
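
A minimal sketch of the kind of affordance Nick Punt’s article explores, as described in this exchange. The data model, field names, and methods are my own illustrative assumptions, not his actual design.

    # Hypothetical sketch of an "I made a mistake" affordance: a
    # retraction stays attached to the post, and readers can reward
    # the admission rather than pile on. Field names are assumptions.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Post:
        author: str
        text: str
        retraction: Optional[str] = None
        applause_for_owning_it: int = 0

        def i_made_a_mistake(self, note: str) -> None:
            """Mark the post retracted, keeping it visible with context."""
            self.retraction = note

        def applaud_retraction(self) -> None:
            """Positively reinforce the admission instead of the pile-on."""
            if self.retraction is not None:
                self.applause_for_owning_it += 1

    p = Post("jim", "This breaking news is wild!")
    p.i_made_a_mistake("Turns out the article was seven months old.")
    p.applaud_retraction()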

Tristan: We don’t even realize that we never had a national public square before, let alone a global one. People talk about digital technologies as the new public square, but we’ve never had a global public square, where everyone, millions of people, are paying attention to you at once. Do you think it’s easy to be humble, to say “I changed my mind” or “I made a mistake” to millions of people, and that everyone, in a decentralized capacity, is going to do that? It’s very rare. Again, I think we need systemic support for that, not just some kind of personal responsibility where a handful of people conjure the will to say, “Yeah, I was really wrong about this and I want to let the world know.” These are more examples, I think.

Jim: Yeah, I like it, very good. We’re getting late here in our time; we’re going to have to wrap up in a few minutes. As the last thing we do, I’d suggest we go through some things you and I have talked about over the months. It’s kind of my style: I look for brute-force, simple, powerful moves that might fix problems. Amazingly, in the business world, that was often a good idea; it may not be as good an idea in the political space. And I will acknowledge, before I go down this list of possible reforms, that this real move to a constitutional convention and the democratic rule of law in cyberspace is going to be a long process, and probably a lot more nuanced than these ideas. But I’m going to throw out three ideas that I’ll admit up front are brute force, which I believe would get close to the root of some of these problems. Be ready to give your critique of them.

Tristan: We’ll give it a shot. Let’s go.

Jim: All right. You’ve heard this one from me before. What would happen if we just banned advertising, period? The tech world existed before advertising, and it was actually great. We loved, I loved at least, most of the products that I came to use. They weren’t trying to steal my attention; they weren’t trying to sell me something. Just try to use one of these new games on your phone: every goddamn thing is blah, blah, blah, try this, play this. What happens if we just flat-out banned advertising from technological platforms? It would be crazy, but it might be great. What do you think about that?

Tristan: I think it’s a question of: you can only have the ethics or moral operating system that you can afford to have. So from a moral perspective, and from a cleaning-up-the-environment perspective, this is absolutely where we would love to be. You and I talked, when we had this conversation before, about going back to the 1980s and 1990s. I want to be very clear that I personally am not anti-technology at all. In fact, I grew up thinking the Macintosh and all the creative applications on it in the early 1990s were a wonderful, creative, expressive place. I mean, remember what it felt like to add VisiCalc to your Apple II, or to add Photoshop or HyperCard or Final Cut Pro, these creative applications? We were making things.

Tristan: And each time you downloaded a piece of software, added it to your computer, or put in that CD-ROM, you were adding a new capacity that would help you. There was nothing in Final Cut Pro or Photoshop that, when you painted a brushstroke on your canvas, notified six friends saying, “Hey, Jim Rutt posted a brushstroke on his canvas. Do you want to see it? Do you want to comment on it?” We’ve gotten into this totally bizarre new normal that is completely not normal, and we shouldn’t treat it as such: one based on using each of our actions as social candy to lure others back, and on this virality model to keep us distracted, because attention is a commodity. I wrote about this recently in an article in MIT Technology Review.

Tristan: And I referenced E.O. Wilson, whose solution for the climate crisis is making sure we keep half of Earth hands-off: no extraction, just let it go. You can think similarly about the attention economy: we shouldn’t have attention be a commodity that is atomized and for sale. I use this metaphor: when we do that, we’re essentially saying that in the same way a tree can be turned into timber, human beings can be turned into dead slabs of predictable behavior. It’s the deadening of human choice-making, if there is such a thing as free will, and I think we’re all skeptical of the dimensionality of what that is, to the extent it exists or doesn’t. But for sure, when you have AI pointed at our brains, trying to predict our next move and winning more and more of the time, it’s essentially deadening our choice-making at a collective level, turning human consciousness into dead slabs.

Tristan: And that’s what I think about when I look at people on subways all around the world, caught in these loops of scrolling through their phones forever and turning into zombies: that deadening of human choice-making. And that zombie society cannot solve climate change, cannot deal with racial inequality, cannot deal with economic inequality. It does not work. So I do think that solutions as aggressive as banning advertising and banning the commodification of attention, maybe above a certain threshold, like E.O. Wilson’s half-Earth, a sort of half-consciousness, half-the-human-mind solution, are in the direction of how we need to think about the change that’s needed.

Jim: Yep. That’s a big one, and people always recoil when I say it. They go, “How could we do that?” I say, “We used to do it, until 2005.” The second one, again, is very aggressive. We talked earlier about addiction, and there was a very dramatic scene in the film about the person who put their phone in the little time-lock safe so they could not use it, then came down in the middle of the night and broke it open with a hammer. I go, “Holy moly, I don’t think I’ve ever been addicted to anything that bad,” but clearly there are people like that.

Jim: I remember when Facebook was first a thing, 2011, 2012, there were a bunch of these surveys: “What would you give up to keep Facebook?” And it’d be like, “I’d become a vegan before I would give up Facebook.” It was like, “Aw, what?” Just like alcohol: 10% of people who drink are addicted to alcohol. Probably more than 10% of people are addicted to social media, maybe as high as the 40% or 50% of people who become addicted to cigarettes if they smoke. So what about being really extreme and dealing with social media the same way we deal with cigarettes, alcohol, and marijuana, and restrict social media to only people over 21?

Tristan: Listen, there are particularly pernicious design techniques that manipulate young people. In Salt Sugar Fat, the book I mentioned earlier, they talk about how all human beings respond to sugar, but kids actually respond to sugar at twice the rate that adults do. For that reason, we have to be ultra-sensitive to the ways that sugar can hijack children. And we do things like that: we banned, for example, as you said, Joe Camel and cartoon depictions of cigarettes that are marketed explicitly to children, because we know that’s a manipulative form of advertising. We don’t allow it. And coincidentally, by the way, we used to ban URLs, any kind of web address, in advertisements to children on television. A former FCC commissioner mentioned this to me. We don’t do that now.

Tristan: When YouTube gobbled up the norms and standards of children’s broadcasting, it removed all of those children’s advertising protections. So I think we should absolutely look at these kinds of extreme actions, but I think of it from a principled perspective. It’s not “oh, the kids these days,” a moral panic about children; we’ve always had moral panics about children, and I want to be very clear that we should be aware of that when we take aggressive actions like this. If we want to go at this from a principled perspective, we want to make sure that the level of power or influence being exerted is commensurate with the level of self-awareness or consciousness you would need to wield that power.

Tristan: I always say, if you go to a kitchen store, you can buy a set of knives. You don’t have to get a background check or training in using kitchen knives, because the level of harm you can cause is limited, and the knives don’t overpower you or cause you to go crazy or something like that. But if you’re going to buy a gun, there’s got to be a background check, and training, hopefully, because that’s a bigger power. From AK-47s up to ICBMs, as we scale the power, we have to scale the responsibility, training, and awareness of what you could do if you wield that weapon or power in irresponsible ways. With children, what you have is a very early developmental consciousness that may not be aware of the costs of posting something, without recognizing how many people will see it and what that could cause downstream.

Tristan: And that’s actually a lot of the issue when we talk about cancel culture for kids, or bullying, or the kind of anxiety that kids face. They post something, and they don’t know whether, by the time they get home, their reputation will have been destroyed, because the entire school has seen it, because it spread not just to the kids but to the parents and the school administrators and even more broadly. In that world, the power of the technology is beyond the capacity, the sort of consciousness, of its users. So when we say things like “ban social media until 21,” I would rephrase that as: how do we make sure we align the power, and the dimensionality of potential harm, with the level of consciousness we would want in place to wield that power?

Jim: That’s another way to think about it. I like the brute-force approach because it’s easy to understand, and sometimes that’s the right way. The final, kind of over-the-top recommendation is: ban recommendation engines, period. Or, at a very bare-bones minimum, ban any recommendation engine from using personal behavioral data.

Tristan: Yeah, this is really interesting. When we talk about recommendation engines, people should know why we would want to do something like this. Facebook’s own leaked documents, reported by The Wall Street Journal over the summer, showed that 64% of the extremist groups that people joined on Facebook were joined because of Facebook’s own recommendation systems. So you’re on Facebook, and you joined this one group, say for Patriots or something like that, and then it says, “Hey, by the way, would you like to join this other group, QAnon: defeat the global conspiracy?” If 64% of the extremist group joins came from Facebook’s own suggested-groups interface in the right-hand sidebar, more than 50%, then there’s a clear responsibility that Facebook and others have.

Tristan: And this is true of YouTube recommending you down rabbit holes. They recommended Alex Jones’ conspiracy theories 15 billion times; I think I mentioned that a year ago when we did our interview. So recommendation systems can be incredibly damaging, especially the kind of conspiracy correlation matrix that your previous guest Renée DiResta, who’s a colleague of mine, talks about a lot. But I think we would want our recommendation systems pointed at what would help people, and to do that, we would need aligned incentives. Think of how a coach or a mentor makes recommendations: based on asymmetric knowledge, but grounded in what you’re looking for in your life and what your values are. In any advertising-based or engagement-based recommendation system, where the goal is to keep you on the screen, keep you coming back, keep you doing things, that’s where the perversion comes from.

Tristan: So the blunt approach is to ban all recommendation systems, but then you would lose the suggested other products you could buy on Amazon, and some people might not like that. I think we want to be a bit more surgical with the knife, carving out engagement-based recommendations from, let’s say, merchandise recommendations, or mentorship and coaching recommendations about what kinds of things would be helpful. I’ll give you an example. I have a sleep app on my phone called RISE that I really like, and it’s a good example of humane technology, because it deepens my own inner capacity for awareness of my sleep cycle. It actually tracks your circadian rhythms, and the whole point is that it can give you recommendations based on your current sleep cycles.

Tristan: How would you want to align that curve better? You would want a system to give you personalized recommendations there, because it’s not going to steer me into QAnon groups, or down crazy rabbit holes, or to conspiracies, or God knows what else. It’s on my side: it wants me to sleep and live better. So personalized recommendations that are about helping us live better, I think, can be good, but we need to be really careful. I would put this on the agenda of the constitutional convention.
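
One way to read the surgical carve-out Tristan is describing is as a policy check on recommendation systems. The categories and the rule below are assumptions made for illustration, not anything proposed verbatim in the episode.

    # A sketch of the carve-out: ban engagement-optimized recommendations,
    # allow merchandise and coaching-style ones. Categories are invented.

    from enum import Enum, auto

    class RecKind(Enum):
        ENGAGEMENT = auto()   # optimized to keep you on the screen
        MERCHANDISE = auto()  # "other products you could buy"
        COACHING = auto()     # aligned with a goal you stated, like sleep

    def is_permitted(kind: RecKind, uses_behavioral_data: bool) -> bool:
        """Policy sketch: engagement-based recommendations are banned
        outright; coaching ones may use your data because you asked."""
        if kind is RecKind.ENGAGEMENT:
            return False
        if kind is RecKind.MERCHANDISE:
            # Jim's stricter variant: no personal behavioral data at all.
            return not uses_behavioral_data
        return True  # COACHING: user-aligned by construction

    assert not is_permitted(RecKind.ENGAGEMENT, True)
    assert is_permitted(RecKind.COACHING, True)
    assert not is_permitted(RecKind.MERCHANDISE, True)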

Jim: Cool. Well, I think you gave good, thoughtful analysis of my rather blunt and brutal suggestions, and that’s the kind of work we would need to do at this constitutional convention. So I want to thank you, Tristan, for an amazingly interesting conversation. And for those of you who haven’t seen The Social Dilemma, go get it. It’s on Netflix. Still up on Netflix?

Tristan: Still up on Netflix. I would also recommend, if people want to go deeper, checking out our podcast, Your Undivided Attention. We have an episode coming out right now with Yuval Harari, the author of Sapiens, really getting deep into the future-looking versions of these issues, which I think is helpful for people to understand too.

Jim: And as always, links to everything we mentioned will be on the episode page at jimruttshow.com. Thank you, Tristan.

Tristan: Thank you so much, Jim.

Production services and audio editing by Jared Janes Consulting. Music by Tom Muller at modernspacemusic.com.