Transcript of Episode 38 – Tristan Harris on Humane Tech

The following is a rough transcript which has not been revised by The Jim Rutt Show or by Tristan Harris. Please check with us before using any quotations from this transcript. Thank you.

Jim: Howdy. This is Jim Rutt and this is the Jim Rutt Show.

Jim: Listeners have asked us to provide pointers to some of the resources we talk about on the show. We now have links to books and articles referenced in recent podcasts that are available on our website, we also offer full transcripts. Go to JimRuttShow.com, that’s JimRuttShow.com. Today’s guest is Tristan Harris. Tristan is the co-founder and executive director of the Center for Humane Technology.

Tristan: Hey Jim, it’s good to be here. Thank you for having me.

Jim: Yeah, great to have you on. Some interesting stuff we’re going to be talking about. Tristan was the first design ethicist at Google. We’ll talk about what that is a little later. He’s an expert on how technology is steering us all. Tristan has spent over a decade understanding the subtle psychological forces in play in our online world. And he’s the cohost of the podcast Your Undivided Attention. In various writings and talks, you’ve said that your experience as a magician as a younger person has helped inform your perspective about the online world. Could you tell us a little bit about your magic days and what that perspective has provided?

Tristan: Yeah. I always start with magic because if you start by looking at what technology is doing, the common metaphor is people think, “Well, it’s all about how we’re using technology,” that we’re the ones using it. And the premise of, I think, the rest of the conversation we’ll have today is really more about how technology is influencing us. And the reason I go to the magic metaphor so often from my childhood is it’s about recognizing asymmetries of power, like an asymmetric advantage. And that’s what a magician essentially has over you: a sense that they know something about your mind that you don’t know about yourself, because if you did know that about yourself, the magic trick wouldn’t work.

Tristan: And what’s especially interesting to me about magic is the universality of the things that a magician knows about all human minds, and the fact that even a nuclear physicist can easily be fooled by a magician. It actually doesn’t really matter what domain of expertise or PhD you have, magic is about a subtler layer of human vulnerability. Think of them like the zero-day vulnerabilities or the back doors on the human mind, in terms of cause and effect reasoning, in terms of attention, in terms of misdirection. That’s obviously the kind of basic stuff, but as you go more advanced, there’s a whole long list of psychological features.

Tristan: And I think one last thing there is that magicians are like the first applied psychologists, because way before we had an official field of psychology, they were figuring these things out. They didn’t even name the principles, but they had this whole library of things that they were aware of and started to exploit.

Jim: Makes perfect sense. You’ve also mentioned working with the famous, or maybe infamous, Stanford Persuasive Technology Lab. Who are they, and what impact did they have on our situation and on your perspectives?

Tristan: So at Stanford, I studied computer science, so I have a typical computer science background, but I was mostly interested in the psychology and cognitive science dimensions. There’s actually a major at Stanford called Symbolic Systems that a lot of famous alumni, like Reid Hoffman and Scott Forstall, the guy who led the iPhone software at Apple, were in, and that was the program I was more oriented to. And as part of that program, there was a class by this one professor named B.J. Fogg, from his lab called the Stanford Persuasive Technology Lab, where they essentially took everything we knew about the field of persuasion and influence: social psychology, Robert Cialdini’s Influence, slot machine design, clicker training for dogs, click, click, reward the dog for doing the thing.

Tristan: And he applied all those disciplines, all those domains of expertise, to ask, “Hey, what if we were to embed that in technology? Could technology be an influence over your attitudes, beliefs, behaviors, habits?” And that class, there’s a whole history we can go into there, but the founders of Instagram were in that class, and many of the early alumni went on to join the growth teams at the early tech companies, Uber, LinkedIn, Facebook, because it gave a tool set for how to build more and more engaging products. And I just want to name really quickly that the professor, B.J. Fogg, is often vilified, I think incorrectly, because he warned the FTC about the dangers of persuasive technology and where this could go back in the late ’90s, I think it was 1996.

Tristan: And he has always tried to apply persuasive technology for good, toward things like world peace. He actually had a project called Peace, which was like peace.facebook.com, peace.google.com, where each tech company would ask how it could be persuading the world towards world peace. We can go into these topics if you want to. There’s a lot of detail here.

Jim: Yeah. I think the takeaway here, and this is something everyone needs to understand, is that these are deeply designed systems. And it’s funny you mentioned slot machines. The first time I encountered this issue of deeply cognitive-science-informed technology was when a business friend of mine went to work as the COO of the third largest slot machine company in the United States.

Tristan: Oh really?

Jim: Yeah, he did. Yeah. This was in the ’90s. We had a dinner, I don’t know, a year later, and he was telling me all about what he was doing, and he told me they had, this is the third largest slot machine company, they had 200 PhDs on staff. And he said they all came from the disciplines where people are basically torturing rats and making them do various things. I’m like, “Oh, that’s great. What a scary thing that the really sharp PhDs are tuning the finest details of slot machines to increase their addictive potential.” And then my second touch point on this was last year I published an essay called Regaining Our Cognitive Sovereignty, about how people have become addicted to smartphones in particular.

Jim: And one of the data points I dug up when I was writing that was, I went to the Facebook internal job board, or I guess publicly facing job board, and typed in psychology, and 708 openings at Facebook had the word psychology in the job description. And so again, that was, “Aha, this makes perfect sense. If I were them, I’d be doing the same thing: trying to bring in serious talent that understands how the human mind works and using that insight to make these products way more effective at pushing our buttons than we’d have any idea.”

Tristan: And it’s important to note that this is a new, unique phenomenon. I mean, the hammer that might be sitting in your shed does not have a thousand PhDs behind it trying to figure out how to get you to use the hammer in a particular way. Even your telephone back in the 1970s didn’t have a thousand PhDs behind it trying to use your social psychology and run experiments on rats to figure out how to get you to talk through the telephone more. These were tools. So what’s really changed, and this is why we really focus on this aspect of persuasive technology as what we’re identifying as the problem, is the degree of asymmetry, the fact that there are a thousand engineers behind the screen who know a lot more about our psychology than we know about ourselves.

Jim: Well, I think you’re certainly right about the hammer, but I’m going to push back just a little bit, and this is a theme I’m going to push on a couple of times, to distinguish between what was going on in the previous generation and the current generation. There was a technology before online that was amazingly psychologically informed, both from academic psychology and in practice, and that is TV advertising. In fact, advertising in general, but particularly TV advertising. These things were amazingly expensively produced, still are, I assume, and they did all kinds of tests including EEGs, etc. So it’s not like we haven’t confronted this before.

Tristan: Well, this is what always comes up. I feel like I’ve spent a decade now having the conversation of, haven’t we always had propaganda, TV, marketing, advertising, etc? What could be so alarming or bad about what we have now? It really is a new, unprecedented situation. I think that if someone was telling you about an atomic bomb, but you were already aware of regular bombs, you’d say, “Well, so this bomb is just a bigger bomb?” If they didn’t show you an entire city getting obliterated, you’d say, “It just sounds like it’s a bigger bomb than the other bombs. I mean, there’s nothing new here.” And in this case with atomic bombs, you can actually visually see the just stunning degree of exponential damage and scope that you can create.

Tristan: Whereas in technology, I think the degree of advancement in the persuasion capacity is not in the visible domains. You can’t use your eyeball to see how big it is. And we don’t have intuition for what billions of people being influenced by these things really looks like. If we wanted to slow it down, there are at least four major distinct things that are different. You’re usually supposed to say there are three things, because three is so much easier to remember, but let’s go with four. The first is the intimacy and pervasiveness of the infrastructure.

Tristan: So with TV, you used to have to choose to watch it. It was in your living room, maybe it’s on periodically, but an advertisement has to reach you through a channel you happen to be watching. It’s not built into the fabric of the walls of your home. Whereas the smartphone we check 150 times a day. We have 2.7 billion people using Facebook, that’s about one and a half times the size of Christianity in terms of a psychological footprint. We’re with the thing from the time we turn off our alarm in the morning till the time we go to bed at night. So that first characteristic is the intimate infrastructure aspect, the pervasiveness.

Tristan: The second is the social persuasive element. One of the things I learned at the Stanford Persuasive Technology Lab is how much more powerful persuasion can be if you can control the social psychological cues that people are using to make sense of what’s true or what’s real or what to do. And if you can say that, “Well, 300 of your friends liked or shared this,” or you have five people watching you as you’re doing this action right now, their little eyeballs are on the screen, the social persuasiveness is very powerful. And this gets into things like the Asch conformity experiments and so on.

Tristan: The third aspect is AI, and this is probably the one that’s hardest for the brain to intuit, but it’s the ability to make and run predictions about what you’re going to do and what would cause you to do something. Every single time you use YouTube and you say, “I’m going to watch this one video and then I’m going to be done,” and then you end up watching for two hours, you’re like, “What the hell just happened to me? I fell into a YouTube trance,” and you say, “Well, I guess I should have had more self control.” But what was really going on is, when you hit play on a YouTube video, there was an AI on the other side of the screen. You activate all of Google’s multibillion dollar computing infrastructure, and it wakes up a little avatar voodoo doll version of you. It does a calculation on 100 million different recommended videos that it could show you next, and it makes a prediction about which one would cause you to stay there.

Tristan: And so we don’t really think about the unfairness of this fight between our paleolithic brain on one side of the screen and the biggest supercomputing infrastructure in the world on the other side. The last thing I’ll mention is that AI is used for personalization. So the fourth characteristic is the degree to which each of us gets our own Truman Show, where the split testing works at a micro-targeted level. So instead of billboards or advertising on TV, where you’re getting a broadcast capacity and you don’t actually know who’s watching what except at some coarse-grained level, this is now fine-grained, micro-targeted precision: intimate access, pervasive, social, and AI-based indeed.

Jim: Indeed. Indeed. And I was involved in some businesses in the ’80s, ’90s and even into the early 2000s where direct mail was a big part of the program. And direct mail was slow, a bit opaque and very costly. If we’d had these kinds of tools back then, oh my God, could we have done some stuff, right? Not that we did too badly with our direct mail, but you’re absolutely right. In combination, these four things put us into a qualitatively different regime.

Tristan: That’s exactly right.

Jim: Interesting. You talked about the YouTube, what did you call it, the roll-out? Frankly, I watch YouTube only one video at a time, following a link somebody sent me or something I found on Twitter. I never just sit there and let the damn thing run, but I understand a lot of people do. And one issue that you brought up for sure, and other people have as well, is that these algorithms, not that they’re aimed at sucking people into extremism, but because of the way our cognitive systems operate, we respond more powerfully to extremism. And therefore, even though it’s not an independent variable, the result of the YouTube algorithm’s continuous-roll thing can be pulling people towards a more extreme perspective. What do you say about that?

Tristan: Yeah. Well, I’d like to go even stronger and say that the regime of social platforms and YouTube and so on that we have basically jacked into the brains of 2.7 billion people is completely dismantling our information environment. It has poisoned the information ecology; it’s like the Flint water supply for our brains. And that might sound like an extreme statement, but I hope we get into defending why that’s actually the case, because I want people to not see this as a conversation about whether we’re addicted to YouTube or not, did it get me to watch one more video or did it not? It’s more about the whole grand set of climate-change-like effects it creates in the social fabric.

Tristan: But to answer your question specifically, we have at the Center for Humane Technology a bunch of ex-tech-industry whistleblowers who were building some of these different systems. And one of them is Guillaume Chaslot, who’s an amazing researcher and whistleblower who built part of the YouTube recommendation system. And obviously, how much have you paid for your YouTube account recently?

Jim: Yep. Nothing.

Tristan: Nothing. And how are YouTube and Facebook, etc, worth more than a trillion and a half dollars of market value? They sell attention, and obviously that is the product. More precisely, as Jaron Lanier would say, the ability to just imperceptibly change your identity, beliefs, behaviors, etc, is the product. So the capacity to influence you is the product. And they need your attention to do that. And because YouTube is competing with TV and with Facebook for attention, it needs to get more and more aggressive over time, so they’ll start adding things like autoplay, etc.

Tristan: And a lot of people don’t know that 70% of the billion hours a day that people spend on YouTube is driven by the recommendation systems, 70%. It’s not like we open up YouTube to a blank white box and just click the videos that we want to watch or type in the search terms we want. We do that too, but we do that a vast minority of the time. Mostly it’s the autoplay. So now imagine, okay, so that doesn’t seem so bad. Let’s say the AI pointed at my brain gets me to watch one more video. It doesn’t seem like such a bad deal. The question is, what is it actually steering people to do?

Tristan: And notice that it’s not us who are responsible for what we watch, it’s mainly YouTube here, because it’s got the asymmetric influence and 70% is driven by it. And the metaphor we like to give people here is, if you were to line up all the trillions and trillions of videos that are on YouTube along one axis, one spectrum, on the left-hand side of the spectrum you have Walter Cronkite, the calm, rational stuff, Carl Sagan, Neil deGrasse Tyson, Richard Dawkins, whatever you want to say is the calm, rational side of YouTube, the Jim Rutt podcast. And then on the other side you have crazy town: you have the extreme conspiracy theories, hate speech, white supremacist movements, all the crazy stuff.

Tristan: Now, take any person you drop along that spectrum, starting at any video, anywhere from calm to crazy. When YouTube wants you to watch more along that axis, which way is it going to send you if it wants you to watch more videos? It’s never going to send you to the calm videos, because those are not really good for engagement. And so conspiracy theories, extremism, or things that further affirm your existing worldview are the things that get more traffic. So now if you zoom out and imagine that godlike view of the attention economy, you’ve got 2.7 billion human animals in this little farm in front of you. And imagine we just tilted the entire attention economy in the direction of the more extreme stuff.

Tristan: And I want people to really feel the kinesthetic gravity of what that would feel like as the whole world tilts towards just slightly more extreme things. Three examples of that, from two years ago: if a teen girl was watching a dieting video, what would she be recommended? She’d be recommended anorexia videos. If you were watching a 9/11 news video, it would recommend 9/11 conspiracy theories. And if you were watching a NASA moon landing video, it would recommend flat earth conspiracy theories. Now, this might sound just funny, kind of like, “Oh, it occasionally recommended flat earth,” but it recommended flat earth conspiracy theories hundreds of millions of times.

Tristan: And I know the people who work in the disinformation space who’ve actually been studying some of the flat earth movement, and I think people really underestimate the damage that it has done. Because if you think about it, if the earth is actually flat and the government’s been lying and science has been lying and NASA’s been lying, everyone’s been lying, it means that all of science is now under question. You can’t trust anything that science is telling you if you believe that the earth is actually flat and everyone’s been in on it. So it’s like a trust atomic bomb, which hopefully we’ll get into more: conspiracy theories as an asymmetric power to spread fear, uncertainty, doubt, etc.

Jim: Yeah, that’s an interesting one, because again, my reaction as a science-oriented person is, “What the fuck? Would anybody even contemplate something like that? Is it some kind of post-modernist ironic hack or what?” But apparently there actually are people who believe this stuff. Pretty amazing.

Tristan: And it’s actually been used deliberately. There’s evidence that Russia, in their information operations, actively promotes conspiracy theories. They’ve actually been going into US veterans groups and trying to seed conspiracy theories and doubt into those groups. And Rodrigo Duterte, the Philippines’ authoritarian ruler, there’s evidence that his populist movement in the Philippines made specific use of the flat earth conspiracy theory to dismantle trust in the system. When you don’t trust things, the main thing you want is someone to keep you safe, like strongman dictators, so conspiracy theories are actually a very effective tool in the arsenal of information weapons.

Jim: And of course all sides are using it. I think a good example on the other side is Adam Schiff. He kept saying, “I have proof that Trump was colluding with the Russians,” and I knew lots and lots of people who, when I kept asking them, “What do you think the odds are that Mueller is going to convincingly demonstrate collusion with the Russians?” kept saying, “100%.” And I would say, “Based on what I can see, more like 30%.” So there’s another example of where the same kinds of techniques are penetrating into our everyday politics.

Tristan: Yeah. Conspiracy theories are being used everywhere, which is why I take the Daniel Schmachtenberger view of, “How do we actually reboot sense-making?” And especially when you’re in a low-trust environment, how do you actually get back to knowing what to trust? And ironically, as the world and the race to the bottom of the brainstem for attention moves towards more clickbait, etc, even our mainstream sources of news information, even high-quality publications, and I don’t necessarily mean the New York Times, take a science magazine, Scientific American, increasingly have to play into this game because it’s a win-lose game. If I don’t do the more outrageous clickbaity thing and exaggerate the climate change claim, etc, then my competitors will.

Tristan: And so if I want to get people to even look at my thing, I’ve got to play the game. Yeah. And so that’s why these multipolar traps on attention are so pernicious and at the root of our problems here.

Jim: And of course one idea about how to do sense-making is to do it collectively. One of my favorite quotes that I coined is that humans are approximately the stupidest possible general intelligence. As far as we know, we’re just over the line in our evolutionary tree. Mother Nature is seldom profligate in her gifts from evolution. And one can go into the cognitive psychology of intelligence, and I could go down a long list of ways you could make a way smarter intelligence than us.

Jim: So let’s assume we’re the stupidest possible general intelligence, plus or minus epsilon, some small amount. And so if we’re going to fight back against these manipulations, we probably can only do it together. And so Daniel Schmachtenberger and Jordan Hall and others have been involved in promoting the idea of collective sense-making. And in fact, there’s an interesting Facebook group called Rally Point Alpha where we try to help each other out. We’ll float stories and say, “What the fuck is this, right? Is this horseshit or what?” Or we’ll also deconstruct them and try to point out the techniques that are being used to misinform, what we call bad faith discourse.

Jim: And so maybe at least one answer to this problem is to encourage people to join up with other people and jointly use our weak but collective intelligence to make sense of this flow of bad-faith and just money-oriented stuff that’s coming at us.

Tristan: Yeah. Well, I think those folks have been right to point out how much of our information has had a for-profit motive behind it, which means it’s more propaganda than actual information. I’d like to do a quick thing here: I think people often underestimate the degree to which we’ve hollowed out our information environment and our sense-making. And one metaphor I like to give people for this is that newspapers thought they were in the truth business, that their product was selling truth, but when these tech companies came along… Actually, there’s two phases to this. The first phase was when Craigslist came along, and newspapers realized they actually weren’t in the truth business, they were in the classifieds business, because Craigslist came along and ate their lunch, and suddenly they had to figure out how to make the online advertising thing work and do a hybrid business model, etc.

Tristan: That was the first phase of newspapers realizing they weren’t actually in the truth business. Then the second phase comes along after they figured out how to stabilize after Craigslist took their classifieds, which is they thought they were in the truth business again, but then what they actually realized was they were in the attention business, not just the classifieds business. The reason they found that out is Facebook and YouTube come along. Imagine two black boxes. In the black box on the left, you have essentially a news organization, Wall Street Journal, New York Times, etc. You’ve got to pay those journalists $100,000 a year or $200,000 a year, you’ve got to pay those editors, you’ve got to pay for the Iraq security details. You’ve got to pay for the long time it takes to do an investigative story. You’ve got to interview witnesses, you’ve got to do fact checking.

Tristan: You’re going to get it wrong, it’s not going to be perfect, but there’s some notion of process and there’s human moral judgment involved. And that is an expensive set of inputs to produce what on the output side? Well, an article or a newspaper report. And that generates a certain amount of attention, and then that’s sold to advertisers. But it comes at a very high cost of paying all those human beings. So then YouTube, Facebook, Twitter, etc, come along, and especially the social media companies say, “I have an idea.” And by the way, I’m not saying this is actually how it went, but from a business perspective, this is effectively why those businesses are so successful. They said, “Instead of paying those journalists, what if each person was convinced to be narcissistically addicted to how many followers they had and how much attention they could broadcast, so that they’d essentially be a useful idiot in the attention economy, publishing and generating attention for free out of their own narcissism?”

Tristan: Because people want to be seen as smarter and more thoughtful and get people interested in their lives, etc. So you start posting photos of your breakfast or you start posting news articles and showing how smart you are. So if you think about those two black boxes, the one on the left is a very expensive way of producing human attention. The one on the right, the social media companies, is a very, very cheap way of producing human attention. We’re essentially the Uber drivers of the gig economy, creating and producing human attention essentially at no cost, for free, not even getting paid by the companies. But then what that means is that we replace that process of investigation, of fact-checking, of witness reports, of long cycles and human discernment with essentially a bunch of angry people who are yelling, thinking that they have the smartest take.

Tristan: So they do breaking news in all caps, they do cynical commentary, they take the least charitable example, they amplify it, they build a mob around it and say, “Look how bad the other side is.” And that has become the default information environment. And when you realize that that’s actually what’s happened, then game-theoretically, the New York Times on the left-hand side now has to compete with that. They have to say, “Let’s at least play the clickbait games, even with those articles,” to get the same amount of attention. So it’s really screwed up the entire information environment. And that’s what I think people deeply underestimate about what’s going on.

Jim: Yeah, the game theory dynamic. My friend, Bret Weinstein-

Tristan: Good friend.

Jim: Yes, he’s been a friend of mine for years and a great guy, and part of his core analysis of what’s going on is that it’s the same old shit. It’s evolutionary dynamics and it’s game theory dynamics. And what he always underlines is that the one that’s doing us the most harm across the board is the race to the bottom. And essentially, what you’re describing with respect to the news dynamic is, even now the New York Times is full of these stupid-ass stories about some British princess or some goddamn thing. Who gives a shit? On the scale of things, it’s a zero. But because they’re in the clickbait business, they’re forced to fill the front page full of crap so they can compete with Facebook.

Tristan: Well, this is all why the fitness landscape needs to be a fitness landscape that’s coupled with human values: what are the things that we actually want the competition to be for? But this is why it’s so dangerous if the competition and the fitness landscape are basically anchored on the resource of human attention, because then it’s basically reverse engineering the human psyche to elicit responses out of your nervous system. I think of it like insider trading on your nervous system. I have asymmetric access to know how to do a trade to get the outcome that I want from your behavior, whether it’s a habit formation or a belief shift or infinite scroll. But we didn’t really go through the other examples of the persuasive things. Just to give one more, the auto-playing videos or the infinite scrolling feeds: our co-founder at the Center for Humane Technology is Aza Raskin.

Tristan: He actually invented the infinite scroll feature, so that’s that thing where you scroll with your thumb and it never stops. It wasn’t always designed like that, and the insight comes from a study that was done, I think it’s at Cornell, there’s a food lab where they had six people sitting down at a table, each with a bowl of soup in front of them. And two of the bowls of soup had a little pipe underneath that was actually refilling the bowl with more soup as the person drank. And the question was essentially about food psychology and consumption: do people know when to stop on their own? And it turned out that with the auto-refilling soup bowls, people ate about 76% more calories and they didn’t really notice.

Tristan: And so the point is that our brains, just as a magician knows, rely on stopping cues to know when to stop; there are natural breaks in our experience. And the idea in technology is, well, we don’t want those stopping cues showing up, because that would have you stop and reconsider what you’re doing. So let’s actually remove those stopping cues and get you infinitely scrolling in a trance. And then, game-theoretically, once one guy does the infinite scrolling feed, the other guy has to do it. Then once one guy does the auto-playing videos in the feed to make it more engaging, the other guy has to do it. Once one guy does the beautification filters that show you a prettified version of your face, the other guy has to do it.

Tristan: But the vast set of harms that are emerging from this game-theoretic race on the fitness landscape of whatever elicits responses from the human nervous system are basically the worst parts of us. And that’s what’s so dangerous about what’s coming out of it. So whether it’s distraction, addiction, social isolation, extremism, outrage, narcissism, disinformation, affirmation, polarization, these are all not accidents, but natural consequences of the race for attention. And that’s what I really hope people get.

Jim: Yeah. The important thing is, you’ve almost said it right, but I don’t think you said it quite right, which is: the rise of teen suicide is not the intent of Facebook, but it is the inevitable result of being caught in a race to the bottom around attention-hijacking dynamics. So if we want to really do something about it, we have to separate out those two things and think about how we attack the game-theoretic attractors that produce the behavior that produces the harm.

Tristan: Exactly.

Jim: And before we get into that, which is a pretty deep topic and touches closely on the complex systems perspective I tend to take on things, let’s take a little sidebar here. Something I know you’ve talked about in the past is, let’s show what the actual tangible harms are. One could say, “All right, so they manipulate me. I don’t really give a shit. Fuck it, people have been manipulating me for years. I probably drink a little more Budweiser than I should, but I don’t really care.” What are some of the real harms that are coming out of this race to the bottom around the attention-hijacking economy?

Tristan: Sure. Well, we think of it like, prior to there being the hyperobject named climate change or global warming, there were just these different ecological problems that felt separate. You have the coral reefs, you have nitrogen runoff, you have species loss in the Amazon, and they might seem like separate problems, but then when you have a model like climate change, you see that they’re all connected. So similarly with the attention economy dynamics, people say, “Oh, I’m a little bit more distracted. We have this distraction problem.” Then there’s this information overload problem. Then we have this totally separate polarization problem. Then we have this totally separate shortening-of-attention-spans problem, reductions in critical thinking.

Tristan: Then we have this totally separate disinformation, Russian trolls, seeding doubt problem, and the point is that these are all one connected system and are also mutually self-reinforcing. So, if you look at some of the harms, our attention spans have been going down. We did a great episode on our podcast, it’s called Your Undivided Attention, with this professor, Gloria Mark, at UC Irvine, who’s been studying the dynamics of attention and distraction for a very long time. She’s found that it takes about 23 minutes, I think on average, to resume focus after an interruption, and we actually cycle through two unrelated projects before we usually get back to the thing we were doing. This is on desktop computers.

Tristan: But she’s also found, I think, that the average attention span or focus duration on a screen, these are desktop screens now, is about 40 seconds. And I don’t think anybody would say that their attention span is going up or that it’s easier to read books than ever; it’s actually harder and harder and harder, because we’re training our nervous systems, conditioning them to expect rewards or juicy, exciting things at a more and more frequent rate. So that’s one of the first areas of harm. Then you have addiction and isolation. The more the race for attention is to keep you on the screen, that’s basically to keep you by yourself, scrolling. That means you’re dissociated, your chin down, your esophagus compressed at 45 degrees, not really breathing as much, feeling less and less in your body.

Tristan: These isolation dynamics are really costly. I was just talking with the former surgeon general of the United States, Vivek Murthy, and he was talking about how he believed that isolation and loneliness are one of the biggest invisible social problems that is not getting nearly enough attention. 18% of people in a big Cigna study in 2018 said they had no one that they felt like they could talk to when they felt alone, which is just horrible. Solitary confinement is one of the worst punishments we give to human beings in jail, and we are doing that essentially as a natural, accidental consequence of the attention economy.

Tristan: My biggest fear... we could go through the harms forever. We have a project called the Ledger of Harms on the Center for Humane Technology website where we try to outline many of these things. But my biggest fear is actually the breakdown of the information environment, the dismantling of sense-making and the loss of trust in society that emerges, where we don’t even know what to trust anymore. That’s the thing that, if we just jump to the chase, we’re really worried about. I guess I should add one more thing, because the children aspect is very important: after two decades of decline in high depressive symptoms for teenage girls, so let’s take teenage girls, I think it’s 12 to 17 years old, this is in my recent congressional hearing testimony.

Tristan: There were two decades of decline, and then it surges up 170% after 2010, which is around the time of mobile social apps like Instagram, Snapchat, etc. And so teen suicides and teen depression are up by a lot, and Jonathan Haidt’s work in The Coddling of the American Mind is very, very relevant for that. It’s a really, really serious issue; these are real consequences.

Jim: Yeah. Let’s focus back on the one you call out as the core: the breakdown of sense-making. Sometimes I also label it information nihilism, where people are no longer able to make distinctions between what is sense and what is nonsense.

Tristan: Yeah. Well, we can go in a lot of different directions on it. I think your listeners have hopefully already listened to Daniel Schmachtenberger and others, who so clearly articulate that. I was actually speaking to someone high up in the tech industry just recently about this; we were talking about election integrity and what we do to protect the 2020 elections. And it’s not like… Yes, we should do more to protect against bad actors, but the amount of our natural information environment now that’s just gibberish clickbait, and that that’s the default information environment, I think that’s the invisible thing that’s so bad. We need high-quality sources of information for making sense of things.

Tristan: And at the very least, I think we should avoid anything whose business model is chasing attention. As soon as we know that your business model is chasing attention, that just means there’s distortion running all the way through the way you are publishing information. So things that are more subscription-based, the Financial Times, etc, are less likely to be incurring that kind of damage.

Jim: Yeah. Let’s talk about that a little bit. I was involved in building out a fair amount of the pre-internet and early internet online information services, including, way back yonder in 1980, The Source, the very first consumer-oriented online service. And throughout most of that time, most products were paid and were not advertising supported. At the very end, around 1996 or something like that, AOL started doing a little advertising, but it wasn’t the core of the business model until the 2000s. Do you really think it’s possible to put the genie back in the bottle and say, “No more advertising-supported businesses on the internet”?

Tristan: Well, it’s very, very hard. I think of the advertising business model almost like the fossil fuel era of energy in our energy economy. Our entire economic infrastructure is basically directly coupled with the petrodollar, petroleum global economy. And I think that much of the growth of the US stock market in the last couple of years has been driven by the major advertising-based companies. And if you subtract that from the economy, how do you do that? That’s one and a half trillion dollars of market value at the very least, if you take Google and Facebook together, who are based on advertising. So I’m not saying it’s easy, but we have to be able to name the perniciousness of this business model.

Tristan: I honestly think, and I’ve been withholding from being the outrage machine here, but I think this is leading to a kind of dark ages in terms of where we’ve already been heading. We’ve been baking our society for about six, seven years in these automated algorithms of YouTube, and whatever good they do now, we have to recognize that for six or seven years, they were recommending conspiracy theories, and extremism, and outrage, and polarization, affirmation, not information. So the first thing is to recognize the true cost of free. We like to say that free is the most expensive business model we’ve ever created, because it destroys kind of everything else.

Tristan: And as you said, I think on a previous episode of your podcast, we can go back to things like subscriptions. I think you said that on average, Facebook would be about $2 a month per user around the world. And obviously, the developing world is different from the developed world, but we’ve actually dealt with this with things like Netflix. The average time spent watching Netflix is something like 71 minutes a day, and people pay $11.99 a month, I think that’s the rate right now, I can’t remember, for their Netflix account. And people actually spend almost as much time on social media and they’re not paying anything. So would you be willing to pay? If you already pay that for Netflix, why wouldn’t you be willing to pay for social media that was actually in service of human values?

Jim: Yeah. I keep talking about this, as you just pointed out, and I don’t get anybody to nod their heads. In fact, we did an experiment once where, for $20 a month, one could buy into a community, both a political and an online community. And man, people hate to pay. And I’m sure you’ve read Chris Anderson’s book, Free: The Future of a Radical Price. Unfortunately, even one cent is very different from free. And part of that is that people hate to have to fill out yet another set of usernames and passwords that have potentially high stakes, i.e., connected to their credit card or their bank account. And gosh, I wish I could figure out a way, or somebody else could, I’m getting all too old to be doing this stuff myself, but the next generation of entrepreneurs could figure out a way to move back to an era where people pay for value and advertising is not the core of the business model. But I don’t see it.

Tristan: Yeah. Well, I think people like Apple... I always say that Apple is actually one of the interesting actors in this space, because their business model is not monetizing your attention. In fact, they’re kind of like the government of the attention economy. They choose which apps get to participate in the app store, they choose the rules and the sort of building codes of what an app is and what levels of notification access you get and what you can and can’t do, and all that stuff. They could actually create the kind of payments infrastructure, built into iCloud, where for a few dollars a month, dollars automatically flow to the things that provide more value. But again, we need to change the fitness landscape of the attention economy so we’re not actually competing for attention at all, but competing for, essentially, help in our lives.

Tristan: It’s interesting, I use this metaphor sometimes: if you had asked me five years ago, “Would you pay $11 multiple times a day to get around a city?” I’d say, “Whoa, are you kidding me? That’s so expensive. I’ve got a bike, I’ve got public transportation, I’m not going to take taxis everywhere. That’d be ridiculous. That’s so expensive.” But I think that Uber and Lyft and that sort of on-demand ride sharing provide such levels of efficiency and value that we can suddenly be at the places we need to be, that people are willing to pay, because it actually makes their time go where they want it to go.

Tristan: And I think that’s what we’re missing in this economy: if we actually lived in sort of a time-well-spent world where, yes, we’re paying for subscriptions or we’re paying money, but our use of technology aligns with where we want our lives to go in the world, then we wouldn’t just be left with this sort of hyper-distracted, constantly overloaded situation, polarizing our societies, going into a dark ages. I think it should be obvious to people by now that this business model is just totally unsustainable.

Jim: Well, I don’t know if it’s unsustainable, except for the fact that it crashes society. From a business perspective, it still seems very dynamic. And again, if Chris Anderson is right, consumers have a gigantic preference for free. Is there a way for the market to solve this problem, or is this going to require some intervention from the political sphere, do you think?

Tristan: I don’t know how it’s going to go down, but obviously this might be one of those things where, look, if there’s someone who’s going to offer a free service that ruins the world, private profit, public harm on the balance sheet of society, and that harm on the balance sheet is so expensive that it introduces a total catastrophe or a crash in that society, but the crash occurs later than the private profit leading up to it, they’ll just keep doing it. And game-theoretically, if the other guy’s offering subscriptions for $10 a month, they can’t compete with that free service. So it’d be much better to create a new fitness landscape where everyone’s competing for subscriptions that take us where we want to go.

Tristan: It’s very hard to set the Overton window of society so that, in the culture, all of these consumers understand this trade-off that you and I have just laid out. Most people don’t understand these problems; they’re big and diffuse harms. And so people just say, “No, I prefer to have the free Facebook so I can look at the cat videos and see what my friends are up to.” But if we all saw this cost, if we all saw that the true cost of free is totally unaffordable and it kind of sends us into a dark ages, then we would support a government action or an Apple action to say, “Hey look, we’re actually only doing subscriptions from now on, and it works in this totally new way. And by the way, your time and your values are going to line up with what you’d want them to be.”

Jim: Yeah. This is what we call in political science, a collective action problem. If we all could somehow make it happen, we’d be better off, but there’s no real mechanism to do so.

Tristan: Yeah. Well, we’re trying as hard as we can. I think Congress and other folks, and the public, are getting more awake to the issues. I think right now the unfortunate thing is the public has a kind of diffuse or naive concern, like the tech companies took our data or they’re doing something with Russia. We don’t really like that, but people don’t have a good articulation of these issues. So we need a clarifying public awareness of the problem first before we can take that public action.

Jim: And of course the other issue, which I see fairly often: unlike a lot of people in the spaces that I hang out in, I actually live in an electoral precinct that went 75% for Trump in 2016, and so I know a lot of good, decent, solid people who have non-coastal-blue perspectives on things, I should say. And part of their resistance is they assume, probably rightly, that the executives of these tech firms are blue coastal social Democrats. And to give these people knobs to modify and change discourse, they look at as probable censorship of their point of view.

Tristan: It’s funny that this is coming up so often among socially conservative politicians, that the tech companies are actually censoring them or down-ranking them, but all the evidence points in the opposite direction. There are websites showing the traffic and engagement to Breitbart and, I forget, is it Daily something? What’s the Ben Shapiro one, again? I forgot the name.

Jim: I don’t know. I don’t watch that shit.

Tristan: I think it’s Daily something. The Daily Caller or the Daily Wire or something. They’re actually getting more attention than the other things. Alex Jones, by the way, was recommended 15 billion times by YouTube, 15 billion times. It’s very hard for people to get their brains wrapped around these numbers. That’s more than the traffic of the New York Times, BBC, Guardian, Fox News, and Breitbart combined. Anyway, all this is to say that I understand the concern about someone putting their weight on the scales. I think we need something that’s more along the lines of the introduction of public broadcast media, and I forget when that was, the ’60s or ’70s. We can’t have a personalization-based sense-making infrastructure, not 100%.

Tristan: We have to have some notion of a shared set of facts or a shared set of truth. If we don’t have that, we’re toast, because it means that no conversation can ever come to agreement and consensus. And that means that for things that are collective action problems, like climate change, we already know the end of the story: we’re all dead. And so if we actually are going to take new actions and make new choices that we didn’t make yesterday, we need to be able to agree, which means we’re going to need to have some notion of a non-fully-personalized information environment.

Jim: Yeah. Of course, we talked about ads as one knob. The other is, one could imagine outlawing the collection of individuals’ behavioral data by these systems.

Tristan: And how do you see that solving the issues? Just curious, to understand your sense-making.

Jim: Well, let’s put a sharper edge on it. If Facebook had no data about my behavior, it would be essentially impossible for them to micro target me. And so the efficiency of the whole cycle would go down substantially probably by a factor of five, and we’d be getting closer back to the economics of direct response US mail as opposed to the unbelievably highly efficient micro-targeting that can be done today. So it’s essentially a major de-optimization of the cycle.

Tristan: Yeah, but I think one issue is the collection of the data, but you have to ask, is there a way Facebook would be unable to have access to your data? I mean, by definition they have your friends, by definition they have the posts that we post, by definition they have to keep track of what gets clicked so they can show you back later the number of likes that you have. So they can’t erase that information, unless you want to go to a totally ephemeral social network or something like that. The second problem is that people actually prefer personalized news feeds over, let’s say, chronological ones. Like, we could erase all of the engagement metrics and say, “Let’s just sort newsfeeds by the exact chronological order and time that people posted them.” But people don’t like that, because it’s actually really inefficient and you scroll through a bunch of stuff that’s not so relevant.

Tristan: We actually prefer things to be filtered for us, and if the filtering requires uses of data to make that process more efficient, people want that. I think what I’m pushing against is just that we can’t have 100% of our information environment be tuned by personalization, because it eliminates the sort of shared water cooler or public broadcast events. In Nordic countries, I think it’s in Finland, Professor Fred Turner at Stanford, a great communications historian, was just telling me that there’s a sort of 10-minute public news break at the center of their soccer matches. So whenever there’s a big football game, there’s a 10-minute break that everyone is tuning into, and they put that at the center of the attention economy and create a sort of shared set of information and sense-making for everyone. And I think things like that, where we balance the personal with the public, are what we need more of.

Jim: [inaudible 00:46:17] I’m going to follow back up on the idea that we want personalized services. I’ve experimented quite a bit with Facebook. It used to be you could actually set it to show the feed in chronological order, and I’m not sure it was worse than the Zuck algorithm, but what Facebook does is keep changing it back. You basically have to switch it once every two or three hours if you want to keep that. On the other hand, I have also experimented with turning off history for Google search, and at least for me, I much prefer it with history on. I want Google to know that when I type Python it usually means the computer language and not the snake.

Jim: So it’s interesting, and the domains may differ a little bit in the degree to which the manipulation is actually a consumer benefit versus a benefit in presenting willing eyeballs to advertisers and the ability to micro-target.

Tristan: Yeah. I think the principle is data minimization, and personalization that is not in the interests of the commercial entity but in our interests, more like a fiduciary. Forrest Landry and others have articulated the argument that technology needs to act like a fiduciary to our values, based on the fact that there’s a degree of asymmetry between its power over us and our limited power over it. So if you line up these comparisons, you say, let’s say a doctor knows way more about medicine than you do, and when you’re lying there unconscious on the operating table, you are fully in submission to what they can do to you.

Tristan: So imagine that doctor’s entire business model was just maximizing profits, shareholder value, getting paid entirely by pharma, pumping you with drugs, doing surgeries they didn’t need to do, and that was the doctor about to operate on you in that vulnerable situation. We would outlaw that fully commercial relationship for something that has that level of asymmetric power over outcomes that matter for you. Similarly with a psychotherapist who’s dealing with the very intimate details of your psyche: they could give you very misguided advice, they could lie to you. They could say, “Oh, and to heal that trauma from your childhood, you have to have sex with me.” They could say all sorts of things, and you would be very vulnerable.

Tristan: So we have a name for that relationship, it’s a fiduciary relationship instead of a contract relationship, to recognize the asymmetry of power in those two things. And if you line up those two in the degree of asymmetry, how much do they know about you that you don’t know about yourself? How much are you trusting them? How vulnerable are you between a doctor and a patient, between a psychotherapist and their client? And then you ask, how much information does Facebook have about you compared to you about yourself? How about the difference of intimacy between what a psychotherapist knows about you versus what Facebook knows about you?

Tristan: Yuval Harari gives this example that Facebook can know that you’re gay before you know you’re gay, based on just the way that you subtly dwell your mouse over ads with a person of the same sex versus the ads that don’t have a person of the same sex, and you may not even be aware of that yourself. And so in general, technology is able to make predictions about people that are asymmetric. We would never allow that relationship to be a commercial relationship, because it’s extractive and unfair to the vulnerable party. So that’s the kind of change that we can actually use to get at the business model, and I think it addresses some of the other issues that we’ve raised.

Tristan: One last metaphor for people that I think is really helpful. Imagine a priest in a confession booth, which is another sort of asymmetric situation, where you’re there and you’re sharing all your confessions, you’re sharing the most vulnerable stuff about your life, the horrible things that you’ve done, your sexual thoughts and all of these things, except that priest is listening to the confessions of not just the town, but 2.7 billion people. That’s Facebook, except they also have a supercomputer that’s basically storing and recording all the confessions made by 2.7 billion people, and then finding patterns in the confessions so that they can actually make predictions about the next confession you’re going to make in the confession booth before you make it.

Tristan: And so even as you’re walking up towards the confession booth, based on gait detection and how you’re walking, they could make predictions about confessions you’re going to make, because they have a supercomputer and you don’t. We would never allow that priest to have a commercial relationship where they sell to an advertiser everything they’ve learned about you and the predictions they can make about things that are going to happen in your future that you don’t even know about. We should not have those be commercial relationships. We need them to be fiduciary or protected relationships.

Jim: Yeah, I like that. That’s very, very vivid, especially as a person who was raised a Catholic. I kicked that particular curse when I was 11, but I understand exactly what you’re saying, and it’s a very powerful image. One of the things I saw in something that you wrote, or maybe it was on the Humane Tech one, is very similar to an idea I had, which was: what would happen if we just taxed, say, internet advertising very high, say a 100% tax on internet advertising? Do you think that would be enough to get the virtuous circle to spin the other way?

Tristan: Yeah, it’s a good question. We haven’t done a full policy and economic analysis of how to do that. Part of this is like saying, “Okay, we have this bad business model that we want to get people off of.” And one way we’ve dealt with this in other situations is, right now we subsidize oil, so we keep using it, but what if we start taxing it to a degree that it starts to get more expensive than the regenerative alternatives? Then there’s a crossing point, and suddenly the economy flips into the regenerative alternatives. And the issue here is that all of our technology being advertising-backed is essentially parasitic, extractive and polluting. And we could tax it much like we want to tax fossil fuels, progressively, carbon taxes, etc, to try to create a transition plan and ultimately fund a transition to more renewable stuff here.

Tristan: But the problem is the way we generate energy is different from the way we generate these different kinds of outcomes with technology. So I don’t know exactly how this would work, but one metaphor I’ve given in the past is what we’ve done with energy utilities. If you think about energy utilities, they used to have a perverse incentive: they made more money the more energy you used, including the more energy you wasted. So please leave the lights on, leave the faucets on, leave the shower on, we make more money the more energy you use, and we would actually want to send that Nest into your home that encourages you with points and rewards to keep using as much as possible, because that’s how the energy management, prediction, AI, etc. would be incentivized.

Tristan: And we actually successfully went through legislation in the United States, as far as I understand. I think 50% of US states now have what they call decoupling, where they fully decoupled how much energy you use from how much money the utility makes. And the way they do this is based on the seasonal availability of energy, and I’m going to tie this back, by the way, to attention, because it’s another finite resource. We have a finite amount of attention, and we have a finite amount of environmental resources and energy. How do we manage this? So the way they did this is: let’s say you’re sitting there with Con Edison or PG&E, you use some amount of energy, and they charge you based on how much you use.

Tristan: But then once you hit the seasonal availability, they want to start disincentivizing you. So they double charge or triple charge to make it more expensive. That stops you from using it. So now you have these extra profits showing up, but those don’t go into the private companies, PG&E or Con Edison. They actually get put into a renewable energy transition fund, so that it’s collectively funding the transition to the better system. So you can imagine something like this happening with attention, where right now Facebook, Twitter, YouTube, etc. make more money the more attention they get from you. And that’s a perverse incentive. At some point it gets extracted beyond your feeling of being in control of your own agency.

Tristan: And imagine that they can make some money from advertising and attention up to a certain point, but then the profits and revenue above that point get reinvested into, let’s say, a humane technology regeneration fund that funds the alternatives and the antitrust legislation and the public education and the media literacy, discernment against disinformation, cultural immune system detection, all these kinds of things, to try to build the transition to the infrastructure that we want. We use the profit-making capacities of this current advertising-based infrastructure, which is the wealthiest infrastructure we’ve ever created, these are the wealthiest corporations in history, to actually fund the transition to something that doesn’t destroy civilization.
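One way to picture the decoupling mechanism Tristan describes, applied to attention, is as a simple revenue split: the platform keeps advertising revenue up to an agreed cap, and revenue above the cap is routed into a transition fund. The sketch below is a minimal illustration, not anything proposed in the conversation; the cap, surcharge rate, fund name, and figures are hypothetical placeholders.

```python
# Minimal sketch of the "decoupling" idea applied to attention-based revenue.
# All names and numbers are hypothetical illustrations, not a real proposal.

def split_revenue(ad_revenue: float, revenue_cap: float, surcharge_rate: float = 1.0):
    """Return (kept_by_platform, paid_into_transition_fund).

    Revenue up to `revenue_cap` stays with the platform; revenue above the
    cap is surcharged at `surcharge_rate` (1.0 = 100%) and routed to a
    hypothetical "humane technology regeneration fund".
    """
    kept = min(ad_revenue, revenue_cap)
    excess = max(ad_revenue - revenue_cap, 0.0)
    to_fund = excess * surcharge_rate
    kept += excess - to_fund  # whatever is not surcharged stays with the platform
    return kept, to_fund

# Example with made-up figures: $70B of ad revenue against a $20B cap.
kept, fund = split_revenue(ad_revenue=70e9, revenue_cap=20e9, surcharge_rate=1.0)
print(f"platform keeps ${kept / 1e9:.0f}B, transition fund receives ${fund / 1e9:.0f}B")
```

As with the utility analogy, the point of the split is that the platform's marginal incentive to capture more attention disappears once the cap is reached.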

Jim: Yeah. There are a lot of details that need to be worked out there. When you try to get government to invest in new technologies, it doesn’t necessarily work too well very often, but there are two parts to it. Even if you just had the tax, say a 100% tax on internet advertising, it actually lowers the activation energy for a subscription service to be able to compete, because the guys with the advertising aren’t making anywhere near as much money as they used to be making.

Tristan: Right. So obviously there’s some more straightforward plans like that. Because I imagine that this is going to have to be done progressively in some way, much like carbon taxes have to build up slowly over time, there’s different ways of doing it.

Jim: Yeah. And I will say as part of my prep for this call, I watched your testimony in front of Congress, and I got to say our Congress critters, well, maybe they’re trying to get it now, unlike a few years ago, they’re still pretty vague on the whole thing, it seems to me.

Tristan: Yeah. There are differences here. We talked about this in a previous episode of our podcast. This was a recent House congressional hearing Jim’s referring to, on online deception. It was called Americans at Risk: Deception in the Online Age. It dealt with deepfakes, and it was before the House Energy and Commerce Committee’s consumer protection subcommittee. So I will admit that many of those members really were not aware, certainly not of the degree of harm that’s being created, and they are framing it as a problem of bad apples. We have these really good social platforms that are the gems of the American economy and doing good in society, but then we just have these bad guys, these bad Russian bots, these bad deepfakes, this bad fake news content, and these bad dark patterns.

Tristan: We have to get this bad stuff off these good platforms. And the main reframe I was trying to provide is: that’s not the situation at all. The natural functioning of these social media advertising-and-extraction platforms is to cause these harms. And that’s not something that I think the members were aware of. But I will say something else, which is, if you go back to the Zuckerberg hearings, if I ask you what you remember, do you remember, Jim… Well, what’s the one talking point or memorable thing that came out of those hearings?

Jim: Truthfully, I didn’t even watch them. I did watch his Georgetown speech, but I did not watch him on Capitol Hill.

Tristan: Well, for those who were tracking it, the typical answer is that there’s one moment where I think it’s Orrin Hatch who asks Zuckerberg, “Mr. Zuckerberg, how do you make money?” And Zuckerberg replies, “Senator, we sell ads.” And that being the most memorable line coming out of what was probably more than eight to ten hours of hearings, it basically makes the public remember one thing, which is that government is incompetent, doesn’t get it, didn’t even know his business model, etc. How would you ever trust regulation coming from a body like that?

Tristan: And if I were Zuckerberg, if I were Facebook, I would have paid for that moment to happen, because it generated the trust imbalance that I would want people to be storing in their brains. And what it makes you forget is that there were plenty of other questions asked by other senators and Congress members that were actually very good. Senator Kennedy from Louisiana asked some really fantastic questions, Blumenthal, Warner, the Honest Ads Act. There’s a lot of good stuff happening on Capitol Hill. It’s not passing, but the point is, there’s a distribution of awareness, and there are great groups like TechCongress that are trying to bring more people with technology expertise into government.

Tristan: You’re still left with this problem, which, if we zoom out a little bit, is the problem statement that we at our nonprofit, the Center for Humane Technology, focus on, which is the E. O. Wilson quote that the problem of humanity is we have paleolithic emotions, medieval institutions, and godlike technology. And those three operate on different clock rates, because the paleolithic emotions in our brains were baked in 200,000 years ago. Our medieval institutions update very, very slowly, craft laws very slowly, elect people very slowly. And then our godlike technology is accelerating at faster and faster rates, meta-acceleration. And what happens when your steering wheel is lagging four years behind your accelerator? You’re going to crash.

Tristan: So some way or another, we have to bring the clock rates of these three things into more alignment and have a better control system, which is why Daniel Schmachtenberger says, I think referencing Barbara Marx Hubbard, that if we have the power of gods, we have to have the wisdom, love, prudence, and discernment of gods. So that’s the broader situation: when we ask about regulation, it needs to be thoughtful about the godlike capacities that we’ve created.

Jim: It’d be nice if it could happen. I’m not sure I believe that it can happen within the status quo. I think, as you know, Daniel Schmachtenberger, myself, and a bunch of other people are working on something we call Game B, which is to fundamentally reform society in a way that can actually deal with these issues. Everything I’ve seen, everything I’ve heard, I just don’t see it. How are we going to solve climate change with a collective action problem like we have, and bad faith discourse, etc.? How are we going to get trillions of dollars’ worth of capital tied up in these advertising-supported business models under control, unless we have a fundamental rework of our political and social institutions?

Tristan: Yeah. It’s an enormous problem, but I’ll say right now that the technology companies aren’t helping. Right now, more than about 50% of recommendations about climate change on YouTube are for climate denial videos and climate hoax videos, etc. And so when we have polarization and climate denial actually being the default information environment, Jim, the reason I work on these issues and why I’m so concerned is that I actually think the climate and ecocide issues are the thing we really have to pay attention to, and the technology makes it impossible for us to ever come to agreement because it’s polarizing us into different realities. And that is the most existential issue for solving any of our issues. So this is almost like the infrastructure issue we have to solve to address the other ones.

Jim: Yeah. It’s the forcing function, at least it’s the outer forcing function. If we don’t destabilize our society in other ways, for instance if the level of inequality gets so high that we have a violent revolution with the usual outcome of some bad dictator in charge, if we don’t accidentally create a genetically engineered organism which produces gigadeaths, if we avoid a nuclear war, then climate change will get us sometime early in the next century. So we have a lot of existential risk problems, not just climate change, but it’s the one that we know is coming. The other ones are more speculative and lower probability.

Tristan: Yeah. We’ve got a lot of sense making that we need to do to deal with all of the shorter term threats. I’m with you there.

Jim: Yep. And is there any hope at all that these incremental approaches driven by the US Congress can get us there?

Tristan: Well, what I am optimistic about, and I’m not saying that I’m optimistic about the whole system, but we have to keep going, is that at least in Congress, what I’ve found is it’s not that everybody knows everything we’ve laid out, the information ecology destruction, dark ages sort of thing, and they just disagree. It’s actually that they don’t know. And when people wake up to what’s actually going on, they’re terrified and they want to help. So I actually think there’s… We sort of say, unlike climate change, where you have to convince thousands of companies in hundreds of countries and get lots of governments to coordinate to change all of this infrastructure to be carbon neutral…

Tristan: In the case of technology, there are probably only about a thousand people, in one country, regulated by a couple of key states and one federal government, who would need to make some of the needed changes. So at the end of the day, only about a thousand people, technology leaders, product managers, designers, executives, metrics people, a few regulators, maybe a few strategic litigation cases, would need to actually be involved in this process. And so we have this thinking piece that we haven’t published yet on, essentially, how do we get to humane technology by 2030, and it involves those thousand people activating in these specific ways.

Jim: Now, that would be interesting. I look very much forward to reading that paper when you guys put it out. Something you mentioned in your congressional testimony, and I think you’ve mentioned elsewhere, is the current weakness of third party fact checking services. It’s basically two guys at a folding table trying to hold back the flood of bad faith discourse. A friend of mine, a guy named Dick Brass, is a very interesting guy. He was a senior executive, SVP level, at Microsoft. He was also at the same level at Oracle, one of the few people who somehow managed to be personal friends with both Larry Ellison and Bill Gates.

Jim: He’s got an idea he’s been floating around that the big tech companies, the handful that you’re talking about, ought to get together and fund a hundreds-of-millions-of-dollars-a-year, not-for-profit fact checking enterprise, engineered rigorously to be objective, transparent, as good as it can possibly be. And then let all these platforms use the output, the signals output from this third party fact checking system, to start to down-regulate bad faith discourse. Any hope along those lines?

Tristan: Well, I like the idea of funding, with hundreds of millions of dollars, a sense-making protection layer for society that’s shared among technology companies, but I actually think what we really need is to build the cultural sense-making infrastructure, what Daniel and Forrest and Zach Stein talk about as just better sense-making. I use the analogy sometimes of back in the 1940s, when the United States saw Germany, a country that had the most sophisticated philosophy and science culture, kind of the peak of a civilization in Europe, and saw how a culture that sophisticated could be the one to fall into authoritarian Nazi rule.

Tristan: And it broke their expectations about psychology, like, we thought we knew ourselves, we thought we knew human nature, but the fact that this happened made us question everything. And so there was a surge of interest in psychology, and in the United States there was a group called The Committee for National Morale that was funded to help cultivate what they called the democratic personality. Because they didn’t take a democratic mindset or a democratic personality as something to be taken for granted, something you get for free. It was something we had to grow and cultivate, and people like Gregory Bateson and Margaret Mead, I know you had Nora Bateson on this podcast.

Tristan: People like this were actually brought together to say, how do we cultivate egalitarian, tolerant, empathetic mindsets? There was also a related group in the United States called the Institute for Propaganda Analysis that helped dissect foreign propaganda, in pamphlets, in libraries, in lots of programs they ran in schools, because they were worried that fascist propaganda would enter the minds of US citizens, and they had to make people more aware of how that happened. And so the point now is that we actually have more knowledge than ever, more than we had in the 1940s, about how the human mind can get influenced: cognitive biases, behavioral science, social psychology.

Tristan: We have way more encyclopedic knowledge about how the mind is influenced by propaganda than ever, but instead of defending the American psyche, we’re still behaving in a libertarian way, as if each mind is this free instrument uninfluenced by anything. And this is the thing that really has to change. So I would love to see $100 million go into protecting that. I gave this metaphor in the congressional hearing: if Russia tries to fly a plane into the country, we have a Pentagon to protect against that happening in the physical world. In other words, our physical world is governed by institutions and systems of protection.

Tristan: But when you go up a layer into the memetic virtual infrastructure of the web and these private Facebook universes, you don’t have a Pentagon, you’ve just lost your protection mechanisms. So now if Russia, Iran, North Korea, Israel, Saudi Arabia, all countries who have been found to be running information operations in the United States, start dropping information bombs, micro-targeted to zip codes, going into US military veterans groups and sowing disenchantment, we don’t have a Pentagon to protect against that happening. We actually have 50 people on Facebook’s trust and safety team who are not even incentivized to look at the problem until there is public pressure to do so.

Tristan: So what I think we really need is a recognition that each open society online is effectively in a global information war that can’t be solved once and for all; it’s going to be a chronic issue. And so we need the public education and sense-making capacity that we haven’t had.

Jim: Yeah. That would be good. But what does it look like? How do you go from a point where some people actually believe in flat earth and Alex Jones to a point where we have developed an immune system to that? Is it a membrane where we keep the bad stuff out? Is it more like an internal immune system, where, because we work collectively in small groups, we’re able to sort through bad faith discourse architecturally? How might this actually work?

Tristan: Yeah. It’s a great question. Anybody who claims to have a clear answer, I would doubt. I think the first thing for me is people have to see this problem as it occurred, because if you say to someone, “You have to change your beliefs away from these crazy conspiracy theories and Alex Jones,” that’s not really going to work. If you tell someone, like we did with cigarettes, “This is bad for you,” that doesn’t actually stop someone from doing it. You have to show them how they were manipulated. And that’s why I have always, within the magician frame and the persuasive technology frame, focused this conversation on: what is the artificial illusion, the hypnotic spell that’s been cast upon society for years and years and years? To say, this is how the world went crazy.

Tristan: If we can actually look backwards at the last seven years and see that this is what happened to sense-making, so we can all see this thing that happened, that gives you the meta-cognitive perch you can climb up to, to now see, “These are the distortions that happen in our society.” And I think that’s the first thing: a collective understanding of the process of how conspiracy theories came to win and how trust came to lose, and all of that. So that’s the first step. And then I think we need an immune system that makes it easier for us to not pollute the information ecology. We need more deterrence rather than defense.

Tristan: So instead of whack-a-mole, AI super lasers that are looking to shoot down the bad information that’s coming in, trying to scan for fake news and catch it at scale, we need more deterrence: consequences, so that if something were to go wrong, you would be held accountable, and that would cause more people to be more careful about how they contribute to the information environment. This reminds me of things like, in China, for example, they will allow you to post a deepfake, but only if you disclose and label it as such, that “This is a deepfake.” And you can actually go to jail if you publish a deepfake without labeling it as such.

Tristan: And that’s an interesting example where you can post certain things but you have to take responsibility for labeling it clearly. You could have something like that on Facebook where they could even say, “Hey look, there’s all these mainstream news channels, whether it’s MSNBC or Fox news or Breitbart or whoever, who might publish a deep fake and not label it.” And Facebook could say, “Hey, we’re actually going to suspend your account for 36 hours on Facebook if you actually post a deep fake without labeling it.” So they’re not saying you can’t post it, they’re saying you have to say that this was edited and labeled. And I think we need more preemptive liability and deterrence than we need defense.

Jim: The example you just gave, those are things that are well within the power of the platforms to do today.

Tristan: Yeah, absolutely.

Jim: How close do you think they are to feeling like they’re under enough pressure from the political sphere to actually spend some of their resources on things like this?

Tristan: Well, they’re not spending nearly enough. We’ve given the analogy in the past that if you compare how much Facebook spends on security to their total revenue, it’s 6% of the revenue. Compare that to a city. Zuckerberg likes to claim that Facebook is a global community of 2 billion people, like one big city. Well, let’s take a city like Los Angeles, and how much do they spend on security? They spend 25% of their budget on security. So Facebook is underspending by about four to five times, basically. Certainly, one thing is they could spend more money on it, but I think there are deeper issues. I think they’re not thinking about it the right way, because they’re focused on catching the bad guys, catching the bad speech, and then they have these free speech issues in the United States, so they don’t want to get into it.

Tristan: So I think we need to change the conversation from free speech to distribution, context, reach, breach, disclosures. Another thing they could do, for example, is whenever there’s an information operation, they can back-notify everybody who was impacted. There were, I think, something like 55 million people impacted by a recent Saudi Arabian information operation. I believe that the Russian 2016 information operations affected about 150 million Americans. It was 126 million on Facebook, and if you count Instagram, it’s 150 million. But those people aren’t even aware that they were affected or impacted.

Tristan: So one of the things that Facebook could be required to do is to back-notify everybody who’s been affected. And this would be very similar to when there’s a security breach and Equifax says, “Hey, there’s been a breach, the data of all these users got out there, and we have a responsibility by law to notify everybody whose data might be at risk.” Well, we should have the same thing for information operations. And the joke is that NPR actually does this: if there’s a correction to a story, they have to issue a fact check and go back and notify people. They’ve actually successfully done that through their app. So the joke is, if NPR can do it, then Facebook can.
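Mechanically, the back-notification idea works like breach notification: once an influence operation has been identified and the accounts that saw its content are known, each affected user receives a disclosure. The following is a minimal sketch under those assumptions; the operation record and the notify() channel are hypothetical placeholders, not an actual platform API.

```python
# Minimal sketch of back-notification after an information operation,
# modeled loosely on breach-notification requirements. The data structures
# and notify() function are hypothetical, not a real API.

from dataclasses import dataclass

@dataclass
class InfluenceOperation:
    name: str
    attributed_to: str
    affected_user_ids: set

def notify(user_id: str, message: str):
    # Stand-in for whatever in-app or email notification channel exists.
    print(f"to {user_id}: {message}")

def back_notify(op: InfluenceOperation):
    """Send every affected user a disclosure about the operation."""
    message = (
        f"Content you interacted with was part of the '{op.name}' influence "
        f"operation, attributed to {op.attributed_to}."
    )
    for user_id in sorted(op.affected_user_ids):
        notify(user_id, message)

# Example with made-up identifiers.
back_notify(InfluenceOperation(
    name="example-operation",
    attributed_to="a foreign state actor",
    affected_user_ids={"user-001", "user-002"},
))
```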

Jim: Yeah. You would think. Of course, there’s a lot of subtle distinctions there, what constitutes a sufficient exploit that would require a notification. We all see dodgy crap on our Facebook feeds all the time, but most of it is not objectively bad faith discourse. As you said, it’s more people have peculiar framings, which distort the meme space. Where do you draw the line?

Tristan: Yeah. That’s a very interesting philosophical question: for example, what is an authentic Texas secessionist? Because you have a lot of domestic people with ideas that might be more extreme or who want to say things, but if I’m Russia, I don’t even have to plant new information operations in your country and get people to believe something new. I can just find existing people who believe something that might be more fringe, and I can dial them up by sending a hundred Russian bots at them, making them get thousands and thousands of retweets and making them go viral. And so there’s this question of, well, Russia is not even… and it doesn’t have to be Russia, by the way, it can be anybody, I’m just using them as an example.

Tristan: My friend Renée DiResta worked with the Senate Intelligence Committee and studied the full 200,000-meme Russian data set, so I just happen to know a bit about it. And one of the things that they did go after, beyond African-Americans, was the state secessionist movements: the California Secession Movement and the Texas Secession Movement. And so you could say, “Well, what’s the harm there?” That’s the question I hear you asking me, Jim, because those people believed that anyway. Why should we notify you that you’ve been affected by an influence operation when you were just thinking and believing those things and agreed with them anyway?

Tristan: The point is that people feel more disgust and repugnance when they realize that they’re the target of something, of something that’s trying to influence them. And so I think what we have to introduce is the common knowledge that there are large influence operations happening from foreign countries all the time, and that this is not just some sort of safe space. So our trust is essentially miscalibrated. We’re calibrated to trust things to a certain degree in the physical world, but we should have less trust in the virtual world, and we’re not really operating as such.

Jim: Oh, these are different kinds of trust. I don’t have to worry about somebody robbing me at gunpoint online, but I do have to worry about them inserting perverse memes into my head, by repetition if nothing else.

Tristan: That’s right. And I just think that overall, people underestimate the cost and the scale of what we’re talking about, because it sounds so small and so subtle. You know those military weapons where you can point it at someone and generate a sound that only they will hear, or even a sound so loud that it will actually catch them off guard? There are specific weapons that do that. If you move an inch away from that person, you won’t hear anything. That’s like micro-targeting. I can micro-target information bombs all throughout your society, to zip codes or to individual user lists, using Facebook custom audiences or lookalike models. I can even take all the flat earth conspiracy theorists in your population and use Facebook lookalike models to say, “Hey, give me a thousand users who look like that in your society.”

Tristan: And I’ve just found all the conspiracy theorists in your society. So basically, your whole society is up for grabs, and I can flex my fingers and go in and wreak havoc and you don’t even think it’s happening. So I just want people to really get the degree to which we are living in a full-on global information war. There’s an article saying that something like 70 countries are now participating in disinformation operations around the world. So it’s a very common activity now, and we have to make sure that this is part of the trust environment, that we recognize that this is the situation, as opposed to thinking that it’s some kind of conspiracy theory. Because like I said, the person next to you could be the target of one and you wouldn’t even know.

Jim: Interesting. And of course, the United States is probably in the top two in spending on disinformation by government actors. I actually hope so. I hope our guys are doing it.

Tristan: I actually think that, from what I understand, we have not been very active in social media disinformation in other countries. There are some regulations around this, something after Snowden about domestic propaganda and US citizens, some rules that I forget. Renée knows more about that.

Jim: Yeah. They certainly shouldn’t be doing it in the US, but interesting. And of course, we talk about this, we’re playing whack-a-mole, we’re putting makeup on the skin cancer, etc. If we went all the way up the stack and banned all advertising online, banned all use of behavioral data for targeting, the whole problem goes away.

Tristan: Yeah. A lot of it would really go away, and I would love to live in a world where technology is actually back to being a servant and a tool, empowering to the fabric of society. If you go back to Facebook in 2005, Mark Zuckerberg gave a speech saying what Facebook was, and he said, “It’s an address book. It’s a utility. It’s a social utility.” That’s how he described it. It doesn’t have to be a newsfeed, it doesn’t have to be a news platform. They only did that because they got caught in the attention game theory of competing with Twitter and the race to the bottom of the brainstem, etc. But we could actually, as you said, ban these kinds of things from being the information environments and go back to a more open internet of information.

Tristan: That would solve a lot of the problems because we wouldn’t be so hooked. It would solve some of the addiction issues, the teen mental health issues of newsfeeds. It’d solve all of the issues of constantly being raided, some of the sense-making issues, personalization, polarization. It’s just that that’s a radically different society. That’s why I always spend a lot of time focusing on the costs and the threats of all this, because I think people so underestimate why that drastic action might be needed.

Jim: Yep. And the more I hear you talking about it, the more I think about it, it strikes me that fighting the war around the edges is a losing proposition. Perimeter defense is always very difficult. We may have to bite the bullet and go after the root causes.

Tristan: That’s right. That’s right. The perimeter defense might work, but I think Daniel Schmachtenberger’s framing on this is: when you have exponential tech, the dimensionality of the polynomial of different kinds of harm you might be generating is so vast that you can’t account for that level of harm and clean it up at the edges. You have to actually deal with it at the source. And so that’s why we have to have systems that are not generating that dimensionality of harm.

Jim: And on the point of technology, deepfakes are just the first thing. Deepfakes, amazingly, haven’t caused nearly as many problems as people thought they would. Maybe we’ve gotten smarter about how to deal with that. But the ability to craft human-sounding posts with images embedded and spin up vast armies of fully automated bots, that’s going to heighten these problems through the natural growth of the efficacy of AI.

Tristan: Yeah. And I think actually, people underestimate… Deepfakes are now starting to be used in ways that are very real. Recently, Saudi Arabia was found to have generated, I think this is right, 55,000 is my recollection, accounts that were actually among the first to use deepfakes for generating fake faces. So if you go to the website thispersondoesnotexist.com and you refresh the page over and over again, you get a new fake face. And these are faces that look absolutely indistinguishable from real people’s faces. And I believe that this was used by an influence operation in which all of the fake accounts and bots looked like genuinely real people, using tools like that. The one that’s most alarming, I think, that’s coming, is GPT-2, the deep faking of text.

Tristan: It’s the ability to synthesize text in a way that sounds indistinguishable from the actual voice of the person. Think of it like a CSS style sheet, where I could take all of your written writing, Jim, and then I could generate text that sort of has all the gymnastics of the way that you say things, but it’s all in text form. And because the dimensionality of text is so much less than the richness of voice or voice synthesis, it’s much easier to fake. Then you add things like when the FCC asked for comments on, I forget exactly what it was, they asked for public comments on some new change in policy, and there was later a study showing that something like more than 50% of the comments were actually generated by bots.

Tristan: When you have comment fields that can be spun up with bot farms by state actors or non-state actors just spewing fake text everywhere, this is the checkmate on human trust mechanisms. Because when we cannot trust the very things that are in front of our eyes, the text that’s in front of our faces, the thing that’s ultimately going to matter most in this new world is: what do we trust? And I think that we’re going to have more preference for in-person interactions and real life experiences, and hopefully a return to some institutions that can regain our trust, because they’ll actually admit some of the ways in which maybe there was good reason not to trust them in the past.

Jim: Yeah, GPT-2 was exactly the kind of thing I was alluding to in the growth of AI, and GPT-2 is overrated with respect to its capabilities. However, I can guarantee a year from now there’ll be something five times better, and a year after that, five times better again. So whether or not GPT-2 is the actual problem, we know we’ll have that problem in the short term. Which actually reminds me of another item at the top of the root cause stack that doesn’t get talked about too much. One of the things that Facebook does, which in general I think is better than the alternative, is they attempt to establish an ecosystem where real names have to be used.

Jim: However, they intermittently do an okay job of policing that. If real names were able to be maintained at a higher level of fidelity, some of these exploits would be a lot harder to do. Certainly the idea of being able to spin up bot armies would start to go away, unlike on Twitter, where there is no standard of identity, and there you can assume that bot armies are running amok.

Tristan: Yeah, I would actually say, when we zoom out and look at all of these harms, we’ve already named one of the biggest generator functions for the harm, which is the advertising business model. But the other generator function is the lack of infrastructure that authenticates that people are who they say they are, and a notion of a good faith or a bad faith actor. On the internet for the first 20, 30 years, we just assumed most people were acting in good faith, the blog posts you read, etc. That’s what was so exciting about it: you just had good faith use of a system, and so everything was positive, everyone was posting things that are interesting, no one was trying to sow disinformation or propaganda.

Tristan: It was mostly a pretty open, good faith participation system. But essentially, the borders are open and there’s no digital passport that says you have to be who you say you are, and I can use VPNs to get around whatever system you’re asking me to use, I can even use US bank accounts to validate my US citizenship. And this is what all the state actors have; the Russian trolls have US bank accounts. So it’s really hard to solve these problems when you don’t have a mechanism to validate that someone is who they say they are. Real names are one tiny part of it, but I’m thinking about how we have a whole new identity and trust infrastructure for that.

Tristan: And that’s something we really need research into. I know there are some people working on different parts of it. I’m not an expert on that topic, but it really is one of the things that addresses a huge chunk of these problems. And not addressing it is what enables so many of these problems.

Jim: Yeah. We couldn’t even get a national ID card in the United States to handle immigration issues; it’s hard to imagine what the fringes would say if we had mandatory, strong digital know-your-customer requirements online.

Tristan: Right. And it’s so hard. I mean, I say this with a sigh, because without this, we’re going to continue to have the chronic information wars, the lack of trust, the fact that anybody can sow doubt anywhere they want and we won’t know whether to trust that person or not. That’s what’s so hard. And keep in mind, people do the long con thing too. So if you have a trust system that says, “Well, based on the fact that the patterns of behavior for this user have looked good for a while and this account creation date is from 2009, maybe that means they’re a trustworthy account. They’ve been here for this long.” I would be more skeptical if the account creation date was last week, right as the Mueller report was coming out or something like that.

Tristan: But it turns out that there’s actually whole markets, dark black markets where you can buy vintage accounts that have creation dates of 2009, 2010. And so this is why we really do need some new authenticated identity infrastructure for any social technology.

Jim: That self-validates over time. It’s interesting, I laugh because I joined Facebook a little late, 2009 I think it was, and the first week I was on there, I said, “I hope the CIA is starting to grow some accounts to be used for the background story for their agents, because otherwise this is going to break the whole spy business.” And of course, sure enough, they made the same inference and started growing those legends. That’s the word I was looking for. They had to grow accounts to provide legends in social media. And as you say, you can go to the dark web today and buy or sell a well-aged, reasonable looking identity. So age alone and the soft network-generated cues that Facebook provides aren’t enough.

Tristan: Right. And this comes down to the dimensionality of trust signals and cues that we can use, because you’re left with: well, what does their Facebook profile photo look like? Do I have a mutual friend with them? And what’s the account creation date? If there are only three cues to go off of, that’s not a lot. Whereas in the real world we have many, many different cues, we have whole-body language. There are lots of ways that can get hacked too, back to the magician background and con artists. But I think we have to think about how we can increase the dimensionality of trust signals to better reflect the kinds of things that we care about.

Jim: Well, Tristan, that’s all been very interesting, but it leaves us at a dark point. We have multipolar traps, we have game-theoretical races to the bottom, we have a political process that doesn’t really seem to be capable of dealing with the problems on a timely enough basis, and we have platform companies with incentives that aren’t necessarily aligned with solving these problems. Do you see anything good out there? Some new green sprouts that are hopeful signals that maybe we are learning and are making progress on these fronts?

Tristan: Yeah, absolutely. And not that this is an easy transition to make, but I think when the thing that’s keeping you warm is a tire fire that’s also polluting your air, you’re still not going to walk out into the cold desert if you don’t know that there’s something else you can go to. So it’s really important to talk about the fact that there is a way to do this differently, but we have to have a real deep understanding of what the failure modes are, what the mistakes we made are, in how we got to the environment we’re in today. So the main thing that we focused on with humane technology is that technology can be designed to be almost ergonomic to our paleolithic emotions, be much more aware of how our emotional systems and our cognitive biases and our choice making systems can get distorted.

Tristan: So if you think about what world we want to be heading into, going into 2030, what is the role of technology? Do we want a future where most of our time is spent on screens, disembodied, dissociated, etc.? Do we want a world where culture has basically been hijacked by technology? Or do we want a world where there’s a primacy to culture, human values and social life? Meaning that we actually have a human-based discussion process, we have human choice making, we’re not just making choices through screens. So if you zoom out and say, “Okay, well, what would that look like?” Well, let me give you some examples of things that might embody that. Siempo was an Android homescreen launcher that had mindful features.

Tristan: So part of the insight they had was, “Hey, when you have a color homescreen, it’s addictive. Why don’t we make the homescreen grayscale? Hey, instead of delivering notifications one at a time, why don’t we batch them all and deliver them at certain times during the day, and have a distinguishing notification for things that genuinely need your attention?” There’s something called breathing.ai, which actually uses a camera to track your breathing and heart rate variability and dynamically adjusts the colors and fonts. This might sound like really small stuff, by the way, but what I’m doing is building up a stack, saying, “Okay, well, how would it work from the foundation of the thing we’re holding in our hand, and the disruption to attention, cognition, memory, breathing, social isolation, addiction, polarization, mental health?”
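The notification-batching idea in particular is easy to make concrete: routine notifications are held and released at set times, while items that genuinely need attention pass through immediately. The sketch below is a minimal illustration of that pattern, assuming hypothetical delivery times and a simple urgency flag; it is not how Siempo or any real launcher is implemented.

```python
# Minimal sketch of notification batching: non-urgent notifications are held
# and delivered in batches at scheduled times, urgent ones pass through now.
# Delivery times and the urgency rule are hypothetical placeholders.

from dataclasses import dataclass, field
from datetime import time

@dataclass
class Notification:
    text: str
    urgent: bool = False

@dataclass
class BatchingInbox:
    delivery_times: list = field(default_factory=lambda: [time(12, 0), time(18, 0)])
    held: list = field(default_factory=list)

    def receive(self, note: Notification):
        """Urgent notifications are delivered right away; the rest are held."""
        if note.urgent:
            self.deliver([note])
        else:
            self.held.append(note)

    def flush(self):
        """Called at each scheduled delivery time to release the held batch."""
        batch, self.held = self.held, []
        self.deliver(batch)

    def deliver(self, notes):
        for n in notes:
            print(f"DELIVER: {n.text}")

# Example usage with made-up notifications.
inbox = BatchingInbox()
inbox.receive(Notification("Someone liked your post"))
inbox.receive(Notification("Your ride has arrived", urgent=True))  # passes through
inbox.flush()  # the held, non-urgent batch goes out at a scheduled time
```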

Tristan: We have to ask, all the way up from the bottom to the top, how could this work? So those are two examples. You can imagine things like habits being baked into the way that your phone works. Right now, when you open up your phone, it’s not really aware of your broader life goals or how you want to wake up in the morning; it just shows you the last social media app that was open when you wake up in the app switcher. Instead, you could have modes where, when we wake up in the morning next to our alarm, it lets us create that space and say, “What do you want to do?” And you could say, “I want to pick these habits, whether it’s meditation or stretching or just 10 minutes of silence,” and actually design the interface around that. Things like Headspace or Sam Harris’s Waking Up meditation app or calm.com would fall into those kinds of categories.

Tristan: Instead of being lonely and not knowing where your friends are and feeling addicted all the time, you could have things that help you find out where your friends are and when there’s serendipity around you, baked into the way that phones work. Apple’s Find My Friends feature, which I think they renamed Find My, could be baked more directly into the way that the operating system works. Instead of just an app switcher, it could be essentially a GPS for where people are, or which people are available, and create more salient, easy ways to signal our presence and availability to reconnect with people. Because we all know that moment when you’re feeling alone and it’s really hard to reach out to friends because you’re in a dissociated state; it’s really hard to knock yourself out of that state. But if there are lighter-weight ways we can mark ourselves as available, that’s a step in the right direction.

Tristan: Then there are things on sense-making and finding congruency and agreement. I know, Jim, you’ve been participating in letter.wiki, which is a really great service for long form, almost like letters between the founding fathers, debates between public intellectuals and big thinkers about the right framing of big topics. So instead of saying climate change is just going to kill everybody, actually have a debate about climate change versus economic growth. And so that’s been a great service. There’s something called vTaiwan, which uses pol.is; it’s Taiwan’s sense-making platform. The digital minister of Taiwan, Audrey Tang, has this platform that basically lets people draft statements about how some matter of public concern should be resolved, and they can respond to other suggestions by either agreeing or disagreeing with them.

Tristan: And when there’s a rough consensus, it tries to gamify and enhance consensus, in other words. So that’s a really interesting example I recommend people check out. I could go on and on. I mean, there are other things like Artery, which helps people find living room concerts; it’s like an Airbnb for renting out space for physical events. If one of the problems of technology is hollowing out our physical world, hollowing out libraries, public spaces, town squares for culture, etc., what if technology were essentially instrumenting that? And Artery is something that does that.

Tristan: Hipcamp helps people do Airbnb, but for camping spots. This is hard to organize into a list because there are so many different things, but these are examples of technologies that are actually caring about and tending to the social fabric and the physical spaces around us, caring for and tending to the ergonomics of human biases and cognition, and trying to ask how we make it really work to empower the meaningful choices that Joe Edelman talked about in one of your recent podcasts. So those are hopefully some examples of things for people to check out if they’re interested.

Jim: Yeah, that’s good. Because it’s certainly important that we don’t despair that even when things seem like they’re overwhelming, there are ways to make progress, and those are some good examples.

Tristan: Yeah, absolutely. And just one last thing: I think one of the other things we have to look at is making sure that the expertise guiding the decisions about how we make technology matches the expertise that would be needed if you’re affecting things like a social fabric. I think about it like this: should economists be the ones running the infrastructure that is going to affect, let’s say, the viability of the pollinators, the decomposers and the web of life, if they don’t even know what pollinators or decomposers are? So we have to have a match between the people who are running the management infrastructure and the substrate that actually creates the foundation upon which the economy can stand.

Tristan: So similarly with technology, can we have 20 to 30 year old computer science trained, STEM trained engineers making decisions about how social fabrics and identity and children’s development work? No. If they don’t understand those things, then they should be instead making technologies that make space for essentially the natural brilliance of children’s development and social development, etc, to occur outside and make sure that the technology is not taking over parts of society it doesn’t fully understand. I think that’s another area in terms of expertise matching that we need to think about.

Jim: It’s not been at the front of mind for entrepreneurs starting companies. Having been one, I know we’re just trying to figure out how to build a product that will sell, and then how we can develop a bigger, prosocial mission. Yeah, I think that’s a beautiful dream. We’ll see how we can make it happen.

Tristan: Let’s make it happen.

Jim: Well, Tristan, thank you very much. I’m really glad there are people like you and your organization. Repeat for me again the name of your organization.

Tristan: The Center for Humane Technology.

Jim: And your website?

Tristan: Humanetech.com. And if people are curious to learn more, I definitely recommend checking out our podcast, Your Undivided Attention, where we explore some of these issues and walk through them.

Jim: All right. Thank you again. It’s been great to have you on.

Tristan: Thank you so much, Jim. It’s been great to be here.

Jim: Production services and audio editing by Jared Janes Consulting. Music by Tom Muller modernspacemusic.com.