The following is a rough transcript which has not been revised by The Jim Rutt Show or Mirta Galesic. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guest is Mirta Galesic. Mirta is a professor at the Santa Fe Institute, and she’s external faculty at the Complexity Science Hub in Vienna, Austria and at the Vermont Complex Systems Center at the University of Vermont, as well as an associate researcher at the Harding Center for Risk Literacy at the University of Potsdam. Welcome, Mirta.
Mirta: Hello, hello. Good to be here.
Jim: Yeah, it’s great to have you here. Mirta and I have crossed paths at the Santa Fe Institute a few times, but haven’t really gotten to know each other well, so I’m looking forward to having a conversation today.
Mirta: Yeah, COVID decreased the chances for the random encounters we would have had before. So nice to be here.
Jim: Yeah. So today we’re going to talk about probably a number of things, but we’re going to base the conversation on a paper that Mirta was co-author on, called Stewardship of Global Collective Behavior. And what was there, like 15 authors? Something like that?
Mirta: Something like that.
Jim: Yeah, that’s interesting. And it was published in PNAS, which is short for the Proceedings of the National Academy of Sciences, a peer-reviewed journal. Pretty big stuff, actually. So it’s a high-profile article, and it’s about some of the topics we often talk about here on the Jim Rutt Show. So why don’t you start off by telling the audience what the paper meant when it referred to collective behavior.
Mirta: So there are many examples of collective behaviors in nature, from fish to various other animals to humans. We are all somehow trying to adjust to each other and to the problems we are facing by changing our network structures and changing the way we make collective decisions. In humans, we also have whole systems of collective beliefs, of collective memories and so on. Other animals also have collective emotions and some joint perception that guides their actions. And all of that works really well.
And of course, we as humans are perhaps the most successful collective species, if you wish. Many would say that our spectacular success as a species depends on, or is thanks to, our ability to form very successful collectives that are able to adaptively react to different problems we are facing. However, in the last decade or two the situation has changed drastically, as you have discussed many times on your podcast. Our networks have become much, much larger than they have ever been. The number of contacts we can potentially have is orders of magnitude larger, and even beyond our own contacts, the number of people we receive information from is immensely larger, and the contact is much more direct.
And another thing is that it’s much easier now to make and break network connections. I can just shadow someone I don’t like anymore, or I can connect to a scientist. I mean, science is great. I can connect to anyone who does science similar to mine, all over the world, in a matter of minutes. So there are many new possibilities there. But that of course also opens problems, because our social norms and the ways we make decisions still mostly count on more stable social networks.
Those networks are basically smaller communities where, when somebody says something stupid, the other people recognize it, maybe shun them, and really outrageous or harmful ideas will gradually be dampened. But here it’s so easy to find other people to talk to about any idea, which can be really great for certain suppressed minority groups and so on. But it can also lead to massive proliferation of harmful ideas, potentially collectively harmful ideas. So there are many opportunities and challenges here.
And so in this paper, we just wanted to revisit this. Many of the authors on this paper, like Joe Bak-Coleman, who is really the first author and the one who put this all together, and Carl Bergstrom, are people from biology who are used to working with animal collectives. And we wanted to see what are the challenges that we as humans are facing today, and what does that mean? And we concluded that it probably means that we are in a crisis, that this is a crisis discipline. Collective human behavior is a crisis discipline, and we should massively increase the amount of research we are doing on this and the discussions we have about it. And yeah, that’s roughly what the paper is about.
Jim: Yeah. Now of course, this isn’t the first time humans have potentially gone through what might be called a phase change around communications technology. Think back 4,000 years ago, the invention of cuneiform writing allowed the emergence of relatively large centralized empires and increased the ability of command and control. Then the development of alphabetic language around 2,800 years ago may well have been what allowed the emergence of philosophy and protoscience and all the changes that brought to the world, and the more succinct and easily produced forms of writing allowed the emergence of things like the Roman Empire and the Chinese Empires. And then very famously, probably the precursor to our modern world, was the invention of the movable type printing press in 1450.
Jim: Yeah. And a number of stories, at least, assert that it led to, or was a very important part of, the decoherence of Christianity into warring sects, the development of the Reformation, and the Thirty Years’ War that killed a third of the population of Central Europe. So this is not the first time that a change in our core communications technology has had a potentially large impact on humanity. You hit a few of them, but let’s go into a little more detail on some of the things that are new with our digital, networked, many-to-many communications infrastructures.
Mirta: Yeah. When you mentioned the printing press, one interesting consequence of the printing press, people who study witchcraft say, is that it really contributed to the spread of fear of witches in Europe at that time, because now even people who couldn’t read could see the pictures in the printed books. And so people could form a collective impression of what a witch looks like, how we should find one, and what we should do with one. So there are always some pros and cons in every change of technology, and it’s interesting.
And so what is so different with this one? Well, perhaps one thing is that it enables so much direct communication from individual to individual, as never before. With earlier inventions, alphabets, scripts, the printing press, even radio and TV, most of the time you have some central source that moderates, that accumulates the information and then distributes it to others. Here, each one of us can distribute our own truth to others and be directly influenced by other individuals. And that’s beautiful, right? It’s wonderful.
But then it might also have some unintended consequences, because the cues that we use in real life to understand what is true and what is not, for example, how many friends somebody has, whether people think this person is respectable, whether they explain things well, whether they appear confident, or whether they have nice pictures to support their claims, all of these things are easier to fake with these new technologies.
And so it might take us some time. I’m sure that we can master it. We have mastered, as you said, so many things. Throughout our history, we developed institutions to deal with many emerging problems, from marriage to police, legal systems, religions. I’m sure that we can also overcome this crisis, but it seems like we need to think about, or maybe allow through different mechanisms, institutions to evolve that will help us deal with these new ways of understanding what is true and what is not, whom to trust, and how to integrate and cope with all this information that’s bombarding us.
And then to understand, as social scientists, where are we going? How is this going to change our collective social dynamics? Is democracy going to fall apart? Is it going to flourish? Can we say anything about it? And that’s the crux of what keeps me awake at night. What can we say more rigorously than: well, we are not sure, it could be this or that?
Jim: Yeah. And as you point out, or as they point out in the article, this is a true complex system: multi-scale, with multiple levels of aggregation. We’re in a new network topology that we’ve never actually been in before. Think about something as seemingly simple as Twitter, which is a quite simple architectural system, but also one that’s quite complex, with a lot of self-assembly, where strange network configurations get built and change in real time, et cetera.
This is not something that we evolved to do. Most of our history was spent living as foragers at the Dunbar number of 150 or so, often quite a bit less. And this new ability to, in theory, be immersed in a very dynamic, constantly changing network of up to billions of people is a level of emergent complexity that we’ve never really had to deal with before.
Mirta: That is true. And to me, I’m not on Twitter, and one of the reasons is that everything is recorded. As a scientist, I like to keep my options open, especially as a social scientist. We really know little so far. Much of social science was developed in other times. It has always been mostly verbal, mostly qualitative, not rigorous in the sense that we were ever able to predict societal trends.
And now in these turbulent, almost chaotic times, it is hard to put your foot down and say, I know how things are. And social media such as Twitter just records every thought that you have. And so years later you can be held accountable, but even more, it makes you more committed to whatever you’re saying and maybe less likely to change your mind.
And I think one important property of collective adaptation these days is maybe to forget faster. There’s so much information, and so much of it is irrelevant. Somehow, I’m just brainstorming as we speak, but it seems that one mechanism, one institution we should build, is some kind of collective forgetting, so that we can go on.
Jim: Yeah, that’s a very good point. In fact, I posted on Twitter, ironically enough, the following quote from Herbert Simon, posted it a few days ago. “What information consumes is rather obvious. It consumes the attention of its recipients. Hence, a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.” This was Herb Simon in 1971. I don’t know what he would’ve made out of Twitter.
Mirta: I think he would be very much in favor of selective forgetting. I mean, our memory is faulty, and many would claim that being able to forget is a very adaptive property that enables us to function in a changing world. So I’m a little bit scared of this social media remembering everything, and then people accusing each other of what they said 10 years ago and so on.
Jim: So of course we can develop personal hygiene. You can delete things. Not that they’re gone. I mean someone may have made a copy, but-
Mirta: True, that [inaudible 00:12:16].
Jim: Yeah. And you can even have an agent that does it. I know someone who has a software agent that goes on Twitter, I think every 60 days, and deletes everything they’ve posted.
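[The core of an agent like the one Jim describes is just date arithmetic: select every post older than the cutoff, then call the platform’s delete endpoint for each one. Here is a minimal sketch of the selection logic; the function name and the (post_id, created_at) representation are illustrative inventions, and a real agent would fetch posts and issue deletions through the platform’s API.]

```python
from datetime import datetime, timedelta, timezone

def posts_to_delete(posts, max_age_days=60, now=None):
    """Return the ids of posts older than max_age_days.

    `posts` is a list of (post_id, created_at) pairs, where
    created_at is a timezone-aware datetime. A real agent would
    fetch these from the platform API and call its delete
    endpoint for each id returned here.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [pid for pid, created_at in posts if created_at < cutoff]

# Example: two stale posts and one recent one.
now = datetime(2023, 3, 1, tzinfo=timezone.utc)
posts = [
    ("1", datetime(2022, 12, 1, tzinfo=timezone.utc)),
    ("2", datetime(2023, 2, 20, tzinfo=timezone.utc)),
    ("3", datetime(2022, 6, 15, tzinfo=timezone.utc)),
]
print(posts_to_delete(posts, max_age_days=60, now=now))  # → ['1', '3']
```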
Mirta: Oh, that’s great. Which brings us to, okay, everybody talks about ChatGPT, but what I’m hoping for is the technology to evolve enough that I can train a large language model on my personal data, my emails, and everything I have ever written. And then I can send my bot to talk to your bot about mundane daily things, or maybe even things like this. I don’t know, maybe even things like this podcast. Maybe a bot me would be much more coherent and comprehensive than the human me is.
Jim: Well, you can find out in a few days because I have put up a large language model chatbot of the Jim Rutt show.
Mirta: That’s amazing.
Jim: That’s at chat.jimrutt.com. And the way it works is we took the transcripts. One of the things I’ve always done from the beginning is have very high quality, expensive transcripts made of all my shows. I loaded them up into a database which then takes queries, and it’s a latent semantic space database, so if people were talking about, let’s say, car races and used “automobiles” instead of “cars,” it would still work in the latent semantic space. It retrieves a number of chunks from the transcripts, ranks the chunks on their latent semantic closeness to the query, packages them up as a context, and then sends it to, not ChatGPT, but GPT-3, to essentially do the writing. Because writing coherent output is really hard to do in computer code, but that’s one thing that the GPTs do very well.
Now, their facts can be very bogus. They make shit up. I still remember one of the first things I typed into ChatGPT was: what are the 10 most prominent episodes of the Jim Rutt Show? And I think it was hilarious. It came up with 10 guests that were very reasonable, just the kind of people that I would talk to, but only one of them was an actual person who had been on the show. So they can lie like crazy, but their ability to craft smooth language is quite impressive. So anyway, that’s how this chatbot works. It searches, and it’s up right now. And when your episode is up in a week or so, within a couple of days you’ll be added to the chatbot. So you can see if you’re smarter.
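[The pipeline Jim describes, retrieve the transcript chunks closest to the query in embedding space, rank them, and pack the top ones into a context for the language model, can be sketched in a few lines. This is a simplified illustration, not the show’s actual code: the toy three-dimensional vectors stand in for real embedding vectors, which would come from an embedding model.]

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def build_context(query_vec, chunks, top_k=2):
    """Rank transcript chunks by similarity to the query embedding
    and join the top_k into one context string for the LLM."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return "\n---\n".join(c["text"] for c in ranked[:top_k])

# Toy chunks: "cars" and "automobiles" land near each other in
# embedding space, which is why the synonym query still works.
chunks = [
    {"text": "...we talked about car races...", "vec": [0.9, 0.1, 0.0]},
    {"text": "...complexity economics...", "vec": [0.0, 0.2, 0.9]},
    {"text": "...automobile engineering...", "vec": [0.8, 0.3, 0.1]},
]
query_vec = [0.85, 0.2, 0.05]  # stand-in embedding of "automobiles"
context = build_context(query_vec, chunks, top_k=2)
# The context would then be prepended to the user's question and
# sent to the completion model (GPT-3 in Jim's setup).
print(context)
```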
Mirta: Oh, that’s great. Then you just need an avatar of yourself to speak the words created by GPT, and you can spend more time with your granddaughter.
Jim: Exactly, exactly. Yeah. And one of the other points here, I mean, I’m excited by this stuff. I love these things. In fact, I have found that these large language models and other related models have reinvigorated my interest in technology. I think I stole from Bill Gates the observation that this reminds me a lot of PCs in 1980. There are a lot of really easy things to do that are still interesting, because this is like the Cambrian explosion.
These tools didn’t exist in this form until recently. And if we apply our creative abilities, we can use them in the right way, not the wrong way. Of course other people will use them in the wrong ways, unfortunately. Like this chatbot I created: it took me six hours.
Mirta: Amazing. I’m impressed.
Jim: Totally amazing. It amazed me, frankly. But there were tools that I could use to do all of this. The only thing I had to write was the parser to go through and parse out all the episodes and clean them up, because the only thing I could get was a gigantic 70 megabyte dump of HTML from the podcast hosting company. So I had to write some parsers to find the transcripts in that mess, create titles, break them up into documents, and all that stuff. But that only took about an hour and a half, I think, something like that.
So this was kind of interesting. And to your point, unpredictable how these new affordances are going to unfold. Who would’ve thought a year ago that Jim Rutt in six hours could create a quite good little chatbot for his content?
Mirta: My first experience with computers was in the eighties, when I was a kid. My father was a crystallographer in what was then Yugoslavia, and there was a big computing center at the scientific institute he was working at. He would take me there because, I don’t know, they didn’t know what to do with me. And the technician there would print out my photograph composed of numbers, and that was very cool.
And then a little bit later I got my first Commodore 64. And as I was a bit of an introverted kid, of course the first thing I did was write a little program to talk to myself. That was cute. Anyhow, I grew up and went on to do other stuff. But as a psychologist, as someone who studies people, to really understand and appreciate humanity, which is wonderful, you also need to look at the ugly or dirtier parts. And so part of my work deals with hate speech and misinformation and the influence of outgroup threat on people’s beliefs and behaviors.
And so of course these new technologies open many, many good things. But they also open new philosophical questions. What is freedom of speech? Shall we allow everything? How do we deal with people who are very well connected and whose voices loom larger than others’, maybe for no particular reason of substance, but because of some other social aspects that are not related to truth? What should we do about it?
And it’s a very interesting question that I think we as a society need to start grappling with in a completely new way. It’s no longer the society of the sixties or seventies, where it was desirable for different people to have their voice. What are we going to do about it now? And so I study that. For example, one of my lines of research is how citizens themselves could individually cope with, or maybe influence, the discourse online, in particular hate speech, but maybe also various kinds of misinformation. Can we as individuals have any influence? Can we influence this enormous growing system bottom-up, or should we have government controlling things?
Jim: Yeah, I’ve got a small part of the answer, maybe. It’s something I talk about on the show from time to time, which is that one should consider the decision to upvote or retweet something a morally consequential decision. Do not allow yourself to be triggered. Stop and ask: is it a good thing for the world for this memetic payload to have wider distribution?
And I’ve got a pretty large and pretty influential follower base on both Twitter and Facebook. So if I tweet something, retweet it, or even like it, it affects hundreds, sometimes thousands, sometimes tens or even hundreds of thousands of people. So I consider it a morally consequential act. And if more people would stop and consider the moral character of that act, I suspect it would help a fair bit.
Mirta: Absolutely. I completely agree. It’s interesting that you’re thinking that way. What prompted you to think that way, and why do you think that some other very influential people are maybe not thinking that way? Or is there a difference? Maybe they’re thinking that way, but there is a difference in moral norms or what is considered to be acceptable? What do you think?
Jim: I think some of each. Also, I will say I have the very large benefit of not doing this for money. I was very successful in my business career, and I run this as a hobby, a labor of love. The people who are doing it for money unfortunately have a different incentive system. They have the incentive to be as outrageous as possible, to get people mad, to get them arguing, because what draws attention better than people arguing about what so-and-so said yesterday? Oh my God.
And if she or he can say something sufficiently offensive to get on the second page of the New York Times, their listenership goes up tremendously. So unfortunately, an advertising-driven information ecosystem sets up a whole bunch of bad incentives for inauthentic and bad faith behavior. That’s my theory.
Mirta: Exactly. And social media so far has not been evolving to help humans achieve their potential as human beings, or to achieve the best knowledge, or to inform us best, but to make money off advertising. And that’s one of the problems we mention in our paper as well: that the [inaudible 00:21:20] did not evolve to help us but to profit from us.
Jim: Indeed. In fact, I have said more than once that the single silver bullet that would most improve our online ecosystem, it wouldn’t fix everything, but it would improve it most, is to ban advertising. And it’s now practical. What’s interesting is that we may have needed advertising to get us to where we are, because networks and computers used to be expensive. But they’re not so expensive today.
I’ve looked into the financials of some of the online platforms, and Facebook collects on average two dollars a month per subscriber; for Twitter, it’s less than one dollar a month. And they’re supporting their ad infrastructure, which is a non-trivial cost, and still turning a profit. So one could offer quite powerful platforms for a dollar or two a month, which is certainly within the reach of everybody in the West. And if we had to subsidize it a little for the rest of the world, it wouldn’t be that much money. So I think the opportunity has come for us to consider, as a human race: wouldn’t we have a better information ecosystem if we banned advertising?
Mirta: I mean, you’re coming dangerously close to socialism, Jim, because that’s how I grew up in my socialist country. TV was free of advertising, and we paid for it; that was how the money was collected. Payment of the TV subscription was enforced. There were guys who would walk around the neighborhood, and if they heard a TV or a radio, they would come in and say, “Is this registered?”
And if it was not registered, [inaudible 00:23:02] fine. It happened to me. But the consequence was that there was no advertising. Then, of course, that needs to be combined with some kind of decentralized programming.
Jim: And of course the thing that’s different now, because the UK still has the same model to this day. There’s a licensing fee for TVs on the order of a hundred dollars a year, which is a lot of money actually. And that money goes to support the BBC, which doesn’t have advertising, and a couple of other channels, BBC One and Two or Three, I don’t know how many they have. But of course the problem is it’s stodgy, boring stuff. I imagine your Yugoslav socialist TV shows were boring too.
But fortunately we have a completely new way to attack this problem, and that’s micropayments. Instead of paying one license fee to the great bureaucrat who decides what goes on TV, if it cost me half a cent to read a Medium essay or two dollars a month to be a member of Facebook, paid more or less automatically out of a little pool of virtual money associated with my browser, we could still have individual-driven preference but without the perverse incentives of advertising.
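[The mechanics Jim describes, a small prepaid pool attached to the browser that gets debited per article or per month, are simple bookkeeping. A minimal sketch, with made-up prices; amounts are kept in integer cents, so the half-cent essay is rounded to one cent here.]

```python
class MicropaymentPool:
    """A tiny prepaid wallet, debited automatically per use.
    Amounts are in integer cents to avoid floating-point money errors."""

    def __init__(self, balance_cents):
        self.balance_cents = balance_cents
        self.ledger = []  # list of (payee, amount_cents)

    def pay(self, payee, amount_cents):
        if amount_cents > self.balance_cents:
            return False  # a real wallet would prompt the user to top up
        self.balance_cents -= amount_cents
        self.ledger.append((payee, amount_cents))
        return True

pool = MicropaymentPool(balance_cents=500)  # a $5.00 pool
pool.pay("medium-essay", 1)                 # half a cent, rounded up to 1 cent
pool.pay("facebook-monthly", 200)           # $2.00/month membership
print(pool.balance_cents)  # → 299
```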
Mirta: You’re right. That’s the problem of what David Krakauer would call emergent engineering, or what other people would call nudging. What is the minimal intervention in the system that would nudge it in the right direction? I’m pretty cynical about human nature and people’s willingness to pay for good stuff, but there must be a way.
Jim: Well, there are some-
Mirta: I mean, there are examples.
Jim: There are some signs. For instance, Substack. I don’t know if you’ve looked at Substack, but it’s a newish platform, a couple of years old now, where many of the very best journalistic voices have left the big newspapers and magazines and set up as independents. You can subscribe to their feeds, typically for four dollars a month or so, and numerous of them are making very good livings doing this.
And right now this is in the early stages, where the cognoscenti, the intellectuals, will subscribe to these things. I think I subscribe to 15 of them, maybe something like that. Get the price down to 20 cents a month, and it could become a mass phenomenon, where people assemble their own sources rather than relying on these very bland and frankly not very trustworthy big media operations. And again, some people would subscribe to inflammatory, misinformation-laden stuff, but at least the incentive system isn’t: see how much noise I can make to get as many eyeballs for free as I can, so I can sell them some advertising.
Mirta: Right. And then there are also other systems, social media systems and online knowledge systems, that work, like Wikipedia, like Reddit. Some things actually work basically without payment involved. So it is possible to touch something in human motivation. We all have this drive to contribute to others’ knowledge, or at least to show off what we know and somehow influence what others are thinking. I mean, you’re doing the podcast, I’m writing papers, so we want to share what we know. We are very excited when we have new information: did you hear this?
It’s a motivation that we probably developed as one of the things that differentiates us from other species and lets us make the most of our collective nature. And so it seems that Wikipedia, Reddit, and these other things are exploiting this basic human motive to share the information we have, in certain circumstances.
Jim: Of course Facebook would say the same. Now, it is worth noting that Reddit is advertising-supported. They’re not very clever at it, so they don’t make a lot of money, but they are advertising-supported. Wikipedia is a purer model where, as you know if you periodically go on there, there’s some begging going on, looking for charity. I send them $50 a year or something like that. It’s worth it to have Wikipedia in the world, for people to pay a little bit. And I think that’s actually a good model for the future: donation or subscription, and less so advertising.
So let’s go back to some of the emergent knobs, we might call them, in the design space of our online world. One of the ones the paper calls out is algorithms: the things, for instance, that tell you who you should ask to be your friend, or that choose what gets stuffed into your feed. What are these algorithms, and why are they potentially important to the emergent result?
Mirta: Yeah, it seems that this could be, as you said, another way to go about nudging these systems in better directions. Of course, right now all these recommender algorithms are designed to make us feel good about ourselves in immediate ways, to connect us to people who are kind of like us, or ideally people who like us and agree with what we say. And so there has been a lot of discussion about whether these recommender algorithms should be tweaked to offer us more diverse opinions, and there were some prominent papers investigating that.
As you know, recently in PNAS there was a paper asking people to follow a bot posting information from the other political extreme. And it didn’t help at all. People became even more entrenched in their opinions.
But I still think there is a lot of potential there: if it’s presented in a way that’s more natural, more organic, it could be a way to connect people with people who are different from them. One very good researcher in this sphere is Fariba Karimi, from the Complexity Science Hub in Vienna. She investigates how the perception of a minority changes depending on the size of the minority and on how well connected that minority is with the majority.
And then she investigates what the effect would be of different algorithms that connect the majority to the minority, more or less. And she finds some interesting things: if the minority is very small, less than, I think the cutoff is, 30%, then the minority basically cannot do anything on its own to increase its social capital. The minority can try to connect with the majority as much as it wants, but unless the majority reaches out and starts connecting with the minority, the minority’s social position, in terms of how many connections it has across the network, will not improve.
And so I think that kind of social science research can actually inform, depending on the current situation in a society, what kind of algorithm would help best. Should this algorithm connect the majority with the minority, or the minority with the majority, and so on?
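[The qualitative finding Mirta describes can be illustrated with a toy attention-flow model: each person directs links partly in-group and partly across groups, and you ask what share of all incoming links the minority ends up with. This is a loose sketch of the general mechanism, not Karimi’s actual model; the function and parameter names are invented for illustration.]

```python
def minority_attention_share(f, h_min, h_maj):
    """Share of all incoming links that land on a minority of
    fraction f, when minority members keep a fraction h_min of
    their outgoing links in-group and majority members keep h_maj.
    Per-capita accounting with one outgoing link per person."""
    from_minority = f * h_min              # minority links staying in-group
    from_majority = (1 - f) * (1 - h_maj)  # majority links crossing over
    return from_minority + from_majority

# A 20% minority facing a majority that never reaches out (h_maj = 1):
# even maximal minority outreach (h_min = 0) earns it nothing, because
# its outgoing links all land on the majority.
print(minority_attention_share(0.2, 0.0, 1.0))  # → 0.0

# The same minority gains far more when the majority opens up a little
# (h_maj = 0.8) than from anything it can do on its own: 0.1 + 0.16 = 0.26.
print(minority_attention_share(0.2, 0.5, 0.8))
```

In this toy accounting, the minority’s visibility is bounded by its own size unless the majority directs links across the group boundary, echoing the qualitative result that the majority has to reach out for the minority’s position to improve.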
Jim: Yeah, of course there’s a lot of talk about that. Elon Musk has talked a lot about the algorithm, and there seems to be some evidence that the algorithm on Twitter in particular was politically biased in a fairly specific way: basically pro-left in US terminology, more towards the Democrats than the Republicans.
There’s also been some very interesting evidence that ChatGPT shows political bias similar to Silicon Valley programmers: basically left-libertarian. There are these tests where they ask you 20 questions and then place you on a two-dimensional graph of political space: libertarian versus authoritarian, collective versus individual. And ChatGPT looks an awful lot like a Google or OpenAI programmer: on the left, but more libertarian. And so this is of course always the question when you start doing interventions: who decides? Whom do we trust?
Mirta: So just to be the devil’s advocate here, it is possible that it’s left-leaning, but some other people would say, well, maybe it’s accurately recording the arguments for and against different claims, from anti-vax to January 6th to election fraud and so on. So what is perceived to be a left-leaning bias might be an accurate recording of the evidence that’s out there.
But I mean, you’re absolutely right, especially about ChatGPT, which seems so convincing. It talks to people. Just yesterday we had an excellent talk by Mark Steyvers, who is a cognitive scientist at UC Irvine, and he was looking at how much people trust algorithms that are obviously wrong. These algorithms were basically giving advice on what is in some murky images. So it was nothing about vaccination or politics; it was just: can you recognize what’s in this image that has more or less noise in it? For the human eye it could be difficult, but different algorithms can recognize, oh, it’s a bear, it’s an oven.
And it turns out that yes, people are sensitive to the accuracy of algorithms. When they see that some algorithm cannot tell a cow from a house, they start believing the algorithm less. But there are still always about 10 to 20% of people who believe even the worst algorithm. So with that in mind, it becomes very important who is training the algorithms. If there is an obvious bias in an algorithm, okay, most people might not believe it.
But if 10 or 20% of the population believe it, well, I can calculate, I can do models. What can they do if they’re particularly societally influential, if maybe they’re in positions of power in some way, if they’re politicians or, whoever, guardians of a certain kind of information spread? They could be influential. And I think that’s a really important question, and maybe something that differentiates this new ChatGPT thing from previous developments in technology. It really has more potential to deceive us.
Jim: Yep, I had an idea about that. I was talking with a friend of mine the other day. Of course, a lot of people I know are talking about this. We came up with the following hypothesis: one of the things that’s so dangerous about ChatGPT is its tremendous capacity for language. It turns out that most humans, when they lie, have some giveaways, as they call them, tells.
And my father was a Washington DC police officer and a bunch of his friends were detectives. My brother was federal law enforcement, and I’ve known some of his friends who were detectives, and they all say that criminals, when they lie, don’t speak the same way as when they tell the truth. They give more details about irrelevant things and provide more detail in general, on average, than if you were to ask them about something they were telling the truth about. On the other hand, there are some sociopaths you can’t tell with, but for the run-of-the-mill normal person, if they’re lying, there are some tells in how they structure the discourse.
But of course ChatGPT doesn’t know if it’s lying or not. And it uses the exact same language model whether it’s telling the truth or not. So when it’s telling something that’s just blatantly false, it tells it with exactly the same linguistic style. So the liars’ indicators aren’t there in ChatGPT’s output, which is probably why people are so easily taken in by it.
Mirta: That’s an excellent observation. And just actually yesterday we were talking about police officers and investigators and how they understand who is lying and who is not. What is [inaudible 00:34:50], because Mark Steyvers, who’s visiting now at the Santa Fe Institute, one of his research projects is investigating how people judge whether a text was produced by a person who is lying or not. And one of the things is exactly what you said, this level of specificity, the level of detail. But this kind of cue, this kind of metacognitive cue, cannot be used with ChatGPT.
And also, many other cues, as I mentioned before, such as how many followers you have, how beautiful your pictures are, whether you’re speaking with a lot of confidence, all of that can be more easily faked online. And so, I think I’m optimistic about it. I’m sure that we can learn these new cues and develop some new ways of thinking about it in institutions, but it’ll take a few years. So we need to somehow survive, try to not fall apart as a society, until we figure it out.
But I mean the big thing, and I’m curious about your opinion: if you want to change something fast, but probably in a way that’s damaging in the long term, you change it top down. If you want to change it in a maybe better-quality way, which allows for adaptation, for a living organism to develop with the times, you want to encourage citizens to do it bottom up. Is your view that we should do both, or would you prefer bottom up only? What are the dangers of top down? I’m thinking about it as well.
Jim: Yeah, it’s a very interesting, important question. My answer to that is top down, as long as I’m at the top. But in truth, it comes back to that same decision: who can you trust with that much authority? I would say I like some of the ideas Elon Musk has floated around with respect to the algorithm, the first of which was: open source it. And then, he hasn’t talked about this as much, but I’ve talked about it quite a bit, both on my podcast and in an essay I wrote called Musk in Moderation, published on Quillette, where I suggested that there be a marketplace in algorithms, and that they have to be open source and can’t cost more than X, maybe five dollars a month. And that there would be ratings on the different algorithms, et cetera, so that people could pick and choose which algorithms they want.
If they want the real inflammatory one that’ll get them mad, they can pick that one. If they want the one that’ll provide just their actual friends, they can choose that one. If they want one that intentionally reaches out to people as different from them as possible at a certain rate, I want five percent really out-there stuff. And you could even have sliders. You could have one with the ability to dial the variance: I want high variance. Five percent, oh, that’s too much. Three percent, oh, not enough. Seven percent. I think an open market in algorithms that are open source, so that while the average user can’t read the [inaudible 00:37:42] code and know what it means, experts can.
And I would expect there to be the equivalent of book critics. There would be algorithm critics who would post in the New Yorker, oh, we’re going to look today at the algorithm XXY, and here’s what it does. And we’ve tested it on the following five accounts and it really does do what it says, or no it doesn’t, it’s lying. And I think that there’s an interesting bottom up way, at least potentially, to get people to do some things. And I think the more of this that we can do, the better.
The same with moderation. This is a real problem. And again, there’s a perception that moderation has been biased on the big platforms. And I think far more often than biased, it’s just been stupid, where these automated algorithms will zap people for no good reason because they’re using statistical matching patterns that are just not very strong. And so I’ve suggested a layered approach. Yeah, you’re going to have to use algorithms for the first pass because the flow is too high, but there should always be the right of appeal to a human.
And then there’s my third innovation item I’m kind of proud of, which is: if you don’t like the results of the $12-an-hour human that reviewed your temporary ban or whatever, you can put up a stake, anywhere from a hundred dollars to a million dollars, and challenge the decision, which would be heard by an independent arbitrator from the Association of Independent Arbitrators.
And if you win, you get 10X the stake that you put up, and if they win, they get your stake. And so that people without any money can have a voice in the system, there’s a marketplace in the claims. So I put up, okay, I said this, Facebook banned me for this violation of the rules, I think they’re wrong; who wants to back my challenge? And with the 10X payoff, the person who has the claim would get 20% of the winnings, and the people who backed the bet would get 80%. So even if you had not a nickel at all, if you have a good claim, you could get a million dollars behind you, and you’d get $200,000. Actually you’d get two million dollars, because you’d get 10X if Facebook had it wrong.
Mirta: I think that’s great.
Jim: I think so, because then basically, if you think about it for a second, Facebook would have to be right 90% of the time. If they’re right 90% of the time, they’ll break even on the wager marketplace. If they’re right less than 90% of the time, they’ll lose money. If they’re right more than 90% of the time, they’ll make money. And I think that’s a very interesting market-based, bottom-up way to deal with moderation.
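The economics Jim sketches can be written down directly. Here is a Python toy calculation; the 10X multiple, the stake range, and the 20/80 claimant/backer split are the hypothetical parameters from the conversation, not any real system:

```python
# Toy model of the moderation-challenge wager described above.
# All parameters are hypothetical, taken from the conversation.

PAYOUT_MULTIPLE = 10    # a winning challenger receives 10x their stake
CLAIMANT_SHARE = 0.20   # claimant's cut when backers fund the stake
BACKER_SHARE = 0.80     # backers' cut of a winning payout

def platform_expected_value(stake, p_platform_right):
    """Platform's expected gain on one challenge: it keeps the stake
    when its moderation call was right, and pays 10x when it was wrong."""
    return p_platform_right * stake - (1 - p_platform_right) * PAYOUT_MULTIPLE * stake

# Break-even: p*s = (1-p)*10*s, so p = 10/11 ~= 0.909 -- roughly the
# "right 90% of the time" threshold quoted in the conversation.
break_even = PAYOUT_MULTIPLE / (PAYOUT_MULTIPLE + 1)

# Syndicated example: backers put up $1M, a win pays 10x = $10M,
# of which the claimant's 20% cut is $2M and the backers' 80% is $8M.
payout = PAYOUT_MULTIPLE * 1_000_000
claimant_cut = CLAIMANT_SHARE * payout
backer_cut = BACKER_SHARE * payout
```

Note the exact break-even is 10/11, about 90.9%; Jim's "90%" is the rounded version of the same threshold.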
Mirta: No, that’s great. Any chance to employ it in one of the platforms?
Jim: Probably not. They’re too big and too stodgy. I think a startup could do it. And I do think there’s a huge amount of room right now. In fact, I just did another little post on Twitter a couple days ago. What would you all think of a Twitter that had no ads and that had open source algorithms, blah, blah, blah? And a bunch of people said, “Oh yeah, I’d pay.” I said, “How about $10 a year?” And people, “I’ll pay $25 a month.” And so of course the real problem is a network effect problem. To leave a network with hundreds of millions to go to a network with a few hundred, unless they’re a very selective group of people that are part of your own network, you give up an awful lot, going from the big network to the small network. That’s the defense that the big boys have today, is their network effects.
Because I use Twitter for lots of things. I follow a number of scientists, I follow political activists, I follow commentators about the Ukraine War. I follow regenerative agriculture people and local agriculture people. So my own Twitter is, that’s the cool thing about Twitter, it is what you make out of it based on who you follow. And any one single subset that moved off to its own little world, maybe it has better features and is less corrupt by advertising, unfortunately doesn’t have the reach to all the various things that I’m interested in. So it would be hard to give up, at least for long. It’s a chicken and the egg problem.
Mirta: And then there’s the example of Mastodon and similar distributed systems that work without advertising, where you can choose your own server and then obey the rules of that particular server. You can choose the community whose rules you are ready to comply with.
Jim: Yeah, I like Mastodon in theory. In fact, I actually got an account on Mastodon when it first came out just because I was interested in the distributed server concept. But there was nothing happening there, very little, during all the blah, blah people going to Mastodon. I went back and-
Mirta: Yeah, I did too. And then my server crashed. So that was my short foray into the social media world.
Jim: Fortunately the one I chose was, of course I didn’t know it at the time because it was very, very early, the one I chose happened to be the one that turned out to be the biggest Mastodon server, which is mastodon.social, I think it is.
Mirta: Social, social [inaudible 00:42:54].
Jim: And I also spun up my own Mastodon server, well, just enough to play with it a little bit. The feature set just isn’t quite right. There are some things about the feature set that I don’t like, but it’s close. It’s open source. So if I were to ever actually do a Twitter clone, I would probably do it on Mastodon. I would just massage the feature set so that it was a little bit better.
So let’s talk about some things where some interventions could be done in ways that might not be obvious to people. In the paper, you all talk about white noise, latency, and information decay as things that you don’t really think about. When people talk about the features of Twitter or Facebook, they usually don’t talk about those three things. So maybe you can tell us what they are, and how you think incorporating those as design elements might have some useful effect.
Mirta: Yeah, I mean when it comes to latency, that’s this property that we were talking about a little bit at the beginning. It’s, sorry, the information decay: the ability to forget, collectively and individually, what we were saying, the various fights we were involved in or various stupid things that we said. Various things that we overcame as a collective should be allowed to be forgotten. And currently, this is not the case.
And it’s like people with perfect memory actually have problems functioning, because they cannot differentiate important from unimportant things without some extra processing. And the same thing is happening to us as collectives, with this immense memory that we are building; it’s just becoming too much. And as you said, the quote from Simon captures that. It’s just too much clutter.
And when it comes to latency, oftentimes we are angry. I’m a [inaudible 00:44:37] person, I get angry about things, I say things, and then the next day or in the next five minutes, I regret it. So having a certain delay, you know, you write your latest, or I write my latest tirade about something and I press the button, but nothing happens for five minutes. That might already be enough for people to get up from their desk, talk to their wives, and realize that they were idiots, and then go back and delete it before it’s too late.
And so it’s very simple, right? It’s a very simple nudge, nothing much there. The post just doesn’t show up immediately; it shows up a little bit later. And it could be on a voluntary basis also, though there are pros and cons. But we believe that that could already help with all this involuntary liking and supporting of things that are obviously wrong, just because they play to some kind of personal image that we have. So that’s that.
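The latency nudge described here is mechanically trivial. A minimal Python sketch, with illustrative names and a five-minute default that are the conversation's example, not any platform's actual design:

```python
# Minimal sketch of a posting-latency nudge: a post is queued and only
# becomes public after a delay, giving the author a window to retract it.
import time

class DelayedOutbox:
    def __init__(self, delay_seconds=300):      # e.g. a five-minute delay
        self.delay = delay_seconds
        self.pending = {}                       # post_id -> (text, publish_time)
        self.next_id = 0

    def submit(self, text, now=None):
        """Queue a post; it will not be visible until the delay elapses."""
        now = time.time() if now is None else now
        post_id = self.next_id
        self.next_id += 1
        self.pending[post_id] = (text, now + self.delay)
        return post_id

    def retract(self, post_id):
        """Author changed their mind before the post went live."""
        return self.pending.pop(post_id, None) is not None

    def publish_due(self, now=None):
        """Return the texts whose delay has elapsed, removing them from the queue."""
        now = time.time() if now is None else now
        due = [pid for pid, (_, t) in self.pending.items() if t <= now]
        return [self.pending.pop(pid)[0] for pid in due]
```

Passing `now` explicitly makes the queue easy to test; in production it would default to the wall clock.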
And then when it comes to white noise, it’s just noise, as again eloquently said by David [inaudible 00:45:37] and David [inaudible 00:45:38] in the latest collective intelligence issue. It’s beneficial in many areas of life. Many things actually operate better when they’re a little bit perturbed. A professor, Carlos [inaudible 00:45:49], who is currently on sabbatical at SFI, talks about antifragility, right?
Those are things like, he has this great example, [inaudible 00:45:55]. If you let it stand, all the good stuff falls to the bottom, but if you shake it a little bit, it’s much better. And in many other areas of life, some noise actually helps. Allowing all different kinds of points of view online, which on average kind of cancel out, might actually help explore various directions that we might otherwise not take if everything was just exactly right. So allowing some noise in the information we share and the information we receive might actually help broaden the horizon, avoid settling too soon on some kind of local optimum, and maybe find new ways as a collective to traverse this complex landscape that we are facing. So those are some suggestions.
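The local-optimum point can be seen in a toy search. In the Python sketch below, a purely greedy climber stalls on the lower peak of a two-peak landscape, while the same climber with added noise can jump across; the landscape and all parameters are invented for illustration:

```python
# Toy illustration of noise helping a search escape a local optimum.
import random

def landscape(x):
    # Two peaks: a local optimum at x=2 (height 4), a better one at x=8 (height 10).
    return -(x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 10

def hill_climb(start, noise=0.0, steps=2000, seed=42):
    """Greedy search that only accepts improvements; `noise` adds a
    random Gaussian perturbation to each proposed step."""
    rng = random.Random(seed)
    x = start
    for _ in range(steps):
        candidate = x + rng.uniform(-0.5, 0.5) + rng.gauss(0, noise)
        if landscape(candidate) > landscape(x):
            x = candidate
    return x
```

With `noise=0.0` a climber starting at the lower peak can never propose a jump big enough to reach the higher basin; with `noise=2.0` it almost surely does within a few thousand steps.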
Jim: I have sometimes called things like this, and some other things that are not exactly these but similar, the viscosity knob. And some things I’ve tossed out for the viscosity knob might be to limit the number of posts people can make a day. Or if you put up a post, limit the number of comments a person can make a day. Because one of the real problems of these thin-bandwidth platforms, something like Facebook or Twitter, is that one person with nothing better to do that day can dominate the conversation by literally typing in 15 replies. And every time anybody replies, they reply, and so they dominate the conversation just because they’re sitting in their mother’s basement and have nothing better to do.
So if you had a rule that the poster could set the number of comments per day per user to, say, one or two, you would slow down the pace of the conversation, and maybe you’d have a better conversation, and certainly you’d have one that’s less dominated. And of course, there’s a counter-selection going on here. I would argue that the people with absolutely nothing better to do are the ones that dominate the conversations, and it’s probably the busiest people who are the ones you actually want in the conversations.
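A per-user comment cap of the kind Jim proposes is a one-page rate limiter. A Python sketch of a hypothetical design, not any platform's real feature:

```python
# Sketch of a per-user, per-day comment cap set by the original poster.
from collections import defaultdict

class Thread:
    def __init__(self, comments_per_user_per_day=2):
        self.cap = comments_per_user_per_day
        self.counts = defaultdict(int)   # (user, day) -> comments so far
        self.comments = []

    def try_comment(self, user, text, day):
        """Accept the comment only if the user is under today's cap."""
        if self.counts[(user, day)] >= self.cap:
            return False
        self.counts[(user, day)] += 1
        self.comments.append((user, day, text))
        return True
```

Keying the count on `(user, day)` means the cap resets each day without any scheduled cleanup job.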
Mirta: I mean, a similar discussion is going on in science. How should scientific quality be evaluated? Is it by the number of papers you write or by the quality of the papers? And so certain selection committees would say, submit your three best papers. We don’t care if you have three or a hundred, but submit what you have and then we’ll read it carefully and evaluate it. Whereas what happens most of the time is that people are counting papers. And so maybe it’s similar here.
So basically, as you said, people who have more time or for some reason are less careful about what they’re saying will produce more output. And then these kinds of restrictions are also maybe starting to impinge a little bit on individual freedom. So I don’t know what you think, but I’m starting to be a little bit uncomfortable. I guess I’m much less libertarian than you are, but these things restrict how much I can share with others. There were even proposals saying that people who have the most followers online should be particularly scrutinized in what they can say, that we should basically restrict their freedom of speech because they have so much influence on others. So when they say something potentially untrue, as you said, they will influence many more people.
But it’s an interesting question: are we allowed to selectively punish some people who are much more influential than others? Is that punishment, or is it healthy regulation of the society? Those are things that every society will have to decide for themselves. I don’t think there is a good or bad answer. It’s a lot of gray area. And so there just need to be many discussions. And different collectives will just have to decide how to do it. Maybe different countries, or maybe different Mastodon servers, will decide how to regulate that.
Jim: And certainly there are experiments going on in the wild. China has its thing. Russia has its thing. Iran has its thing. Saudi Arabia has its thing. Very top down, very censorship oriented. Is it working better or is it working worse? Don’t know, but I hope some scholars-
Mirta: I guess depending.
Jim: … dig into it and take a look. In fact, let’s talk about that for a little bit. In the paper, you all talk about how there are difficulties, but also opportunities, in and around how scientists can do real research in this domain.
Mirta: Yeah, so I mean, never before have we had so much data on every aspect of human existence as we have now. So everything is monitored. Not only social media or, I don’t know, cell phone data, but I just learned about a large project in Germany where people were actually donating the data from their smartwatches, their pedometers, and so on. So you could track everybody’s activity; 120,000 people were submitting their daily minute-by-minute data on their heart rates and how much they walk. And so you can see. It has been used to try to predict the onset of COVID earlier than the official statistics, because people would generally lie in bed but have an increased heart rate. But we can, in a moment of reflection, think of other activities in bed that also increase the heart rate.
So there are very interesting things that cut very much into the privacy of individuals, but definitely the opportunities are here. And many social scientists are struggling with the sheer amount of data. We are still not trained for it. Only now is computational social science becoming more prominent in undergraduate curricula, but people my age were trained in basic statistics, maybe some Bayesian models if you picked them up later. But large-scale analysis of data is still not something that social scientists do, which opens doors to, and basically forces us into, collaboration with different disciplines. I’m lucky to work with applied mathematicians, physicists, computer scientists who do that very easily, which then opens another problem.
It’s like there are many people who can, unlike social scientists, analyze this data very easily, very quickly. But then the questions they’re asking might not be the ones that are the deepest or the most crucial to advance our knowledge about human society. Maybe they’re not informed by all the decades, hundreds of years of trying to understand the human psyche, human sociality. All of these discussions can be lost when somebody who is not trained in that literature encounters a big dataset and just analyzes it without knowing all this, without interpreting it correctly and taking into account all the quirks of a particular sample and a particular data source.
So a lot of opportunities. But then the big problem is also that a lot of this data is very difficult to get. And so Twitter is talking about closing their academic API, and oftentimes you need to have connections with different companies to get their data. And so that’s a whole other discussion. Should this data be open? I think it should. I think it’s in everybody’s interest. As you said, algorithms should be open source; everybody should know how they’re influenced, and they could choose how they’re influenced. But also the anonymized data should somehow be available, at least to researchers for scientific studies.
But that’s a big problem. It also opens the problem of looking under the lamppost, because we are now looking more at, basically, the data that happens to be available. Whereas data about our daily lives, how we function within our families, how we treat our partners, our children, is still very difficult to collect. For that, we still have to rely on very expensive survey methods, moment-by-moment data collection. And as a matter of fact, my colleagues Tara [inaudible 00:53:31] and Henrik Olsson and I just spent a lot of money, courtesy of the National Science Foundation, to collect old-fashioned survey data about people and their friends.
So we asked a representative sample of the American public to not only report on their own beliefs and behaviors, but to recruit a friend, just one friend, an actual friend from real life, not necessarily a Facebook friend, but a friend they see on a daily basis. And we wanted that friend to also participate in the survey and give us their data, give us their opinions on the same issues. And this now enables us to see how much people are actually influenced by their friends from real life, if you wish, independent of social media, how much they’re influenced by scientists who also investigate this, how much they’re influenced by [inaudible 00:54:20] on social media.
And that kind of research is still incredibly expensive and difficult to do. But the availability of all this online data may give an illusion that we now know more than we do, just because we have all these very large datasets.
Jim: Yeah, that’s always an issue. Looking under the lamppost is a famous problem anywhere in science. I like the point you made, I think it’s very important, that perhaps some of the better work will come out of teams that include sociologists, psychologists, and data science people, since either group by itself may not be able to fully mine what is useful in these datasets. And I think this question about mandating access to datasets for researchers is kind of a fraught question, because online, as you know, people talk about really personal stuff, and even if you disguise the identity, gave it a 12-digit number instead of a username or something, we often talk about people by name.
I suppose you could go through and X out, or turn into a number, all the proper names or something. But there are legitimate privacy issues with such a dataset. And the more you abstract it, de-identify it, and anonymize it, the less valuable it is. It would be nice to know what proper names are being used, what geographic references exist in the data. But that to some degree could be a privacy invasion. Somebody might post their home address, oh, heaven forbid, on Facebook. But if they did, it would not be good to have that in a dataset.
Mirta: Yeah, we were having just this discussion in the last couple of weeks, because with an interdisciplinary team of scientists, including a physicist, an applied mathematician, another physicist, and me as a psychologist, we were studying Twitter data. It’s a large dataset of political conversations on German Twitter in a very turbulent part of German political history, the migrant crisis from 2015 to 2018, when people were starting to really be upset about the enormous number of migrants coming into Germany at that time, and different political parties were discussing different solutions for that.
And then there was also a lot of fighting on the political scene before the 2017 elections, when newly emerging parties, especially on the right, were taking a large share of votes. And in parallel, there were big fights on Twitter. In particular, there were some organized groups. One was called [inaudible 00:56:53], Take Back Our Germany, which was kind of an unofficial offshoot of an extreme-right political party in Germany. And as a counter group, there was a group called [inaudible 00:57:05]. These are, as you can imagine, liberal snowflakes trying to argue against the arguments from this extreme right.
And in analyzing this data, the academic standard is to publish it, to have the data available for others. So this paper is just out; it’s on arXiv as of today, actually. But the problem here is, again, this problem of allowing things to be forgotten, because if you publish the text of a tweet, somebody can search for that text and see who was involved. And if they know that this was actually a member of the organized counter-speech or hate-speech [inaudible 00:57:48] group, they can be attacked. There were such examples where people were harassed.
And so what is our responsibility as scientists? To improve the science most, we should publish the full text of the tweets, but to protect the people, we shouldn’t. And so we found a middle option, where we publish automated classifier labels for different tweets without giving the full text. But it’s a very difficult question, even for less inflammatory things.
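The middle option Mirta describes, releasing derived labels rather than searchable tweet text, can be sketched in a few lines of Python. The field names, the salt, and the label set below are illustrative; the paper's actual pipeline is surely more involved:

```python
# Sketch: release classifier labels and coarsened metadata, not raw text.
import hashlib

def pseudonymize(value, salt="replace-with-project-secret"):
    """Stable pseudonymous id; with a secret salt, recovering the
    original tweet id from the released id is impractical."""
    return hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]

def to_releasable_record(tweet, classify):
    """Drop the searchable raw text; keep only derived classifier
    labels and coarsened metadata."""
    return {
        "tweet_id": pseudonymize(tweet["id"]),
        "day": tweet["timestamp"][:10],     # coarsen the timestamp to the day
        "labels": classify(tweet["text"]),  # e.g. {"counter_speech": True}
    }
```

The raw text is consumed only inside `classify` and never appears in the released record, so the published dataset cannot be text-searched back to individual accounts.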
Jim: And it’s not obvious; it’s a set of trade-offs essentially. And presumably there ought to be some people who specialize in the ethics of this, in the same way we have human-subjects review boards for biological and physiological and psychological experiments. Perhaps we need an ethics board or committee for how to deal with social media data.
Mirta: Absolutely. So far it’s pretty Wild West, and it very much depends on the ethics board of the university. And many would say, well, it was voluntarily given data, it is not in [inaudible 00:58:54], and you actually don’t have to have permission from an ethics committee to use this data. But as you said, maybe it should be revisited. And there are initiatives to organize conferences, workshops, of course, to discuss this.
But again, this is interesting. The kind of big theme in my research is this collective adaptation: how we as societies find ways to cope with these many new emerging problems, given the network structures, given the beliefs and decision-making strategies that we already have. And why do we sometimes take more time to adapt to certain problems than to others? Why are some societies more successful or less successful, easier to change or harder to change?
And I think it’s a very complex social science problem, which you can imagine as a society basically stumbling along on some kind of a landscape, if you wish. Although this is an analogy which has problems, it is a landscape where you can get stuck in particular local optima, and where your path may be shaped by problems that were particularly acute in the past. So for American society, it was innovation, technological problems. For other societies, it was dealing with extreme poverty, or for others, dealing with extreme political animosities and so on, or war.
And so different societies have different pathways, which in turn make us more or less prepared to deal with new problems. There is excellent research, for example by Katrin Schmelz and Sam Bowles, in a series of articles in PNAS, in which they looked at how willing Germans raised in the former Eastern Bloc and Germans raised in the West were to accept governmental recommendations during the COVID pandemic.
And you can see that those who were raised in the Eastern Bloc were more likely to accept these governmental recommendations, for better or worse. There is a strong path dependency in us as individuals and us as collectives, in how we are going to approach similar problems that different societies are facing and how successful we are going to be.
And I guess my point is there is maybe no correct or incorrect answer. There is no intelligent or stupid answer. It’s interesting to see how everything that we know, everything that we learn as societies and individuals, is going to affect the way we solve different problems, including should the data be public, how should we treat advertising, what is the freedom of speech, and so on.
Jim: Yeah. And of course one of the big freedom-of-speech boundary lines, where there are good arguments on both sides to some degree, is misinformation. The US has the strongest freedom-of-speech laws in the world, as far as I know, and under them misinformation is legal, as is hate speech. But does that mean that online platforms need to carry it?
And then of course there’s the real problem, the second-order problem. There’s clearly identifiable misinformation. Someone argues the sun rises in the West and sets in the East. Okay, that’s misinformation. We can filter that one out. But arguing about whether masks helped prevent the spread of COVID? There’s data on both sides of that. Personally I think there’s more data on one side than the other, but it’s still an arguable issue. And to the degree that something is still an arguable issue, but with a preponderance of evidence on one side, does that give the platforms standing to filter on what they would call misinformation?
And of course, as has come out with some of the Twitter Files stuff, Twitter itself was not actually seriously considering the balance of evidence for these opinions, but was rather doing what the US government told it to do. Is that the right way to do it? Is that safe? If the government’s in good hands, in hands that you agree with, you think that’s a good idea; but if it’s in the hands of people you don’t agree with, you don’t. And so I think all these things are very fraught. What are your thoughts about the issue of misinformation, as opposed to any of the other things going around?
Mirta: Yeah, it’s very interesting. Again, it depends on where the society is. For some issues, such as masks, there was indeed, at least at the beginning, and even now, you can claim that the evidence for the effectiveness of masks is much less robust than the evidence for the effectiveness of vaccines, for example. It is very clear that the death rate among unvaccinated people is much higher than the death rate among vaccinated people, as are the rates of other severe consequences.
And so my opinion is that in this case, I don’t have a problem with some platforms basically banning speech that kills people. I don’t have a problem with that, especially when the people who spread such information are actually personally profiting from it. There was this report about who the 10, or whatever, 30 largest spreaders of misinformation about vaccines were. And many of these people were selling their own alternative treatments and so on. So they were profiting from this. And I personally have no problem with heavily restricting that kind of speech. In other cases, when the evidence is still uncertain, it does make sense to allow more diverse speech.
And when it comes to hate speech, again, I think it really comes down to the experience of a particular society with the damaging effects of different kinds of speech. I lived in Germany for eight years, and they have very strict laws on whether you can deny the Holocaust and what you can say about Hitler and all that. And they have very good reason. I mean, Hitler didn’t show up from one day to the next. There was a lot of manipulation and propaganda going on for decades before things really got rough. There was a lot of inflammatory speech, and of course the position of Jews was never favorable, but then through the emerging media, radio, even film, many pamphlets, mass events that were enabled by increased organization, this message was amplified and spread, and people got the impression that many more people were thinking like that than they thought before. And so there was power in numbers.
And through decades, this led to very, very bad consequences. And so German society’s experience, and it might not generalize to other societies, but that society’s experience, is that it’s very bad to allow that kind of extreme inflammatory and hate speech. The American experience was different. There were of course problems in American history, and one can argue different things, but the experience with allowing all kinds of speech was positive for different sides of the political spectrum. It was essential in the 60s, with the civil rights movement and so on, to foster the freedom of expression. So you can say there were many good things about it.
And so of course now the debate is, if you forbid it on one side, whether this then prevents improvement of the society on another side in the future. And so that’s a different trajectory. And I don’t really think that there can be a right or wrong answer. Every society just needs to decide for itself. And I would say that the only right answer is that nobody can insist that they have discovered the proper answer. It really depends on the societal path, what the society is willing to accept, what the society is afraid of, where it wants to go, and what kind of problems it’s facing.
Just to finish this tirade: for example, when the problems for a society change. Just a few years ago we were all living with some kind of idea that our main goal as a society is to achieve technological progress, to achieve more collaboration across borders, and somehow to better humanity, in a way, to achieve our highest potential. And then a war starts. Then the society is faced with war, with the need to defend itself from another, from an outgroup that might kill us all if we don’t band together.
And the challenges are very different. And the way we need to organize ourselves is very different. In these cases, it is actually very useful to be less diverse, to be more coordinated in what we believe, to maybe follow fewer individuals, who will help us organize ourselves quickly. There is another group now knocking at our door, trying to basically attack us. It doesn’t make sense to discuss much. We should just choose a leader, unify ourselves, and act as one.
And that has been, of course, studied a lot in military psychology and social psychology over the years. There’s the whole phenomenon of groupthink. This can be good and bad. And so it’s hard to judge, it’s hard to say, well, this society is right or this society is wrong in the way it’s facing things. Depending on the problem they need to solve, societies will choose different ways of regulating things such as misinformation and hate.
Jim: It sounds like what you’re saying is it’s an inherently political question.
Mirta: Yes. You can call it that way. I would call it maybe it’s an inherently cultural evolution question.
Jim: Yeah. At one level it’s going to be cultural evolution, but if you’re going to try to have some teeth to it, it may have to become political, right?
Mirta: I mean, politics is the way we make, it’s the discipline of making, the rules by which we are all going to interact with each other and exchange goods and knowledge.
Jim: And of course the US paradigm is free speech, but private actors can do what they want. So we end up with very radically differing standards on different platforms. On one extreme, you have a Disney platform for children only, no cussing, nothing inflammatory. And the other extreme, you have 4chan or someplace like that, where absolutely everything goes. And all of those are legal within the US and are essentially a market driven exploration of what it is that people want.
Mirta: And that’s very beautiful. It’s a beautiful idea. I think if a society can afford that, that’s wonderful, I would say. But I would suggest that many societies are maybe living in a narrower range of possibilities. And while noise can often be beneficial, for societies that are trying to cross a narrow bridge, it can throw them off.
And I think it can be studied. I think it can be studied. We [inaudible 01:09:38] computational and mathematical models of different systems, and try to understand where different societies are, on this narrow bridge or on a wider road. And based on that, almost recommend, or at least understand, what different levels of diversity of opinions, which can push the society temporarily to extremes, would do to the future prospects of the collective.
So anyhow, I think it’s a complicated and political question, but I think it can be studied more rigorously than we are doing now. And I think SFI and similar institutions can contribute here.
Jim: Well, I really want to thank you, Mirta Galesic, for a very interesting conversation. We got into all kinds of interesting things. We talked about what’s actually going on today, and we also talked a little bit about the complexity implications and the complexity lens for studying this. I’m very happy that there are people like you out there working on this stuff.
Mirta: Thank you very much, James, for listening patiently to various wild ideas and maybe contrasting opinions. And I really enjoyed hearing your opinions on all these things, and I hope you will be able to find a way to implement your ideas for moderating online platforms and finding ways to evaluate the accuracy of information through market mechanisms. I think that’s very promising.
Jim: Hey, Elon Musk, give me a call.
Mirta: Are you listening?
Jim: Are you listening? Actually, someone sent in my paper on it, but it fell into the void, as far as I know. All right. Thanks again. And we’re going to wrap it right there.