The following is a rough transcript which has not been revised by The Jim Rutt Show or by Ben Goertzel. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guest is Ben Goertzel. He’s actually been on the show, I think, twice before. He is one of the world’s authorities on artificial general intelligence. Indeed, he coined the expression. He is also the instigator in chief of the OpenCog AGI open-source software project, and of SingularityNET, a decentralized network for developing and deploying AI services. Today though, we’re going to be talking about something entirely different: the need, the potential, and maybe a bit about how to build truly distributed social media. Welcome, Ben.
Ben: Hey. Thanks, Jim. It’s a pleasure to be on The Jim Rutt Show once more. It seems there’s an endless succession of pertinent things to powwow about. The decentralized social media, you and I have been talking about this for quite some time, and each have been thinking about it since long before we met each other. Now suddenly, this is percolating up into the working memory of the global brain, which is interesting to see.
Jim: Yeah, it was funny. I was trying to remember how long I’ve been thinking about it. It’s at least 35 years. We’ll get into that a little bit. I’ll tell you something that you may not know, very timely. Little old me, Jim Rutt, good network citizen, got banned from Facebook on Friday of last week.
Jim: Yeah, I looked at it as a bit of a badge of honor. Even more ominously, yeah, I can be a little bit raucous on occasion, but I thought I’d been fairly reasonable since I got back on Facebook. I take a six-month sabbatical every year. This year I was off, as I always am, from July through the first of January. Got back on, and engaged on a number of topics, but I thought I was well behaved. I wasn’t involved in any first-class flame wars, et cetera. I go, “God damn it. Why the hell did they bounce me?” I thought maybe, and this would have been bad, it was because I had been posting about free speech on the internet and my distaste for censorship by a bunch of peculiar oligarchs. It turned out, it was actually something more ominous than that. Within an hour, I discovered that not only I, but the other two administrators of the Facebook GameB Group had been whacked simultaneously.
Jim: This is quite interesting, right? Those of you who know anything about the GameB Group on Facebook know it is very theoretical. We don’t allow the discussion of partisan politics. We don’t talk about current events. We don’t rabble-rouse people to go invade the Capitol. If we mention QAnon, it’s only to dissect it as an example of a social virus. So it’s hard to imagine how the GameB Group became a target. Frankly, if most people take a look at it, they’re going to say, “Hm, this is interesting, but man, it’s a little on the esoteric side here. A little on the intellectual side.” Still, somehow we fell into the crosshairs, and I have a hypothesis. I’d love to run this by you, actually. I don’t know if it’s true, but it fits the evidence pretty well. First, I’m assuming it was a deep learning-based bot of some sort. What’s my evidence?
Ben: I would guess, yeah.
Jim: What’s my evidence for that? What a weird thing to have done, to have whacked the three admins. They did not bounce the four moderators, the next level down of people with godly powers to manage the group. Nor did they delete the group. The other weird thing was they gave all three of us the death penalty version of the ban. The one that, when you try to appeal, says, “This is not reviewable, and cannot be reversed.” Cannot. That is their word. Again, it’s kind of an odd thing for someone to do. If a human had looked at the GameB Group, who knows, maybe they’d find some reason to give us a warning. I don’t know what it would be, but it’s hard to imagine they would just whack us with a hard ban.
Jim: Anyway, we fortunately have friends, and we whipped up a pretty large mob on Twitter, literally millions of people. We got a retweet by Joe Rogan, and various and sundry other folks. Fortunately the GameB network, several thousand people, knows people in Facebook. At least four people have told me that they put in complaints to people they knew at various levels, and at least in a couple cases fairly high levels in Facebook, to at least get our case reviewed.
Jim: Within 10 hours, it was reversed all of a sudden. Again, typical Kafkaesque Facebook. When they ban you, they don’t tell you why. When they put you back on, they don’t even tell you you’re back on. The way you find out is when a friend of yours emails you and says, “Oh, it looks like you’re back on Facebook. Ho ho ho ho.” This is the whole story. Anyway, I believe it was an AI bot, just because the fact pattern is so peculiar. It’s hard to imagine even a rather dumb $15 an hour employee having done what happened.
Ben: Yeah. Another piece of evidence there is, Lincoln Cannon, a friend of mine who is the head of the Mormon Transhumanist Association, and a fascinating character who you might want to have on your podcast sometime.
Jim: Yeah, he and I have actually been in contact about this event.
Ben: Oh, yeah. Lincoln is really cool. Anyway, he was banned from Facebook quite shortly after making a post that just contained a link to an article of mine on decentralized social media, about how bad big tech monopolization of social media is. Again, they don’t explain why. He posted a link to that article, and got zoinked. Probably no human is going to be that trigger-happy, banning a guy just for posting a link to a sort of think piece on decentralized social media. What was their algorithm actually doing?
Jim: Yeah. I have a hypothesis. I looked at GameB, and I went back. In fact, our whole moderator and admin team got together Friday night, and were scratching our heads. “All right, why did they get us?” I said, “Well you know, I’m somewhat obstreperous.” One of the two admins is one of the most calm and reasonable, well-behaved net citizens you ever met in your life. The fact that somebody would whack her. I go, “What? It can’t possibly have been anything to do with her. It had to have been something triggered by the group.” We went through the various posts in the group over the last couple of weeks. Nothing there at all. Then, I had this aha. Again, I won’t swear this is the answer.
Jim: As you and I both know, an AI bot trained via deep learning is opaque. There is no readout of its logic. It can’t tell you why it does anything. It just does stuff. Famously, you can train a deep learning algorithm on 80,000 dogs and cats, and it can quite reliably tell a dog from a cat. On the other hand, you can generate a static pattern which it will claim is a dog. These things are strange, and opaque, and brittle. Anyway, knowing that, I’m looking at what goes on in our group. What we do that most people on Facebook don’t do, is we actually think. We bring up new terms. We explore the meanings of things. We link things back into the history of philosophy. People dispute each other, but in very high-toned theoretical ways. It’s actual thought.
Jim: What is mostly on Facebook I’d call rubber stamps, or simulated thought. “Trump is a fascist.” “Trump is our hero.” “QAnon is correct.” “QAnon is wrong.” Then I started to think about it. I said, “Who else puts together a linguistic and syntactical pattern that sort of looks like thought?” I realized it was QAnon. While they’re, in my opinion, complete idiots and don’t know what they’re talking about at all, et cetera, they are evolving this mythos in real time. Where they’re arguing with themselves and each other about what it means, elaborating the theories. Keep in mind, these deep learning algorithms, as you know, know nothing about semantics. They know nothing about what you’re actually talking about.
Jim: All they are is pattern recognizers over the use of words, their frequencies, their unusualness perhaps, et cetera. I said, “You know what they have done,” again, hypothesis, “is they have trained up…” They’ve kicked a whole bunch of QAnon-related people out. “They trained up some bots on QAnon content, and said, ‘Go kill stuff like this.’” What they’ve accidentally done is built a bot to kill thought. Anything that has these statistical attributes of thought will be rated as bad. In our case, rated all the way to the worst kind of bad, where you get a death penalty banning that you can’t appeal, allegedly, without any human intervention. That’s my theory. What do you think of that?
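[Editor's note: to make Jim's hypothesis concrete, here is a bare-bones sketch of a purely surface-statistical filter. It uses bag-of-words token counts and cosine similarity; the texts are invented for illustration and bear no relation to Facebook's actual models, which are unknown.]

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector: lowercase token counts, no semantics at all."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical training exemplar: elaborate, self-referential "theory talk."
conspiracy = bow("the theory implies a deeper pattern and the pattern "
                 "connects the evidence so the theory must be elaborated")

# Genuine in-depth discussion shares the same surface statistics...
discussion = bow("this theory implies a deeper pattern the evidence "
                 "connects to older ideas so the theory needs elaboration")

# ...while a rubber-stamp slogan does not.
slogan = bow("he is our hero he is right")

# The filter rates the real discussion as far more similar to the
# conspiracy exemplar than the slogan is, despite knowing no semantics.
assert cosine(conspiracy, discussion) > cosine(conspiracy, slogan)
```

A filter that sees only word statistics cannot tell elaborated conspiracy theorizing apart from elaborated genuine thought, which is exactly the failure mode Jim is hypothesizing.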
Ben: I’m in. It sounds like a reasonable approximation of the truth. I mean, we can’t know what their training data really was. Yeah, it rings true that the QAnon conspiracy theory discussions have a similar form to actual in-depth intellectual discussions. I could see how that might work. On the other hand, Lincoln being banned would suggest… I don’t know. It would suggest some variation of that. Lincoln is very thoughtful, but the particular post that seems to have triggered him getting banned was not very in-depth. It was just-
Jim: Exactly. Subsequently, I’ve discovered other people I would call thinkers about good things. Daniel Christian Wahl and his regenerative ecology, he’s been banned like 24 times by Facebook. What the hell, right? He’s a very innovative thinker, trying to save the world through local regenerative-ecology-based agriculture. I mean, that sounds like a really bad thing, right? Somehow, the AIs have targeted him. Again, he’s clearly a thinker. He’s written books. He’s the real deal. This is really bad. I mean, it’s hard to imagine how anything could really be a whole lot worse at a high level. If humanity’s going to save itself from the meta-crisis it finds itself in, we have to be allowed to think outside of the tiny little box of the status quo.
Jim: I don’t believe Facebook intentionally set out to spin up a bot to kill thought. But even if they inadvertently spin up a bot to kill thought and let it loose on the nets, and then don’t do anything about it, with Facebook being the principal public square of our time, they will be down-regulating thought. Basically reducing the discourse to rubber stamps and the most conventional dialogue. In the current situation, where the future of humanity is literally in the balance, I can’t imagine a more irresponsible thing to do.
Ben: Yeah. It seems like, fascinatingly in a perverse way, what these big tech companies and Facebook in particular have hit on is the exact opposite of what is needed for the beneficial evolution of humanity, and the creation of a positive singularity and all that good stuff. I mean, what we really need specifically is automated tools that help to nurture and nourish the best of humanity. To sort of guide each of us toward a deeper self-understanding, and toward more reflective thinking and feeling, rather than just trivial reactions. We need tools that are guiding social groups toward processes of coming to mutual understanding, and creative, imaginative treatment of life, the universe and everything.
Ben: What’s happening is exactly the opposite, which is pretty funny and pretty scary. It ties in with some deep aspects of artificial intelligence in the world today. I’ve often thought that big tech partly intentionally, partly inadvertently, they’re fostering the development of AI algorithms, methods and paradigms that maximally exploit their own differential advantages. I mean, big tech companies just implicitly as well as explicitly, they’re fostering AI algorithms that leverage big data and leverage big processing. And give differential advantage to those who have a lot of data and a lot of processing power.
Ben: They’re doing that not entirely intentionally. I mean, they’re doing that because they have the data and processing power, so why not? Then they build APIs for developers to use that leverage that data and processing power. The result is that the parts of the AI field that leverage that data and processing power best get a lot of oomph and a lot of development time. They get very slick, open source development frameworks. The aspects of AI that are a bit orthogonal to these particular sorts of massive data and processing power are a bit left out in the cold. It’s partly a side effect, and partly a direct effect. One thing correlated with this is that deep learning and other methods that are weak on semantics, weak on what you would call real understanding, doing more shallow pattern recognition at a large scale, are getting very sophisticated.
Ben: That goes along with potentially being unable to distinguish QAnon bullshit from GameB imaginative exploration and pontification. The AI-algorithm-space exploration bias of big tech is sort of going along with their own actual political biases in a pretty freaky way. These are very interesting domains we’re wandering into. Much of this synergy happens under the hood, where most people can’t see it, and can barely understand it even if they saw it.
Jim: And this is a key point that you bring up. Because of the coevolution of tools from the academic world, with their being wrapped in slick wrappers by people like Google and Facebook, and the access to vast data and vast computation, I would assume essentially all of these AI hunter-killers are opaque deep learning artifacts. One of the side effects of that is one of the biggest complaints about Facebook and its bannings: they’re entirely Kafkaesque. They don’t give you any reason at all. Because they’re using opaque AIs, they can’t. If this were an AI based on, let’s say, symbolic methods like the probabilistic logic networks in your OpenCog project, they could actually read out in more or less readable English what set of rules was triggered.
Ben: It’s not necessarily that they can’t, it’s that it’s a lot more work to generate the explanation than to generate the judgment. I mean, there’s of course a flourishing and pretty cool research literature on generating explanations from deep neural models. That’s a thing you can do, but it’s a whole other area of research. It’s often harder, and it’s cheaper for them just to ban a bunch of people than to deal with generating explanations.
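[Editor's note: Ben's point that the explanation costs more than the judgment shows up even in a toy setting. The sketch below applies leave-one-out occlusion, one common post-hoc attribution technique, to a made-up linear "ban score"; the weights and tokens are purely illustrative.]

```python
# Toy "ban classifier": a scoring function over token features.
# The judgment is one evaluation of score(); the explanation requires
# one extra evaluation per token on top of that.
WEIGHTS = {"theory": 4, "pattern": 3, "evidence": 3, "hello": -5}

def score(tokens):
    """One forward pass: a single 'ban' score for a post."""
    return sum(WEIGHTS.get(t, 0) for t in tokens)

def explain(tokens):
    """Leave-one-out occlusion: re-score with each token removed.
    A token's attributed contribution is the score drop without it."""
    base = score(tokens)
    contributions = {}
    for i, t in enumerate(tokens):
        contributions[t] = base - score(tokens[:i] + tokens[i + 1:])
    return base, contributions

post = ["theory", "pattern", "evidence"]
base, contrib = explain(post)
assert base == 10
assert contrib == {"theory": 4, "pattern": 3, "evidence": 3}
```

The judgment is a single evaluation; the explanation costs one additional evaluation per input feature, and for real models with millions of features the gap between judging and explaining grows accordingly.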
Jim: Yeah. This pattern of static, we think it’s a dog, so we’re just going to call it a dog. That’s what even well-trained neural nets do. They don’t actually know anything. They’re just a set of reflex arcs responding to patterns. Mostly they get it right, or if they’re well done they mostly get it right. It seems in this case, they have-
Ben: Right. From an AI system developer view, the degree of scalability infrastructure, and the quality of the UI/UX that you have with these big tech development tools, is quite intimidating. That nudges people to continue with these sorts of methods instead of exploring methods of more genuine understanding. As you mentioned in the intro, I’ve been working on the OpenCog AGI toolkit under that name since 2008. Some of the code, I’ve been working on much longer than that. I’m now engaged with a number of colleagues, mostly SingularityNET folks and a few others. We’re rebuilding OpenCog almost from scratch. At least the core representation and framework is being rebuilt, although much of the AI rules and math and so on, and some integration code will remain from the old version.
Ben: We’re making a new version called OpenCog Hyperon. This is aiming at turning everything we learned from the legacy version of OpenCog into a system that genuinely is going to be capable of creating true human-level AGI, which has been our aspiration. We’re rebuilding the guts of OpenCog, like our weighted, labeled hypergraph knowledge store, into something that can run across a shitload of different machines and be massively scalable. We’re making a new programming language that should make it concise to express all the different learning and reasoning algorithms that we have.
Ben: The other thing that we’re thinking about is how to achieve greater usability, so that not only is this more scalable, and does it make it more tractable to integrate all our different learning and reasoning algorithms in an AGI-oriented way, but we can pull in a broader developer community. I know you fought with the legacy version of OpenCog and its usability issues yourself, years ago, when you were playing with OpenCog to do some simple game AI and simulation type stuff. It’s just hard to compete when you look at TensorBoard and all the cool tooling you get with TensorFlow. That tooling isn’t doing the hardcore thinking we’re doing, like: how do you make a programming language to bring machine reasoning and machine learning and episodic memory together, and so on?
Ben: Still, there’s hard work, there’s hard thinking. There’s a lot of engineering that goes into making really usable toolkits and frameworks. As a sort of upstart project, which is what we are with OpenCog and SingularityNET, I’m grateful that we’re not totally impoverished. We’re paying some great developers to work on this. But we’re not at the level of a big tech company. Without that level of funding, you never will get to that level of usability for something with the complexity of a distributed AGI platform. Which means that people will always need to have a little more skill, or a little more time, or work a little harder to use these tools that don’t fit as perfectly into big tech’s worldview and infrastructure.
Ben: Of course, if OpenCog is just way smarter than the tools that big tech is rolling out, which I believe it will be once we’re done with Hyperon, then that can override a modest usability deficit. It’s quite interesting the way all the different pieces of the ecosystem, which now includes the AI ecosystem, the AI research ecosystem, as well as the social media and big tech ecosystem… All these fit together in a way that it forces this oligopoly… It even goes into academia. If you’re doing a PhD, you’ve got a certain number of years. You’ve got to publish a certain number of papers. Your career and livelihood depend on it. You’re going to have some bias to do things that you can do more quickly.
Ben: If it’s an AI PhD, you’ll do it more quickly and easily using the really slick tools that are out there now, rather than something where you have to slog a little harder with a more finicky open source tool set. So even research, which is not on the payroll of big tech, has various incentive structures aligned so that the whole world is doing things that reinforce this big tech oligopoly. It’s pretty disturbing. It’s not a big shock to you or me, who have been thinking about these dynamics for many decades. But we’re at a juncture now where the rubber hits the road. It’s like, okay, we understand this. We’ve understood this a long time. Now more people are starting to understand this. What can you do to counteract this extremely destructive set of complex dynamics?
Ben: I’ve been debating this with some of my friends in the software development world. They’re like, “Well, the government should do something.” The government should put into place a social network infrastructure, so that commercial social networks are running on top of the government infrastructure. I can’t see how the government is going to grapple with things of this level of complexity and dynamism. The government can barely deal with delivering the mail anymore. I think the open source community is there. The blockchain community is there.
Ben: Various social networks like GameB are rising to deal with these issues, in spite of the obstacles posed by the oligopoly. We can see that the ingredients needed to form a decentralized, democratic alternative to this oligarchy are there. The crystallization of all these ingredients into an equally or more powerful alternative to what the big tech oligopoly is doing, though, is going to be quite an interesting challenge.
Jim: It’s important that this happen. It’s kind of strange, because of some of the alignments of some of my progressive friends. I consider myself in an odd way progressive, though sort of orthogonal to the team red, team blue version of it. Isn’t it peculiar that team blue progressives, at least, are cheering on giving essentially unlimited power to three or four very peculiar oligarchs? These are not normal people. Zuckerberg, and his fascination with Augustus Caesar, which I talked about with Steven Levy when we did a show about his book all about Facebook. He spent five years with Zuckerberg, talking to him. He found out crazy stuff, like his wife was ripshit when they took their honeymoon to Italy and all Zuckerberg wanted to do was visit sites associated with Augustus Caesar. Check out Zuckerberg’s haircut sometime. There’s no doubt who he’s channeling. The Google twins, they’re strange dudes too. Bezos-
Ben: They’re also not running Google anymore [crosstalk 00:26:28]-
Jim: True, but they still could control it behind the scenes, if they wanted to. Bezos, probably a little bit more normal in some ways, but a megalomaniac of transcendent proportions. You have these four people, and their heirs and assigns, essentially making law for the most important public square in the world.
Ben: Yeah. I think in terms of the people, we have to understand it would take a person with an extraordinary level of advanced consciousness, and self-awareness, and emotional self-control not to become some sort of sociopathic narcissistic fuckhead after being in that sort of position of power for a period of time. You can’t rule out that it could happen. You could have someone ascend to that position, and really use it with a high level of self-awareness, and use it for broader good. Jack Dorsey is more in that direction than these other, larger oligarchs, I would say. But by and large, a high percentage of the time, being in that role is going to screw with your psychology, per the old saying that power corrupts, and absolute power corrupts absolutely.
Jim: Yeah, and also the reverse unfortunately is that that kind of power attracts sociopaths.
Ben: Well, yeah, yeah, yeah, absolutely. Many people could get one-tenth that wealthy, and then step back and pursue other things with their life. It’s clearly true. Yeah, the politics is very funny. I am very left wing. I mean, I’m an anarcho-socialist. I was raised by Marxists. I’m not really a Marxist in any sensible sense anymore. Clearly, I’m not alt-right by any stretch of the imagination. I’m a democratic socialist with a strong libertarian streak. Many of my friends and family, who are also on the left wing side, they’re like, “Yeah, good. We banished Trump. That guy’s a moron.” I’m like, “Well, yeah, he certainly is a moron. But do we want to be cheering the banning of people and networks that big companies don’t agree with?”
Ben: I probably have very few views in common with the average user of Parler. I’m far from a white supremacist. I’m not xenophobic really in any way. I’m generally in favor of progressive Democrats, if I had to put myself somewhere on the US political spectrum. The strong advocacy of free speech is one of the things I really, really respect and admire about the US system, compared to other places in the world. That’s one of the things the US has really, really gotten more right than other major countries. Now to be at risk of rolling that back in terms of our culture and our practical life, even if not the legal system. This is very bad. This is not good.
Ben: I was in Hong Kong for nine years. I saw how things work there, and in mainland China, where I spent a lot of time. They’ve had amazing triumphs there. Something like 80% of the people lifted out of poverty in the last 30 years in mainland China. There have been great successes, but the fact remains that you don’t have a way to go out there and say, “This thing the government is doing is a piece of shit and is wrong.” There’s no tractable way to do that in mainland China. There’s now no way to do that in Hong Kong. The fact that you can do that in the US is really, really, really important. Of course, you can still do that in the US without being put in jail. But if you’re going to get de-platformed for doing it, the practical route to doing it is going away.
Ben: Marxists always had this thing against the US system: yeah, yeah, there’s a free press, but the press is only free to those who own one. The internet made that much less true. The press really became free even to those who didn’t own a printing press. Now, with big tech de-platforming people, once again the press is only free to those who own one, or who are at least on the same page as those who own them. That’s an extraordinarily negative development, which I would think everyone should be up in arms against. Everyone-
Jim: They’re not. They’re not. You and I both know.
Ben: It’s fucking insane. It’s almost as insane as Trump being elected president. We’re in [inaudible 00:31:38].
Jim: Yeah, exactly. It’s as fucked up. It’s as fucked up. Talk about the de-platforming. People are like, “Oh, so you got knocked off of Facebook for a half a day. So what?” Well let me make one thing very clear here. Our group was very lucky. We were prepared, but we were lucky because we have a pretty good sized mob of ourselves, maybe 20 or 30,000 people. We know people. We were able to spin up a Twitter shit storm of fairly prodigious proportions almost immediately. 99.9% of Facebook users don’t have either of those advantages. They get the AI driven death penalty, they’re done. They have no recourse. There’s no-
Ben: Well, also note at this point Twitter is not yet as draconian. That’s not guaranteed to be the case a few years from now. Right?
Ben: Now a Twitter shit storm-
Jim: In the future-
Ben: Is great. That’s because Jack Dorsey is not Mark Zuckerberg. I mean, there’s no guarantee there’s going to be a Twitter a few years from now. I certainly hope there is. Without that, maybe you could have gone to folks in Facebook and gotten your case reconsidered anyway. But the odds become more and more against those who arouse the ire of the establishment’s moronic deep learning models.
Jim: In our case, I think if Zuckerberg looked at the GameB Group, he’d say, “This is great. This is just what I built Facebook for.” But the machines that he has created for other purposes, which are very, very flawed and are waging war on thought, leave most people with no recourse. I got an amazing number of stories. I put a request out on Twitter for stories of people who had gotten fucked with no explanation from Facebook, for nothing they could understand. That’s where I met Lincoln, actually. I got numbers of stories. A carpenter who had a goodly part of his business up on his Facebook page, whacked for no apparent reason. A substantial hit to his business, which months later he’s still trying to rebuild. People who had… This of course was the original use case for Facebook.
Jim: In fact, one of the three admins that got whacked, she said, “You know, this is the main thing I use to talk to my extended family.” Holy shit, this really sucks. There’s all this collateral damage from being kicked off what should be a stable and reliable piece of common carrier infrastructure, but it’s not. It’s this weird thing run by these… They’re not incompetent. Zuck’s a smart dude, no doubt about it. But he’s under all these pressures, which he has succumbed to. For a while, he was fighting the good fight on free speech. I remember his speech at Georgetown University. I said, “Damn, that boy actually believes in free speech.” Unfortunately, the relentless pressure of the enemies of free speech has gotten nothing but stronger over the last year or so.
Ben: I think it’s partly because the issue is structural. I mean, big tech companies are really in a damned if you do, damned if you don’t situation, to a certain extent. The way out of that may involve some non-local leaps away from what they’re doing now.
Ben: I can see the way they’re thinking. Okay, certainly we should ban people colluding to go murder someone, discussing which window to climb through before they behead him. That’s clear. That’s illegal, so that’s fine. Then do you ban people saying stuff that statistically often leads to actual plans to kill people? Once you go there, you’re banning stuff that’s not illegal, but is statistically correlated with illegal things. Then you’re in the very bad situation of going where the laws of the country rightly don’t go. But if you don’t go there, then okay, your platform is going to be used at some point, by sheer statistics, for the early stages of plotting something that then becomes a nasty crime. Then social justice warriors and a lot of others will jump all over you and say, “Well, this was used for the early stages of plotting this bad thing.” Then something bad will happen. You worry about that.
Ben: On the other hand, if you start banning too much, then you’ll get lynched in the court of public opinion for banning too much. It’s a very hard line they’re walking, which is certainly not why any of these guys got into the business they’re in. I mean, Zuckerberg didn’t launch Facebook because he wanted to play the sensitive role of being a cop deciding which public assemblies are legit and which aren’t. I don’t see any good way out of that just by making more artful policies, or by making some ethics committee or something. This is where I think you need the whole tech stack of social media to be organized in a different way.
Ben: In coordination with that, you need the various social patterns and organizations and networks associated with that tech stack to be organized in a different way. I had an online sort of podcast interview with Charles Hoskinson of IOHK and the Cardano blockchain about this quite recently. Cardano is aiming to provide sort of the blockchain infrastructure for decentralized social media tech stacks. SingularityNET, on which the OpenCog system we’ve been discussing will be one source of AI, can also run neural models and all sorts of other things. I mean, SingularityNET is well suited to serve as this sort of decentralized, democratic AI infrastructure for decentralized social networks.
Ben: There’s a lot of other pieces you need there. It comes down to mobile operating systems, even. We’re seeing banning at the level of the Apple App Store and Google Play Store. Then you’re like, “Well, shit, we need a mobile phone OS that isn’t controlled by some oligopolistic company that’s going to ban your app for having unpopular views.” It’s not infeasible. I mean, when the US government started screwing with Huawei, Huawei started building their own mobile OS on the truly open source portions of Android, which is much easier than building your own mobile OS from scratch. All of it is doable, but there’s a lot of pieces that have to be done, and they all have to work together.
Ben: It all takes a lot of highly trained people putting in a significant number of man-years to build an alternative here. It’s not clear where the funding for all that will come from, and not clear that it can get done by volunteer efforts alone, without substantial funding. Although I did see an article today that 30 to 40% of tech employees admit in a survey to working only three or four hours a day, now that they’re working from home. There may be a lot of latent developer time that could be used for this stuff.
Jim: I love it. Actually, there are people working on Linux based phone operating systems. There’s like 10 projects going on. I’m keeping an eye on them because-
Ben: They all suck. I used an Ubuntu Touch phone once. It’s just so slow, and the tie-ins from Samsung really sucked. Yeah, I would love for one of those to come about. Android is such a horrible architecture. On the other hand, if I was trying to roll out a free and liberated smartphone OS as quickly as possible, I would probably end up doing what Huawei did: just take the open portions of Android and put other stuff on it. That’s there. Yeah, Android is bad. If you think about it, if you had a free version of [Encode 00:40:30], you could run Docker containers on Android. Then you could run OpenCog on Android. You could run whatever you want in Docker. I’ve run Docker as root on an Android phone. There’s no reason you can’t run Docker on an Android phone, which lets you then run any Linux software on Android. It’s just not allowed by Google, because they don’t want the competition.
Jim: Interesting. Yeah, we’re actually tracking it. We believe at some point we’re going to come out with a GameB phone. It will have all kinds of interesting aspects. For instance, it will actually be hard to add new apps. You’ll need permission from your own social network, so that people don’t make bad decisions. Put it this way, you could enable that function; you don’t have to have it. One of the affordances of the phone would be… We know people are addicted to apps. We know that people waste huge amounts of time. Phones seem to be more addictive than alcohol, somewhat less addictive than nicotine, more or less at the heroin level of addiction. Building some features into the phone that people can opt in to, to help them control their pathological addiction to these God damned things, would be part of a GameB phone.
Ben: This sort of comes down to the need for a decentralized reputation system, which is something… Reputation and rating systems are something we’ve worked on a lot in SingularityNET, and we’ve done a lot of simulation modeling and prototyping. We haven’t yet rolled that out into the main net of SingularityNET, just because you need more traction. You need more utilization to get the statistics to drive a high quality reputation and rating system. That’s key to many aspects of what needs to be done. If you’re not going to have some oligarchs in control, but you’re going to have a more democratic decentralized control mechanism, then you need to be using the internet smartly. You need to be using decentralized networks smartly. You need well designed rating and reputation systems.
Ben: Having a group of folks who you trust to provide rating and validation and filtering of [inaudible 00:42:38] is one simple and interesting example of that. I think we need that on a bunch of different layers. Again, the math is there for that. Software prototypes are there for that. None of it requires humongous research breakthroughs or something. I can say that for large scale, decentralized, democratic social networks that foster positive individual and collective human growth and imagination and honesty and evidence-grounding of claims, the tools are all there at some level of maturity, in the form of research papers or prototypes or alpha version software.
Ben: Getting from that to systems that are deployed, and that run at large scale in a cost efficient way, with the usability that will get average people to use it, there is a lot of work there. Again, it’s not yet clear to me how all that work gets done. The whole ecosystem we have for funding technology development is based on control of this small number of big tech companies. Startups funded by VCs, VCs get preferred shares. VCs control things. VCs want the company to get an acquisition exit to one of these small number of big tech companies. Everything is feeding into the oligarchy, because it’s good to be the king.
Ben: You need to break the organization and direction of resources of various sorts to technology development projects out of the big tech oligarchy, just like you need to break AI R&D, including academic R&D, out of its subservience to the big tech oligarchy’s control of who works on what. All this needs to be opened up in order to build the tools for opening up everything else. Yet it all recurses, because the discussion groups for AI developers and startups are on these same big tech controlled platforms.
Jim: That may be starting to change. We’re starting to see some breakage. The growth and the usage of Telegram and Signal has gone off the charts. I’m sure this will continue.
Ben: Telegram, which is controlled by the Russian government, right?
Jim: I don’t use Telegram. I use Signal, people. I use Signal. In fact, one of the things we learned… We were so stupid. I should have known better, and some of the other people should have known better. We had some other homes for GameB on Reddit and MeWe. We had a Discord server, but we didn’t really have a centralized way to get ahold of people consistently. We’ve now set up a Signal group for the admins. We’ll soon be adding a Signal group for the members, to at least register a home where we can get an alert message out to you, should the assholes at Facebook actually take us down.
Ben: Yeah. If you’ve looked recently, there’s been some apparent dissension within the ranks at Signal, between people who want to take it in a little more of a moderation friendly direction, versus people who want to leave it to the Wild West.
Jim: If we do that, we’re going deeper. Anyway, at least that’s some of the thinking that we’ve been doing here just in the last few days, based on reaction to this bumbling effort to mess with us. I can announce for the first time that we are going to be building the GameB home off of Facebook soon. Fortunately, just for shits and grins, I actually built one in early December. I’d been looking at white-label platforms for a year. I finally found one that was good enough. I spent like three days on powder and paint till it kind of looked like what we wanted it to. I learned how it worked. It was just sitting there. When this event occurred, we mobilized. We already have a reasonable number of test users over there. We’re going to up that to three figures by the end of the week, and probably four figures-
Ben: Interesting. I remember a number of years ago you and some of our mutual friends were working on something with the same goal, which I guess didn’t get deployed, right? I mean, you had-
Jim: Yeah. In those days, we spent hundreds of thousands of dollars. We basically took a fork of Reddit’s source code. In the old days, Reddit’s source code was open source. We took it, and we added a bunch of enhancements. It was hard work. We spent, I don’t know, $300,000, six months’ work.
Jim: It was okay, but frankly it wasn’t good enough for a mass audience, so we shot it. This one, again this is how the world changes, is a beautiful white-label platform, very cost effective. The world has just gotten… the barriers to doing this are-
Ben: What’s the underlying platform then?
Jim: I’m not going to say, because I’m not sure I want them to get the heat at this point in time. We’ll talk about it later, but this is one of the leading ones. We’ll be rolling that out here relatively soon, assuming it gets through these multiple layers of tests, which I would say so far look pretty strong. As part of that, I also did a quick scan of the things you’re talking about, the true distributed web technologies. It turns out a good friend of mine is very much involved with the Beaker Browser project, which uses its own peer-to-peer protocol, not regular old HTML/HTTP.
Jim: I looked at some of the other projects that are various other forms of dWeb. This is back to what we were talking about initially. I’ve been following this for at least 35 years. As you intimated, where the real weakness is, is that when you go distributed, particularly if you go radically distributed peer-to-peer, it’s really difficult and expensive to build the UI and functionality that regular users expect. I’ve never seen those curves cross. Now it is true that with much, much more bandwidth than we had back in 1985, and much more computation than we had in 1995, there may be some solutions to this. But there are some hard problems, even on some high granularity platforms like Mastodon, which is more or less a distributed Twitter.
Jim: Trying to search across them is a non-starter. Trying to discover people is difficult. The coordination problem is fundamental. I think the work we’ve done on how to build a distributed OpenCog made clear that there is no fundamental magic answer to the problem of distributed computation. It’s all trade-offs. Solving distributed computation so that you can build a product at scale, one that has the features people want and a UI they’re willing to use, is a non-trivial undertaking.
Ben: I think we could do that now, and we couldn’t do it five years ago. The software frameworks are more advanced. The phones everyone has have way more processing power and memory in them. Yeah.
Jim: That may be, but no one’s done it.
Ben: There’s another little sort of spinoff we’re incubating in SingularityNET, called NuNet, which is precisely about decentralized sharing and coordination of processing power. We’ve been looking into that space and what frameworks there are. There’s also very good work on decentralized distributed training of neural models. We can run OpenCog on the ARM processor on a Raspberry Pi 3 now. We’re seeing a convergence between the software frameworks getting more and more distribution friendly, and edge hardware getting better and better. I think that’s doable now, whereas when I first started toying with that stuff in like the year 2000, I just decided there’s no way we’re going to get this to work. It’s still hard.
Ben: I mean there’s a lot of engineering to be done, and there’s a lot of UI/UX, and just a lot of design thinking on various levels to be done. There are organizations very interested in this now. There’s your GameB Group. There’s SingularityNET and our community. Cardano and that community, which is a bit wealthier, at least proximally, is looking in the same direction. There’s a whole bunch of groups that are going in this direction, but it’s still definitely a David versus Goliath thing, compared to these trillion dollar big tech companies which are increasingly allied with government.
Jim: Well, and/or, even worse, they’re becoming their own government. They place themselves in opposition, probably, to the idiocy of Trumpism, which in the short-term is good. But as we talked about earlier, people who cheer kicking Trump off: yep, tactically probably okay, but what kind of precedent is that?
Ben: Well, there’s some subtlety there. That cooperation between big tech and intelligence agencies didn’t go away, in spite of them being adversarial to the commander in chief. There’s a lot of subtle connections here that probably we don’t have time to go into.
Jim: That’s true.
Ben: All-in-all, it’s David versus Goliath. On the other hand, a lot of things in the tech industry have been that way in previous decades. I mean, sometimes David wins. The music industry was massively disintermediated. Mobile and PCs and all this were upstart industries that disrupted huge entrenched industries run by mega corporations.
Jim: Yeah, remember the killing of Digital Equipment Corporation, right? That was the number two computer company in the world, probably the most innovative. They made some wrong decisions about the PC, and they literally went away, bought for scrap by Compaq somewhere along the way.
Ben: Yeah. All these things in the end have been about social network effects triggered by genuinely new and, in some important respects, superior functionalities. Even when the new functionalities were inferior to the old ones in some way, they were superior in ways that allowed network effects to get triggered and cause new ways of doing things to spread. Of course, that’s partly about attitude and sentiment, as well as about the tech. One wonders if the de-platforming that’s occurring now, and the attention that’s being brought to the excessive power of these big tech oligarchs, could be the Rubicon moment that allows the network effects to start growing for decentralized alternatives? I’m really hoping so.
Ben: I think this is not only important for our everyday lives. If you believe as I do that Kurzweil was right, and say 2029 is a reasonable time for human level AGI, then we’re talking about what sort of collective human thinking is going into feeding and forming the mind of this first human level AGI. We’re talking about that, which is an even more important thing than shaping the twisted lives of all of us humans on the planet right now.
Jim: And get us to buy shit we don’t need, which is currently what the purpose of Facebook actually is. Certainly, this would be much bigger. Yeah, let me address this, because it’s actually very important. I’ve thought about it a lot over the years. You’re right. All you need is a network effect community that’s big enough. One that really stands out in the history of personal computing was the original Macintosh, which was not good for much except one thing: it was clearly superior to any other device for creating, editing, and producing media. The Mac became the darling of photographers first, and then videographers, and digital artists, et cetera. Those were big enough ecosystems for interesting products to emerge, like Photoshop, which was originally Mac only, and numerous other interesting things.
Jim: Then that critical mass network was just big enough to allow the Mac to survive, although people forget how close they came to shutting the Mac down around 1994-5, when its market share was trivial. Somehow, it got through the keyhole and came back to life. It had a big enough, relatively concentrated market. Here’s a problem with alternative distributed social media, and we’re going to be very clear about this in GameB: because of the current political dynamics, they have a tendency to attract douchebags, fucking Nazis, racists, crapola like that. Parler, for instance. I’ve kept an eye on Parler. I logged on once a week to see what was going on.
Jim: Parler was not designed to be a Nazi troublemaker platform. It was basically a relatively clever business play to appeal to the Fox News type audience. It had a different advertising model. It basically highlighted the “big names” of the Fox News-ish world and its equivalents on the conservative side. Most people followed what Nunes had to say, or Cruz, or Hannity, or one of these clowns. On the other hand, like a lot of these platforms, they had the ability to create groups and whatever. It was not created to be a dirty platform; in fact, it was intended to be quite commercial and kind of Fox News-ish.
Jim: Because of the pressure these folks felt on the mainstream platforms, they felt like they needed a home, and they thought Parler might be more accommodating. So they hid away in the corners of Parler and did their thing, and you see what it did to Parler. At least in the early stages, I would recommend people build… And I think it’s legitimate when you’re small to have social norms, period. It’s more problematic if you’re big. I have a number I came up with: five million. If you have five million or more unique visitors per month, perhaps you should be government regulated for no discrimination based on viewpoint.
Jim: Maybe you should be able to discriminate on cuss words, for instance. You can’t use the seven dirty words. I wouldn’t go on that kind of fucking platform, but it strikes me that’s legitimate. Or no nudity, or whatever you want, as long as it’s not viewpoint discriminatory. Smaller platforms, though, should build their own local community norms, which could be very specialized. The GameB community was infiltrated by a small number of members of an anti-Semitic and racist organization last year. We discovered it in May, and in early June we expelled them. We put out a notice: “No racist douchebaggery in GameB, period.” And we’ll stick with that.
Jim: In the current political context, I would suggest that people experimenting in these domains build local social norms to keep what the community considers bad actors off. If those bad actors don’t consider themselves bad actors, which of course they never do, they can go build their own things over in the corner someplace, and not pollute other people’s efforts. I think that’s going to be really important. We’re certainly going to keep an even tighter eye on that in the GameB Group once we get to our own home. I would suggest that when you’re thinking about your efforts around SingularityNET and other things, at least for the interim one has got to build local customary norms and enforce them.
Ben: Yeah. To bring this back to the rearrangement of the tech stack underlying social media: SingularityNET is sort of a meta platform. Cardano is a meta platform. When you’re at that layer, you’re just building tools that could be used to create a whole bunch of different social networks. The way you’d like to see the tech stack work is, for a handful of genius young coders from somewhere to create the next Facebook, they shouldn’t have to write all the code that Facebook has had to write. It should be a lighter weight application running on a bunch of decentralized software layers, so that you could then have a whole bunch of smaller social networks which have high quality in terms of usability and efficient distributed processing and AI. But yeah, they can be for a smaller community with their own criteria.
Ben: It’s not clear if, in a world like this, you actually have to get to a monolithic social network like a Facebook or Twitter. Maybe you do, or maybe you don’t. Say the basic protocol for messaging, and the protocols for AI and reputation, exist at a lower level of the tech stack. Say your own personal profile and your own history of what you’ve done are owned by you, and the AI models of yourself are owned by you and are represented in a way that’s largely independent of the specific social network group. Maybe then more stuff is living at these lower layers of the stack, and you don’t actually need very much of the activity to be in these generic humongous networks like a Facebook or a Twitter. We don’t know how that’s going to evolve. It will be interesting to see.
Jim: It will be interesting. Again, you and I have been doing this a long time. We know that in the long-term, generic solutions will often beat specialized solutions. Again and again and again, I’ve looked at business plans. I looked quite carefully at a business plan to build genetic screening algorithms on FPGAs, for instance. It was a viable business for a while, but it then turned out that the generic Linux servers got cheap enough, scalable enough, et cetera, that most people moved away from those. I’m glad I didn’t invest in that company, because while it had a short flurry, it eventually went bust. Maybe it will be the people that are working on low level protocols, and on ways to have deep infrastructure for building a distributed world of things like social media. Actually, social media, I think, is a bad label for all this stuff.
Jim: Whatever this stuff is that we need a better label for, if those low level protocols get out there and operate at kind of the level of HTTP/HTML, where it becomes foolish not to use the standard, then the possibilities of dethroning the specialty monoliths, the Facebooks and the Twitters and the Reddits, et cetera, could be quite real. Something you mentioned in your essay is that Twitter at least is taking that seriously, and has put a team together looking at open source distributed social media.
Ben: Yeah, yeah. I think that’s really interesting on their part. I think they’re quite sincere about that, and that Jack Dorsey may be thinking about all this in roughly the same way that you and I are even. It’s an interesting question how far Twitter can go in that direction and remain harmonious with their imperative to maximize their own shareholder value, given where they are now. It doesn’t seem impossible on the face of it, but certainly you face complexities that aren’t faced by doing it from scratch. I mean it’s rare but not unprecedented for a company to keep flourishing through a couple different eras of the technology ecosystem. IBM kept going from the mainframe era through the PC era, and it’s still doing stuff. A lot of the mainframe companies didn’t, right?
Jim: Yep, exactly. It will be interesting. Of course on the flip side is the famous network effect, some variant on Metcalfe’s Law, which defines the value of a network as the square of the number of participants on it. I don’t believe it’s the square, but I do believe it’s an exponent greater than one. Let’s say it’s Geoffrey West’s superlinear scaling of 1.15. That’s enough to give people like Facebook tremendous free benefit of the network effect, and it’s a barrier to other people building their things. On the other hand, when you’re looking at smaller communities, it’s much less of a barrier. When you run the exponent on a small community, in absolute terms their benefit is much smaller. We can actually capture most of that just doing it ourselves.
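[Editor’s note: as a back-of-envelope illustration of the point Jim is making, here is a quick Python sketch. The user counts are made-up round numbers, not real figures; the only thing it shows is how much an incumbent’s network-effect advantage over a small community shrinks when the value exponent drops from Metcalfe’s 2 to a superlinear 1.15.]

```python
# Rough illustration of how the network-value exponent changes the
# incumbent-versus-small-community picture. The absolute scale is
# arbitrary; only the ratios matter.

def network_value(n: int, exponent: float) -> float:
    """Stylized network value: n raised to some exponent > 1."""
    return n ** exponent

big = 2_900_000_000   # Facebook-scale user count (rough, illustrative)
small = 10_000        # a niche community

# Under Metcalfe's original square law, the incumbent's edge is enormous:
metcalfe_ratio = network_value(big, 2.0) / network_value(small, 2.0)

# Under superlinear scaling at 1.15, the edge shrinks by several
# orders of magnitude:
west_ratio = network_value(big, 1.15) / network_value(small, 1.15)

print(f"square-law advantage:    {metcalfe_ratio:.3g}")
print(f"1.15-exponent advantage: {west_ratio:.3g}")
```

The ratio of incumbent value to niche-community value drops from roughly 10^10 under the square law to roughly 10^6 under the 1.15 exponent, which is the sense in which a small community can "capture most of that just doing it ourselves."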
Jim: Yeah. It’s going to be interesting. It’s going to be interesting. Where do you see AI’s role in this? We’ve talked about this just a little bit on a phone call not too long ago.
Ben: It’s well worth noting that Google has long considered itself an AI company. Facebook’s AI research lab is an extremely strong AI R&D group. The big social media companies, and sort of internet information companies, they know that AI is absolutely critical to their business. The success of their AI algorithms has an extremely direct impact on how much money they’re making, and how competitive they are. I think the same things that underlie that have big implications for decentralized social media. I think without a substantial AI component, it’s not just that it’s harder to make money on social networks.
Ben: It’s harder to provide people with the type of experience that they want, even if you set aside advertisements. I found that myself with news reading. I’ve used Google News to read news, among other sources like traditional magazines and newspapers. I get pissed off at Google News all the time. There’s certain things that it shows me that I never want to read, and never read. I don’t understand why it keeps throwing that garbage-
Jim: It’s surprisingly sucky.
Ben: The point I was coming to, though, is that if I’m not logged in as myself to a phone or a browser or something and I go on Google News, I’m distressed by how much worse the stuff that I see is. How is there so much stupid, boring stuff out there that I wasn’t aware of? However weird and flawed their customization is, it is good compared to their default stream of news aggregated from various sources. But it’s not that good. It’s very clear how, with deployment of vanilla transformer neural nets and information retrieval software, you could do a way better job of customizing a newsfeed for me. It’s habitually throwing garbage at me that I never click on and have no interest in, on the same themes over and over again.
Ben: YouTube was similar. When I lived in Hong Kong, where I lived for nine years until relocating back to the US in the middle of last year, YouTube gave me ads in Cantonese over and over. I don’t know Cantonese. Google should in fact know that I don’t know Cantonese. They know a lot about me. You can see that ads are one thing, but even just customization of content is very important. There is too much garbage and too much good stuff out there for any of us to filter through on our own. We want and need AI to be directing us to valuable stuff, given the volume of stuff out there. Otherwise, it’s hard to break out of the bubble of the people you know and things that you know to go to.
Ben: You don’t want that AI to be controlled by some big company in an opaque way. The same AI perversion that led to you and the GameB colleagues getting briefly banned from Facebook, and which could have led to you being banned for a long time from Facebook if you hadn’t known how to work the system, is at work everywhere. That same mix of intelligence and stupidity is being used to recommend to me what news articles to read. It’s being used to recommend to me which people to link to on LinkedIn or Twitter. It’s being used even within Google to recommend what research papers I see from [ArcSight 01:10:24] when I type in a search query. The same mix of intelligence and stupidity is being used to guide what we see.
Ben: I try to use DuckDuckGo whenever I can. On the other hand, it’s not as smart as Google search. If I’m looking for research papers on some topic, eventually I’ll go from DuckDuckGo back to Google a lot, because DuckDuckGo won’t find the stuff. Google will find more valuable stuff within a shorter amount of time, but what biases is it baking in? Which of these biases are as stupid in their own way as Facebook’s algorithm that banned you? We do need AI to sift through everything, but we want the AI to be on our side. We want the AI to be at least trying to be transparent and explainable. We want the AI to be biased in a way that benefits ourselves, and our growth, and our own expansion of consciousness, and the social groups that we want to benefit.
Ben: I think down the road what we need… You need an AI personal assistant that’s like Alexa or Google Assistant, but with more actual understanding, which serves you rather than the big company that supplies it to you in a quasi-free way, and that helps you sift through all the information out there in a way that works toward your own goals. Again, we’re working on that in our own way with the OpenCog project. We’re working with Hanson Robotics on something called Grace, which is Sophia Robot’s little sister. This is aimed at elder care. This is an elder care robot. I think that can be big on its own. That’s through a joint venture called Awakening Health between Singularity Studio and Hanson Robotics.
Ben: Part of what we’re doing there is OpenCog based natural language dialogue, trying to get that as good as possible and to integrate it with real understanding. Part of the medium-term game there is to make something like Alexa or Google Assistant that’s actually intelligent and understands what’s going on, and that serves you. The profile of you that your virtual assistant is using should be stored in a secure data wallet, which is secured with your private keys. Blockchain tools can be used to give you intentional agency over what that data is used for. What our mutual friend Trent McConaghy has done with Ocean Protocol can play in there in terms of blockchain based data management.
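[Editor’s note: here is a toy Python sketch of the data-wallet idea Ben describes. Everything in it is hypothetical: the names, the token scheme, and the use of HMAC standing in for the public-key cryptography a real wallet or Ocean Protocol would use. It only illustrates the control flow of user-held keys gating what a profile gets used for.]

```python
# Toy data-wallet sketch: the profile is a blob addressed by its hash,
# and the user mints scoped access tokens with a secret only they hold.
# hmac/sha256 here is a stand-in for real public-key signatures.
import hashlib
import hmac
import json

user_secret = b"held-only-on-the-user's-device"  # illustrative key material

profile = {"interests": ["AGI", "decentralized social media"]}
blob = json.dumps(profile, sort_keys=True).encode()

# Content address: anyone can verify integrity; only the key holder
# can authorize uses of the data.
content_id = hashlib.sha256(blob).hexdigest()

def grant(purpose: str) -> str:
    """User-side: mint a token authorizing one purpose for this blob."""
    msg = f"{content_id}:{purpose}".encode()
    return hmac.new(user_secret, msg, hashlib.sha256).hexdigest()

def verify(purpose: str, token: str) -> bool:
    """Service-side check, assuming it can ask the wallet to validate."""
    return hmac.compare_digest(token, grant(purpose))

token = grant("newsfeed-personalization")
assert verify("newsfeed-personalization", token)
assert not verify("ad-targeting", token)  # no consent granted, no access
```

The point of the sketch is only that consent is scoped per purpose and enforced by a key the platform never holds; a blockchain layer would add auditability on top of this.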
Ben: AI is also critical to the reputation system aspect that we discussed. Reputation management in itself isn’t such a hardcore AI problem. That’s some fairly simple math in terms of rating things, and rating people’s ratings, and rating people regarding the quality of their ratings in different dimensions. When you have a reputation system, you’re going to have reputation fraud. You need machine learning based reputation police to try to crush the reputation fraud. Then you need that to be based on open source code, which is audited and which is automatically checked by AI tools so that you know the reputation fraud policing doesn’t turn into thought policing.
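[Editor’s note: the “fairly simple math” Ben mentions can be sketched in a few lines of Python. This is an illustrative toy, not SingularityNET’s actual design: item ratings are averaged with weights given by rater reputation, and each rater’s reputation is then updated by agreement with that consensus, which is the “rating people’s ratings” idea. The names and the specific update rule are made up.]

```python
# Toy "rate the raters" loop: reputation-weighted consensus, then
# reputation updated by agreement with the consensus, iterated.

def consensus(ratings, reputation):
    """Reputation-weighted average rating per item (ratings in [0, 1])."""
    scores = {}
    for item in {i for r in ratings.values() for i in r}:
        num = sum(reputation[u] * r[item] for u, r in ratings.items() if item in r)
        den = sum(reputation[u] for u, r in ratings.items() if item in r)
        scores[item] = num / den
    return scores

def update_reputation(ratings, scores):
    """A rater who deviates from consensus loses reputation."""
    new_rep = {}
    for u, r in ratings.items():
        err = sum(abs(r[i] - scores[i]) for i in r) / len(r)
        new_rep[u] = max(0.05, 1.0 - err)  # floor keeps everyone countable
    return new_rep

ratings = {
    "alice": {"item1": 0.9, "item2": 0.8},
    "bob":   {"item1": 0.85, "item2": 0.75},
    "shill": {"item1": 0.1, "item2": 0.1},  # fraudulent low-balling
}
reputation = {u: 1.0 for u in ratings}

for _ in range(5):  # iterate toward a rough fixed point
    scores = consensus(ratings, reputation)
    reputation = update_reputation(ratings, scores)

# After a few rounds the shill's weight collapses, pulling the
# consensus toward the honest raters.
```

The machine-learning “reputation police” Ben describes would sit on top of a loop like this, looking for coordinated deviation patterns rather than just per-rater error.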
Ben: There’s a bunch of layers where you need AI within this ecosystem, and that AI needs to be open source, and distributed, and decentralized itself. Which brings up a bunch of hard computer science problems, but then again I guess I have a high bar for what’s a hard computer science problem. None of this is as hard as building AGI using OpenCog Hyperon. This is all stuff that you can solve using technology that currently works in various prototype or alpha forms. But it needs to be rolled out at large scale and low cost, and with high usability, along with all the non-AI parts being rolled out with the same necessary buzzwords. Right?
Jim: Absolutely. Well we’re going to have to wrap it up there. We’ve come to the end of our time. I’ve got another call coming up here in a couple of minutes. I just want to thank you, Ben, for another great deep dive into all kinds of interesting things. I’m absolutely excited by the ability to watch the progress in the directions that we have pointed to here today, though I will alas report that a survey from the field says we’re not there yet and have a long way to go.
Ben: Yeah. We are not there yet. I think you’re not only watching the progress, I think with GameB and some other initiatives you’re helping drive the progress. It’s very cool to be sort of wrapped up in these dynamics, and perhaps even to have some agency in driving these dynamics forward. It’s a very scary but also fascinating and exhilarating time to be involved in this part of humanity.
Jim: Indeed, so with that we’re going to wrap her up.
Production services and audio editing by Jared Janes Consulting, Music by Tom Muller at modernspacemusic.com.