Transcript of Episode 3 – Ben Goertzel

The following is a rough transcript which has not been revised by The Jim Rutt Show or by Ben Goertzel. Please check with us before using any quotations from this transcript. Thank you.

Jim Rutt: Hey, this is Jim Rutt, and this is the Jim Rutt Show. Our guest today is Ben Goertzel, one of the world’s leading figures in the effort to achieve Artificial Intelligence at the human level and beyond, what is often called Artificial General Intelligence, or AGI, a term that Ben coined. We will likely use the term AGI a lot today when we are referring to AI at a fully human level and beyond. In addition to being a researcher and prolific author, Ben is the leader of OpenCog, an open-source AGI software framework, and he’s the CEO of SingularityNET, a distributed network that lets anyone create, share and monetize AI services at scale. Welcome, Ben.

Ben Goertzel: Hey, Jim, thanks for having me.

Jim Rutt: Yeah, great to have you. Maybe you can start … Remember, our audience, while intelligent and well-read, isn’t necessarily expert in AI. Could you tell us what AGI is, how it differs from Narrow AI, and why the emergence of AGI is so significant?

Ben Goertzel: Yeah, when the AI field began in the middle of the last century, the basic informally understood goal was to create intelligence of kind of the same type that people had. Then, during the next few decades, it was discovered that it was possible to create software and hardware systems doing particular things that seemed very intelligent when people did them, but doing these things in a very different way from how people did them, and doing them in a much more narrowly defined way. So, I mean, in the ’50s it wasn’t clear that it would make sense to make a program that could play chess as well as a Grand Master, but didn’t approach it anything like a Grand Master did, and couldn’t play Scrabble or checkers at all without some reprogramming.

Ben Goertzel: So, the existence of these Narrow AIs that do particular intelligent-seeming things in narrowly defined ways, very different from how humans do them, was really a major discovery, which is quite interesting. And it’s left us in a situation now that I think of as a Narrow AI revolution, where we have an astounding variety of systems that can do particular very intelligent-seeming things, yet they’re not doing anything like how people do them, and as part of that difference, they’re not able to generalize their intelligent function beyond a very narrow class of contexts.

Ben Goertzel: So, I introduced the term AGI, Artificial General Intelligence, 15 years ago or so with a view towards distinguishing between AIs that are very good at doing tasks that seem very narrow in the context of ordinary human life, versus AIs that are capable of achieving intelligence with at least the same generality of context that people can. Other terms in the AI community, like transfer learning and lifelong learning, have closely related meanings, because, I mean, to achieve general intelligence you need to be able to transfer knowledge from one domain to a domain that’s qualitatively different, from a human view, from the domains you were programmed for, or trained for.

Ben Goertzel: It’s important to understand, humans are not the maximally generally intelligent system. I mean, from the standpoint of computability theory, Marcus Hutter’s theory of universal AI sort of articulates what a fully mathematically general intelligence would be like. We’re definitely not that. If you give a human a maze to run in 275 dimensions, they’ll probably do very badly. So, we’re not that good at generalizing beyond, for example, the dimensionality of the physical universe that we live in. So, we’re not maximally general by any means. We’re very narrow compared to some systems you can imagine, but we’re very general compared to the AI systems that are in commercial use right now.

Ben Goertzel: So, I think as a research goal, it’s worth thinking about how we make AIs that are at least as generally intelligent as humans, and ultimately more generally intelligent. And it’s an open question to what extent you can get to human-level AGI by sort of incrementally improving current-style Narrow AI systems, versus needing some substantially new approach to get to higher levels of AGI.

Jim Rutt: Well, with all the various contending approaches out there, is there a rough estimate, either in the community or your own or both, on when we might expect to see a human-level general intelligence?

Ben Goertzel: My stock answer to that in recent times has been five to 30 years from now. I’d say in the AI community there’s a fair percentage of people who agree with that, but you’ll get a range of estimates running from five or 10 years up through hundreds of years. There are very few serious AI researchers who think it will never happen, but there are some who think a digital computer just can’t achieve human-level general intelligence, because the human brain is a quantum computer, or a quantum gravity computer, or something. But you can set aside the small minority of researchers who think the human brain fundamentally relies on some trans-Turing computing for its general intelligence.

Ben Goertzel: So, setting aside those guys for the moment, you have estimates that are 10, 30, 50, 70, 100 years, not a lot that are 500 years. I’d say during the last 10 years, the mean and variance of these estimates have gone way down. So I mean, 10 or 20 years ago it was a small percentage of researchers who thought AGI was 10 or 20 years off, a few who thought it was 50 years off, a lot who thought it was a couple hundred years off. Now, from what I’ve seen, a substantial plurality, probably a decent majority, think it’s coming in the next century. So, I remain on the optimistic end, but the trend has been in the direction of me rather than in the direction of the AGI pessimists.

Jim Rutt: That’s interesting. Our last guest on the Jim Rutt Show was Robin Hanson, and we talked a lot about AGI via upload, or emulation of a human, directly scanning their neural system, their connectome, and representing that on a computer. It seems to me that one could, at least at one level, divide AGI approaches into upload/emulation approaches and software approaches. Could you maybe speak a little bit about your thoughts on those two broadly different ways of achieving AGI?

Ben Goertzel: I think right now the idea of achieving AGI via an upload or emulation of the human brain is really just an idea. I mean, it’s an idea that seems to me scientifically feasible according to the known laws of physics, but there’s really no one working directly on that. There are people working on supporting technologies that could eventually lead you to be able to scan the brain accurately enough that you could start seriously working on that. Right now we just don’t have the brain scanning tech to scan the mind out of a living brain, and we don’t have the reconstructive tech to take a dead brain, like freeze it, slice it, scan it, and then reconstruct the dynamics of that brain.

Ben Goertzel: I mean, in theory you can do that by the laws of physics, but, without speaking of time estimates, we’re very far away in terms of tools and concepts for being able to do that right now. Whereas with the attempt to create AGI via software, be it very loosely brain-inspired software like current deep neural nets, or more math- and cognitive-science-inspired software like OpenCog, the prospect of getting AGI via software is the subject of really concrete research projects now. So it’s certainly much more than an idea. I mean, it may be that all these projects, including mine, are wrong-headed, but at least we’re directly working on the problem right now. So conceptually you can divide AGI into those two approaches, but I’d say those two approaches have very different practical statuses at the moment.

Jim Rutt: Yeah. One point I made when I was talking with Robin is that the upload/emulation approach is essentially all or nothing. Either you can upload a brain, maybe not a human brain, but a full brain, and make it work like it’s supposed to, or you can’t. While in the software world one can see, in fact we obviously are already seeing, lots of incremental benefits. And, from my experience in the business world, the investment world, you generally guide people away from trying to jump up a cliff, right? Go from zero to 100 in one single project.

Ben Goertzel: I don’t think that’s quite a fair assessment, because I think the missing links for being able to mind-upload people or simulate people in detail are, on the one hand, the right kind of hardware/wetware, and on the other hand, the right kind of brain scanning equipment. And I think incremental approaches toward brain-like hardware or wetware and [inaudible 00:09:31] toward accurate brain scanning, I think those advances would lead to a lot of amazing incremental achievements. I mean, making advances in that kind of hardware or wetware will probably yield a lot of funky narrow machines doing robot control or perception or something. And advances in brain scanning would lead to amazing progress in understanding how the human mind works, and in diagnosing diseases of the brain and so on.

Ben Goertzel: So I think incremental progress in that direction can be valuable for reasons other than building AI systems. And it might be valuable for building animal-level AIs too, right? Like if you wanted to build an artificial rodent or a bug or something, then progress in scanning organisms and emulating them can be pretty interesting also. So I don’t think that’s a dead end by any means, it’s super interesting. I just think right now that’s about the research on supporting technologies rather than about mind uploading per se or brain emulation per se.

Jim Rutt: Yeah, I like that. You make some good points, and that suggests that we’re likely to see both tracks gradually attract more and more resources as we get to it.

Ben Goertzel: I would say brain scanning needs a breakthrough, right? It needs a radical breakthrough in imaging, or else a radical breakthrough in extrapolating the dynamics of the brain forward given a static snapshot. And it’s not as obvious that AGI needs radically different sorts of technology than what now exists. I mean it might, but it’s not as obvious as it is for brain emulation. I think people who aren’t building the technology but who are speculating about it like the brain emulation idea because it’s a proof of principle, right? It’s just like the bird is a proof of principle that a flying machine can be built out of molecules.

Ben Goertzel: But then, as we all know, the best proof of principle isn’t always the best way to build something. Nanotech is a good example. I mean, in Eric Drexler’s initial books on nanotech, he was making all these structures of gears and pulleys, nanostructures that look like the big machines that we make. And that’s the right approach to take if you want to make a first proof of principle that, hey, nanomachines could exist. But now it’s becoming clear the way nanotech is probably going to work is a bit more molecular-biology-ish. That’s a harder way to work out the details in a proof-of-principle type way, but it may be a more viable way to actually make the machines work.

Ben Goertzel: So I think it’s not bad if your proof-of-principle system ends up totally different than the system you actually build. But of course also my [inaudible 00:12:11] is interesting [inaudible 00:12:12], because we’re humans, right? And even if that’s the most awkward way to make AGI from the perspective of making the most intelligent possible systems the fastest or the cheapest, I mean, from our particular position as humans, like, I would love to have mind uploads of Friedrich Nietzsche or Philip K. Dick to play ping pong with, and of myself for that matter.

Ben Goertzel: So I don’t have to be stuck in this body as it shrivels. So there’s an interesting value to that apart from how good an AGI approach it is. With my AGI hat on, however, yeah, I’m more drawn to more heterogeneous approaches that leverage what we know about the mathematics of cognition, and leverage the current hardware that we have to a greater extent than the brain emulation approach can. I mean, the current hardware we have is very little like the brain, unless you go to a super high level of abstraction. I mean, we now understand how to do some types of intelligent activities, like theorem proving or arithmetic or database look-up, way better than the human brain does.

Ben Goertzel: So it’s interesting to think about how to create AGI in a way that leverages this hardware that we have and this knowledge that we have, while also leveraging what we do know about how the brain works. And that leads you to a more opportunistic approach to AGI where you say, “Well, how can we put together the various technologies we have now, including brain-ish and non-brain-ish technologies, to take the best stab at creating an intelligent system, and a system that can move from Narrow AI-plus-plus toward general intelligence?”

Jim Rutt: So Ben, you’ve been working on the OpenCog project for a number of years, which is an approach to AGI that’s quite different from the deep learning approach we hear so much about in the media. Could you tell us about OpenCog, the history of it and what it is?

Ben Goertzel: So the history of OpenCog goes back before OpenCog. I mean, it goes back to the mid-nineties, when I started thinking about how you could make an AI which was a sort of agent system, a society of mind as Marvin Minsky had described it, but with more of a focus on emergence. So Marvin Minsky viewed the mind, the human mind, and an artificial mind as a collection of AI agents that each carried out its own particular form of intelligence, but they all interacted with each other much like people in a society interact with each other, where then the intelligence came out of the overall society. I liked this idea because I like self-organizing systems, and I thought this sort of self-organizing complex system might be the right kind to get mind-like behaviors.

Ben Goertzel: But when I dug into it more, I realized Marvin Minsky didn’t like emergence, and he didn’t like nonlinear dynamics, at least not in the AI or cognitive science context. Whereas I was viewing the emergent level of dynamics, and the emergence of overall structures in the network of agents, as being equally important to the intelligence of the individual agents in the society. So, I mean, I tried in the late nineties to code myself a system called WebMind, which would be a bunch of agents distributed across the internet, each of which tried to do some kind of intelligent processing, and where they all coordinated together and yielded an overall emergent intelligence.

Ben Goertzel: We did some amazing prototyping in the company in New York, I formed WebMind Incorporated, but then we failed to make a success of our business in spite of decent effort, and ran out of money when the dotcom boom crashed in 2001. I then started building a system called the Novamente Cognition Engine, much of which was eventually open sourced into the OpenCog system. And I would say OpenCog and then SingularityNet each reflect different aspects of what we were trying to do in WebMind. So WebMind was really a bunch of agents which were sort of heterogeneous. They were supposed to cooperate to form an emergently intelligent system. Now in OpenCog, we tried to control things a lot more.

Ben Goertzel: So we have a knowledge graph, which is a weighted labeled hyper-graph called the AtomSpace, with particular types of nodes and links in it, and particular types of values attached to some of the nodes and links, such as truth values and attention values of various sorts. Then we have multiple AI algorithms that act on this AtomSpace, dynamically rewriting it, and in some cases watching what each other do and helping each other out in that rewriting process. So there is a probabilistic logic engine called PLN, Probabilistic Logic Networks, which was described in a book from 2006 or so. There’s MOSES, which is a probabilistic evolutionary program learning algorithm that can learn little AtomSpace sub-networks representing executable programs.

Ben Goertzel: There is ECAN, Economic Attention Networks, which propagates attention values through this distributed network of nodes, right? And then you can use deep neural networks to recognize perceptual patterns in video and other sorts of data, and then create nodes in this knowledge graph representing sub-networks or layers in the deep neural networks. So you have all these different AI algorithms cooperating together on the same knowledge graph. And the concept of cognitive synergy, I coined that term to refer to the process by which, when one AI algorithm gets stuck or makes slow progress in its learning, the other AI algorithms can understand something about where it got stuck, understanding something about its intermediate state and what it was trying to do, and then can intervene to help make new progress and un-stick the AI algorithm that got stuck.
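
To make the AtomSpace idea concrete, here is a minimal Python sketch of a weighted labeled hypergraph whose atoms carry truth values and attention values. All names here are illustrative toys, not the actual OpenCog API.

```python
# Illustrative toy, not the actual OpenCog API: a minimal AtomSpace-style
# weighted labeled hypergraph, where atoms (nodes and links) carry truth
# values and attention values that multiple AI processes can read and update.

from dataclasses import dataclass, field
from typing import Tuple

@dataclass
class TruthValue:
    strength: float = 0.5     # estimated probability that the atom holds
    confidence: float = 0.0   # how much evidence backs that estimate

@dataclass
class Atom:
    kind: str                          # e.g. "ConceptNode", "InheritanceLink"
    name: str = ""                     # nonempty for nodes
    outgoing: Tuple["Atom", ...] = ()  # nonempty for links (hyper-edges)
    tv: TruthValue = field(default_factory=TruthValue)
    sti: float = 0.0                   # short-term importance (attention value)

class ToyAtomSpace:
    def __init__(self):
        self.atoms = []

    def add(self, kind, name="", outgoing=(), tv=None):
        atom = Atom(kind, name, tuple(outgoing), tv or TruthValue())
        self.atoms.append(atom)
        return atom

space = ToyAtomSpace()
cat = space.add("ConceptNode", "cat")
animal = space.add("ConceptNode", "animal")
# A link is itself an atom whose outgoing set points at other atoms.
space.add("InheritanceLink", outgoing=(cat, animal),
          tv=TruthValue(strength=0.95, confidence=0.9))
```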

Ben Goertzel: So if the reasoning engine gets stuck in its logical inference, maybe evolutionary learning can come in to introduce some new creative ideas, or perception can introduce some sensory-level metaphors. If a deep neural net gets stuck at recognizing what’s in a video, it can refer to reasoning to do some analogical inference, or it could refer to evolutionary learning to brainstorm some creative ideas. To make an OpenCog system, you need to do a lot of thinking about how these different AI algorithms can really cooperate and help each other, acting concurrently on the same knowledge store. It’s different than, like, a modular system where you have different modules embodying different AI algorithms with a sort of clean API interface between the modules.

Ben Goertzel: We don’t have clean API interfaces between the OpenCog modules. The design is more that these different AI algorithms are cooperating in real time on the same dynamic knowledge graph, which is stored in RAM. There’s some resemblance to what was called a blackboard system back in the eighties, a long time ago, but the blackboard is a dynamic, in-RAM, weighted labeled hyper-graph, and the AI algorithms are all uncertainty-savvy, interacting largely on the level of exchanging probabilities and probability distributions attached to different pieces of knowledge. And so the focus on graphs and probabilities is different than the blackboard systems had way back decades ago.
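
As a rough illustration of the cognitive synergy control pattern Goertzel describes, here is a hedged Python sketch, with hypothetical process objects, of several algorithms sharing one knowledge store and assisting whichever one stalls. The point is that helpers receive the stuck process’s intermediate state, rather than communicating through a fixed module API.

```python
# A hedged sketch of the cognitive-synergy control idea (names hypothetical):
# several learning processes act concurrently on one shared knowledge store,
# and when one reports that it is stuck, the others are offered its
# intermediate state so they can contribute.

class StuckSignal(Exception):
    """Raised by a process to expose its partial state for helpers."""
    def __init__(self, partial_state):
        self.partial_state = partial_state

def run_with_synergy(processes, store):
    """Round-robin the processes over the shared store; on a stall,
    let every other process try to advance the stuck one's state."""
    for proc in processes:
        try:
            proc.step(store)
        except StuckSignal as stuck:
            for helper in processes:
                if helper is not proc:
                    # e.g. evolutionary learning proposing candidates for a
                    # stalled inference, or inference disambiguating a
                    # stalled perception subnetwork
                    helper.assist(store, stuck.partial_state)
```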

Jim Rutt: Talking about cognitive synergy, I’ve taken that idea from you guys and thought about it in a bit more real-time, dynamic fashion, where not only can one set of algorithms attempt to solve a problem that another one is stuck on, but one can also think about bi-directional problem solving. For instance, taking from human or animal cognitive science, it certainly appears that high-level clues flow back from higher levels of the mind to the perceptual system. For instance, when the upper levels of the perceptual stack are trying to identify an object, the scene at a higher level makes more sense with certain objects than with others. And that’s something I generally don’t see in most current deep learning projects. But I see OpenCog being very well structured to do that kind of real-time, dynamic, multilevel, bi-directional processing.

Ben Goertzel: Yeah, yeah, absolutely. I mean, the concept of cognitive synergy was always intended to be bi-directional and concurrent, with multiple AI algorithms helping each other out at the same time, in cycles and complex networks. And I tried to formalize the cognitive synergy notion using a mix of category theory and probability in a paper at one of the AGI conferences. But, I mean, that brings you in a whole direction of mathematical abstraction. The example you mentioned certainly is an important one. And I’d say, in the neuroscience analogy, current deep learning networks for vision or audition probably model well what the human brain does in less than half a second or so. When you take longer than that to perceive something, it’s often because you’re using cognition in some form to disambiguate or interpret, bringing some background knowledge to bear on interpreting that perceptual stimulus. That’s something current deep neural nets don’t really try to do.
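
Jim’s scene-context example can be phrased as a simple Bayesian reweighting: bottom-up classifier scores are multiplied by top-down priors supplied by higher-level knowledge. Here is a minimal sketch, with made-up numbers purely for illustration.

```python
# A minimal sketch of top-down disambiguation: bottom-up likelihoods from a
# perceptual model are reweighted by priors supplied by higher-level scene
# knowledge. All numbers here are invented for illustration.

def reweight(likelihoods, scene_priors):
    """Combine per-label perceptual scores with scene-context priors
    (an unnormalized Bayesian update), returning a normalized posterior."""
    posterior = {label: likelihoods[label] * scene_priors.get(label, 0.01)
                 for label in likelihoods}
    z = sum(posterior.values())
    return {label: p / z for label, p in posterior.items()}

# The perceptual net is unsure whether a blurry object is a dog or a sheep...
likelihoods = {"dog": 0.48, "sheep": 0.52}
# ...but cognition knows the scene is a city street, where sheep are rare.
scene_priors = {"dog": 0.30, "sheep": 0.02}
print(reweight(likelihoods, scene_priors))  # dog wins once context is applied
```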

Ben Goertzel: I mean, of course you could attempt that in a neural net architecture by taking a neural net with a long-term declarative memory and a long-term episodic memory and, like, co-training them in some way with a perceptual neural network. But that’s not something being worked on a lot. And we’ve been working toward doing that sort of thing in OpenCog, where you have a symbolic cognitive engine, the AtomSpace and PLN and so forth, interacting with a deep neural net for doing perception. We’ve done some simple experiments in that regard, and there’s been a lot of tweaking we had to do to the OpenCog AtomSpace framework, not so much the AtomSpace as a representation per se, but the pattern matcher, which is a key operational tool on top of the AtomSpace. So a lot of tweaks we’ve had to do to the pattern matcher to make it interact effectively in real time with deep neural networks.

Ben Goertzel: But we’re mostly through that now, and we’ve been doing some interesting experiments in the direction you’ve suggested. And I think this can be valuable in natural language processing also. Current deep neural net architectures are doing a great job at recognizing various complex statistical patterns in natural language texts, so they can then emulate natural language in a way that looks realistic in the short run. Yet clearly they’re not getting the overall meaning of a document, or the deeper semantics of what’s being said, which you can see in the meaningless aspects of their generated texts beyond a medium scale of text length. So we’ve been working on combining deep neural nets for text analysis with a more symbolic approach to extracting the meaning from text. So I think this sort of neural-symbolic approach to AI is going to be very big three or four years from now, because deep neural nets in their current form are going to run out of steam.

Ben Goertzel: I mean, there’s still more steam left, because real-time analysis of videos, and videos with audio, on your mobile phone’s chip, that’s not yet rolled out commercially, right? So there are more steps to be followed kind of straightforwardly by just incrementally improving and scaling up current deep neural nets. But I think, once enough sensory data processing has been milked using these deep neural nets, we’re going to hit a bunch of problems that need more abstraction. And I’m guessing that tweaking deep neural nets to do abstraction is not going to be the step forward. Now, there may be many paths forward. You can perhaps take a multiple neural net architecture where some of the neural nets have a totally different architecture than the current hierarchical deep neural networks. I mean, Google went a little bit in that direction with differentiable neural computers, and there are other papers like that.

Ben Goertzel: But I’m guessing that interfacing and synergizing logic systems with neural nets is going to be a much less obscure thing in a few years. And we’re already seeing more and more papers coming out on hybridizing knowledge graphs with deep neural nets. So once you’ve done that, I mean, a logic engine is the natural way to dynamically update a knowledge graph. I think you’re going to see more and more powerful logic engines used on the knowledge graphs being hybridized with deep neural nets, until people rediscover that predicate logic and term logic are interesting ways to manipulate knowledge graphs. And OpenCog may well get there first before others. I mean, we’re already there in terms of the design. We may well get there first in terms of having amazing results on cognitively enhanced perception, before others get there by sort of incrementally adding more and more cognition to their deep neural net architectures.

Ben Goertzel: But whether or not it comes from OpenCog successfully integrating deep neural nets, it’s going to happen one way or the other, right? We’re adding deep neural nets onto OpenCog and working on using them to feed knowledge into our logic engine, while the deep neural net guys have started adding knowledge graphs onto their neural nets, and I’m sure they are going to be adding more and more logic-ish operations onto their knowledge graphs. So we’re going to see a convergence of these two approaches, and it’s going to lead to maybe a convergence of those two into one approach, or maybe just into a whole family of parallel approaches, each of which is learning something from the others.

Jim Rutt: That seems to me the most likely. Let’s hop back a little bit to something you mentioned in passing when talking about OpenCog’s symbolic and sub-symbolic components working together. You talked quite a bit about cognitively informed perception, but the other, to my mind, really big gateway problem that maybe your approaches are knocking on the door of is language understanding, whatever that actually means. Right? When I look at how we move from Narrow AI to near-AGI, one of the biggest barriers seems to be real language understanding. I know that you guys have done some work on that. Could you talk about your thoughts on language understanding and where your projects are? And maybe a little bit about where you think other people might be?

Ben Goertzel: I think language understanding is quite critical to the AGI project. I would say probably the most critical thing that we’re working on now toward AGI is meta-reasoning, reasoning about reasoning, but we can come back to that; language understanding is certainly easier to understand. We have been working on language understanding in OpenCog for some time, and we’re just now playing with hybridizing that with deep neural nets in various ways. So for syntax parsing, I mean, we’re working on using a combination of symbolic pattern recognition and deep neural nets to guide the symbolic pattern recognition, for automatically learning a grammar from a large corpus of texts, and then mapping grammatical parses of sentences into semantic representations of those sentences in a framework like OpenCog, which has a native logic representation as part of its representational repertoire. Then the semantic interpretation task becomes a matter of learning mappings from syntax parses of sentences into logical expressions representing key aspects of the semantics of the sentences.

Ben Goertzel: So, semantics is a big and rich thing. The semantics of a sentence may involve, you know, memories and episodic memory images that are evoked; it can involve a lot of things. But I believe that a key and perhaps core aspect of the semantics of a sentence is, in essence, a logic expression, or something homomorphic to a logic expression. So you can think of semantic interpretation as mapping a syntax parse into a logic expression, plus a bunch of other things like images, episodic memories, sounds and so on, that are sort of linked from or ornamented onto this logic expression. So how do you learn the mapping of syntax parses into logic expressions? That’s sort of part two of the language understanding problem, part three being pragmatics: how do you map the semantics into the overall broader context? Which I think is well treated as a problem of association learning and reasoning.
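
As a toy illustration of “part two,” mapping a syntax parse into a logic expression, here is a sketch that turns hand-written dependency triples into predicate-style expressions. The parse format and rules are invented for illustration; this is not OpenCog’s actual pipeline.

```python
# A toy sketch (not OpenCog's actual pipeline) of semantic interpretation:
# mapping a dependency parse into a predicate-logic-style expression.

# A hand-written dependency parse of "Ben wrote a book", as (head, relation,
# dependent) triples of the kind a dependency parser might emit.
parse = [("wrote", "subj", "Ben"), ("wrote", "obj", "book")]

def parse_to_logic(triples):
    """Collect each verb's subject/object and emit predicate(arg1, arg2)."""
    frames = {}
    for head, rel, dep in triples:
        frames.setdefault(head, {})[rel] = dep
    return [f"{verb}({args.get('subj', '?')}, {args.get('obj', '?')})"
            for verb, args in frames.items()]

print(parse_to_logic(parse))  # ['wrote(Ben, book)']
```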

Ben Goertzel: In OpenCog we really deal with the syntax learning task, the syntax-to-semantics mapping learning task, and then the pragmatics learning task in somewhat separate ways, although all using the same underlying repertoire of AI algorithms. We’ve been focusing more on syntax learning recently, focusing on unsupervised language acquisition. We’re trying to make a system that automatically learns a dependency grammar, which can then be fed into the link parser, which is a well-known grammar parser out of Carnegie Mellon University, to parse sentences. And we’re making decent progress there. I mean, it gets better and better each month. We’re able to parse more and more sentences with greater and greater coverage. We’re not yet at the level of supervised-learning-based grammar parsers that you feed a corpus of diagrammed sentences into. But one interesting thing we’ve been playing with is, if you start with just a little bit of supervised data, not even parsed sentences, but a little bit of chunk parsing information, like mappings of linked collections of words in a sentence into semantic relations, for a very small part of your corpus.

Ben Goertzel: So if you start with just a little bit of partial parse information and then use that to seed your unsupervised learning, you can get much more accurate results. And I think this is interesting because, while I think unsupervised learning is a great paradigm, I also think we don’t have to be total purists about it. I mean, starting with whole complex parses of sentences, like in the Penn Treebank parsed-sentence corpus or something, feels wrong to me. But if you can start with simpler semantic or syntactic information about 10,000 sentences or something, a very small percentage of the sentences you’re looking at, maybe that’s okay, rather than a pure unsupervised approach. And that brings us into how you learn semantics, because one approach to learning semantics is to look at, say, captioned images as a data source. Because if you have images with captions, I mean, then you can use a neural net, or a neural net connected to OpenCog, say, to recognize the relationships between what’s in the images.

Ben Goertzel: And then if you correlate that with the syntax parses of the sentences that are captions to the images, then you’re connecting what the sentence is about, which is the image, with the syntax structure of the sentence, right? So suppose you have 10,000 or 100,000 captioned images and 10 million sentences without captions. Then you take the supervised data that comes from the captioned images, which is supervised in the sense that you have some semantics there in the images, which is correlated with those sentences, and then the unsupervised data of all the other sentences in your corpus. Now if you can make something work from that, that’s also very interesting, perhaps just as interesting as pure unsupervised grammar learning. Right?

Ben Goertzel: So we’re playing with both of these approaches: pure unsupervised grammar induction, and then grammar induction where you have a small percentage of your sentence corpus which has links into something non-linguistic. And that’s sort of how people do it, right? We have some sentences that we have a non-linguistic correlate to, and then a lot of sentences that we don’t have a non-linguistic correlate to, but we have to figure them out and do the semantic interpretation.
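
Here is a hedged sketch of the seeding idea just described: a tiny supervised set of word-link annotations biases an otherwise unsupervised, co-occurrence-based grammar learner. The corpus, counts and scoring below are toy placeholders, not the actual OpenCog learner.

```python
# A hedged sketch of seeding unsupervised grammar induction with a small
# amount of supervised link data. Everything here is a toy placeholder.

from collections import Counter

def induce_links(corpus, seed_links, seed_weight=5.0):
    """Score candidate word-word links by adjacency co-occurrence counts,
    with a bonus for links confirmed in the small supervised seed set."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words[:-1]):
            counts[(w, words[i + 1])] += 1.0
    for link in seed_links:              # the small supervised seed
        counts[link] += seed_weight
    return counts.most_common(5)

corpus = ["the cat sat", "the dog sat", "a cat ran"]
seed_links = [("cat", "sat")]            # e.g. from a few chunk-parsed sentences
print(induce_links(corpus, seed_links))  # the seeded link ranks first
```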

Jim Rutt: Yeah, I was going to say that we have an existence proof that… The one existence proof we have for language learning is a mixture of supervised and unsupervised, right? How a child learns. They are babbling away at random initially, and then they get feedback. Mom smiles when you say “ma,” for instance. Right? And as people have done more and more research, there’s clearly a lot of feedback.

Ben Goertzel: It’s an existence proof for cross-modal learning with very crude reinforcement-based supervision, which is different than either unsupervised corpus learning or supervised learning in the sense of computational linguistics, where you’re fed parses of the sentences, right? How kids learn is different than either of those. And I mean, this really gets into the AGI preschool idea that I had proposed a long time ago. This gets into the idea that, you know, little kids are, A, just observing a huge multi-modal world in an unstructured way, and B, trying to achieve goals in particular contexts, using actions that achieve those goals, and they’re learning linguistic along with non-linguistic action and perception patterns in the context of practical goal achievement in particular contexts, which is just different than unsupervised grammar induction or supervised grammar induction or studying captioned images or whatever. Right? So I tend to take a sort of heterogeneous and opportunistic approach here.

Ben Goertzel: There’s a lot to be learned from unsupervised grammar induction. We’ve learned something from supervised grammar induction too, but I think we may have learned what there is to be learned from that. There’s a lot to be learned from studying captioned images. There’s something to be learned from robots, even as crude as they are now, or game characters, and all the available learning paradigms are a bit different than what, you know, human kids are doing. And we may need to do something more like what a human baby’s doing, or it may be that by piecing together these different, more computing-friendly learning paradigms, we’re going to solve the problem. Right? I mean, we don’t know, and there are a lot of interesting experiments to do. I think for semantics, in the end, you need to apply learning that correlates the grammar, and sentences that are parsed using your grammar, with a non-linguistic domain.

Ben Goertzel: And that non-linguistic domain could be logic expressions gotten from somewhere like a parallel corpus of English and Lojban, which is a language speakable by people whose syntax is directly parseable into predicate logic. Or it could come from correlation of English sentences and images in a captioned image corpus, or movies with dialogue or closed captions associated. It could come from a robot that hears stuff in the context of the environment it’s trying to act in. But I mean, clearly, although much of the semantics that we humans use is very abstract, like if we’re talking about quantum mechanics or continental philosophy, the basis for the abstract semantics that we learn is earlier-stage semantics that comes from correlating linguistic productions with the sensory, non-linguistic environment. So I think pattern mining across linguistic productions and non-linguistic sensory environments or situations, this is how you have to get to semantics. And then of course pragmatics is the same way, but there you’re correlating linguistic productions with whole episodes, and then you’re looking at perception and goal-based processing across the episodes.

Jim Rutt: Yeah. Ben, let me jump off from that into something that just came across my desk recently, which was a paper by Jeff Clune. He had three pillars of how to go after Artificial General Intelligence. The first two I thought were not that new, but the third I thought was quite interesting, and perhaps relevant to this language understanding problem, which is to develop technologies for generating learning environments. Could we, for instance, somehow, miracle occurs, be able to develop language learning environments that had embedded within them pragmatics and semantics, and were supported by natural language syntax, and were able to generate cases for these language learning systems to use multilevel learning the way humans seem to? As you point out, neither classic unsupervised learning nor classic computational linguistics provides that multilevel problem solving framework, which is what we know that children use. Does this seem to you a fruitful approach, perhaps, for language learning?

Ben Goertzel: I mean, it’s interesting. I would say at a crude level we were generating learning environments, I don’t know, seven or eight years ago, and I’m sure other people were well before me, right? Because anyone who has tried to do AGI-oriented or transfer-learning-oriented learning in video game worlds, usually they don’t have a lot of game world authors on their teams. So they end up writing scripts to generate stuff in the game worlds from some probability distribution. So I think that practice is not remotely new, but the paper that you’re referencing sort of highlighted it as a key portion of an AGI approach, with greater oomph and rigor than had been done before, which is potentially valuable. I mean, I don’t think we yet have the models that we would need.

Ben Goertzel: We don’t yet have the generative models we would need to generate an interesting enough diversity of learning environments. But it would certainly be interesting to do. I mean, the world that we live in has a weird and diverse collection of patterns in it, and we’re just not there in game worlds yet. Like tonight, you know, I baked cookies with my wife for my 16-month-old baby. We made one batch of peanut butter cookies and one batch with another type of nut butter. One of them was much thicker than the other before we cooked it, and that resulted in a different consistency after we cooked it. Right? And I mean, the baby is learning from that.

Ben Goertzel: And there are endless things like that in our everyday lives. Attempts to do that in a game world now would still be too rigid and limited in their variety to provide the abundance of examples that humans have in their minds, from which we draw analogies to do the abductive inference that drives our transfer learning.

Ben Goertzel: So as a concept, it’s great. I’m not yet convinced that we have the models needed to generate diverse enough environments to really do AGI now. Of course, generating game worlds with diverse environments is cool for prototyping and experimenting, and that’s been done by a lot of people over a long period of time. But if we broaden the idea, I mean, I think the idea of trying to use learning to create more training examples to train your AI, and then using the smarter AI to create even more training examples, that’s interesting. I mean, I’ve been thinking about that in the mathematical theorem-proving context, where one issue you have there is the number of theorems that human mathematicians have created in the history of math.

Ben Goertzel: It’s not that large when you compare it to the requirements of deep learning algorithms and other machine learning algorithms. Like, we don’t have as many theorems as we do sentences or images, right? But yet the complexity of theorems is probably greater than that of sentences or images. On the other hand, just generating a bunch of random true theorems is not useful, because they’re boring. Right? So the idea there is, if you had a rule to identify what’s an interesting theorem, then you could generate trillions of interesting theorem candidates and have your theorem prover try to prove them. It would fail a lot, but maybe you could prove billions of interesting theorems, and then the proofs of those billions of interesting theorems are training data for machine learning to learn how to do proofs, right?

Ben Goertzel: So there’s a similarity there to the game world idea, right? You’re wanting to use what you have to generate new training data, use that new training data to train AIs, which then get smarter and can be used to generate even better training data, and lather, rinse, repeat. So I think the idea could be interesting in game worlds, it can be interesting in theorem proving, and in a lot of places. It’s a question of whether you have enough, like, requisite variety in your available data to seed the initial stage of automated environment generation.
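
The bootstrapping loop Goertzel describes, for theorem proving or game worlds alike, can be sketched schematically. Every function and object below is a hypothetical placeholder, not a real prover or learner API.

```python
# A schematic sketch of the bootstrapping loop described above; generator,
# interestingness, prover and learner are all hypothetical placeholders.

def bootstrap(generator, interestingness, prover, learner, rounds=3):
    """Generate candidate theorems, keep the interesting ones, attempt
    proofs, and feed successful proofs back as training data."""
    for _ in range(rounds):
        candidates = generator.sample(n=1000)
        interesting = [t for t in candidates if interestingness(t) > 0.8]
        proofs = [p for t in interesting
                  if (p := prover.try_prove(t)) is not None]  # fails a lot
        learner.train(proofs)       # proofs become training data...
        prover.update(learner)      # ...making the prover stronger...
        generator.update(learner)   # ...and the generated data richer
```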

Jim Rutt: Interesting. All right, I think it’s an area well worth people studying. I’ve been passing Clune’s paper around trying to get people interested in it; sounds like you agree there is at least some merit to it. Let’s move on to another closely related item, which is that, at some level, dealing with robotics is a pain in the ass, right? Stuff breaks, they fall over, they have to deal with holes in the ground, et cetera. However, when working with robotics, you get an amazingly detailed simulation for free, i.e., the universe. Could you talk a little bit about the intersection between AGI and robotics, and what you think robotics brings or doesn’t bring, the mixed bag of benefits and pains?

Ben Goertzel: I would have to add that programming is also a pain in the ass. So I mean, we’re just choosing, pick your poison, right? All of these things are annoying once you really dig into them. I mean, writing a simple Python script is one thing, but building OpenCog is another; that’s not simple or entertaining, either. There are a lot of kinds of pain in the ass, and it is true the closer you get to the real world, maybe the more painful things become. But I mean, dealing with huge image corpuses and video corpuses, there’s a lot of butt-hurt there too, right? Anyone who’s going to work on AGI in any aspect has got to have a very high pain tolerance, or otherwise do something that gives more immediate gratification and pays more money quicker. Right?

Ben Goertzel: But with robotics, I mean, the main issue isn’t that tinkering with the robots is a pain. The issue is that the current robots don’t so easily do what you want them to do for AGI. Like, what you want is a robot that can move around in everyday human worlds freely, whose battery runs for at least like a day on end before you have to recharge it, and that gathers multisensory input even if there’s glare or dim light or background noise or something.

Ben Goertzel: I mean, basically that’s doing what a little kid does. It rambles around your house, it looks and hears and sees, and grabs stuff and manipulates it and picks it up and puts it down. Yeah, you may not let him run up and down the stairs, and he may not be able to pick up everything as heavy as you can, but he’s manipulating and perceiving and moving a lot. Right? And right now we don’t quite have that toddler robot yet. If you put all the pieces together, if you put together Boston Dynamics’ movement, with Hanson Robotics’ emotion expression and human perception, with the arm-eye-hand coordination of iCub and the fingertip sensitivity of SynTouch, if you put together all the robot bits and pieces that exist now in various robotics projects around the world, you would have that artificial toddler. But no one seems to be funded to do that.

Ben Goertzel: So we have all the pieces now, but no one has put together that toddler robot for AGI yet. And to do so would be expensive, right? To do it cheaply is also probably possible, but then it involves a lot of hard engineering R&D, which is going to take years. To do it expensively, it’s going to cost you at least hundreds of thousands of dollars for the first toddler. And so that’s what’s really a pain: you’re dealing with one or another robot that’s very limited in what it can do, either in manipulation or mobility or perception or battery life or something. Right? And that’s a bit ironic, because in theory the robot should be much more flexible than an AI in a virtual world, but in practice, given the limitations of each robot, it’s so limited what you can do with each robot.

Ben Goertzel: But I think this is like years, not decades, away from being resolved. It is mostly a matter of integrating components that exist now and bringing down the costs through scaling up manufacturing. So, I think within three to eight years, let’s say, robots are going to be a really useful tool for AGI, although that’s only very weakly true right now. Of course, in principle for AGI you don’t need a robot, right? I mean, in principle you could get a superhuman super-mind living on the Internet; there are loads of sensors and actuators attached there. But if we want a human-like mind, for it to have a roughly human-like body is going to be pretty valuable, because so many aspects of the human mind…

Ben Goertzel: I mean, they’re attuned to, I think, a human-like body. That’s what we are. And it’s relatively simple things, like how we do eye-hand coordination by combining movement and perception and then lower-level cognition, but also, you know, little things like the narrative self, and the illusion of free will, and the relation between self and other. All of these things have to do with being an agent controlling a body that feels pain and has an inside and outside and so on. And I mean, you know, when you eat and then you shit, or you put something in your mouth, squish it and spit it out, all these things teach you something about the relationship between yourself and the world, and the persistence of objects.

Ben Goertzel: You’d get all those lessons in a different way if you’re a distributed super-mind whose mind and body are the Internet, but how much you’re going to understand human values, culture and psychology that way, I don’t know. Right? So there’s one question, which is how important embodiment is for getting a really smart AGI. The other question is how important embodiment is for getting an AGI that understands what humans are, and empathizes with humans to a significant degree.

Jim Rutt: That’s interesting. I had not thought of that particular additional benefit from doing the embodied cognition. Even if it wasn’t necessary to get the AGI, it might make an AGI that’s much more relatable to us, and us to them. Let’s move on to the next subject I wanted to talk about, which is your SingularityNet project. This is really interesting: the idea of building a very broad network that anybody can build AGI and AI components on, where they can work with each other, et cetera. Why don’t you give us a good detailed description of SingularityNet, what its status is currently, and what you see going forward?

Ben Goertzel: Absolutely. So SingularityNet manifests some ideas I’ve had for a long time, and that I was prototyping in the WebMind project in the late nineties, because WebMind was a distributed network of sort of autonomous agents cooperating together to manifest some emergent intelligence. OpenCog took some of those ideas and tried to make them more orderly and structured, where you have a very carefully chosen set of AI algorithms acting on a common knowledge store, much more carefully designed and configured to work closely and tightly together than anything in WebMind was. SingularityNet goes the other direction. It’s like, let’s take these different AI agents, take a whole population of AI agents doing AI in their own way, and they don’t necessarily need to know how each other are working internally. They can interact via APIs. If they want to share state, they can do that too, but it’s optional.

Ben Goertzel: And then this society of minds can have a payment system, where the AIs in that society pay each other for work, or get paid by external agents for work. So this society of minds is an economy of minds, and the economic aspect can be used to do assignment of credit and assessment of value within the network, which is an important aspect of cognition as well. And then this economy of minds is both another approach to getting emergent AI, where you have a more loosely coupled network of AI agents than you have in OpenCog, but which can still manifest some emergent cognitive dynamics and collective behaviors, and also, potentially, a viable commercial ecosystem, wherein anyone could put an AI agent into this network, and the AI agent can charge external software processes for services, and the AIs in the network can charge each other for services.

Ben Goertzel: And then this becomes a marketplace. And the infrastructure that we’ve chosen to implement this was based on the idea that this agent system is a self-organizing system without a central controller. Just as, I mean, the brain doesn’t have a central controller, right? I mean, the brain has some aspects that are in more of a controlling role than others, but it’s a massively parallel system where each part of the brain is getting energy on its own and metabolizing and in some sense guiding its own interactions.

Ben Goertzel: So we use blockchain as part of the plumbing for the SingularityNet infrastructure, to enable a bunch of AI agents to interact in a way that has no central controller, but is heterogeneously controlled by the AI agents in the network, sort of in a participatory, democratic-ish way. They interact and guide the overall network. And I think this is both a good way to architect a self-organizing agent system moving toward AGI, and a very interesting way to make a practical marketplace for AIs, which potentially can be more heterogeneous, in what kinds of applications it serves and in who gets to profit from the AI that’s done, than the current mainstream AI ecosystem, which is highly centered on a few large corporations.
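
Here is a minimal sketch of the “economy of minds” idea, with hypothetical names rather than the actual SingularityNet API: agents register priced services, discover each other through a registry, and pay per call from token balances, so that payment doubles as credit assignment.

```python
# A minimal sketch of an "economy of minds" (names hypothetical, not the
# actual SingularityNet API): agents expose priced services and pay each
# other per call from token balances.

class Marketplace:
    def __init__(self):
        self.services = {}   # service name -> (provider agent, price, fn)
        self.balances = {}   # agent name -> token balance

    def register(self, agent, service, price, fn):
        self.services[service] = (agent, price, fn)
        self.balances.setdefault(agent, 0.0)

    def call(self, caller, service, payload):
        provider, price, fn = self.services[service]
        if self.balances.get(caller, 0.0) < price:
            raise RuntimeError("insufficient tokens")
        self.balances[caller] -= price      # payment doubles as
        self.balances[provider] += price    # credit assignment
        return fn(payload)

market = Marketplace()
market.balances["summarizer"] = 10.0
market.register("translator", "translate_fr_en", 1.0,
                lambda text: f"<english for: {text}>")
# One AI agent buys another's service as a step in its own pipeline.
print(market.call("summarizer", "translate_fr_en", "bonjour"))
```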

Jim Rutt: Yeah. The vision is obviously just astounding, on two fronts. One, this idea that anybody can play, pieces can be added, there can be explorations by AIs to find out what other pieces of AI might be synergistic to their own capabilities. And two, the last point you made, that this is an ecosystem not controlled by the big boys, is something I found very attractive when I first learned about SingularityNet. Because we are in a world where it seems that the deep progress in AI is being more and more concentrated into fewer and fewer hands, and SingularityNet looks like one of the relatively few counter-trends to that concentration. Could you talk a little bit more about that, and why you think it might be important?

Ben Goertzel: Yeah. I think the importance of a decentralized and open approach to AI is multifold, right? It’s important right now because it can enable AI to do more good in the world than is going to be possible with the centralized, hegemonic approach you see in the industry now. It’s also important if you think that the current network of Narrow AIs is going to evolve into tomorrow’s emergent AGI. Because, I mean, if you look at what the current network of centralized AIs is doing, I like to summarize it as selling, spying, killing and gambling, right? I mean, it’s advertising, it’s making surveillance systems, it’s controlling weapons systems, and it’s doing finance: financial prediction, risk management for large banks and so on. And I mean, selling, spying, killing and gambling are part of the human condition. We’re not going to stamp them out.

Ben Goertzel: I don’t know if we want to stamp them out. But I don’t want them to be as large a percentage of the AI ecosystem as they are right now. I’d rather see more like educating, curing diseases, doing science, helping old people, creating art, both because these are cool things to have around more on the planet now, and because, if our Narrow AIs are going to turn into AGIs, I’d rather have the AGIs doing these compassionate and aesthetically creative things than serving the goals of current corporations and large governments. So I think, yeah, there’s a near-term importance, and then there’s a little more speculative importance in terms of the potential of current Narrow AIs giving rise to tomorrow’s AGIs. And if you look at how the current hegemonic situation has come about, it’s not really because of bad guys, it’s because of self-organizing socioeconomic dynamics.

Ben Goertzel: I mean, the Google founders are pretty good people. They’re trying to make money like all business people are, and they’re also genuinely trying to build AI that will do good for the world. But the way they’re doing it is the way a public company has to do it: they’re trying to do good primarily as a side effect of generating shareholder value. And the way they’re generating shareholder value is kind of the way they have to do it. They’re taking whatever worked and they’re just doubling down on it. I mean, they’re trying a lot of side projects too, but in the end, ads is what’s making them money, and it’s their fiduciary duty to be using their best AI for that as much as makes sense. And yeah, if you look in China, I mean, [inaudible 00:56:04] is doing some things that I don’t agree with.

Ben Goertzel: On the other hand, the Chinese government has lifted way more people out of poverty over the last 30 years than every other part of the world combined. And I think the Chinese government is genuinely trying to advance the total utility of the Chinese population. They see that AI is very helpful for this, and they think building a surveillance state is in the interest of the utilitarian total good of the Chinese population. We’re not looking at psychopaths who are trying to develop AI to build the Terminator or something. We’re looking at people developing AI in the interest of generally beneficially minded goals. But there are these network effects involved, and that’s both good and bad, right? I mean, the network effects allow a smart AI to accumulate more and more resources, which can partly go toward building smarter and smarter AI.

Ben Goertzel: But the network effects also mean that whoever succeeds first with Narrow AIs, in particular tasks that lend themselves to rapid deployment of Narrow AI for useful ends, whoever succeeds first can then accumulate a hell of a lot of money, a hell of a lot of data to train the AIs, a hell of a lot of processing power to feed the AIs. And there’s a powerful network effect there. And this is what Google has been benefiting from; the Chinese government is benefiting from this, Facebook is, Microsoft is. IBM, for example, isn’t so much, because they’ve been deploying AI in markets in ways that don’t lead to these tremendous network effects, and that’s caused them to fall behind in spite of having a lot of smart AI people and some AI technologies that are very good at certain things. I mean, one of the cool things with SingularityNet is that intrinsically, at least, the logic of SingularityNet’s economic model has a lot of network effect to it.

Ben Goertzel: It’s a double-sided platform, like Uber or Airbnb or Amazon’s neural net model marketplace. It’s double-sided in the sense that one side is the supply of AI that developers have put into the network, and the other side is the demand, which is product developers who want to use AIs in the network to fuel their products aimed at end users, right? And so that’s something that can be very powerful if you can get it off the ground. If you get enough demand, you can get more supply; you get the supply, you can get more demand; and if you have this going enough, it can take off really, really fast.

Ben Goertzel: So there’s the possibility to use the same network-effect trick that Google, Facebook, and then Tencent, Baidu, Alibaba and so on have used, to get this decentralized network of AIs off the ground. And, you know, it doesn’t have to totally displace the big tech companies to have a huge impact. We can look at Linux as an example. I mean, Linux didn’t obsolete Apple and Microsoft; they’re still making money, trillion-dollar companies. But on the other hand, Linux is the number one mobile operating system, it’s dominant in the server market, it’s big, and the open source ethos behind it has been huge. It’s been hugely valuable for the developing world, for the maker and robotics community. Right?

Ben Goertzel: So similarly, if we can get a decentralized AI network to have a major role in the way that AI is utilized in the world, I mean, then that’s going to be tremendous, even if it doesn’t obsolete the hegemony. Growing that network with the double-sided network effect is certainly a practical challenge, right? And that’s what we’re working on with the SingularityNet platform now.

Jim Rutt: Well, one thing I’d like to pick up on in the SingularityNet model: you described how you have a potential network effect around a two-sided market. Those are indeed very, very powerful business models. Could you describe the go-to-market strategy for SingularityNet, and how you expect to achieve a critical mass somewhere to get the two-sided market to start the cycle?

Ben Goertzel: There’s a couple of different strategies we’re applying here, but I would say, if it’s one side of the market first, we’re essentially focusing on building the demand side of the market first, and then creating the supply internally initially, because we have a bunch of AI developers on the SingularityNet Foundation team, who are largely people I worked with on previous projects. And so we’re able to build some AI ourselves to put in the network. Then we’re doing AI developer workshops, and we’re putting some requests for AI on the platform and paying people with tokens to put new AI into the network. So we’re not totally neglecting the supply side, but the demand side is getting more of a big push, in a couple of forms. One of those forms is we’re spinning off a for-profit company called Singularity Studio, which is a whole separate enterprise aimed at building commercial products, aimed at the enterprise, on top of the SingularityNet platform.

Ben Goertzel: And initially we're building a product suite aimed at fintech, the finance industry. So, say, a risk management product, which would be subscribed to by a large financial firm to solve a problem in, say, hedging or credit risk assessment or something. But on the back end, that product is getting its AI by making calls into the SingularityNET platform. So if this business succeeds in creating and selling successful products, initially to financial services and after that to other vertical markets, internet of things, health tech and so on, then for each product that's sold, a fraction of the licensing fees each year will be converted from [fiat 01:02:25] into AGI tokens and used to drive the AGI token based market in SingularityNET. So that's one thing, Singularity Studio, which we're currently pulling together, and we have some enterprise customers already for that.
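
(A hypothetical sketch of the revenue flow just described, with invented class and method names; this is not the actual SingularityNET SDK, and all figures are made up for illustration.)

```python
# Sketch of the fiat-to-token flow Ben describes: an enterprise product
# earns fiat licensing fees, and a fixed fraction is converted into
# utility tokens that pay for AI calls on a decentralized marketplace.
# Names and numbers here are hypothetical, not any real API.

from dataclasses import dataclass

@dataclass
class LicensingRevenue:
    annual_fee_usd: float
    token_conversion_rate: float  # fraction of fees earmarked for tokens

    def tokens_to_buy(self, usd_per_token: float) -> float:
        """Utility tokens purchased with the earmarked fiat revenue."""
        earmarked_usd = self.annual_fee_usd * self.token_conversion_rate
        return earmarked_usd / usd_per_token

# Example: a risk-management product licensed at $500k/year, with 20% of
# fees converted to tokens, at an assumed price of $0.05 per token.
revenue = LicensingRevenue(annual_fee_usd=500_000, token_conversion_rate=0.20)
print(f"{revenue.tokens_to_buy(usd_per_token=0.05):,.0f} tokens per year")
```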

Ben Goertzel: I mean, a couple are publicly announced, but there's a bunch more big ones that we're working with that are going to be announced in the next few months. We've also launched an accelerator called SingularityNET XLab, where we're recruiting projects from the community that will build software products aimed at certain niches, again using AI in SingularityNET. And this again is focused on the demand side of the network, because they're building products on top of SingularityNET. But they also help with supply, because in most cases the AI that we've put in there is being augmented by AI that these teams put in there as well. And then, for the projects in the XLab incubator, we can give them some tokens, so they get AI services on the network, we can help them with publicity using our PR engine, and we can help them with AI expertise where we can. So, these are both efforts to kind of brute force the demand side.

Ben Goertzel: We're also talking to some investors about scaling up this accelerator/incubator effort, where we'd be able to put some more money into seeding projects leveraging the platform, using investment money raised specifically for that purpose. So I think, if we get Singularity Studio building enterprise products, and then get some large enterprises using SingularityNET through those products, and then we get some smaller entrepreneurial products from the XLab accelerator, then, you know, we're getting some serious utilization of the token. And if through these efforts we get serious utilization of the AGI token and the AGI token based ecosystem, all of a sudden we have a utility token with actual utility, which is almost unheard of in the blockchain space, right? And this is going to attract a lot more attention in the blockchain community and hopefully in the AI community.

Ben Goertzel: And I think this will then incent more people to put their AI into the platform. We'll get some usage from the developer workshops we're running. But in the end, I mean, having demand for the platform will give some extra appeal to developers, because then it's not only that they're putting their AI out there on a really cool, decentralized, democratically governed platform; there's an actual market of customers who will pay them to use their AI. The financial incentive alone won't be enough for quite a while, not until there's a really huge market, but a financial incentive that's meaningful, combined with the coolness value and the political appeal of what we're doing, I think that will let us juice up the supply side a lot.

Jim Rutt: Well, this is truly a visionary project, which could change the world. There's a lot of unknowns: will it work? How will you reach a critical mass? Does the token aspect get in the way, or does it add value? All unknown, but it's certainly a very interesting experiment worth trying. Let's move on to another topic, which is thinking about AI and AGI in the context of complex self-organizing systems: emergence, chaos, strange attractors. I know those are things you've thought a lot about, and I've thought at least a little about, so take it away, Ben.

Ben Goertzel: Yeah, I think my focus on complex nonlinear dynamics and emergence is something that sets my line of thinking apart from the mainstream of the AI world now. Like, I remember when Jeff Hawkins' book On Intelligence came out, 15 years ago or something? It seems like forever. But that book laid out a vision of AI in terms of hierarchical neural nets combining probabilistic reasoning with back propagation. It was different from the deep neural nets that are most successful now, but conceptually it was along very, very similar lines. And when I reviewed that book, well, I said: this is interesting, it's part of the story, but he's leaving out two key things.

Ben Goertzel: He's leaving out evolutionary learning, which we know is there in the brain, in the sense of Edelman's neural Darwinism, modeling the brain as an evolving system. And he's leaving out nonlinear dynamics and emergence, strange attractors, which we know are key in how the brain synchronizes and coordinates its parts. Like, these hierarchical networks doing learning and probabilistic pattern recognition are there, but they're only part of the story. If you don't have evolution and autopoiesis, like self-reconstruction and self-construction based on nonlinear dynamical attractors, then you're really missing out on a lot.
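
(For readers who want "strange attractor" made concrete, here is the standard textbook example, the Lorenz system, integrated with simple Euler steps. It is only an illustration of bounded-but-never-repeating nonlinear dynamics, not a model of any brain process.)

```python
# The classic Lorenz attractor: three coupled nonlinear differential
# equations whose trajectories stay on a bounded butterfly-shaped set
# while never exactly repeating, and where nearby starting points
# diverge exponentially. Integrated here with crude Euler steps.

def lorenz_trajectory(steps=10_000, dt=0.01,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = 1.0, 1.0, 1.0
    points = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        points.append((x, y, z))
    return points

trajectory = lorenz_trajectory()
print(trajectory[-1])  # stays bounded, yet never settles into a cycle
```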

Ben Goertzel: So in the early '90s, when I was first thinking about how the mind works, I wrote a book called The Evolving Mind, which argued there are sort of two key forces underlying intelligent systems. I mean, in philosophy terms they come down to being and becoming, so we're back to Hegel, right? But in dynamical terms you can think of them as evolution and autopoiesis, autopoiesis being a term introduced by Maturana and Varela meaning, like, self-creation, self-building, which is one particular kind of complex nonlinear dynamics that you see in biology, where a system is involved with rebuilding and reconstructing itself anew all the time. And, you know, evolution creates the new from the old, and autopoiesis keeps an organism or a system intact in a changing and mutating environment, and nonlinear dynamics are key to both of these.
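
(A minimal caricature of the autopoiesis idea, purely illustrative: parts continually fail and are rebuilt by the rest of the network, so the organization persists even as every component turns over.)

```python
# Toy autopoiesis: components randomly decay each step, and the surviving
# network regenerates them, provided enough of it remains to do the work.
# The organization persists while its material parts are fully replaced.

import random

def run_autopoietic_toy(steps=100, size=12, decay_prob=0.2):
    alive = [True] * size
    rebuilt = 0
    for _ in range(steps):
        for i in range(size):
            if alive[i] and random.random() < decay_prob:
                alive[i] = False                 # a component decays
        for i in range(size):
            if not alive[i] and sum(alive) > size // 2:
                alive[i] = True                  # the network rebuilds it
                rebuilt += 1
    return sum(alive), rebuilt

random.seed(1)
survivors, rebuilt = run_autopoietic_toy()
print(f"{survivors} components alive at the end; {rebuilt} rebuilt along the way")
```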

Ben Goertzel: You could also think of this as evolution and ecology, which go hand in hand in natural systems, and even in the body, where you have Edelman's neural Darwinism explaining brain dynamics as evolution, and clonal selection theory explaining the immune system as an evolving system, along with the cell assembly theory, which Hebb came up with to explain part of how the brain works.

Ben Goertzel: I mean, that's basically an ecological theory, and the cell assembly is sort of an autopoietic system that keeps constructing and rebuilding itself. And in the immune system, Jerne's network theory is the corresponding autopoietic, ecological aspect. I mean, if you leave out ecology/autopoiesis and evolution, and you have only hierarchical pattern recognition, you're leaving out a whole lot of what makes the human mind interesting. Like, creativity is evolution. And, you know, the self and the will, and the conscious focus of attention, which is binding together different parts of the mind into a perceived practical unity: this is all about strange attractors emerging in the brain, building autopoietic systems of activity patterns.

Ben Goertzel: So if you're not doing more than deep learning systems, well, you're leaving out a fuck of a lot of what makes the human mind interesting, granted that you're also capturing some interesting parts. And not much study or thought is going into these aspects of the mind right now. And this is partly because of the business models of the large companies and governments that are driving AI development. Because creativity and ecological self-reconstruction, these aren't that directly tied to easily measurable metrics that you can use to drive supervised or reinforcement learning. So a company that's driven by its KPIs and whatnot is naturally driven toward AI algorithms that are focused on maximizing some simply formulated reward function. That's a little harder if you're talking about evolution creating new things, or an ecological system whose goal is to maintain and grow itself.
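
(A toy contrast between the two optimization styles being distinguished here: hill-climbing a fixed, simply formulated reward function versus novelty search, which keeps an archive of behaviors and rewards doing something unlike anything seen before. All numbers and thresholds are arbitrary.)

```python
# Fixed-reward optimization converges on one answer; novelty search has
# no fixed target and just keeps accumulating qualitatively new behavior,
# a crude stand-in for open-ended, evolution-like dynamics.

import random

def fixed_reward(x: float) -> float:
    return -(x - 3.0) ** 2           # one simple objective: get close to 3

def hill_climb(steps=200):
    x = 0.0
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)
        if fixed_reward(candidate) > fixed_reward(x):
            x = candidate
    return x

def novelty_search(steps=200):
    archive = [0.0]                  # behaviors seen so far
    for _ in range(steps):
        candidate = random.choice(archive) + random.uniform(-0.5, 0.5)
        novelty = min(abs(candidate - b) for b in archive)
        if novelty > 0.3:            # keep only genuinely new behavior
            archive.append(candidate)
    return archive

random.seed(0)
print("hill climbing converges near:", round(hill_climb(), 2))
print("novelty search keeps spreading:",
      sorted(round(b, 1) for b in novelty_search()))
```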

Jim Rutt: And so the money-on-money return [inaudible 01:10:50].

Ben Goertzel: Yeah, yeah, yeah, exactly. So it ties into, like, the neo-Darwinist theory of evolution as nature red in tooth and claw. I mean, that theory thinks evolution is about maximizing some simplistically defined fitness function. When you look at evolution and ecology tied together in a more nonlinear and dynamical view, you're getting beyond the simplistically defined fitness function, and it's a little more fuzzy to grapple with. This goes back to, you know, something I observed a long time ago: I pushed forward the idea of AGI, but actually it's a bad term from a fundamental philosophical view. Like, no one knows what intelligence means. Humans aren't really all that general.

Ben Goertzel: And I don't know what "artificial" really means, because it's all part of nature. So I don't like the "artificial", the "general" or the "intelligence". Really, what I'm after is self-organizing complex systems, and after that, ethical systems, and that's just more of a mouthful, and it's kind of fuzzier to grapple with and conceptualize. This ties in with what my friend Weaver, David Weinbaum, called "open-ended intelligence" in his PhD thesis of that title from the Global Brain Institute in Brussels. I mean, fundamentally, if an AGI emerges out of the Internet, out of the conglomeration of a bunch of Narrow AI systems, this may be an open-ended intelligence, which stretches our notion of what intelligence is. It's an incredibly complex self-organizing adaptive system, which in many ways has more generality than humans do, but isn't about maximizing reward functions in any simplistic way, although maybe you can model some aspects of what it does locally in terms of certain reward functions.

Jim Rutt: Yeah, that's a very interesting question: what is that thing? Is it a mind? It may not be conscious in the sense we think of ourselves as being conscious, but it may have some attributes that are in some ways analogous to consciousness, or part of a larger space of efficacy in the world, which aren't necessarily congruent with consciousness, but nonetheless give it the ability to respond to feedback and find its way in the world.

Ben Goertzel: If you take a view like Chalmers takes in his philosophy of consciousness, what he says is that everything in the universe has some spark of proto-consciousness. And I guess he calls it proto-consciousness to try to make it more palatable to people with different perspectives on consciousness. So if everything in the universe has some spark of proto-consciousness in some form, then you would view human-like or mammal-like consciousness as an emergence from these sparks of proto-consciousness, when they're associated with a certain type of information processing system, like a system that achieves goals in the context of controlling some localized embodied organism, right? So if you look at it that way, there are sparks of proto-consciousness involved in a global, distributed, complex self-organizing dynamical system across the Internet, which doesn't have a central controller, even as much as our hypothalamus or basal ganglia are our central controllers.

Ben Goertzel: In some broader sense, maybe there is a variety of consciousness associated with it. Like, there's some complex self-organizing pattern of proto-conscious sparks there, but it may not have the unity that characterizes human-like consciousness. And in a way, why is the unity of our consciousness there? Because our body is a unified system that has to control itself without dying, right? And so that gives us some unified goals, like food, sex and survival. If you have a different kind of complex self-organizing mind'ish system, which can replace its parts at will, and where the different parts are pursuing various overlapping goals in a very dynamic way, I mean, whatever self-organizing conglomeration of proto-conscious sparks exists there may be much less unified than what's associated with the human mind-brain. And then, is that better? Is that worse?

Ben Goertzel: I mean, it comes down to your value system. And yeah, the fact that our value system values this unified variety of consciousness'ish stuff more than this more diffuse, but perhaps more complex, variety of consciousness'ish being: is that any more profound than saying we think human women look more beautiful than gorillas, right? Like, we like what we have.

Jim Rutt: Yeah, exactly. Yeah. I will say, I'm finding it useful to not commingle the concept of intelligence so strongly into this broader picture of mind. You know, the more I dig into consciousness, the more I appreciate John Searle. You know, I used to say Searle's damn Chinese room was misleading stuff. But the more I thought about it, the more I appreciate it. Searle argues that our consciousness is our consciousness. Human consciousness is very specific to the way we are organized: in terms of our memories that are coupled on various time frames, and we have, perhaps, something like Bernard Baars' global workspace, and how those things work produces something called consciousness. And as Searle likes to say, consciousness is a lot like digestion. You can't point to some part of the body and say "that's the digestion"; digestion is a process that includes our teeth, our throat, our stomach, our colon, our liver, et cetera.

Jim Rutt: And consciousness can be thought of the same way. And so, Searle has been laughed at by some AI researchers, when they interpreted him as saying that machine consciousness is not possible. But as I've gotten to understand Searle's argument better, I think his argument is that a machine could have something that is analogous to human consciousness, but it can't be the same, because the details of its design won't be the same. In the same way, in the food industry and the pharmaceutical industry, we have digesters, which are analogous to what our digestive system does, but they do it in very, very different ways with respect to the details.

Jim Rutt: So I'm starting to use consciousness in a narrower frame, as something that is more like what humans have. And so I'm willing to buy that we'll, at some point, have things that are sort of like our consciousness in a machine, but that's only a small part of the much bigger mind space. And the things that you were talking about, like what it is to be a loosely coupled set of intelligences running across the internet, solving many problems both in serial and in parallel, might be nothing at all like our consciousness. We have a tendency to anthropomorphize from our consciousness to what these larger "mind types", as I'll call them, might be. And I'm increasingly finding the attempt to expand the concept of consciousness unhelpful, actually.

Ben Goertzel: Yeah, I guess Weaver's concept of open-ended intelligence was created to deal with intelligence in these broader types of dynamical systems that are beyond anything human-like or mammal-like. Whether you want to extend the word intelligence to cover these different types of dynamical systems, or have a different word for it, that's the kind of issue I never worried about much. As a mathematician originally, my attitude is: you can define your word to mean whatever you want and then use it that way. So whether that really is intelligence, or is some other broader thing, is a terminology choice.

Ben Goertzel: Regarding consciousness, I think you and I don't entirely see it the same way, but vaguely close. I mean, when I wrote a paper on consciousness, I called it "Characterizing Human-Like Consciousness", I think, because I considered that a sort of separate problem. I mean, one type of problem is understanding the nature of consciousness in general, which is interesting, right? Another type of problem is the consciousness of human-like systems, where then you have to define what you mean by a human-like system. But I was thinking: if you have a system whose goal is to control a mobile body in a much larger and more complex environment, where the goals involve coordinating and constraining actions over various timescales, I think these requirements sort of drive you to a narrow subset of the scope of possible cognitive architectures, and you could say they lead you to some of the aspects of human-like consciousness.

Ben Goertzel: I think where we might differ is, I mean, I think Searle was arguing something stronger than what you're saying. I think Searle was arguing, in essence, that qualia are there in the human being, and the qualia wouldn't be there in the digital computer. And maybe, since you don't take qualia as seriously, you're ignoring that aspect of Searle's argument. Whereas I do take qualia seriously, but I think that qualia are universal; I'm a panpsychist. So the way I look at it is that there are elementary qualia associated with every percept, concept, entity, [inaudible 01:20:51] Whiteheadian process, whatever. These organize themselves into collective system-level qualia differently depending on the kind of system. So there's a human-like species or variety of consciousness, a variety of experience, that's associated with systems that are organized like a human.

Ben Goertzel: And it's not clear that at the level of description Searle was using, of what words come in and out of the box with the guy in the Chinese room, you can distinguish what the state of consciousness of the guy inside the room is. So, I mean, his point there was sort of: if you have some giant lookup table or deep neural net or whatever inside a box that's acting like a human, it doesn't have to have the same conscious state as a human. And that seems true to me, but that just means that at that level of observation, the external data doesn't let you reverse engineer the internal state, right? I mean, it's amusing that in quantum mechanics, if you studied all the vibrations of the elementary particles in the universe, you could deduce the state of mind of the guy inside the box, right?

Ben Goertzel: But from the verbal productions alone, I guess you couldn't. I don't know if that proves what Searle wanted it to prove, really. It just proves that the functional description at a crude level doesn't imply the internal state. But if the state of consciousness and the qualia are associated with the internal state and dynamics, and not just the crude functional description, then, so what?

Jim Rutt: I'm not sure. But I think, at the end of the day… As you know, we do disagree about this a little bit, and to some degree I do expect that when we fully understand consciousness, we're going to say, "So what? It's less amazing than we thought." However, I do agree with Searle in the sense that one would not expect consciousness from the Chinese room, because consciousness is the experience of processing information in a specific architecture. And I strongly suspect it has to do with the couplings of our memories on various time frames.

Ben Goertzel: Yeah. Human-like consciousness is that, but then you’re bypassing the question of whether a tree or rock or the Internet has some form of qualia, some form of awareness or experience.

Jim Rutt: Yeah, those… A tree, maybe; I don't know about a rock. But a tree could have information flows, with storage and a sense of continuity, perhaps.

Ben Goertzel: You can't distinguish physics from information. Every physical system can be equivalently viewed as doing information processing. So, I mean, that distinction sort of isn't there at the math level.

Jim Rutt: Yeah, though it's a much simpler kind of information processing, one could argue. Well, anyway, this has been a truly interesting conversation. So, thank you very much, Ben, and I look forward to seeing the work that you do going forward.