The following is a rough transcript which has not been revised by The Jim Rutt Show or Ben Goertzel. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today’s guest is Ben Goertzel. Ben Goertzel is one of the world’s leading authorities on artificial general intelligence, also known as AGI. Indeed, Ben is the one who coined the phrase, or at least so it is said on Wikipedia and other places. He’s also the instigator of the OpenCog project, an open-source AGI software project, and SingularityNET, a decentralized network for developing and deploying AI services. Welcome back, Ben.
Ben: Yeah, good to be back, Jim. There’s always a lot to talk about. You know, the world is changing so fast now in the AI space, in particular, that in between each of our conversations, there’s been a ridiculous amount of new developments.
Jim: Yeah, I sometimes compare the rate of AI and its opening up of new domains to what was happening with PCs in the late 70s and early 80s, except that it’s happening 10 times faster. You know, in some of the work I’m doing, the recent OpenAI developer day announcements are like, whoa, our heads are all still spinning, and a number of people I talk to in allied spaces, even really smart people, are like, what the fuck? I try to think through, what does this mean? And the only thing you can count on is that these upheavals will continue to happen, probably of larger magnitude and even faster as we move forward. Oh my God.
Ben: This is what our friend Ray Kurzweil has projected for some time, right? We’re getting exponential acceleration, and as Ray and others have predicted for a long time, it’s occurring differentially in some areas of pursuit. The automatic checkout at the supermarket still kind of sucks, but advanced AI technology in other areas is accelerating super, super fast. And we don’t need the acceleration to be super fast everywhere to get to a singularity, right? You just need it to be super fast in a few judicious places, which does look like what’s happening. And this gets into the main topic here. Like, I’m not bullish on LLMs in a narrow sense becoming AGIs. On the other hand, I am bullish that what we see happening in the LLM space is useful for getting to AGI, and it’s an indication that the unfolding of AGI is accelerating, right? And there are details to parse out there that are confusing to the average human being, perhaps less confusing to the average listener of the Jim Rutt Show.
Jim: I would say not. I know I’m right in the middle of it up to my elbows and I’m fucking confused.
Ben: So I’m not confused. Maybe I’m just deluded, but I don’t feel confused.
Jim: That’s true. The man is always firm in his opinion. That’s right. Last time we chatted, you were predicting, was it 50-50 chance of AGI within five years? Is that still your number?
Ben: Maybe 60-40 instead of 50-50.
Jim: I guess I’d better spend all my money because it’s not gonna be worth anything. Why the fuck not, right? Well, fortunately, I still have some of my AGI tokens.
Ben: The problem is knowing exactly what date to spend it by though, right? There’s a certain variance around the mean. If you spend it all three years too soon, then you’ll have a boring three years before the singularity comes.
Jim: Anyway, today we are gonna dig into a really deep paper Ben wrote. It’s called Generative AI versus AGI: The Cognitive Strengths and Weaknesses of Modern LLMs. It’s very deep and cutting edge. Unfortunately, it’s ancient history now. It was written on September 20th, 2023, and today is November 14th, so it’s almost two months old, holy fuck. But nonetheless, it does hit on some very, very interesting topics and I really look forward to going through it. As always, of course, we’ll have a link to that paper, which is an open-access paper on arXiv, on the episode page at JimRuttShow.com. So start off with your basic thesis, your high-level view, more or less what you put in the abstract.
Ben: My basic thesis is large language models in their current form, sort of transformer nets trained to predict the next token in a sequence and then tuned to do specific functions on top of that. My basic thesis is this kind of system is not ever gonna lead to a full on human level AGI. However, these systems can do many amazing, useful functions. Maybe they can pass the Turing test even, although they haven’t yet literally speaking. And I think they can also be valuable components of systems that can achieve AGI. And this leads to an interesting, albeit somewhat fine-grained distinction. Because if you look at what someone like OpenAI is doing, I mean, what they’re doing, they’re pursuing an AGI architecture that has probably a number of different LLMs in a sort of mixture of experts approach.
And then other non-LLM systems that it’s calling on to do specific things, like a DALL-E or Wolfram Alpha, and presumably a bunch of other specialized components that aren’t publicly disclosed either. You have an integrative system, but there’s a bunch of LLMs that are sort of the integration hub for everything. And in the AGI approach that I’m more interested in, which is the OpenCog Hyperon approach, you have this Atomspace, sort of a weighted, labeled metagraph, which is the hub for everything. But then you could have an LLM on the periphery, like feeding into it and interacting with it.
And you can have DALL-E on the periphery feeding into it and interacting with it. So the question isn’t really whether an LLM only, strictly speaking, is gonna get you AGI. The question is whether it’s a better approach to have a hybrid system with a bunch of LLMs as the hub, or have a hybrid system with something else as a hub, and maybe some LLMs in a supporting role, right? Which is a finer-grained distinction than many people want to make. Like even in the AGI field, people want to simplify it to LLMs are the holy answer, or LLMs are an off-ramp on the path to AGI. But when you dig in, it’s not necessarily about that. It’s about, in a hybrid system, which sort of representation and learning mechanism has the more central role.
Jim: Yeah, and of course, a lot of the really interesting things in the LLM space even are LLMs plus, you know, LLMs plus an exterior vector semantic database, or LLMs plus agentware.
Jim: In fact, even in our little project that I’m working on, Script Helper, we do a lot of what I call sky hooking, where we determine what the limitations of the LLMs are. But oddly enough, you can get the LLM itself to solve its limitations if you’re clever, right? But you need some application logic on the outside, sending prompts, getting responses, using those to create additional prompts. That’s the thing that’s made my eyebrows go up a bunch: instead of trying to figure out how to programmatically solve every problem, stop first and think, can I get the LLM itself to solve its own problem? And the answer is often yes.
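The outer-loop pattern Jim describes can be sketched roughly like this. Everything here is illustrative: `call_llm` is a hypothetical stand-in for a real model API, mocked with canned responses so only the control flow is real.

```python
# Sketch of the "sky hooking" loop described above: use the LLM to patch
# its own limitations by feeding its output back in as a new prompt.
# `call_llm` is a hypothetical stand-in for a real API call; it is mocked
# here with canned responses so the loop is runnable end to end.

def call_llm(prompt):
    # Mocked model: pretend the first draft is flawed and the revision fixes it.
    if prompt.startswith("Draft:"):
        return "a scene with flat dialogue"
    if prompt.startswith("Critique:"):
        return "the dialogue is flat; give each character a distinct voice"
    return "a scene with distinct voices per character"

def sky_hook(task, rounds=1):
    """Draft, ask the model to critique its own draft, then revise using
    that critique as part of a new prompt."""
    draft = call_llm(f"Draft: {task}")
    for _ in range(rounds):
        critique = call_llm(f"Critique: {draft}")
        draft = call_llm(f"Revise: {draft}\nUsing critique: {critique}")
    return draft

result = sky_hook("write a two-person scene")
print(result)
```

The design point is that the application logic, not the model, owns the control flow: the model is called several times with prompts constructed from its own prior outputs.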
Ben: That’s quite interesting. So I guess that my argument in the paper we’re talking about is LLM plus plus is not gonna lead to human-level AGI. But I think something plus LLM may get to AGI faster than something ignoring LLMs entirely, right? So this seems a sufficiently fine-grained point to annoy most people in the AGI field, because everything is polarized these days, right? So what I see in the AGI field is mostly LLM boosters versus LLM detractors. And it’s a surprisingly small subset who are like, well, these are really, really useful tools, and they may be part of the picture, but let’s not fool ourselves about their basic limitations.
And it’s hard, as you say, because the limitations at the fine-grained level change every few weeks, right? With different tools and different releases. And even in GPT-4, I mean, in OpenAI’s stuff, the limitations change without documentation or announcement. Like you can try the same query one week, it works well. Two weeks later, it works like shit. Months later, it works super great, right? So I mean, things are changing and unfolding in complex, not fully documented ways. But still, I think there are fundamentals you can look at that are gonna be hard to overcome with tweaking and incremental improvement within the basic LLM architecture.
Jim: Yeah, indeed, we’ll get to those. So one thing I use as a little benchmark on each model that we explore is this hallucination problem, right? This is famously that if you ask it, what is the capital of France? It almost certainly answers Paris. But if you ask it something relatively obscure, it tends to make shit up, right? It’ll make up fake scientific papers. It’ll do all kinds of crazy shit. I actually discovered an interesting test because I’m a person right on the boundary between obscure and not obscure.
So I am actually a good test candidate. If I ask it for my bio, it used to be about 70% hallucination. Really crazy stuff, wrong college, wrong town I grew up in, wrong companies I worked for. They’ve gotten a lot better. So I’ve now used another probe, which is: who are the 10 most famous guests on the Jim Rutt Show podcast? Turns out they all know about the Jim Rutt Show podcast by now, but they vary massively in their tendency to hallucinate on that particular question. Bard, for instance, gets zero out of 10 correct, which is amazing. What the fuck is going on at Google that their AIs are so goddamn lame, at least the ones that they put out to the public?
Ben: That’s another subtle point. I’ve had fun with hallucinations, as some examples I gave in the paper. Like we made a music search, asked it to come up with, you know, atonal country music. It recommends like the Schoenberg Shuffle, right? I wanna do the Schoenberg Shuffle now, right?
Jim: I did try some of those probes, by the way, and some of them were kind of interesting. But anyway, my point is that this hallucination problem with the better models has gotten quite a bit better.
Ben: I have a couple of responses on that. I mean, I think my prediction is hallucination in LLMs can be pretty much fully solved without any big advance in the architecture. I mean, there’s already papers you’ve probably seen showing that by putting the right probes into the network, you can get an indicator of whether it’s hallucinating or not. Like in a sense, I mean, it doesn’t know that it’s hallucinating, but you can like read its brain waves, so to speak, and you can see that it’s hallucinating. So I think it’ll be possible to filter out hallucinations with a probe in a way that bypasses reflective self-understanding and so forth. I mean, that’s expensive, I guess, to do those kinds of probes now, but there’s a lot of incentive to figure it out. So I think it’s entirely possible most hallucinations can be ruled out by not so cheap tricks to do with just different signatures of the activation pattern inside the network when it’s hallucinating versus when it’s not.
And then that would be great in terms of practical applications for transformers, but it doesn’t really help in terms of getting to AGI, right? Because the way we avoid hallucinations as humans is by a reality discrimination function in our brain. Like it’s by reflective self-modeling and understanding. Because I make up weird shit in my mind all the time, but because I’m not fully schizophrenic, I can distinguish the weird shit I made up from stuff that came out of extrapolation from the consensus reality, right? So what you’d like, to go toward AGI, is for the system to avoid hallucinations by developing a robust reality discrimination function. But it may be we can avoid hallucinations just by probing the network to see that the signature activation pattern is different when it hallucinates than when it doesn’t. And in that case, that’s great for applications, not great for AGI, right?
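The activation-probe idea Ben describes might look roughly like this in code. This is a sketch under loud assumptions: a real probe is trained on hidden-layer activations captured from the LLM while it answers, whereas here synthetic vectors from two slightly shifted distributions stand in for the two activation regimes, and the "probe" is a from-scratch logistic-regression classifier.

```python
import math
import random

# Sketch of an activation-probe hallucination detector: learn a linear
# classifier ("probe") that separates the internal signature of
# hallucinating states from faithful ones. Synthetic vectors stand in
# for real hidden-layer activations here.

random.seed(0)
DIM = 8

def make_acts(mean, n):
    """n synthetic 'activation' vectors with the given per-dimension mean."""
    return [[random.gauss(mean, 1.0) for _ in range(DIM)] for _ in range(n)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_probe(acts, labels, lr=0.5, steps=300):
    """Fit w, b so sigmoid(w . x + b) predicts the hallucination label."""
    w, b = [0.0] * DIM, 0.0
    n = len(acts)
    for _ in range(steps):
        gw, gb = [0.0] * DIM, 0.0
        for x, y in zip(acts, labels):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            for i in range(DIM):
                gw[i] += err * x[i]
            gb += err
        w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
        b -= lr * gb / n
    return w, b

faithful = make_acts(0.0, 100)       # label 0: grounded answers
hallucinated = make_acts(1.0, 100)   # label 1: confabulated answers
acts = faithful + hallucinated
labels = [0.0] * 100 + [1.0] * 100

w, b = train_probe(acts, labels)
preds = [1.0 if sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5 else 0.0
         for x in acts]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(f"probe accuracy: {accuracy:.2f}")
```

The point of the sketch is Ben's distinction: the probe reads the "brain waves" from outside, so a filter like this can flag hallucinations without the model having any reflective self-understanding of its own.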
Jim: Yeah, I gotta tell you, you should go talk to a judge or a prosecutor. People who actually deal with real-life eyewitness testimony and memory retrieval will tell you that, in good faith, most random people who see shit on the street don’t remember shit, make up, confabulate at a huge rate.
Ben: Yeah, but if you put multi-channel EEG on those people, could you tell a different neural signature from when they’re constructing the eyewitness evidence based on actually remembering? I don’t know the answer to that question, but my best guess for what it’s worth is you could tell the difference by probing their brain, even if they don’t consciously know the difference, right?
Jim: Well, by the way, I should throw this out here. I started actually writing this paper last January, January 2023, and I got hijacked onto this other project. Someone needs to write this paper. I found a way to reduce many hallucinations by brute force, which is: it turns out that the correct answers have different entropy than the incorrect answers. So you just run it 50 times, and then you calculate the ones that…
Ben: We did this too, and I also planned to write a paper on it, then finally someone else wrote a paper on it. So what I did is…
Jim: Oh, someone so finally did write it, okay.
Ben: I mean, what I did is you took the query, then you asked the LLM to paraphrase the query two or three different ways.
Jim: Exactly, that’s what I call a para query, because it’s amazingly good at paraphrasing the query without repetition.
Ben: On the whole, if the answer is not bullshit, it will give that same answer many times to the paraphrases. On the whole, if the answer is bullshit, it will give different answers to the paraphrases. It was probably around the same time period. I made that up also, and I also didn’t find time to write the paper, because to write the paper, you have to gather all this statistical evidence, which is very boring.
So then someone else, about three or four months ago, I finally read a paper where someone did that, and I’m like, well, if I was an academic, I would have got some master’s student to do the experiments. But I mean, it’s all part of the accelerating pace. So like with the accelerating pace now, it’s not really worth the time to statistically document a technique that maybe obsolete six months later, right?
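A minimal sketch of the paraphrase-consistency trick Jim and Ben describe, assuming the answers to several paraphrases have already been collected from a model; the sample answer lists below are invented for illustration, and the threshold is an arbitrary choice.

```python
import math
from collections import Counter

# Paraphrase-consistency hallucination check: ask the model the same
# question several ways; if the answers disagree a lot (high entropy),
# treat the answer as likely confabulated.

def answer_entropy(answers):
    """Shannon entropy (bits) of the distribution of distinct answers."""
    counts = Counter(answers)
    total = len(answers)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_hallucinated(answers, threshold=1.0):
    """Flag the query if answers to its paraphrases disagree too much."""
    return answer_entropy(answers) > threshold

# A well-grounded fact: the model answers consistently across paraphrases.
grounded = ["Paris", "Paris", "Paris", "Paris", "Paris"]
# An obscure query: the model confabulates a different answer each time.
confabulated = ["1987", "2003", "1999", "1987", "2011"]

print(looks_hallucinated(grounded))      # consistent, so not flagged
print(looks_hallucinated(confabulated))  # scattered, so flagged
```

In practice the expensive part is the extra model calls, one per paraphrase, which is exactly why Jim frames it as brute force.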
It doesn’t have that much lasting value. But the thing you mentioned along the way, of why Google hasn’t replicated GPT-4, is quite interesting actually, because Google, they invented transformer neural nets, right? And they have a huge number of super brilliant people in Google with significantly more neural net expertise and knowledge than the people at OpenAI, if you just compare mass of people to mass of people. And I don’t think there’s any original genius new algorithm and representation in GPT-4 either.
I quite doubt it. It’s hard to imagine that wouldn’t have leaked out by this point in time, as the evident mixture-of-experts architecture has leaked out and so forth. So it just seems that to get something like this to work well requires a whole bunch of little things to be tuned right based on one experiment after the other. And Google just hasn’t been intensively and iteratively tuning these things to work well over the same period of time that OpenAI has, because they’ve been doing different things, right? And they may not have the same test suite and metrics for tuning the different little aspects of it that all have to work together, right?
And it’s interesting when you think about evolution, like evolution tunes all sorts of little things in the brain in different combinations over a period of time and that improves functionality, right?
Jim: Bard is worse by far than GPT-3.5.
Ben: Right, but Claude is much better than GPT-4 in many things.
Jim: In our evaluation, it’s better than GPT-4 in one thing. It’s better at writing dialogue.
Ben: It’s a lot better on many science, mathematics, and medicine domains, which is interesting to me. But I think if I recall correctly, Anthropic was founded by some ex-OpenAI people, right?
Jim: Or ex-Google people, I believe, I’m not sure. But yeah, they got billions of dollars too. They got piles of them.
Ben: But Google has more billions of dollars, right?
Jim: That is true. It’s amazing to me how lame they are.
Ben: It’s interesting to me that Claude is at the level I would have thought Bard should be at, right? Like Claude is… It’s not as good as GPT-4 at everything, but for some things, it actually does better. Like we’re looking at converting English into structured higher-order predicate logic using LLMs. And I can fine-tune Llama-2 to do that. But with few-shot learning, Claude will do it better than GPT-4, at least in the various ways I’ve been trying. And again, rigorously documenting this isn’t worth it, because they’re all moving targets anyway. But yet, it is interesting that Claude managed to crack it in some ways that Google or Facebook didn’t.
I mean, I don’t think that means much historically. Like in a couple of years, all these companies will be superseding each other in different ways. And, you know, Google has DeepMind. I fully believe by putting AlphaZero together with transformers, you’re gonna do better at planning and strategic thinking and some things that GPT-4 isn’t doing, right? It’s still interesting that it’s taking as long as it has, even though in historical time it’s all little blips.
Jim: I did look it up. Anthropic, you were right, was founded by OpenAI guys, but their money has come from Amazon and Google.
Ben: I know that, but my thinking is that there may be some tricks to testing and tuning that were sort of lore within OpenAI. And that lore passed its way into Anthropic, which is what has allowed them to accelerate well, right?
Jim: Anthropic is one of the ones we have in our Script Helper program.
Ben: This is part of what makes me comfortable open-sourcing OpenCog Hyperon, which is my guess for how we can actually make AGI. It’s open-source code; we’re putting it out there. I am quite confident we can get to AGI faster with that code than Google can, even if they decide to start working on it, even though they have geniuses. Just because there is all this weird little lore to how to make a certain category of system work well. It’s not stuff that no one can ever replicate, but it just takes longer than you would think to figure out all the weird little ins and outs of making a certain class of complex AI system function.
Jim: Well, we’re probing extraordinarily high dimensional design space, right? And that’s the problem. And the other thing is, even though these things are fast, they’re not that fast. It takes a while to run each probe.
Ben: That’s really the issue, is that to learn what we’ve learned through running experiments of various random sorts over many decades, you can learn that faster now than we did, but you still have to run a lot of experiments. And then your human brain has to wrap itself around the results of those experiments. And these processes are not entirely parallelizable, although they’re somewhat parallelizable, right? It is now genuinely an AGI race, I think. I mean, you have big companies with a shitload of money who are pushing specifically toward AGI. They’re still putting most resources onto non-AGI things, even within projects where they’re talking about AGI, but more than zero resources are on AGI within these big companies, piggybacking on teams that are doing more immediate, applied, narrower, dataset-bound stuff.
Jim: Oh, yeah, talking about the players out there. The other one you don’t hear much about is Character.ai. They’re actually number two in revenue after ChatGPT.
Ben: Well, we heard about them recently because Google put a bunch of money into them.
Jim: And it’s two Google Brain guys who founded that. And so I suspect at some point something really interesting will come out of that team. We’ll see.
Ben: You mean like being bought back by Google, right?
Jim: Yeah, that’s a possibility, right? Let’s now dig into our real topic here. Let’s lay out some of the signature attributes that you believe distinguish AGI from other forms of AI, which will then make more sense when we start talking about what are the weaknesses of LLMs.
Ben: You can look at this at a couple levels. And this is a big complex topic, which we’re going to skate over pretty loosely here. But you can look at the paper for details. I mean, I think if you look at artificial general intelligence in general, well, it’s very general. It’s not just about humans, right? So then there’s a variety of ways to conceptualize what is AGI and there’s no solid agreement on it. I don’t think this is a big conceptual problem. I mean, biology has no agreed way to conceptualize what is life. There’s a rough agreement.
It’s fine. So one way to look at AGI, which comes out of algorithmic information theory and statistical decision theory and so forth, is looking at AGI’s ability to achieve a huge variety of goals in a huge variety of environments. And if you believe our universe is computable, you can say to achieve a huge variety of computable goals in computable environments. And you can formalize this, as Marcus Hutter did, and Shane Legg, who was a co-founder of DeepMind, who worked for me for a few years early in his career.
I mean, he formalized this in his PhD thesis as sort of the weighted average, over all computable reward functions, of how well a reinforcement learning system can achieve that reward function. You have to decide how to weight the average. You can use what’s called the Solomonoff prior, which basically weights a reward function higher if it has a shorter representation in an assumed programming language on an assumed computer. So there’s a whole bunch of math toward this. Now, the issue is, of course, there’s a lot of wiggle room here, because how you weight the space of all reward functions depends on what assumed weighting measure you have.
So like if you’re weighting a reward function by how short its representation is in a given programming language, it matters which language you use, right? So there’s some wiggle room there, which can make a difference in real life. Then also there’s the issue that humans are complete retards according to a measure like this. Like we’re very bad at optimizing arbitrary reward functions in arbitrary environments. Like I’m very bad at running a maze in eight dimensions, let alone 88 billion and five dimensions, right?
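For reference, the Legg-Hutter measure Ben is paraphrasing is usually written as a Solomonoff-weighted sum over computable environments, where $K(\mu)$ is the length of the shortest program computing environment $\mu$ on an assumed reference machine, and $V^{\pi}_{\mu}$ is the expected cumulative reward agent $\pi$ achieves in $\mu$:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}
```

The wiggle room Ben mentions is exactly the choice of reference machine: $K(\mu)$ is only invariant up to an additive constant, so different machines reorder which environments count most.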
Jim: And yet Americans still put shit on their credit cards at 20%, proving they don’t understand exponentials.
Ben: Yeah, people are not good at general intelligence in this sort of mathematical sense. And there’s also the argument that optimizing reward functions is a perverse and peculiar thing for a self-organizing complex system like a human being to do, right? Like we do adopt goals and try to pursue them, but we may wake up the next day and say fuck that, and pursue a totally different goal, or wander around aimlessly, not pursuing any goal in particular.
So there are other philosophical ways to look at general intelligence, like Weaver’s theory of open-ended intelligence, which just says intelligence is about complex self-organizing systems individuating, like maintaining their existence and boundaries, and self-transforming, finding ways to grow beyond what they did before. And in pursuing individuation and self-transformation, sometimes pursuing a goal is the thing to do. It is useful to be able to do that, but it’s not necessarily the be-all and end-all. At that level, there are a number of ways to think about general intelligence.
I’m not going to go through all of them now. Then we talk about human-level general intelligence, or human-like general intelligence, and all of a sudden we’re way more specialized, right? Because we are biological systems that evolved to do particular things in particular classes of environments. I mean, we evolved approximately to keep ourselves alive and reproduce on the surface of the earth. And I mean, evolution being self-organization as well as differential reproduction, we’re not that strictly focused on that. We can just do a lot of other wild things also, growing out of the self-organizing physical and chemical and biological processes underlying us, right? And so if you look at human-level general intelligence, I mean, you’re looking at what are people good at doing, right?
And then there are a bunch of ways to conceptualize that. There are IQ tests, but those are sort of shitty. I mean, an IQ test correlates with being a highly capable human, given the constraints of humanity. But it doesn’t mean that anything doing really well on an IQ test is going to be good at being a smart human. Then you have sort of more multifactorial ways of looking at intelligence, like Gardner’s theory of multiple intelligences, saying you should be good in musical intelligence, in literary intelligence, in physical intelligence, in existential intelligence, in logical intelligence. And in a way, that sort of way of looking at it comes closer to capturing what it is to be an intelligent human. But the bottom line is the field of psychology doesn’t give us a rigorous, data- and theory-grounded way to assess what it is to be an intelligent human.
It’s all in the end a bit hand-wavy, right? Of course, there’s the Turing test, which is like: can you imitate a human in conversation for a period of time, well enough to fool other people? There was never any reason to believe that was a good measure of general intelligence.
Because I mean, there’s suckers born every minute, and fooling people can be disturbingly easy, right? So I mean, you can see why that made a certain philosophical point for Alan Turing in the 1950s, which was basically functionalism. Like if you could do everything we think is intelligent, then we should consider you intelligent. But I mean, the Turing test was always a very crude way to encapsulate the notion of functionalism, that doing stuff should be enough, right?
Jim: Well, yeah, at the time, it was also obviously way beyond the capabilities of computers. So it seemed like an interesting stretch goal. But now that we’re knocking on the door of it, in ways that are clearly not AGI, I think nobody takes it too seriously.
Ben: I mean, you remember the Minsky Prize. So Hugh Loebner made the Loebner Prize, where he was going to give a bunch of money to anyone who could make a chatbot passing the Turing test, and it was sort of an annual freak show of stupid chatbots. And Marvin Minsky offered the Minsky Prize: he would give some amount of money to anyone who convinced Hugh Loebner to stop offering the Loebner Prize.
Jim: So let’s focus in on just a few things that you would say that so far, LLMs are not good at, right? But humans are.
Ben: So I had proposed the MIT student test, which was: if you could make a robot go through MIT and get a good grade going through all its classes, then we should consider that that system is a decent impersonation of human-level intelligence anyway, right? I mean, because you’re assuming it’s not cheating, and it’s operating according to the ways that students are supposed to operate, right?
Jim: So I would push back on that one, that being an MIT grad myself, that an awful lot of MIT classes are just kind of crunching on narrow domains. Tell them to go to St. John’s and take the great books test.
Ben: I was going to go there, actually. So I mean, I do think university classes have the idea that the professors are trying to come up with questions that are not in that precise form online. So there’s something to that. And you can look at, say, the Berklee School of Music test, which I thought that…
Jim: That would be better: become a jazz guitar player and get laid at the bar on Friday night.
Ben: But you could push back at all these things, and you could say, well, like, Berklee School of Music is sort of famous for people who can play all the scales very fast with great facility, but they’re not super creative, right?
So you can push back on all these things, and it sort of gets higher and higher end, right? Because in the end, like, to get a degree from MIT, you don’t have to be able to do frontier science; to get a degree from Berklee, you don’t have to be able to compose original stuff like Thelonious Monk or Jimi Hendrix, right? So if you push it to the ultimate limit, what I could say is you might be able to make an LLM that can pass MIT and pass Berklee School of Music, but it’s still not a full human-level intelligence.
And to make that rigorous becomes a bit funky. What you could say, though, is: I think if you had LLMs that were like the minimal MIT and Berklee Music School graduates, and the whole world were populated with these, I don’t think the world would ever advance very fast. You’re not going to get amazing new science theories. You’re not going to get incredible new genres of music invented.
You’re just going to keep recycling stuff, right? That is a difference, right? And this ties back to two clear limitations of LLMs as they exist now, one of which is the ability to do complex multi-step reasoning of the sort that you need to do to write an original science paper. The other is the ability to do original artistic creativity of the sort that you have to do to write a really good new song or to invent like a new musical style or something, right? So these limitations are genuinely there in LLMs right now. But one thing that we see though is we’re tap dancing around and adapting our criteria for human level general intelligence based on the strengths and weaknesses of what current AI systems can do.
And you can feel sort of lame about that, right? There’s the old saying that the definition of AI changes, because as soon as something can be done by computer systems, then we say, well, that wasn’t AI, that was just computer programming. The definition of AI is whatever hasn’t been done yet. So we could say we’re doing that same thing with LLMs. We’re saying the definition of AGI is whatever LLMs can’t do yet, right? So I mean, I do think we want to watch out for that.
On the other hand, it’s still true. Like if you took LLMs as they are today, or even something that could pass MIT and Berklee School of Music using current LLM-like methods, if you took a whole world population of 8 billion of those, you’re still not going to create Einstein or Thelonious Monk. You’re not going to ever invent quantum gravity or supercomputing. Like there’s a certain leaping beyond what is known that the current LLM architecture seems not to do, that some humans do sometimes, and that is incredibly critical for driving human culture forward, right?
Like this is why we’re getting to the singularity is precisely that some humans in some groups sometimes are able to leap well beyond what has been known and done before. And LLMs are not doing that either inferentially or in terms of creative arts. And the thing is, if you look at the architecture underlying LLMs, you can see why they’re not able to do this. So it’s not really just a matter of looking at what LLMs haven’t done yet and saying, ah, they haven’t done it yet.
So we’ll define whatever they haven’t done yet as AGI, right? I mean, if you look at what’s going on inside LLMs, I mean, they are recognizing primarily surface-level patterns in the data that’s fed into them. And they’re making this humongous, well-weighted, well-indexed library of surface-level patterns. And I don’t see evidence that they’re learning abstractions of the sort that people are doing, or of the sort that it seems like you would need to make a system that was smarter than people according to, say, Marcus Hutter’s definition of general intelligence in terms of optimizing arbitrary computable reward functions.
Like it seems like the inability of LLMs to do complex multi-step reasoning and fundamental aesthetic creativity, it seems like this is tied to their fundamentally derivative and imitative character, which you can see plainly in their architecture. And what makes it subtle, though, is sometimes they are abstracting, sometimes they really are generalizing, right? So it’s sometimes cool stuff is emerging.
So it’s not like the methodology of predicting the next token using a humongous network is totally hopeless at abstraction and generalization. In some cases, it’s unprecedentedly good at it. I just don’t think it’s good enough at it to become a human-level AGI, and it doesn’t seem like it will become good enough to become a human-level AGI. And I have experimented a lot with current LLMs, as well as knowing how they’re built, in order to come to this sort of conclusion.
Jim: Yeah, it is interesting. One thing you present in your paper: you hit the exact same word that we use in our script helper team, which is banality. The natural state of LLM output is banality, which is what you’d expect if you take the average of every utterance, essentially, which is a gross overstatement, but in some sense that’s where it’s centered, the expected case. However, and this is very interesting, through clever prompting you can move it around in its space, way outside of its center.
Ben: You can, but it doesn’t get as good as what a great creative human will come up with systematically.
Jim: My next point is not about the very greatest. We’ve gotten feedback from people in the industry who know, that our little thing, which just a small team’s been working on for a few months, is already able to create movie scripts that are about as good as a first draft created by a professional journeyman screenwriter.
Ben: I totally believe it. I mean, in music, I can use music models that my own team has fine-tuned. We can make a 12-bar blues guitar solo which is not so boring, right? It’s good.
Jim: On the other hand, we’re not even close to James Cameron or J.J. Abrams or something.
Ben: It’s not Jimi Hendrix or Joe Bonamassa, let alone Allan Holdsworth, who made up whole new chords and scales and figured out how to make them emotionally evocative or something. But still, it would certainly pass your blues guitar exam at the Berklee School of Music or something, right? It can play a damn good blues guitar solo, and it’s not that boring. It can break out of the scale now and then, and then go back just in time.
I mean, it’s pretty good. But look at science, on the other hand. When you’re doing science that’s surprising and interesting and isn’t obvious to the community, you are taking a series of leaps in a row, each of which is surprising to expert, knowledgeable people, right? So for a methodology which is, again, cleverly munging together what’s known in the field, it’s hard to come up with stuff that’s fundamentally surprising to people in the field, over and over, in a well-orchestrated way. And if you ask it to come up with a hypothesis in a certain regard that would be surprising to most experts in that area, it’s not good at that.
Jim: And of course, most practicing scientists don’t do that either, right? As we know, most practicing scientists are basically pushing on things that have a very high probability of being correct, because that’s how you get funded. They haven’t been proven, but the consensus view is, yeah, that’s probably right. There are some contrarians.
Ben: That’s true. But LLMs still cannot do science. They cannot write a science master’s thesis or something; they can’t do science as well as a master’s student, let alone an average practicing professional scientist.
Jim: I will agree with you that, at least in our experiments, we’ve ruled writing scientific papers out as beyond the current capability of LLMs.
Ben: Yeah, there is an interesting borderline area in routine science, where you’re sort of turning the crank. It wouldn’t surprise me at this point if GPT-6 could do an average science master’s thesis, because some of these things are predictable, right? It does seem like that’s implicit in the knowledge. So I tried an interesting experiment with GPT-4 in this regard, in that my friend Greg Meredith and I had cooked up some new math ideas. This is not that advanced, but it’s obscure.
And it didn’t seem to have been done, and we sort of know everyone working in this field. So there’s the McBride derivative, which lets you take the derivative of a data structure. There’s a weird result that the type of one-hole contexts of a data structure behaves like a derivative, which is too mathy to go into. But you have a weird way of taking the derivative of data structures.
So we made up a way to take the directional derivative of a data structure, right? I don’t think anyone has done this before. It’s not that useful for most people; it’s interesting to me for some AI things I want to do, like figuring out how to do gradient descent on data structures. But then you give this idea to GPT-4 and you ask it: OK, here’s my definition of the directional derivative of a data structure. Does the chain rule hold? Does the product rule hold for it? Flesh out the theory of calculus for this weird definition of the directional derivative of a data structure. It does it all right.
Like, define the second derivative of a data structure in the same spirit. It did it in a stupid way. I explained to it why it was stupid. Then it did it again and came up with a different way. Or: can you do gradient descent on data structures using this definition? It could work out how to do that. So you could see it can turn the crank on advanced math in a quite beautiful way, so that I could go from a few stupid simple paragraphs, which do contain original thinking by Greg and myself, to a whole paper, and then I could publish that if I had the time to submit it somewhere.
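For readers who want this concrete, the one-hole-context result Ben alludes to can be sketched in a few lines of Python. This is a toy illustration of the plain McBride derivative of the list type only; it is not the directional-derivative construction Goertzel and Meredith developed.

```python
# A minimal sketch of McBride's "derivative of a data structure" for lists:
# a one-hole context is the list with one element removed, remembering
# where the hole was, represented as a (prefix, suffix) pair.
def one_hole_contexts(xs):
    """All ways to poke a hole in xs."""
    return [(xs[:i], xs[i + 1:]) for i in range(len(xs))]

def plug(ctx, x):
    """Fill the hole with x, reconstructing a list of the original shape."""
    prefix, suffix = ctx
    return prefix + [x] + suffix

xs = ["a", "b", "c"]
ctxs = one_hole_contexts(xs)
# A list of length n has exactly n one-hole contexts, mirroring the way
# the derivative of x^n is n * x^(n-1).
assert len(ctxs) == len(xs)
assert plug(ctxs[1], "b") == xs
```

The zipper data structure familiar from functional programming is exactly this (context, element) pair viewed as a cursor into the list.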
Jim: Yeah, that’s similar to what we’re finding in screenwriting. You still need the original seed of an idea from the humans, and you need some curation at multiple places in the process. Right, right. So it has no deep judgment, as we know, right? It’s a feed-forward fucking network. Of course it has no judgment.
Ben: Mostly feed forward, not entirely, which is also interesting.
Jim: Not quite entirely. There’s little sideways stuff.
Ben: So in terms of science, the thing is, this was something you would give a master’s student or a super advanced undergraduate to do, to show that they could turn the crank and had mastered the formalism. But it’s not like a PhD thesis. It could be a master’s thesis, but not a world-class master’s thesis or something, right? It’s not coming up with the original idea. And it’s also not able to tell when it’s understood your original idea wrong.
You got to go back and correct it because you’re in a domain here where right or wrong doesn’t have an obvious metric. Is this an interesting definition of the second derivative based on my definition of the first derivative or is it a sort of dead end definition of the second derivative?
Jim: Yeah, at that point, you’re sort of talking almost in math aesthetics, right? In some sense.
Ben: Right. But the math aesthetics: every single mathematician would know which is the right way to do it or not. And the aesthetics that mathematicians have really will tell you which one is going to lead to interesting theorems down the road and which will lead to a bunch of stupid stuff. So the math aesthetics is honed based on what tends to yield interesting things. And there, what’s interesting is a mix of aesthetics and utility in various senses, including, in the end, practical senses for physics and computer science and so on. Right.
Jim: So yeah, of course, we still don’t know why that works, but it does seem to. Why should math aesthetics actually ground to the real world in some sense? Of course, it doesn’t always.
Ben: No, you don’t fully know that. But we have a decent sense of it, because the aesthetics is honed by our brains, which are conditioned on the same real world that we’re applying the math in, right?
Jim: That’s the argument is that they’re both tuned on the same reality. Right.
Ben: But this gets us back to the deeper point. I think if you look at these shortcomings in multi-step reasoning and non-banal creativity, which it’s clear LLMs are bad at now, you can see that the human brain does these things by forming abstractions. But it’s also true that the human brain doesn’t form arbitrary abstractions.
Like it’s not a brute force search algorithm for finding abstractions. The human brain is not just a next token predictor or reward optimizer. I mean, the human brain is an agent, which is operating a body in a world. I mean, it is at many times a goal achiever and reward optimizer. It’s also an open-ended intelligence, which is trying to survive and reproduce and self transform and transcend and self organize and all this good stuff.
What the human mind is doing is forming abstractions that are guided or inspired by its agentic nature. So we learned to abstract because abstracting was a better way to get shit done in our embodied life as early organisms. Like, we originally learned to abstract because, okay, mean animals attacked us in five different ways, and we’d survive better if we could predict new kinds of attacks mean animals were going to do that we hadn’t seen before.
And the organisms that could guess future possible attacks by mean animals would survive better than the organisms that can only anticipate the exact kind of attacks that they’ve been hit with before. And same thing with, you know, finding food. I mean, in a famine situation, early humans that could guess whether we’re going to find food in unprecedented ways would be more likely to find food than stupid ones.
that could only look in the exact sorts of places they’d found food before. So we honed our ability to abstract based on our embodied agency, and then based on our ability to communicate, right? Abstracting on how to get the girl or get the guy. Early agents that could make a guess as to the best way to seduce the next potential partner would do better than the ones that just recycled the same exact seduction mechanisms they’d seen before.
Jim: Yeah, if you use your father’s pickup lines, you’re not going to get laid very often.
Ben: No, no, especially not in my case, you know, with my dad, right?
Jim: Yeah, your dad, nice guy, but I don’t imagine he was Mr. smooth.
Ben: So we, we learned to abstract in order to be agents in environments. Right.
Jim: Yeah, I actually use a slightly different word than abstract, and I’m going to throw this back at you. I’ve come more and more to believe that human general intelligence, which is kind of a low-end general intelligence, is actually a whole lot about building heuristics, which are kind of like abstractions, but not quite the same thing.
Ben: There’s a duality between them, I would say. Any heuristic must be an abstraction, because it must work many, many times; it’s a compact summary of specific tactics. And for any abstraction to be useful, the abstraction has to help you do things in different situations. It has to be a generator of heuristics.
Jim: I disagree. I don’t think the abstraction has to be visible. The abstraction can be wrapped in the heuristic affordances. And that’s all you really need.
Ben: I mean, I think if you wrote it all out in proper category theory, you could show a duality between abstraction and heuristics. It’s like the duality between logics and programs in the Curry-Howard isomorphism or something. I think they’re just different ways of representing the same thing. But anyway, that gets very wonky, right?
Jim: Let’s don’t go down that road; that’s over my head anyway, above my pay grade. You know, I hate math, right? So now let’s take the next step. All right: for LLMs, banality is their rule in the creative domain, plus making multi-step leaps, like in the math example, guided by some strange sense of aesthetics which is holistic and not easily reducible to formulas. That’s what LLMs can’t do. Is that fair enough?
Ben: Those are two things they can’t do, and that I’m predicting the next few dot-versions won’t be able to do, right? Because there are a lot of other things that are much more basic that current LLMs can’t do well, of course.
Jim: Well, they can’t drive a car. They can’t drive a robot.
Ben: I mean, they can’t write a good letter even. They can write a form letter, which is fine, but they can’t write a compelling cover email or something.
Jim: Yeah, they can. The very first thing I used GPT for, and this was GPT-3.5, was to write a resignation letter from a board of advisors I was tired of being on. It was brilliant, better than I would have done. And it did it in one minute, with like two turns of prompts, and that was it. It was elegant. It was good. Not the work of a great letter writer, but it was good enough. Anyway, let’s use those two examples. What kind of architecture do you think will be the vector towards solving those two classes of problems: non-banality in creativity, and multi-step leaps in science or mathematics based on some spooky form of aesthetics or other holistic guidance?
Ben: A bunch of different paths could work, actually. One path that I think could work, which is not what I’m working on right now, is starting with transformers and adding a lot more recurrence into the picture. I mean, you had more recurrence in LSTMs and other sequence predictors before you got to transformers. Then they stripped out a bunch of recurrence to make it scale and be trainable on huge datasets.
And that has obviously had great impact. But on the other hand, recurrence is the most obvious way to get interesting abstractions into your neural network, right? So I think one direction to explore is introducing more and more recurrence into the network, like replacing the attention heads with something a little more sophisticated. RetNet, the retentive network from Microsoft, takes a little tiny step in that direction. I also think of Alex Ororbia’s work at RIT, where he’s looking at predictive coding based training of neural nets as an alternative to backpropagation.
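To make the predictive-coding idea concrete: each layer predicts the activity of the layer below, the hidden state settles by reducing local prediction errors, and the weights then update from those same local errors, with no global backprop pass. The sketch below is a generic toy illustration of that scheme, not Ororbia's actual algorithm; all sizes and learning rates are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear predictive-coding network. W1 predicts the hidden
# layer from a top-level cause; W2 predicts the observed layer from hidden.
W1 = rng.normal(scale=0.1, size=(4, 8))    # top cause -> hidden
W2 = rng.normal(scale=0.1, size=(8, 16))   # hidden -> observed
x = rng.normal(size=16) * 0.5              # a fixed "sensory" target
z = rng.normal(size=4)                     # a fixed top-level cause

def mse():
    # Error of the pure top-down (feedforward) prediction of x.
    return float(np.mean((x - (z @ W1) @ W2) ** 2))

err_before = mse()
for epoch in range(400):
    h = z @ W1                             # top-down guess for hidden state
    for _ in range(30):                    # inference: settle hidden activity
        e_obs = x - h @ W2                 # local error at the observed layer
        e_hid = h - z @ W1                 # local error at the hidden layer
        h += 0.05 * (e_obs @ W2.T - e_hid)
    # Learning: Hebbian-style updates driven purely by local errors.
    W2 += 0.01 * np.outer(h, e_obs)
    W1 += 0.01 * np.outer(z, e_hid)

assert mse() < err_before                  # reconstruction improves
```

Note that every weight update uses only quantities available at that layer, which is why, as discussed below, this family of methods is attractive for richly recurrent networks where backprop through time is awkward.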
I think that could also be interesting. So you could look at adding more recurrence, and maybe, if that makes backprop training too slow, look at replacing backprop with something a little more biologically inspired for the training, or you could use both. You could use predictive coding followed by a backprop round or something. So I think there are interesting directions purely within the neural net universe. And then if you look at, like, a Gemini-type architecture, where you take something like AlphaZero and then take a neural knowledge graph, like in differentiable neural computing, right?
Take a neural knowledge graph, take an AlphaZero, connect them with a transformer, but do that with a transformer that has some recurrence, and use something cleverer than backprop for part of the training, if you need to, to make learning converge. This seems like a meaningful direction, and a direction that Google and DeepMind are ideally suited to pursue, because they’ve got DNC, they’ve got AlphaZero, they invented transformers. They’ve got loads of experts on things like CMA-ES and floating-point evolutionary learning, which is another possible way to help with training networks with more complex architectures than backprop can reliably converge for. So there’s an interesting direction there, which is LLM-ish, and it still involves neural networks, but it’s not clear that LLMs will always be the hub. If OpenAI does it, they will be.
If DeepMind does it, maybe they won’t be, because the DeepMind guys understand LLMs, but they have a lot of other deep neural net and deep reinforcement learning architectures that they like also. I’d say Yoshua Bengio and his group in Quebec have similar ideas, like taking a neural net that does minimum description length learning, so it’s explicitly trying to learn abstractions, and coupling that together with the transformer, the transformer going back and forth with the minimum description length reinforcement learner.
So there’s a whole constellation of different approaches there, taking different deep neural net architectures with a heavier amount of recurrence in them and co-training them together. And of course, OpenAI could do that too. It seems not to interest them as much as it does Bengio’s group or DeepMind at this moment, which is understandable, because they’re leading the pack in things that are very transformer-centric. So it wouldn’t necessarily be rational for them to put a bunch of resources on something else, because they need to maintain their lead in transformers.
Jim: Yeah. You mentioned one of the things that has of course been of interest to mine for 23 years, which is the combination of radically recurrent architectures, not just sort of recurrent, because even LSTM was only minimally recurrent, it just had those little loops, but radically, utterly recurrent architectures. And here’s how you do it: the training method that works with any architecture is evolution, right? In fact, I actually started working on a little benchmark two years ago, and again, I stopped. It happens to me all the time.
I have to revisit this one: the most radically insane recurrent network you could possibly imagine, but with the most simple transformation functions, and then use evolution only to tune it. That should be explored by people right now, particularly with computation having gotten so goddamn cheap.
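The experiment Jim describes can be sketched along these lines: a small, fully recurrent tanh network with dead-simple units, tuned purely by a mutation-plus-elitism evolutionary loop, with no backprop through time at all. The task, network size, and hyperparameters below are arbitrary illustrative choices, not any published benchmark.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 8                                       # neurons, all-to-all recurrent
inp = np.eye(N)[0]                          # external input drives neuron 0

def run(W, seq):
    h = np.zeros(N)
    for x in seq:                           # fully recurrent update each step
        h = np.tanh(W @ h + inp * x)
    return h[-1]                            # neuron N-1 is read as the output

# Task: after seeing a sequence, output a squashed running mean of it,
# so the network has to evolve an integrator.
seqs = rng.normal(size=(20, 5))
targets = np.tanh(seqs.mean(axis=1))

def fitness(W):
    preds = np.array([run(W, s) for s in seqs])
    return -np.mean((preds - targets) ** 2)

pop = [rng.normal(scale=0.5, size=(N, N)) for _ in range(30)]
start_best = max(fitness(W) for W in pop)
for gen in range(150):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                        # elitism: keep the best unchanged
    pop = elite + [elite[rng.integers(10)] + rng.normal(scale=0.1, size=(N, N))
                   for _ in range(20)]      # mutate random elites

end_best = max(fitness(W) for W in pop)
assert end_best >= start_best               # elitism guarantees no regression
```

Because the fitness function treats the network as a black box, exactly the same loop works for any recurrence pattern, which is the point of the argument above.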
Ben: I agree. There’s way too little work going on in floating-point evolutionary algorithms for evolving neural nets of all different sorts. Even something like InfoGAN, which seemed to work interestingly up to a certain level, then stopped working for training generative models with emergent semantic latent variables. If you’re using InfoGAN to generate pictures of a face, it will automatically emerge latent variables for how smiley is the smile, how big is the nose, how wide are the eyes or something. When you tried that on more complex things, it didn’t converge.
But you never knew if that’s because the architecture is bad or just because backprop is bad. No one, as far as I’ve seen, tried, well, maybe they tried and failed and just didn’t publish it, but I haven’t even seen it on Stack Exchange or something. It seems no one tried that much to use non-backprop training algorithms to make InfoGAN-type models converge for more complex problems, or to make highly recurrent architectures work. I also think of Ororbia’s predictive coding based methods, which are localized, unlike backpropagation. This is also an interesting learning method that should work better for richly recurrent networks, due to its purely localized nature.
So yeah, there’s floating-point evolution, there’s predictive coding, there’s a variety of non-backprop methods that conceptually seem like they should be much more promising for recurrent networks and should be tried at much larger scale. And this is not being explored much. You know, if I were a billionaire, I would be funding a team to work on that, or if I were running a DeepMind or an OpenAI.
As it is, I’m doing all right, but I’m not a billionaire, right? And in my AI project, I’m happy that we have a great team of a couple dozen people working on AGI R&D within OpenCog Hyperon, which in some ways is the right size to have, right? But on the other hand, it’s not a big enough size to have other teams exploring other parts of the AI design space. So we’re all in on OpenCog Hyperon, with LLMs as a supporting actor, right?
Jim: Okay, let’s go there now. As you know, I’ve followed the OpenCog story, at least from the sidelines, since 2013, I think.
Ben: Yeah, and this deserves a whole other podcast rather than the ten minutes that I have now, but I do want to say a few things about this. I posted two big, fat, hundred-page papers on arXiv. One was on the limitations of LLMs and why I think they won’t become AGIs. The other is a sort of overview of where we’re at with Hyperon. Both of which are obsolete by now, because they’re two months old, right?
They’re both at roughly the same technical level also; you don’t need to be a hardcore math researcher to read them. So I would point people to that lengthy review paper summarizing where we are with Hyperon.
Jim: It’s called “OpenCog Hyperon: A Framework for AGI at the Human Level and Beyond.” I need to read that paper, and then maybe we’ll have you back and talk about that one.
Ben: I mean, you can approach Hyperon from a cognitive science and philosophy of mind perspective, or from a software perspective. For lack of time, I’m going to look at it from a software perspective right now, but the paper looks at it from multiple perspectives. From a software perspective, our key component is a weighted labeled metagraph. It’s like a graph, but it’s a hypergraph because links can span multiple nodes, and it’s a metagraph because links can point to links or to subgraphs, right? Each link in this can be typed, but the type itself can be represented as a little sub-metagraph, and links can also be weighted with different numerical weights. Then we try to represent many kinds of knowledge, like episodic knowledge, declarative knowledge, procedural knowledge, attentional knowledge, sensory knowledge, in this metagraph, and in other representations that are linked to it.
And we try to represent different cognitive operations, like reinforcement learning, procedure learning, logical reasoning, sensory pattern recognition. We represent all of these as little learning programs, where the programs themselves are represented within this metagraph, right? So part of this is a new programming language, which is called MeTTa, M-E-T-T-A, for Meta Type Talk. We have a new programming language where the programs themselves are little sub-metagraphs in this big graph. So MeTTa programs are programs for acting on, transforming, and rewriting chunks of the same metagraph in which the programs exist, right?
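The metagraph idea can be illustrated minimally in Python: links that span multiple targets, links that point at other links, types that are themselves atoms, and numerical weights. This is purely an illustrative toy, not the actual Hyperon Atomspace or MeTTa API.

```python
import itertools

# Toy weighted, typed metagraph. Every element is an "atom"; links are
# atoms whose targets may be nodes OR other links (the "meta" part).
class Atom:
    _ids = itertools.count()

    def __init__(self, name=None, targets=(), type_atom=None, weight=1.0):
        self.id = next(Atom._ids)
        self.name = name              # nodes have names, links usually don't
        self.targets = tuple(targets)
        self.type_atom = type_atom    # the type is itself an atom
        self.weight = weight

space = []
def add(**kw):
    a = Atom(**kw)
    space.append(a)
    return a

# Types are ordinary atoms, so link types can be sub-metagraphs themselves:
inherits = add(name="Inheritance")
cat, animal = add(name="cat"), add(name="animal")
l1 = add(targets=(cat, animal), type_atom=inherits, weight=0.95)

# A link pointing at another link, annotating the statement itself:
believed = add(name="BelievedBy")
jim = add(name="Jim")
l2 = add(targets=(l1, jim), type_atom=believed, weight=0.8)
assert l1 in l2.targets
```

Because links can target links, statements about statements (and programs about programs) live in the same store as the base-level knowledge, which is what makes the reflection Ben describes below natural.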
So in Hyperon, the crux of it is this big, potentially distributed, self-modifying, self-rewriting metagraph. But this is not the best data structure for every single thing, right? Like, if you have a database of images, sure, represent them as a quadtree or something. And an LLM is a very interesting way of indexing certain types of knowledge and processing certain types of queries. So is a convolutional neural net for visual or auditory data, right?
So by all means have some neural nets hanging off this weighted labeled metagraph knowledge store, right? The core philosophy underlying this, at the crudest level, is that a mind is a system for recognizing patterns in the world and in itself, including patterns in, you know, which actions and procedures have helped it get which things done in which situations. A key thing we have here that we don’t have in an LLM is that this system is very oriented toward reflection. It’s very oriented toward recognizing patterns in its own mind and its own processes and its own execution traces, and then representing those patterns inside itself. The LLM is focused on predicting the next token in a sequence, and in the end, the LLM itself is not a sequence.
It’s a program, right? So it’s not ideally suited for recognizing patterns in itself and recursing, whereas OpenCog is focused on recognizing patterns in a metagraph, and it is itself a metagraph, right? So it’s more reflection oriented. Then it lends itself very well to evolutionary program learning, like genetic programming and probabilistic variants thereof.
It lends itself very well to logical inference of various sorts. So various historical AI paradigms, besides reinforcement learning and deep learning per se, fit quite well into the OpenCog framework, along with potentially new AI paradigms that haven’t been pursued historically, which are also interesting, like self-organizing, mutually rewriting sets of rewrite rules, like what I’ve called Cogistry, right? So if I go back to what we discussed at the very beginning, what we’re exploring in OpenCog is this self-rewriting, self-modifying knowledge metagraph, which has a bunch of different AI programs of different sorts as subgraphs within the metagraph, which are then activated to help rewrite the metagraph. LLMs can then exist in the periphery of this, helping it do things. But this is different than the OpenAI-type approach, where the LLM is at the center and it then invokes other peripheral AIs to do things.
It’s also different than what I see as the probable DeepMind or Facebook type approach, where you have a constellation of different deep neural nets, of which the LLM is one, that are sort of co-trained. And I think these are all not totally stupid approaches to AGI. In a way, my approach is the least human-like of these approaches, but it’s the approach where, if you get to human level with it, your path to superhuman is way, way, way clearer, right?
Because if we get to human-level AGI with this abstract knowledge metagraph, which is based on recognizing patterns in itself and reprogramming itself, for better or worse, then your path from a human-level AI that can do this to a superhuman AI system is really short, because the whole system is based on rewriting its own code, right? It’s an interesting property. I also think this sort of architecture is very well suited to doing science, because science is about logical reasoning and about precise description of repeatable procedures. And these are things that this metagraph framework just does better than a transformer.
I mean, you have logic in there, you have programs in there. For creativity, you mentioned evolutionary learning. Evolutionary programming is very, very natural for this sort of metagraph. So I think, in a sense, evolutionary creativity comes more naturally here than to neural nets. Like, floating-point GA-based training of deep neural nets at the moment leverages the power of evolutionary learning worse than evolving MeTTa programs within Hyperon does.
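The evolutionary-programming point can be made concrete with a generic subtree-crossover sketch over program trees represented as nested tuples. This is vanilla genetic programming, not Hyperon's actual program-evolution machinery.

```python
import random

random.seed(0)

# Subtree crossover on explicit program trees: pick a random subtree in each
# parent and swap them. Trees are nested tuples like ("add", "x", "1").
def subtrees(t, path=()):
    """Yield (path, subtree) for every subtree of t."""
    yield path, t
    if isinstance(t, tuple):
        for i, child in enumerate(t[1:], 1):   # index 0 is the operator
            yield from subtrees(child, path + (i,))

def replace(t, path, new):
    """Return t with the subtree at path replaced by new."""
    if not path:
        return new
    i = path[0]
    return t[:i] + (replace(t[i], path[1:], new),) + t[i + 1:]

def crossover(a, b):
    pa, _ = random.choice(list(subtrees(a)))
    _, sb = random.choice(list(subtrees(b)))
    pb, _ = random.choice(list(subtrees(b)))
    _, sa = random.choice(list(subtrees(a)))
    return replace(a, pa, sb), replace(b, pb, sa)

parent1 = ("add", ("mul", "x", "x"), "1")      # x*x + 1
parent2 = ("mul", ("add", "x", "2"), "x")      # (x+2)*x
child1, child2 = crossover(parent1, parent2)
```

Because a swapped subtree is always a syntactically complete program fragment, crossover tends to preserve meaningful structure here, whereas splicing two halves of floating-point weight vectors usually does not, which is one way to read Ben's claim.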
Like, crossover works better for the programs within Hyperon than in neural nets. The main challenge we have, I believe, is scalability of infrastructure, and of course, overcoming this challenge just lets us hit the next level of challenges. But if my cognitive theory is right, if I’m right that this metagraph system, representing different kinds of memory and learning and reasoning in this self-modifying metagraph, is conceptually a great route to AGI, then basically our obstacle to validating or refuting my hypothesis is having a scalable enough system. Just like having a whole bunch of multi-GPU servers in a server farm was what was missing to validate or refute that LLMs would work and CNNs would work, as they’re now doing.
Right? So we need the right scalable plumbing to see if our OpenCog system will work as we think it will. That’s the main reason we decided to deprecate the old version of OpenCog that you worked with before, Jim, and make a new version from the ground up. There were issues with usability of the old version of OpenCog, but on the other hand, you could have worked around those usability issues without rewriting everything from the ground up. The fact that we had to deal with usability and scalability both, in a very different way than was being done in the old version of OpenCog, made it pretty clear we should rewrite from the ground up.
So now the scalability story is becoming interesting. We’re working with a compiler from MeTTa, which is the native language of OpenCog Hyperon, to a language called Rholang, which Greg Meredith built to make extremely efficient use of multiple CPU cores and hyperthreads rather than GPUs. Then we can translate Rholang into hypervector math, which is the math of very high-dimensional sparse bit vectors. Then we can take the hypervector math and put it on the APU, the Associative Processing Unit chip made by a company called GSI. So we’re working on a whole pipeline which goes from the metagraph, through a series of stages, to really efficient math manipulations on special hardware other than GPUs.
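The "hypervector math" layer refers to the general style of hyperdimensional computing: representing items as very high-dimensional bit vectors, binding with XOR, bundling with a majority vote, and comparing by Hamming similarity. The sketch below illustrates those generic operations only; it is not the actual MeTTa-to-APU pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000                                  # very high-dimensional bit vectors

def hv():
    """A fresh random hypervector; random pairs are ~50% similar."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):                             # XOR binding: reversible pairing
    return a ^ b

def bundle(*vs):                            # bitwise majority vote
    return (np.sum(vs, axis=0) * 2 > len(vs)).astype(np.uint8)

def sim(a, b):                              # 1.0 = identical, ~0.5 = unrelated
    return 1.0 - float(np.mean(a ^ b))

role_subj, role_verb, role_obj = hv(), hv(), hv()
cat, chases, mouse = hv(), hv(), hv()

# Encode "cat chases mouse" as ONE vector: bind each role to its filler,
# then bundle the three role-filler pairs together.
sentence = bundle(bind(role_subj, cat), bind(role_verb, chases),
                  bind(role_obj, mouse))

# Unbinding the subject role recovers something measurably close to 'cat':
probe = bind(sentence, role_subj)
assert sim(probe, cat) > sim(probe, mouse)
```

Every operation here is an elementwise pass over long bit vectors, which is the kind of workload an associative in-memory processor is built to do in bulk.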
So there’s a bunch to be done here, but at the high level there’s a lot of promise. If you look at what happened with deep neural nets, they were around a long time, since the 50s and 60s. I was teaching deep neural nets in the 90s, and it took three hours then to train, by recurrent backprop, a neural net with like 30 nodes. What happened is these long-existing algorithms hit hardware infrastructure that would let them finally do their thing, and we have at our fingertips, with GPT-4 and CNNs and DALL-E and blah blah, the results of this. So now our hope is that by having this scalable processing infrastructure for our new version of OpenCog, we suddenly let a whole bunch of other ancient historical AI paradigms do their thing at great scale, including logical reasoning and evolutionary programming, as well as giving a playground for experimenting with a bunch of other cool new AI algorithms, because we have a very flexible infrastructure. But this is the story told in this other hundred-page paper, which obviously could be stepped through bit by bit, right?
Jim: Yeah, let’s do that. I’ll read it. You know, I have enough background in OpenCog, probably, to make sense of it. Yeah, it’s not that hard. And as you know, I was a hawk on distributed architecture when everybody else at OpenCog said I was full of shit. I said, you boys ain’t going to do squat with this AtomSpace that can only be very laboriously paralleled out to maybe five instances if you’re lucky.
Ben: We haven’t yet convinced the great Linas Vepstas on Hyperon, actually.
Jim: Well, Linas Vepstas believes you do it on one box, maybe the biggest box currently commercially available. If you can’t do it on one box, it’s not worth doing. That’s Linas’ hypothesis. I’m convinced.
Ben: He used RocksDB to make a distributed AtomSpace for OpenCog Classic, actually. So it’s not quite as simple as that. But in any case, we haven’t convinced him yet. I would say that while we haven’t advanced as fast as LLMs have, the Hyperon project has been going well, and on most technical milestones we’ve gotten there a bit ahead of where I thought we would.
So we’re going interestingly fast on building all this out, compared to much of my prior AI career, partly because there’s a little more money around than there was earlier in my AI career, partly because the tooling is good, and LLMs are helpful for various things. So I think we’re seeing acceleration on non-LLM AI projects also, even if not quite as transparently as in LLMs.
Jim: If you recall, way back yonder you and I had a conversation where we estimated that if the OpenCog AtomSpace hypothesis was correct, something around a thousand boxes was what you needed. You know, is it 200? Is it 20,000? Somewhere in that range. It’s not a million, right? Not the size of Google’s infrastructure. Have you guys gotten the distributed AtomSpace to work on a thousand boxes yet?
Ben: We haven’t tried it on a thousand boxes, but I believe that it would work. Yeah. I mean, the distributed AtomSpace that Andre Senna has built now, and he was one of the two authors of the original AtomSpace back in 2001.
He’s still around. He basically distributed the AtomSpace backend on MongoDB and Redis in a certain way: we use Mongo to store the atoms, and we use Redis to store the indexes. It seems that Senna has set this up in a way that it pretty much should scale as well as Mongo and Redis do. Like, there’s nothing in how they’re coordinated that slows it down.
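The Mongo-for-atoms, Redis-for-indexes split can be sketched with in-memory stand-ins: one store holds atom documents keyed by id, a separate store holds incoming-set indexes, and queries about link membership touch only the index store. This is a sketch of the design idea only, not the actual Hyperon distributed AtomSpace code.

```python
# In-memory stand-ins for the split Ben describes: a document store for the
# atoms themselves (the Mongo role) and a separate index store mapping each
# atom to the links that mention it (the Redis role, as sets).
atom_store = {}        # atom_id -> atom document
incoming_index = {}    # target_id -> set of link ids that mention it

def add_node(atom_id, name):
    atom_store[atom_id] = {"name": name, "targets": []}

def add_link(atom_id, targets):
    atom_store[atom_id] = {"targets": list(targets)}
    for t in targets:
        # Index maintenance is the only cross-store coordination needed.
        incoming_index.setdefault(t, set()).add(atom_id)

add_node("cat", "cat")
add_node("animal", "animal")
add_link("inh1", ["cat", "animal"])

# "Which links mention cat?" hits only the index store, never the documents:
assert incoming_index["cat"] == {"inh1"}
```

Because the two stores only meet at index-maintenance time, each can scale horizontally the way its backing database does, which is the scaling argument being made here.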
Jim: Other than how you orchestrate the remote process invocations and all that sort of stuff. Right?
Ben: Well, that’s right. And Senna and Alexey Potapov have worked this stuff out in ways that I haven’t fully dug into, because it’s more their thing than mine. I would say, from Greg Meredith, who developed Rholang, which I mentioned, we’ve also got RSpace, which is sort of a few steps beyond BigchainDB. So we now have a secure, blockchain-based AtomSpace module that we can plug in along with the Mongo and Redis things when we need it, and we can make this interestingly decentralized as well as distributed. So we’re doing the whole distributed AtomSpace architecture in a way that bottoms out on SingularityNET, NuNet, HyperCycle, this whole blockchain-based infrastructure.
I mean, that doesn’t help make the AI smarter, but we’ve made it so that doing it decentralized doesn’t slow things down as much as it would have in the past, which is also an interesting little corner of all this.
Jim: Well, it’s been a great conversation, Ben. I look forward to reading the other paper. When I’m done, I’ll reach out to you again, and we’ll get you back on and have a more Hyperon-oriented conversation. I think the audience will find it useful. You’ve always been a person who had his own ideas about how this is going to work, but you’ve also kept up on what other people are doing, and it’s always interesting to talk to you.
Ben: Cool. Yeah. Yeah. It’s been good. We got to go a little deeper than the usual, which is what we all expect from the Jim Rutt Show.
Jim: Absolutely. Well, thanks again, Ben Goertzel, for I think your sixth appearance on the Jim Rutt Show.
Ben: All right. Cool.