Transcript of Episode 28 – Mark Burgess on Promise Theory, AI & Spacetime

The following is a rough transcript which has not been revised by The Jim Rutt Show or by Mark Burgess. Please check with us before using any quotations from this transcript. Thank you.

Jim: Howdy, this is Jim Rutt, and this is the Jim Rutt Show. Listeners have asked us to provide pointers to some of the resources we talk about on the show. We now have links to books and articles referenced in recent podcasts available on our website. We also offer full transcripts. Go to jimruttshow.com. That's jimruttshow.com.

Jim: Today’s guest is Mark Burgess, an independent researcher and writer.

Mark: Hey, Jim, good to meet you here virtually.

Jim: Yeah, great to have you on. Really interesting background. Mark started out as a theoretical physicist, and then became a technologist, a scientist in other fields, and an advisor to public and private organizations globally. He's perhaps best known as the author of CFEngine, the first industrial-grade configuration management system for large computing installations, still used by some big companies. He's also what I might call a practical philosopher. I don't know if he'd like that term or not; we'll find out, with his development of promise theory, which we'll discuss here today. And yeah, he writes music, composes fiction, and thinks about money. Wow.

Jim: Let’s start out just a little bit about your background. What caused you to bail from physics?

Mark: Money. You're talking about money. I finished my PhD. I went to Norway of all places, mainly to go skiing, I should tell you, skiing and mountaineering, but there was a postdoc in physics in it too, and I had a couple of years' worth of postdoc money from the Royal Society, which ran out sometime in the mid '90s. That was a kind of a low point in research funding, so I had to find other sources of income. After waiting on the corner with my cup for a while, nobody gave me any money, so I decided to move into my hobby at the time, which had sort of become computer science at that point.

Jim: That’s great. I’m involved with the Santa Fe Institute and have been for many years and at SFI we have lots of physicists turned as I call them imperialistic invaders of other fields, and they do great. What do you think makes physicists such good generalists?

Mark: That's a great question. I think it's the tool set that we have in physics; it is an amazing tool set. Going back, I mean, all the way to Euclid, I suppose, or Pythagoras even, for that matter, just the way of looking at the world and trying to quantify it and, at the same time, having both quantitative and qualitative ideas about it, and being able to put these into a story and tell that story in different ways. I think that's a general skill which perhaps other areas of knowledge have not been able to practice, and so many deep thinkers throughout the years have taught us great ways to go about it.

Mark: I like to think that physicists are just great storytellers and are able to apply their thinking to stuff that happens. My definition of physics is the study of stuff that happens.

Jim: Yeah, that's the concept of the physical, something that really happens. Something I've noticed, when I went out to SFI as a junior researcher after I retired from business, is I found that physicists above everybody else are able to think in terms of the simple. One of the things I took away from my time at SFI is that complexity really arises from simplicity. In many fields, people tend to jump in at too high a level of complexity and not start with the simple. Does that make any sense to you?

Mark: Yeah, absolutely. I couldn't agree with you more. Richard Feynman had a famous rant about philosophers. I'm not going to repeat it all; I agree with a little bit of it, but I'm not as anti-philosopher as he was. But when I read philosophy, and I do read quite a bit of philosophy when I can, what I find is that they tend to throw everything up in the air and try to catch everything. Whereas a physicist will throw some stuff up in the air and try to see the essential pattern and then just throw everything else away and just look at that. Then focus in on it, narrow in on it, eliminate, eliminate, eliminate, approximate, approximate, approximate, and just keep doing that until some simple essence remains, and it's boiled down to a perfect diamond.

Jim: Yep. Then go back up, particularly the imperialists who invade biology or anthropology. They say, "All right, but let's make sure we start with the simple and then from the simple we can grow upward." Well, enough on that. Let's jump into the substance here. Let's start with CFEngine. As I understand it, it was a precursor to today's configuration management systems like Puppet and Chef, and can even be considered an ancestor to Kubernetes, or however the hell you pronounce that. What insights led you to create CFEngine?

Mark: Yeah, that's another story, which came about from my dilly-dallying around physics. When I came to Oslo for my skiing, and mountaineering, and physics, I got involved with installing the computers, as one does. For a while, it was interesting to me to learn everything that I could about these machines and make them work and automate them and observe how these processes talk to one another across the network. It was all fascinating to me and it was fun doing everything by hand for a while to learn. I like to get my hands dirty when I learn a new field.

Mark: But after a couple of months, then suddenly it wasn't so fun anymore to be doing this, having people knocking on the door of my office and saying, "Mark, could you just do this for me and could you just do that for me and could you fix my computer?" You know how it goes. Being an ardent fan of science fiction and technology, I thought, wouldn't it be cool if I could actually write a kind of an AI system that could manage the thing for me, or get the computers to manage themselves? It seemed to make a lot of sense because, after all, you should always let your tool do the work, as my former gardening boss used to say when I was working a summer job: "Always let your tool do the work."

Mark: He instilled into me this ethos that, "Let your tools do the work." Instead of humans running around after computers, fixing everything for them, nursemaiding them and fixing every little detail, you should get the machine to do it all by itself, because how hard could it be, right? I sat down and tried to write some scripts in Perl and shell and all of these languages. The guys at Oslo University have to be given credit because they were some amazing, imaginative people who were very early on making script-based automation in Unix.

Mark: I looked at their stuff and thought, "This is cool, but what we really need is a much simpler way of doing it." That's simplicity again. The thing I observed was that when they wrote scripts to try to automate software, they had all of these if-then-else statements: if this is an HP-UX machine or if this is an AIX machine, then do this with this extremely long command with lots of different options added. Whereas if it's a Sun Microsystems machine or an Apollo workstation, then do this with another set of different commands and different options added. There were so many variations of this that it became a litany of incredibly enormous branching complexity, which was just untenable after a while.

Mark: First of all, it was unreadable, incomprehensible, and impossible to maintain. I thought, "You know what? What should really happen is that an intelligent agent should wake up and say, 'Okay, where the heck am I? What's my environment? What kind of operating system is this? What kind of things are going on? How shall I adapt to this situation?'" Then you should be able to specify, as a kind of a policy, in a high-level declarative way, what you intended the state of the machine to look like. What's it supposed to be doing, what kind of security settings should we have, who are the allowed users, and so on and so on.

Mark: Then, just simply make that happen. Turn that into the desired end state of the system, manipulate a kind of curved spacetime so that you were just sort of rolled into this black hole of perfect configuration, and the system would simply configure itself by magic, or by technology. That's what got me started on CFEngine. I wanted to make that super cool AI-based reasoning system that could investigate its surroundings, adapt to them, and then set to work at adapting them and changing them and manipulating them.
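
To make that concrete, here is a minimal Python sketch of the declarative, desired-state idea Mark describes, as opposed to branching if-then-else scripts. The names and states are illustrative assumptions, not CFEngine's actual language or API.

```python
import platform

# The operator declares *what* the machine should look like, not *how* to get there.
desired_state = {
    "ntp_enabled": True,
    "allowed_users": {"mark", "root"},
}

def observe():
    """The agent wakes up and asks: where am I, and what is my current state?
    (The values below are stand-ins for real probes of the system.)"""
    return {
        "os": platform.system(),
        "ntp_enabled": False,
        "allowed_users": {"mark", "root", "guest"},
    }

def plan_repairs(current, desired):
    """Compute only the corrective actions needed to reach the desired state."""
    actions = []
    if current["ntp_enabled"] != desired["ntp_enabled"]:
        actions.append("enable ntp" if desired["ntp_enabled"] else "disable ntp")
    for extra in current["allowed_users"] - desired["allowed_users"]:
        actions.append(f"remove user {extra}")
    return actions

print(plan_repairs(observe(), desired_state))  # e.g. ['enable ntp', 'remove user guest']
```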

Jim: You've also mentioned, in the context of CFEngine, something called the maintenance theorem, a theorem that looks to me like policing an equilibrium by detailed balance. Could you tell us a little bit about the maintenance theorem and how it relates to CFEngine?

Mark: Yeah, so, being of a physics background, after a while I started looking at the computer on my desk. For our readers, I can explain: my background is in quantum physics. I started out in quantum gravity, quantum field theory, and actually even looked a little bit at complexity theory in the '80s when that was becoming a thing. But because I wanted to look at the world in terms of these nice descriptions of all things communicating and interacting, I looked at the computer on my desk and said, "Shouldn't I be able to understand this computer? Not as a machine, but as a phenomenon."

Mark: The standard story in computer science is that computers do what we tell them, right? We program them and then they do exactly what we tell them. Anyone who's ever worked with computers knows that that is an extremely optimistic, idealized idea about what computers might do on a good day. But I figured, "Well, we ought to be able to look at these things as phenomena that are somewhat unpredictable now, because we connect them to a network. It's no longer just what we program them to do, but it's what other people elsewhere program the neighboring computer to do and what it's doing to our computer, and how they're sharing resources, and so on and so on."

Mark: A computer program, any kind of process, is actually taking place in a highly complex environment of multiple overlapping processes, sharing resources as a commons and in competition with one another, and this is influencing every computer program. We can come back to this because there are really interesting measurements that we did around this. That means that all of the time, in spite of your best-laid plans of mice and men for the desired end state of this computer and the programs you're trying to run, things are going wrong all the time and need to be corrected. Things fall out of whack, programs crash and need to be restarted, machines run out of disk space or memory, et cetera, et cetera. All of these things can be corrected fairly simply, even by a machine, just observing and maintaining, if you know what the proper state of the machine is supposed to be.

Mark: I took this idea of the intended state of the computer, what's intended by the system, and this idea of intent comes back many times as a theme. I wanted to create a system which, every time the state went out of whack, would whack it back in, like whack-a-mole, in a kind of detailed balance. Out of this, I came up with this theorem, which is essentially a rebranding of Shannon's error correction theorem, which says that you can set aside a certain amount of resources in your system to correct errors. Assuming that you know what the correct state of the system is, you can correct those errors back to that proper state simply by reiterating the desired policy again and again.

Mark: Then I sort of reformulated that in symbolic terms for a dynamical system; the computer is a dynamical system with symbols as well. That turned into this thing which I call the maintenance theorem, which, as you exactly say, is a kind of policing of the desired state as you've defined it, your policy for the system. It works in a dynamical way as a detailed balance.
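
A minimal sketch of that reiteration idea in Python, assuming a toy state and a purely illustrative repair operator: because the repair is convergent (applying it once or many times gives the same result), simply reasserting the policy each cycle keeps the system at its desired state despite random perturbations.

```python
import random

DESIRED = {"service_running": True, "config_version": 7}

def perturb(state):
    """Environmental noise: things fall out of whack at random."""
    if random.random() < 0.3:
        state["service_running"] = False
    if random.random() < 0.2:
        state["config_version"] = random.randint(1, 6)

def repair(state):
    """Convergent repair: reassert the desired policy, whatever the current state."""
    state.update(DESIRED)

state = dict(DESIRED)
for tick in range(10):          # the agent wakes up periodically
    perturb(state)
    repair(state)
    assert state == DESIRED     # deviations are corrected every cycle
print("state maintained:", state)
```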

Jim: That's very interesting. In 1998, you wrote a paper called Computer Immunology about competitive maintenance and homeostasis in large-scale computing. Homeostasis is something that I find very central to how I think about the world. Could you say more about that perspective and how it has aged?

Mark: That was a really funny story because the very first time I went to the United States, in fact, was in 1997. I went to a conference, and I talked about CFEngine. I presented my burgeoning tool there. CFEngine was from 1993 and it developed as an open source project for a few years, and then I was invited to the LISA conference, which was in beautiful sunny San Diego. I gave a talk about this detailed balance idea. It was a huge conference, lots of people, and after my talk a bunch of guys came up to me and they said, "Oh, this is very interesting. What is this? It's like an advanced form of cron." Cron is a scheduling agent in Unix, for our readers.

Mark: I went back very dejected and thought, "They didn't understand a word of what I was saying." I spent that year actually trying to figure out how to explain this idea of detailed balance and automated maintenance in systems. On the plane going home... This is a true story. Going home on the plane, I actually got sick, I came down with some flu on the plane or whatever, and I came up with this idea of the immune system as an analogy for this idea of detailed balance, where your body, in some sense, has a policy for a healthy ideal state. When it goes out of that balance, some monitoring system picks up on that and then tries to restore it to that state by eliminating invading pathogens and whatnot. Not only invading pathogens, of course, but regulating the heartbeat, and the blood supply, and oxygen levels, and all of these kinds of temperature things and whatnot.

Mark: This idea of regulation, both dynamically in terms of rates, and temperatures, and so on, but also symbolically in terms of, are you an allowed piece of DNA or are you an allowed antigen in my system? This was a very interesting idea to me, and I wanted to use that as an analogy to explain my idea. At that time, I don't know if you know, but there was a very interesting lady called Polly Matzinger, who came up with a theory of immunology called the Danger Theory, which was kind of a counterpoint to this idea of self/non-self in the immune system. Her idea was that the self/non-self distinction is quite a hard computational problem, and it doesn't make a lot of sense in terms of evolution.

Mark: But if you could detect danger signals, that's a much easier thing to detect. Because in the body, when cells die badly because of attack, they tend to die by necrosis. They burst apart and leak some horrible mess inside, and those bits of protein are easy to pick up on; that's what your immune system picks up on. Whereas if cells die properly by program, when they're just shutting down, like your computer programs shutting down when they've finished their job and so on, then they die by this programmed cell death, and that's a clean business.

Mark: There is a simple computational algorithm, if you will, for detecting bad circumstances over good circumstances. I wanted to try to apply this idea to my CFEngine. I already had the symbolic parts, but then I realized I could add the dynamical parts. That's what got me into machine learning, and threshold detection, adaptive systems, and so on. I went and I gave this paper called Computer Immunology. A guy came up to me afterwards and he said, "Oh, do you know this group at the Santa Fe Institute and University of New Mexico, Stephanie Forrest's group, who have been working on computer immune systems for security?" I hadn't heard of them, but it turns out we got to know each other a little bit and exchanged some papers and that was a fun thing.
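
A rough Python sketch of the kind of threshold detection Mark alludes to: learning what "normal" looks like from a system's own history and flagging deviations as danger signals. The data, window size, and threshold are illustrative assumptions, not what CFEngine actually does.

```python
from statistics import mean, stdev

def is_anomalous(history, observation, k=3.0):
    """Flag an observation more than k standard deviations away from the
    behavior the system has learned to regard as normal."""
    if len(history) < 10:                  # not enough data to judge yet
        return False
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(observation - mu) > k * sigma

history = [100, 102, 98, 101, 99, 103, 97, 100, 101, 99]   # e.g. connections per minute
print(is_anomalous(history, 101))   # False: within the normal band
print(is_anomalous(history, 180))   # True: a 'danger signal' worth reacting to
```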

Jim: I was going to point out, if you hadn't made the connection, that Stephanie Forrest has been working in this area for a long time and she's certainly somebody whose work is worth looking at. All right. Let's go from kind of a theory idea to a little bit more practical. What's your view of the current state of play of configuration management? I see Kubernetes, or whatever the hell you call it. How do you pronounce that, by the way?

Mark: I say Kubernetes. I think it's a Greek word; it means the steersman, the [Rhoda 00:18:21] system.

Jim: Okay. That sounds good. Anyway, Kubernetes is everywhere. I think the last time I personally fooled with any of this was using Chef back in 2013. As a guy who's had his foot in this category for a long time, what's your thought on the state of play of configuration management?

Mark: Great question to ask me, a bit controversial as well, because I think people's understanding of this has changed over time. You've got to understand that back in the beginning, when I started in 1993, a hundred machines was quite a lot, and then a thousand machines was quite a lot, and then it was 10,000 or a hundred thousand, and now we're up to millions. But along the way, the way we manage those numbers and the way we manage complexity has changed a lot as well.

Mark: In the beginning, computers were simply what we call bare metal, the raw computer running programs directly on the operating system. But along the way, we introduced virtual machines to try to manage the capacity of machines better, to share workloads between different processes while maintaining some kind of secure separation between them, and we went on building layer upon layer of this virtualization on top of machines. That's kind of changed the way we need to configure systems, because each layer needs its own configuration to manage it.

Mark: CFEngine was designed for... Again, because of my background, I designed it for extremely large scale and for extremely diverse environments. I was working in a university and, as you know, all university guys are special kids. They have special needs. They won't take on a kind of an industrial approach like, "Let's make everything the same to keep it simple for the managers." No, no. They need their own special way of doing it and their own particular programs and their own settings. It turns out we needed to do everything; every department at the university was totally different. Having a system that could adapt to those differences and still maintain them at scale allowed it to be a good precursor for what the cloud was turning into, with this kind of multi-tenant environment where all kinds of people have different, overlapping needs.

Mark: CFEngine was designed to be a very flexible and highly scalable thing running on bare metal, and later it could be adapted to these other scenarios. Then Puppet came along and did a similar thing in a way that was more kind of user-friendly to the system administrators. People found my CFEngine a bit too academic, I suppose. They found Chef to be more friendly and more programming-oriented; those guys took a different, more programming kind of approach to configuration. Then along come Kubernetes and Docker and these so-called containerized systems, where you had a new way of packaging software, where you could deploy software along with all of its attendant dependencies and all of the things it depended upon as a single package or as a set of related packages. All of the configuration could be kind of baked into it in advance.

Mark: This allowed programmers to set up the configuration and maintain it themselves, because they could control the process and reset the process, kind of like doing Control-Alt-Delete on the process, but in the cloud. That kind of changed the way that people manage systems altogether. They didn't need to have an immune system anymore hunting down these problems. They would simply let stuff die. This is now going back to that biological idea of necrosis, if you like. If the system dies badly, well, just make sure you had enough cells to begin with, and if you lose a few cells, it doesn't matter, you've still got plenty as backup, and then just make some new ones. Just let them split up and make a few more copies.

Mark: There was actually this shift from quite rigid, keenly maintained, fragile systems that needed to be maintained and repaired in real time by detailed balance to an approach using redundancy. Let's just set up a whole bunch of things in advance, and if we lose a few, no matter, we'll just start a few more and the users will probably never notice, because the thing that had changed in the meantime was the web. Everything was coming online, and web-based systems were built around this client-server architecture, which was very much: if we lose touch with the server, we just reconnect and try again. Everything's based on trying again. Redundancy, do it again, keep doing it until it works. That is a kind of a repair loop in its own right. That allowed people to change the way that they manage systems.

Mark: Kubernetes was kind of the answer to that. The self-healing aspect of CFEngine is built into Kubernetes now, not at the level of individual process attributes, but more at the level of these entire packages of systems managing software.
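
A toy Python sketch of that "let it die and replace it" reconciliation idea, in the spirit of what Kubernetes controllers do, but with made-up names and data: unhealthy replicas are discarded rather than repaired, and fresh ones are started until the declared count is met.

```python
import random

DESIRED_REPLICAS = 5

def reconcile(running):
    """One pass of a control loop: discard unhealthy replicas and start
    fresh copies until the count matches the declared desired state."""
    running = [r for r in running if r["healthy"]]          # let bad 'cells' die
    while len(running) < DESIRED_REPLICAS:
        running.append({"id": random.randint(1000, 9999), "healthy": True})
    return running

replicas = [{"id": i, "healthy": random.random() > 0.3} for i in range(DESIRED_REPLICAS)]
replicas = reconcile(replicas)
print(len(replicas), "replicas running")   # always back to the desired 5
```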

Jim: I must say I love Docker. I like to play around with various technologies, and Docker has lowered the threshold to be able to play around with some fairly advanced technology by a tremendous amount. As you said, you no longer have to build the environment yourself. You say, "All right, just give me this complete canned environment, bring it down, fire it up." With Docker, say, you learn to play with TensorFlow or something, and when you're done, delete it, and instead of it being a couple of days to come up the curve, you might actually be writing a program in an hour. Quite remarkable. The other thing I like about Docker and similar revolutions is how much more efficient it is. The layers of virtual machines were an unbelievable suck on performance.

Mark: People are still adding those layers, by the way. I mean, it's not ending anytime soon. Actually, I make myself unpopular in the IT industry by talking about the next climate crisis, and that we as IT developers and designers should think more about the resources we use, because the IT industry actually, as many people know, passed the airline industry in its carbon emissions some years ago, and it's only getting worse. When blockchain came out, of course, that's now melting one ice cap per week or something like that, just to exchange some dollars between people.

Jim: Yeah. Ben Goertzel, my good friend and AGI researcher, likes to say about the proof-of-work idea in Bitcoin: really, all it's doing is accelerating the heat death of the universe, right?

Mark: Absolutely. Yeah. I can’t agree more.

Jim: All right. Let's switch directions a little bit here. From your practical work on CFEngine, at least as I see it, and you can tell me if I'm full of shit or not, you developed promise theory. Tell us about promise theory, and maybe whether there actually was a relationship to what you learned from CFEngine and where it went from there.

Mark: Promise theory actually did come out of my experiences with CFEngine. I wrote CFEngine, again, in 1993, pretty much by intuition, with no idea what I was doing. I just had some idea of how it should probably work using some of my physics experience or background, I suppose, almost a thermodynamic or statistical mechanics approach to management. Then, as I mentioned to you, I got into the theory of computers and I wanted to understand computers as a kind of physics: what would be a physics of computer networks?

Mark: Being a good physicist, of course, I immediately tried to apply all of the tools that I knew: differential calculus, statistical mechanics, and on and on, all of these classical quantitative methods. I was creating a degree at the university at the time, actually, and wanted to write a textbook for it. I was trying to put down everything that I could think of. I tried every theory you can imagine to try to analyze computers in a helpful way: graph theory, game theory, queuing theory, statistical mechanics, set theory, on and on.

Mark: I wrote this book and made a few steps forward here and there, I would say. But at the end I was feeling pretty disappointed in the amount of sensible predictability that you could get out of computers. It turns out that there were very few predictable patterns in computers. The only one happens in very, very busy servers, and then it turns out that servers actually look a lot like the heat spectrum of a black body, because network traffic turns out to have this kind of black-body-radiation-like signature, which, being a physicist, I could recognize. But it was one tiny little thing that we could observe by applying traditional physical methods.

Mark: But I suppose by the end of that I realized there was one thing missing, and that was intent. When we make technology, it's not just a bunch of random things interacting and competing. We make something to perform a function. A process has a role to play in a larger functional system. The same is true in biology, of course; although it's not designed, it's honed by the forces of selection to play a particular role in a functional system. But this notion of functionality, I think, was a huge missing piece, not only in my work, but actually, you could even say, in modern science. We've shied away from this notion of intentionality because, going back to our philosophers, people tend to think intentionality assumes the idea of free will, that we have to have intelligent beings or something of this nature.

Mark: I think what I successfully showed with applying what eventually became promise theory is that we don't need to bring up this notion of free will to talk about intent. We can define the idea of intent simply as a set of possible outcomes. What are the possible pathways forward, the causal pathways forward, that a system might take? Those different alternatives are equivalent to the different intentions you could have for the system, because it's no good hoping and wishing for things that couldn't happen. Of course, you can wish for them if you like, but they won't come about, so you can eliminate them in some way.

Mark: This told me that there would be some process of things that could happen and things that would select them, a little bit like you have mutation and selection in evolution. But in any kind of process in which something is developing causally in time, you would have things that could unfold and things that could prune that tree of alternatives. This led me into the notion of what I eventually called promises. Promises form a kind of a graph between a selection of nodes, in which the nodes are the active things in a system. They might be computers, they might be processes, they might be cells in biology, or people for that matter.

Mark: The important thing about computers that I had understood from looking at this so-called black body spectrum was that, in fact, the largest influence from the environment on our computer systems was humans. It makes no sense to speak about computers in isolation from humans. We have to speak about human-computer systems, and that overlap, that interaction, between humans and computers forms what in physics we would call a reservoir or a bath, a thermal bath if you like, of interactions between humans and the machine, which drives it. Much as physical systems can be driven, you can drive oscillators, and it drives it as a kind of an external boundary condition in such a way that it steers the system in a number of ways.

Mark: On a small scale, by rather strict programming constraints, it can make selections of outcomes in a symbolic fashion. On a much larger scale, on a statistical level if you will, it can steer systems towards using certain resources or competing with one another instead of complementing one another, and that can cause systems to crash or to go into nonlinear behavioral modes, unfortunate feedback loops, and so on. Trying to combine all of these things into a picture that combines both the dynamics of the system and the semantics, or the intent behind it, led me to a kind of a network view of the world, which was also fed into by network science, which you guys from Santa Fe know all about as well.

Mark: Promise theory, you could say, is a kind of a generalization of network science, which can describe systems from the smallest scale to the largest scale, and the scaling became an enormously important issue.
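
A minimal Python sketch of the basic structure, under the assumption, taken from the discussion above, that promises are made by agents only about their own behavior and that cooperation needs a matching offer (+) and acceptance (-). The class and names are illustrative, not a formal promise theory library.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Promise:
    promiser: str   # the agent making the promise -- always about its own behavior
    promisee: str   # the agent the promise is made to
    body: str       # what is promised, e.g. "+serve pages" (offer) or "-serve pages" (accept)

# Cooperation requires a matching pair: one agent offers a behavior (+),
# the other promises to accept and use it (-).
promises = [
    Promise("server", "client", "+serve pages"),
    Promise("client", "server", "-serve pages"),
]

def cooperating(promises, a, b, body):
    """True if agent a offers `body` to b and b promises to accept it from a."""
    return (Promise(a, b, "+" + body) in promises and
            Promise(b, a, "-" + body) in promises)

print(cooperating(promises, "server", "client", "serve pages"))  # True
```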

Jim: Very cool. Good introduction. Let me drill down a little bit into some of the specifics. One of the things that you do throughout the book, or at least quite a bit in the book, is contrast the idea of obligation and the idea of promise. Could you expand on that a little bit?

Mark: Yep. This became my favorite subject in computer science. Of course, you have this history or tradition of using logic to try to explain things. Logic, again, for our readers, is a very constrained form of mathematical reasoning. Logical systems are, I would dare to say, even over-constrained at times. There are so many constraints on the logical system that the outcome can only be either true or false. People often call that Boolean algebra. In fact, George Boole back in the day was entirely aware that there could be a whole range of possibilities between true and false, which take on board the idea of uncertainty.

Mark: In his writings, he actually wrote zero and one, as we sometimes do today in computing, and all of the values in between for different levels of certainty, where one is absolute certainty of truth and zero is absolute certainty of falsehood, an absence of truth, if you will. That's actually a form of obligation: when you design computer systems and say, "You must have this outcome or that outcome, and you must produce this or that." This is the notion of obligation, which is also called deontic logic; it's a form of modal logic. And as my friend and collaborator Jan Bergstra, who co-wrote the promise theory book with me, likes to say, and he is a logician by training...

Mark: In the 50 or 60 odd years of deontic logic, there have been almost no useful results, and the reason for that is that logics of obligation are quite inconsistent. If you have two agents and they try to impose on a third, one says "you say tomayto" and the other one says "you say tomahto," and that agent in the middle has no way of resolving that conflict, because the conflict arose elsewhere, in these two other agents, and it has no power over the source of those obligations.

Mark: Obligations are fundamentally hard to resolve because obligations come from without rather than from within. What I realized was something that's important in physics, especially in quantum physics, and that is the notion of locality: that if you only base descriptions of a system on what happens at the location at which the property is asserted, so that the property comes from within, then you can always resolve conflicts. Because if I say to myself, tomayto or tomahto, that might be inconsistent, but I can choose one or the other and decide which of them I want to go with. I am able to detect that inconsistency myself.

Mark: I actually used this idea in CFEngine to detect conflicts of policy. If a user had tried to say, "I want this program to be always switched on" and "I want it to be always switched off," that's a conflict, and it's clearly resolvable as long as that policy comes from the same source. My computer can decide for itself whether it wants to do it or it doesn't want to do it, and it can choose between them. But if those obligations come from without, then they will simply compete to try and switch it on, switch it off, switch it on, switch it off, and there'll be an endless competition between the two, which is destructive.

Mark: This idea of promise captures the notion of what we would call locality in physics, where rule-based systems can be completely consistent and have proper resolutions, versus systems where things are imposed from without, or if you like have contradictory external boundary conditions, and will forever be in competition with one another. That allows you to build a notion of stability, which is again another way of saying this idea of an immune system. What an immune system does is it creates semantic and dynamical stability in a functional system. That's what we wanted to try to achieve for computers as well, and I think you can say the state of modern management has really taken that idea on board in the way that we do things today.
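
A small Python sketch of the conflict example Mark gives, under illustrative assumptions about the policy format: when all the statements come from one local source, the agent can both detect the contradiction and pick a deterministic winner, whereas competing external impositions have no single place where that decision can be made.

```python
def resolve_locally(policy_statements):
    """All intentions come from the agent's own policy, so a contradiction
    (a service declared both on and off) is visible in one place and the
    agent can pick a deterministic winner; here, the last statement wins."""
    resolved = {}
    for key, value in policy_statements:
        if key in resolved and resolved[key] != value:
            print(f"conflict detected for {key}: keeping {value}")
        resolved[key] = value
    return resolved

# One agent's own policy, with an internal contradiction it can resolve itself.
print(resolve_locally([("httpd", "on"), ("httpd", "off")]))
# Two outside agents imposing "on" and "off" in turn would just flip the
# setting back and forth forever, with no local place to settle the matter.
```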

Jim: Yeah, I really liked what I thought of as the base level of simplicity, that an agent can only make promises about itself, right? As you pointed out, the number of contradictions that the system has is much less when an agent can only make a promise about itself, rather than having various agents trying to put obligations on each other. However, as you pointed out, this results in a kind of relativity problem, right? In that any agent can make its own, and sometimes contradicting, assessments of whether another agent fulfilled a promise or not. What are the implications of, what I might call, that information relativity?

Mark: Yep, exactly right. Of course, this is super interesting for me, because this goes right back to the huge shift that happened in physics in the 20th century, from this Newtonian view of the world, in which the universe is kind of an obligation imposed by God, where the laws of physics are almost like a handle God is turning with His own hand, driving the universe deterministically, to this picture first picked up on by Einstein, which showed that actually what you see in the universe depends very much on where you are and what you're doing at the time, and which processes are available to you, which ones you have information about. That introduced the idea of the communication or the channel of information between a source and a receiver, the phenomenon being observed and the user of that phenomenon, or the observer of that phenomenon.

Mark: In a computer network, of course, this comes up all the time, because computer networks are built on the idea of services and clients. A server will serve, say, webpages or a database or something of this nature, and a client will be trying to observe the database or pull out the results of that service from a different location entirely. You have one local system which knows all about its data, and then you have another system on the other side of the planet, perhaps, trying to observe the data from a distance, possibly through multiple different routes through the network, different pathways.

Mark: All of those problems that emerged in 20th century physics are now re-emerging in the computer network: the idea that messages can take multiple paths and interfere with one another, the idea that certain information could be reversed in its order because it travelled along one path where the speed of communication is different from the speed of communication on another path, allowing inconsistencies to arise. If you try to impose collaboration between systems by obligation, by pushing data, you can easily find those obligations, those impositions of data, turn out to be absolutely inconsistent and unresolvable. Whereas if you take the opposite view, which is the Einstein observer view, the local observer view, that each observer makes some observations of its own accord and promises to resolve conflicts or differences that it sees by its own hand, as it were, then it becomes up to the observer to resolve those difficulties and find the intended outcome.

Mark: Now, this is interesting on a number of levels. First of all, it implicates Shannon's theory of communication, which was the idea of having a sender and a receiver. The sender will send a certain message and the receiver will try to receive it, but the receiver may be either unable or unwilling to receive the message. It might be distorted, it might be in a different language, the resolution of the receiver might be unable to discern the different symbols, there might be errors, it might be encrypted, again on and on. What is promised by the source of a message or service or observation is not necessarily accepted or received by the receiver, and this adds an additional degree of freedom in promise theory that is somehow related to this notion of source and receiver in Shannon's theory, what he called the overlap of mutual information.

Mark: Interestingly, it's also analogous to the offer and acceptance of the wave function in quantum mechanics. In quantum mechanics, we have this curious reformulation of physics in which what originates from a physical system is then collapsed onto a set of eigenstates or potential outcomes, like these pathways we were talking about earlier, these intentional outcomes, if you will, which are represented as so-called mathematical eigenstates in quantum mechanics. That matching between the two is almost like the acceptance of the overlap function between what is transmitted and what's received by a receiver with a limited number of degrees of freedom, which is analogous to what Shannon thought of as an alphabet of possibilities in the theory of communication.

Mark: Now, when you try to build this into a system, you find that this additional degree of freedom between offer and acceptance plays an important role in the semantics of the system. We use that to our good intent, for example, in security. Will I grant you access to my system or not? You may try to impose information on me, but will I accept it or not? I'm going to filter your packet on the network even though you sent it to me. I don't want to hear that. I want to filter out all of the fake news from Facebook. I have a policy about this and that, and the same thing works both on receiving requests to supply information at a server, and it applies once the server replies with data in the other direction. There's a kind of a filtering which goes on between sender and receiver.

Mark: Again, as a physicist, what I find interesting there is that that filtering has an analogous role to the measurement operators that you insert into the quantum mechanical wave functions in quantum mechanics to extract certain properties of the system in a consistent way. There's this absolutely fascinating thing that's happening in cloud computing because of all of the layers of virtualization that are taking place. That is that as we get closer and closer to a virtualized world of processes interacting, we're getting closer and closer to seeing these very bizarre phenomena that have confused us in quantum mechanics, perhaps for a similar reason or perhaps for totally different reasons. Of course, we don't truly know, but that idea which goes back to Newton, that similar phenomena may have similar explanations, opens an intriguing possibility, which is that we may be able to learn something about quantum systems by actually looking at how our technology now exhibits phenomena. That seems somewhat counter-intuitive, but at the same time somewhat familiar from quantum mechanics.
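
A tiny Python sketch of the offer/acceptance filtering described above, with invented message types: what actually gets through is the overlap between what the sender offers and what the receiver's policy accepts, loosely in the spirit of the packet filtering Mark mentions.

```python
# The sender offers (+) a set of message types; the receiver's policy
# accepts (-) only some of them. Only the overlap gets through.
offered  = {"webpage", "telemetry", "advert", "login_request"}
accepted = {"webpage", "login_request"}            # the receiver's local policy

def transmit(messages, offered, accepted):
    """Deliver only messages that are both offered by the sender and accepted
    by the receiver -- everything else is filtered at the boundary."""
    channel = offered & accepted
    return [m for m in messages if m in channel]

print(transmit(["webpage", "advert", "login_request"], offered, accepted))
# ['webpage', 'login_request']: the advert is filtered out by the receiver
```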

Jim: Very interesting. Let's bring it down one level of abstraction. That was very, very deep. But let's bring it down a level or two or three of abstraction, and maybe you can tell us how promise theory either does or doesn't shine a light on what is one of the big issues in distributed systems today, which is the trade-off between consistency and scalability. Lots of systems have chosen one form or another of what's called eventually consistent. Think about systems like MongoDB, Cassandra, and one of my current faves because it's just so cool, Apache Ignite. Does promise theory, or other aspects of your work, tell us how we should think about this idea of eventually consistent?

Mark: Yeah. I like this topic very much because it is exactly one of those cases where the quantitative, the qualitative, the relativity, and all of these issues come into play for a very concrete, practical purpose. I'll try not to get too deep into it. Let's see if we can start simply first. This idea of consistency, first of all, is the idea that all of the observers in a distributed system would always see the same answer when they ask the same question of a particular service.

Mark: Now, this of course would be what we would like to happen, because we've constructed computer systems based on the idea that we only had one computer in the beginning, and of course, one computer is always consistent with itself. Just as if you read one book, the words aren't changing in front of your eyes, but if you listen to a story from multiple different storytellers as it's being spread through the rumor mill, you might get 20 different answers, which might not be consistent at all.

Mark: This is not good because you would like computers to be reliable, especially if it’s your money that’s involved in the bank and so on. But, of course, we believe very much in this notion of truth in the modern world, no matter how naive that might be. We want simple answers to simple questions. Of course, what Einstein showed us is that different observers will tend to see different answers depending on all kinds of different reasons associated with the channel, between the sender and the receiver. It could be because there was a finite speed of communication, so updates that are made from one source take longer to propagate to some listeners than to others.

Mark: The time at which a listener or reader tries to ask a question might be critical to the answer that they get. As I mentioned earlier, it may actually influence the order in which they see certain events. If somebody is trying to resolve a question about, "Do you owe interest to the bank because your account was overdrawn?" you're interested in knowing, "Did I pay the bill before or after I received my paycheck?"

Mark: In other words, "Was my account overdrawn for a few days or was it actually never overdrawn?" The answer to that question might not be resolved in the same way by different observers. Clearly, this is an important issue.

Jim: Yeah, of course. It’s also a performance issue, right?

Mark: Yes.

Jim: The tighter you try to make that, or the more you try to minimize the relativity of views, the more expensive it turns out to be, by an exponential, to maintain a large-scale distributed system. There's sort of a highest-level design question of how long "eventually consistent" means, or how big the relativity can be.

Mark: Exactly. Where scale comes in is in the distance, and therefore the time it takes to make that measurement, and also the amount of data that needs to be transmitted, because obviously it takes a longer amount of time to equilibrate, or make consistent, a larger amount of data than a smaller amount of data. Again, for our readers, this is especially critical for databases, because the process by which databases are maintained and made consistent is drastically more inefficient than the process by which databases can be queried. The reason for that is they have to be serialized in order to get things in the right order.

Mark: Serialization leads to single-threaded, very slow-moving queues that tend to build up and don't resolve very quickly, like traffic jams on the internet. Then there are these two philosophies, one called absolute consistency or ACID consistency, and this countervailing BASE consistency or eventual consistency that says, "Okay, we're going to respond to you very quickly, but the answer you get might not be exactly the same as someone else gets; deal with it. It's up to you as an independent observer to deal with it." That would be a kind of a promise-based approach, because it's putting the responsibility onto the receiver to resolve that inconsistency themselves, because they have the control to be able to do so.

Mark: The alternative is this absolute consistency at the source, which tries to make it an obligation on all of the potential suppliers, that little cartel, if you like, of servers that are promising the information: they'd all better be consistent, or else. The only way that they can promise that is to check with each other all of the time whether or not they have the same information, in spite of all of the possible changes that may be coming in from all over the place. That requires them to talk to everybody else in a kind of an N-squared process, for the mathematical people in the audience. Clients can talk to a single server in time of order N, but if N servers are trying to talk to everybody else in the network, it takes something of order N squared, or N times N, to do that, which is much, much slower.

Jim: Yep. Get a thousand servers. It’s a million, right?

Mark: Exactly. The number of servers that you can have to replicate your data is very, very small. Typically, three to seven is the rule of thumb, and it's an odd number for other reasons, but that's typically the rule of thumb for which people can expect to wait an acceptable amount of time for those data to equilibrate before answering a request. Even seven is already slow. The Googles of the world, because they have these ultra-fast networks and fast servers, can get away with that for very small amounts of data. But whenever your amount of data gets large, you can forget about that consistency, and you're really then stuck with so-called eventual consistency.

Mark: Of course, that’s the model which is used by all of the social media companies because they have such enormous amounts of data that they have no hope of being able to make it strictly consistent.
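
A back-of-the-envelope Python version of the scaling argument above: keeping N replicas in strict agreement requires pairwise coordination of order N squared, while answering N clients from an (eventually consistent) copy is only of order N. The counts are the rough message-complexity estimates from the conversation, not benchmarks of any particular database.

```python
def sync_messages(n_servers):
    """Every replica checks with every other replica: order N squared."""
    return n_servers * (n_servers - 1)

def client_messages(n_clients):
    """One round trip per client against a single copy: order N."""
    return n_clients

for n in (3, 7, 1000):
    print(f"{n} replicas -> {sync_messages(n)} coordination messages")
# 3 -> 6, 7 -> 42, 1000 -> 999000: roughly Jim's 'a thousand servers, a
# million messages', and why replica groups are kept to about three to seven.
```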

Jim: Yep. Of course, you have to think about the semantics. For many applications in the world, the level of consistency is not very important. For example, if I'm looking at a toaster on Amazon and I want to know how many stars it has, my usual rule of thumb is, "Don't buy it unless it has four and a half stars," right? Whether that information is one minute or even one hour old doesn't really matter to me making my decision, as long as the N is bigger than a hundred, the usual number I use. In that case, the semantics of consistency would say, "Doesn't matter much."

Mark: Yeah.

Jim: So do your best job. If all viewers of Amazon have even an hour's window of slop about the star rating on a given toaster, it doesn't matter. At the other end is the example you gave, the order in which debits and credits are done on a bank ledger; well, for some applications it needs to be real time. It's not that many. But again, eventual consistency in time for calculating fees is probably where the semantics comes in. One has to be smart in thinking about what you're trying to accomplish when one thinks about these things.

Mark: Exactly right. Again, coming back to your point at the beginning, Jim, about why physicists... why do we have a role to play in IT? I think in physics we're taught from the very beginning to assume uncertainty in measurements and to deal with it by statistical means, or by repeating experiments again and again and building up a picture over time to take into account the uncertainties. But there's no culture of this in computer science.

Mark: Computer science was built around this idea of logic, true or false. The answer is either one thing or it's not. There is no in between. There's no uncertainty, no variability, no multiplicity, if you will, no plurality of answers that you can get, but this is turning out to be entirely untenable in a world at scale, the kind of world we were talking about in the cloud, where the outcome of a measurement is now looking again a bit like quantum mechanics. It could collapse to any one of a number of different values depending on how and exactly when that collapse takes place. This is an area where physics can really contribute to designing systems, because we can build in uncertainty, as we have to in causal models of physical phenomena, to take into account that scaling of causality in a way which computers simply can't do.

Mark: I have a funny story about this as well. I was a bit embarrassed by it, but one of my colleagues at the university had just finished his PhD, and he had been working in the subject of object orientation. Object orientation is one of those philosophies in computer science that actually comes from Oslo. I should probably be careful what I say about this, but it's a top-down philosophy that tries to organize classes of data according to types and related activities.

Mark: But it's basically an obligation model. They were having this resolution problem, a case in which object orientation was absolutely unable to resolve an obligation inconsistency. He asked me what I thought of this problem using promise theory: could I come up with an answer for this? We sketched around on the blackboard for a few minutes, and I simply drew a bunch of agents and the promises they made to one another and how to resolve the inconsistency locally, as we were just discussing, and actually solved the problem in about five minutes. This poor guy sat down and he was absolutely flummoxed, because he had literally spent three years on this problem trying to solve it using the logic of obligations. They'd found a workaround, which was potentially unacceptable, but it was a known problem associated with this object-oriented model.

Mark: Literally, by turning the thing from top down to bottom up or local, we were able to solve the problem in five minutes on the blackboard.

Jim: That’s amazing.

Mark: Today, we would call that service-oriented architecture. Service-oriented computing has now come in in a big way, and you're seeing this in microservices and all of these things you hear about on the net, but in a way you could say that's the legacy of promise theory: to turn the picture upside down, from top-down to bottom-up, in a way that allows you to resolve these inconsistencies.

Jim: Oh, well, that's wonderful. Let's now turn from promise theory applied to computing to some other topics, but before we do that, I'm going to jump back and hit two things from our previous discussion. The first is that you pointed out that deontic logic has not been very satisfying in terms of actually useful results, particularly when people tried to apply it in AI and ended up in dead ends. But there's another alternative that Ben Goertzel, whom I mentioned before and who's actually been on the show before, has developed, called probabilistic logic networks, where he does exactly what you say: built formally into his logical formalism is the sense that every logical statement is uncertain, in a couple of different ways, and he's finding that to actually be quite useful. Any thoughts about whether that sounds reasonable?

Mark: Absolutely. This is actually taking us into the area of AI now, and machine learning and whatnot. Going back to the late '90s, when I was building CFEngine and doing this computer immunology study, I came up with the idea that we really needed to learn the patterns of behavior in systems in real time. One way to do that was to gather information, get the machine to learn its own behavior, and try to look for patterns within that over time. When you do that, you're basically learning. This is called unsupervised learning. It's real-time learning without some kind of pre-trained curation of information upfront, so it's really trying to figure stuff out as you go along, fly by the seat of your pants if you will.

Mark: When you learn in that way, you're building up a picture which is longitudinal in the probabilities. You're measuring data a bit like when you're doing polling for an election or something: you're seeing changes in real time and you're aggregating the results in time, longitudinally, one after the other, like a queue. That's not what we normally do with probabilities, and it's not what we do in statistical mechanics in physics or in probability theory. What you tend to do there is to aggregate data transversely, or over space rather than time.

Mark: You look for a bunch of equivalent instances that you can compare, which are running in parallel, and you'll say, "I want to compare myself to this or to that and then build my notion of what's normal or not normal based on multiple instances." For example, in biology you have all these redundant cells and you'll be looking at them all and saying, "This is normal. Or when there's a cancer cell, that one's not normal. It's definitely got a different signature than these other guys."

Mark: The way you aggregate data to form what we call an ensemble, or a body of data, to form your body of certainty as it were, matters: whether you do it in time or in space, longitudinally or transversely. If you're a service-oriented system, you have no choice but to do this longitudinally, if you're learning to adapt to a process in real time.

Mark: Again, if you're a robot or some visual learning system, this is data that is arriving longitudinally in time rather than in space. But if you're trying to pre-train a deep learning network or something of this nature based on big data that you've accumulated over time, this is more analogous to the process of evolution, where you've got multiplicities of similar agents in an environment responding in different ways and you're looking at a statistical distribution in space, looking for the truth from a population rather than from an adaptive singular system. You see what I mean?
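
A small Python illustration of the distinction Mark is drawing, with made-up numbers: the same grid of measurements can be aggregated longitudinally (each instance learning its own normal over time) or transversely (comparing many parallel instances at one moment, the population or big-data view).

```python
from statistics import mean

# Rows are successive times, columns are parallel instances (e.g. identical
# servers or cells). The numbers are purely illustrative.
samples = [
    [10, 11,  9],   # t = 0
    [12, 11, 10],   # t = 1
    [11, 13, 12],   # t = 2
]

# Longitudinal: one instance learning its own 'normal' over time
# (real-time, unsupervised learning in Mark's sense).
per_instance_over_time = [mean(col) for col in zip(*samples)]

# Transverse: comparing many parallel instances at a single moment
# (the population / evolutionary / batch-training view).
per_time_over_instances = [mean(row) for row in samples]

print(per_instance_over_time)    # [11, 11.666..., 10.333...]
print(per_time_over_instances)   # [10, 11, 12]
```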

Jim: I like it.

Mark: These are two very different perspectives on the world, which people are now getting terribly muddled about in AI, believing that these mono-scale neural networks, deep learning networks, are basically a single-scale fabric and will therefore be capable of responding to a single-scale phenomenon. If you want to try to adapt that to a multi-scale phenomenon, such as you might have with big data across something tuned over an evolutionary period of time, over multiple timescales, you're going to find that to be extremely inefficient. You'll have to have an enormous number of machines, a huge number of nodes, and it will be incredibly wasteful, and the time it will take will probably be extremely silly, for want of a better word.

Mark: I think that ties in a little bit with this result people like to quote about Turing machines being able to simulate any kind of system, Turing's famous theorem that said a Turing machine can simulate any kind of computation, no matter what it is, whether it's a brain or whatever, because he didn't say how long it would take. You might be able to simulate any kind of process using a Turing machine or a deep learning network, but it might take you a thousand years to do what a human does in a split second.

Mark: Trying to impose multi-scale phenomena on a single-scale process is going to mess up your reasoning and your timescales. This is one of my hobby horses, which, if you like, comes out of promise theory, but it's a feature of any kind of dynamical system. Your guys at Santa Fe know all about this, because multi-scale systems, each scale making its own promises if you like, or having its own special interactions, are exactly the characteristic of complex systems. This is exactly why deep learning will never be more than something like an eyeball or an ear, a smart sensor, rather than an entire thinking cognitive system.

Jim: I love that, because I agree 100%. Deep learning, while miraculous, I mean, the results they're finding are amazing, I think in the end, on the road to artificial general intelligence, we will think of deep learning much like perceptual systems, as opposed to the real thinking that gets done in other stages. Let's move on to a very quick discussion of another thing that you talked about in passing, one of my favorite bugaboos, free will. Frankly, I've come around to the view that free will isn't even worth talking about. It just seems to me to be a confused model, and people use it in such different ways that there really isn't any value in talking about free will. Do you have any thoughts on that?

Mark: I do, and again I think this is where philosophers have really gone off the rails. They have this kind of... I mean, you remember back in Victorian times we had this ladder of all the species going all the way up to gods, and of course mankind is next to God. This is one of those unhelpful things where we try to set humans apart from the other species. I like the idea that we can understand cognitive systems from a scaling perspective, going all the way down to a single atom that can see a photon, absorb it, change its energy level, and emit the photon again, if you like, passing on the information, or just absorb the photon. That would be perhaps the simplest cognitive system you could imagine: it has a sensor, it can receive information, it has the memory to remember that bit of information by its energy level changing, and it can adapt to that change of information.

Mark: You can go on scaling that from molecules to cells to single-celled organisms to multicellular organisms to humans and so on and so on, up and up and up. With increasing amounts of memory and increasing specificity of sensory operators, you can semantically and dynamically scale that process up and end up with a human brain. I wrote a book this year called Smart Spacetime to try to explain how a consistent story around that scaling is not difficult to write down. I mean, there's a degree of speculation involved obviously, but there are bits of evidence as well that we can rely on, like the Dunbar hierarchy. For our listeners, Robin Dunbar is an anthropological psychologist, originally at Liverpool University I believe and now at Oxford, who studied the primates (not the religious primates further up the ladder, but the monkey primates like us) and the sizes of their brains, and he showed this straight-line relationship between the size of the neocortex, the reasoning part of the brain, and the size of the social groups that animals live in.

Mark: He showed that the amount of processing capacity, memory, and analysis that goes on inside the brain is basically related to the size of the social group that we're able to call our friends or to have a relationship with, and a relationship is not just, "Yes, I friended you on Facebook." It's an interactive, trust-building interaction in the Robert Axelrod mode of exchange, like playing a prisoner's dilemma game to see whether... "Yes, I keep my promises. No, you don't. Do I trust you? No, I don't." This takes some time to build up, and so obviously it requires some memory to remember those interactions. More memory, more social groups, more trust, and so on. He also showed that the depth of the relationship is important.

Mark: With a human brain, my brain can manage about the size of a family for a really close-knit relationship that I know really well; at the level of working together, we can handle maybe 30-odd people; and at the level of sending a Christmas card every year, we can handle up to about 150 people. With the same processing capacity, we can do different jobs in different amounts of detail depending on how much computation is required. That, again, is an important constraining factor on the kind of scaling that you can do in any kind of learning system, from atom to brain.
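
A toy way to see the trade-off Mark describes, a fixed processing capacity divided among relationships of different depth, is sketched below. The budget and per-relationship costs are invented for illustration; only the rough layer sizes (about 5, about 30, and about 150) come from the conversation, so treat this as a cartoon rather than Dunbar's actual model.

```python
# Toy illustration (not Dunbar's model): divide a fixed "relationship budget"
# by a hypothetical per-relationship maintenance cost at each depth.
BUDGET = 150.0  # arbitrary units of social processing capacity

relationship_costs = {
    "close family (deep, constant maintenance)": 30.0,
    "working group (regular collaboration)": 5.0,
    "Christmas-card acquaintances (light touch)": 1.0,
}

for layer, cost in relationship_costs.items():
    print(f"{layer}: ~{BUDGET / cost:.0f} people")
# close family (deep, constant maintenance): ~5 people
# working group (regular collaboration): ~30 people
# Christmas-card acquaintances (light touch): ~150 people
```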

Mark: Now coming back to free will. If you remember, at the beginning I said I define intent as simply a possible outcome. When you get to a certain scale, the number of possible outcomes that you can foresee, because you have learned of the many possible things that can happen in a cognitive network, a semantic network, becomes really quite large. You may have the illusion of free will simply by having many possible choices that you can foresee as being rational outcomes, by having feedback within a brain, if you will, or a cognitive agent.

Mark: What I found interesting comes from this simple model of cognition and memory, in other words a cognitive system being something that receives inputs through some kind of sensor, has some memory, and is able to organize that memory by essentially smart discriminators, where the smart discriminators are honed by evolution over long, long timescales. Vision, and movement, and place, and direction, and even faces apparently have specific, so-called hardwired circuitry. Those discriminators can then form a reasonably organized network in a memory system; let's call it a brain for want of a better expression.

Mark: Then when the senses are switched on, it can reason in a certain way, which is constrained by the outer world. If you switch off those sensors, like when we're asleep, you'll see a different kind of story being told in the brain through dreaming, which is based on the same set of memories within the brain, or within the memory system, being pieced together into different possible outcomes.

Mark: Then of course things go a bit haywire, because there's no external criterion to set a sense of time. Our brains use their own sense of time to piece things together, and their own sort of resolution mechanisms, like emotional resolution, to piece together storylines. I find this notion of stories to be extremely compelling in these cognitive systems, around the idea of consciousness. I think that if we ever figure out what consciousness is, we're basically looking at whatever it is in the brain that pieces together stories in a timeline. Because even when we're asleep, we experience a sense of time, and yet we just have random-access memories.

Mark: If you store a bunch of data in a database, your Oracle database, your MongoDB database, it doesn't suddenly turn into a story or a sense of time. That has to be constructed by some ongoing process extracting pieces and characters and forcing them into a storyline, even without external sensory stuff going on. In order to have this sort of illusion of free will and consciousness and these related issues, I suspect... Of course, this is pure speculation, but I suspect that these things will simply emerge naturally once we have a sufficiently large multi-scale semantic network, which is informed by data from the outside but at the same time can reason on the inside through these feedback loops, using an independent process, possibly several if you have multiple personality disorder, right?
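
As a trivial sketch of the database point (my own illustration, not Mark's model): records sitting in a store carry no narrative of their own; some process has to select and order them before anything like a timeline exists.

```python
# Hypothetical episodic records, retrieved in no particular order; the
# "storytelling" step is the process that imposes an ordering on them.
memories = [
    {"t": 3, "event": "argued with a colleague"},
    {"t": 1, "event": "missed the bus"},
    {"t": 2, "event": "spilled coffee"},
]

timeline = " -> ".join(m["event"] for m in sorted(memories, key=lambda m: m["t"]))
print(timeline)  # missed the bus -> spilled coffee -> argued with a colleague
```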

Mark: Multiple storylines going on at the same time. This could be the way that we understand those issues.

Jim: Yeah. Actually, if I have a day job, which I don't really, it is the scientific study of consciousness, and my own theories would be at least partially parallel, in that episodic memories are absolutely critical. As far as we know, there are pointers between episodes. There is an implicit timeline built into the memory structure of reasonably advanced animals. We're not quite sure where the line is, so that's interesting.

Jim: I think it's time to move on a little bit here. You've mentioned you've applied promise theory to city scaling along the lines of the work of Luis Bettencourt and Geoffrey West. Could you say more about that?

Mark: Yeah. I had the most amazing coincidence. I was in New York for a conference and I met Geoffrey West for the first time, which is amazing because we discovered we only have one degree of separation between us (I think it was Mark Hindmarsh, a mutual friend), but we'd never met, and I had no idea what he was going to talk about. He had no idea what I was going to talk about, and he never heard my talk because he left early. But I listened to his talk about city scaling and was just fascinated, because I realized he was giving almost the same talk that I was going to give, just with a different focus. I went home and I started to download all of his papers. I spent my Christmas that year literally working through all of the papers that he wrote about scaling in biological systems, and then with Luis Bettencourt on scaling in cities.

Mark: I reproduced Luis's model for deriving the universal scaling in cities, and then I re-derived the whole thing in promise theory, because I figured that although these universal scaling stories are nice, they don't really reveal what the mechanisms are. In fact, they somehow actively cover up what those mechanisms are. I was interested in mechanisms because I'm more of a hands-on kind of guy, I suppose. I mean, Luis did all the work, but just applying the additional few bits and pieces that are in promise theory, and not in ordinary network thinking, allowed me to come up with a couple of ideas about how this might apply to a different kind of network rather than a city.
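
For context on the result being re-derived: the Bettencourt and West urban scaling laws are usually written as a power law Y = Y0 * N^beta in city population N, superlinear (beta around 1.15) for socio-economic outputs and sublinear (around 0.85) for infrastructure. The sketch below uses invented data and simply shows how such an exponent is read off as a slope in log-log space; it is not the promise-theoretic derivation Mark describes.

```python
# Minimal sketch of fitting an urban scaling exponent (illustrative data only).
import numpy as np

population = np.array([1e5, 5e5, 1e6, 5e6, 1e7])         # hypothetical cities
output = np.array([2.0e4, 1.3e5, 2.9e5, 1.9e6, 4.1e6])   # hypothetical GDP-like metric

# Y = Y0 * N**beta  =>  log Y = beta * log N + log Y0, a straight line
beta, log_y0 = np.polyfit(np.log(population), np.log(output), 1)
print(f"fitted exponent beta ~ {beta:.2f}")  # greater than 1 means superlinear
```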

Mark: Again, thinking of cloud computing, because that's where I normally work in my day job, if you will: mere mortals like myself have no access to the data from the big cloud companies, because I don't work for Google, I don't work for Facebook, and they don't share with the other children. We don't have access to that data; they have a monopoly on it. I began to wonder... These guys from Santa Fe had done an amazing job of acquiring data about cities from all of these obscure sources and putting together a picture. I wondered if some of those data might have similarities with the kinds of networks that you would see in a computer, for instance, rather than the networks in a city.

Mark: Just by generalizing the argument to different kinds of networks, could I infer what kind of scaling you might see in a computer network? I did some work around that; I actually wrote a paper on it. I sent it to Luis and he was generous in giving me some comments, but we were both way too busy and had to move on to other things. I don't think either of us ever followed up on that stuff. I saw he just posted a very interesting new paper about cities in China, which I started to read. I'm still interested in that stuff, because I work a little bit on smart cities from time to time, and there are so many interesting cases where, again, multi-scale phenomena and network science play a role, and I think promise theory can inform a lot of those discussions.

Jim: Cool. Another area you’ve at least commented on a little bit is applications of promise theory to agile, both at the group level and at the organizational level. What can you tell us about promise theory and agile?

Mark: Yeah. Again, another amazing coincidence. Our apparently mutual friend Daniel Mezick wrote a book called Inviting Leadership, I believe is the title, in which he mentioned promise theory, and he let me know that he'd used promise theory in the book. Of course, I got the book, I read it, and I thought, "Oh, this is pretty interesting stuff." I'd always had this idea that I wanted to use promise theory at the scale of not just quantum mechanics, computers, biology, but society as well, social interactions as a network.

Mark: Of course, teams and companies and organizations have some kind of dynamics of their own. Actually, even Geoffrey West has done some work on that at a different level, but I thought, "What could I do to apply promises directly, using my algebra of cooperation, to Daniel's work?" He brought up this issue of boundaries: where the interesting boundaries are in a system and how that influences the cooperation between individuals and companies and so on. This is a pretty interesting topic on all levels. Where exactly is the boundary of a system? Think of an organism, even a biological organism. We're all shedding hairs and cells around us every day; all the dust in our apartments is basically bits of us that have fallen off.

Mark: If I spit on the street, is that still part of me? It's got my DNA, right? It's labeled with the promise of me in that little lump of spit, but it's no longer able to feed back on the process that I conventionally think of as me, so do I call it me or not me?

Jim: If you don't mind, I'm going to jump in here, because I have a strong theory about this. It comes up all the time, what is me, right? It goes all the way back to conversations in middle school. What I've decided, for practical purposes, is that what is me is the set of cells that are actively engaged in the homeostasis network around the transfer of gases, nutrients, and toxins. I think I actually came to this realization after reading Geoffrey's book on scale and some of his papers on biological scaling, because it seems to me there's a qualitative difference between those cells which are getting oxygen in and carbon dioxide out, nutrients in and toxins out, on the order of a couple of seconds, versus those that don't. That allows you to make pretty strong distinctions. For instance, our fingernails are not me by that theory, nor is our hair, but the follicles for the hair are, because they're still in this real-time homeostasis around gases, foods, and toxins. I just throw that out as at least one way to think about what is me.

Mark: I like that. In your view, are the several kilos of bacteria in our gut part of me or not?

Jim: Yeah. That’s interesting. Yeah. That’s adjacent to me, right?

Mark: The adjacent possible me.

Jim: Exactly. Correct. Because they’re not exactly tied into these homeostasis networks, but they’re utterly dependent upon it.

Mark: Well, there is some controversy about that as well, right? They may or may not be tied into those networks in ways that we're not quite sure about.

Jim: Yeah, but probably indirectly rather than directly. They're not literally plugged in through capillaries, pulling gases in and pushing gases out, so that's one level of indirection. That's actually a wonderful example that will make me go back and rethink my idea. Anyway, I'll let you get back to your discussion about agile and PT.

Mark: Yeah. No, I like your characterization, and the promise idea I think feeds very nicely into that way of agents interacting, the autonomy of the agents and where they come together. The thing that promise theory points out is that the default state of any agent, if you will, is an autonomous state, where you're basically free to do whatever you want, hunter-gathering in the forest. But if we want to come together in a cooperation, a social cooperation, we basically voluntarily give up some of that freedom by promising to behave in a certain way, promising to deliver, which constrains us a little bit. It eliminates a little bit of that freedom to do anything that we want.

Mark: Of course, we can talk about this fascination, this obsession we have with freedom of speech and so on. In the West, we consider ourselves free to say whatever we want. It might not be wise; I remember the only advice I ever got from my high school teacher for my university exams was just, "Whatever you do, don't tell him to fuck off."

Jim: Well, I violate that one all the time.

Mark: Yeah. Yes, of course, we limit our freedom, and we do it by the promises we offer when we offer ourselves. The discussion I had with Daniel was about applying this to notions like providing services to one another in a business. What does it mean to have authority over others, given that agents start out basically free and autonomous? That turns out to be related to a notion of symbiosis, because you can try to impose authority on others by attack, if you like, by actually trying to overwhelm the other agents.

Mark: But normally we do it by cooperative means, by delegating a mandate to cooperate, a sort of mandate for leadership, and accepting that mandate for leadership. That goes to the point of rights and whether or not we have rights, a kind of permission, which is the promise to be able to access or to do something that I control and you don't. There's even this thing that came out of it called the downstream principle, which is about who is basically most responsible for an outcome happening.

Mark: If you have a service chain or a logistics chain, if you like, an end-to-end delivery, you can try to assign blame along it. If you order something from Amazon and it doesn't arrive on time, do you blame Amazon? Do you blame the postman? Or do you blame yourself for not having it on time? What promise theory says is interesting. It says you can blame Amazon a little bit if you like, but there's not too much use in that. You can blame the postman a little bit more, because he's closer to you, but you are the one who ultimately has the autonomy to do something about getting that book.

Mark: If it didn't arrive on time, then you could go and buy it from someone else. The farther downstream you are, the closer you are to the recipient of a promise in a chain of delegated promises, the more responsibility you have for the outcome. That also makes an interesting point: if you ultimately try to blame someone else, the only reason for blaming them is that you didn't keep your own promise to find that resolution somewhere else. These ways of applying promises to leadership had some pretty interesting insights, I think, ways of truth-telling, if you will, about organizations and management structures. I find that quite fascinating, and I continue to work with Daniel on putting together some training for people in business.
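
A rough way to encode the downstream principle as Mark states it (this is my own toy weighting, not a formal result from the promise-theory literature): in a chain of delegated promises, responsibility for the outcome grows as you move toward the final recipient.

```python
# Toy encoding of the "downstream principle": agents closer to the recipient
# of a delegated promise carry more responsibility for the outcome.
def downstream_responsibility(chain):
    """chain is ordered upstream -> downstream, with the recipient last."""
    weights = [position + 1 for position in range(len(chain))]  # grows downstream
    total = sum(weights)
    return {agent: round(w / total, 2) for agent, w in zip(chain, weights)}

print(downstream_responsibility(["Amazon", "postman", "you"]))
# {'Amazon': 0.17, 'postman': 0.33, 'you': 0.5}
```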

Jim: Well, let's make a real call-out to Daniel Mezick for being the person who nominated Mark to be on the show. Thank you a lot, Daniel; this has been a very productive conversation. Talking about leadership: I recorded a podcast yesterday with Jamie Wheal, and he is involved with a group called Game B, which I'm also associated with to a greater or lesser degree, which thinks about very nontraditional forms of social operating systems at all scales, from the small to the large. One of the ideas he talked about yesterday is one that's practiced by the US Navy SEALs, the elite military unit, called dynamic subordination, where there are no leaders, or rather there are nominally leaders, these being Navy people who have rank, but when the SEALs are out in combat, nobody has power based on their position.

Jim: It's a very rapidly evolving, dynamic situation where people take role-based leadership sometimes just for seconds, and the team is so well coordinated that they transition in real time from one role-based leader to another, and they call this whole thing dynamic subordination. Any thoughts that strike you about that idea?

Mark: I love that idea. First of all, this idea of Game B: I learned about it through you, from your podcast; very interesting, absolutely up my alley. This idea of role-based leadership and role-based activities in a cooperative network is exactly bang on for promise theory. This notion of military voluntary subordination is one of the examples I use again and again in my book, because many people will object to this idea of voluntary cooperation and a fundamental sort of default state of autonomy in agents by saying, "Yeah, but that's not how society works. You basically tell people what to do and that's what happens."

Mark: "Look at the military, the most successful system in the world," or whatever. But you can point out that, of course, people in the military have a very ordered hierarchy of command and control, which is designed to maintain semantic stability and operational stability in operational terms. But of course the roles, who gets to decide, may be changing on the basis of whether or not you can actually communicate. That delegated authority, and the roles, as you mentioned, is exactly the right analogy for a dynamical system. Of course, it's exactly what happens in biology, with different cells taking the lead on different processes as certain things come in and out of play. The immune system is a great example. Most of the time the immune system is pretty inert and other systems are calling the shots, but when an antigen is detected that is unpleasant, or whatever, a response will be mounted, and certain other systems come into play, taking a lead role on things like heart rate, and temperature, and so on.

Mark: Similarly, in computer systems, multi-scale systems and computing, there's this notion of so-called microservices, where individual services may be determining specific outcomes on a case-by-case basis, depending on the particular scenario taking place and whatever context we're in. I think we had an interaction through Twitter the other day about biases in data-based decision-making systems, and whether different biases are actually important in discriminating different contexts for decision-making, or whether they are in fact things to be eliminated. I think you and I had this view that biases are actually the mechanism by which we make decisions.

Mark: In a human, for instance, I've always had this view that we will never understand intelligence, real or artificial, without understanding the role that emotions play in resolving when things are determined or not determined. Basically, my idea of when we have a good explanation for something, or a proper outcome or a proper decision, when we're happy with that decision, is whether or not we have an emotional response which is favorable, or whether we feel like we need to keep going because we're fearful and uncertain. Tell me why this thing is true. Like kids say, "Yeah, but why, daddy? Why, daddy?" "Well, it's because of this." "No, but why is that true?" "Oh, it's because of this other thing." "But why is that true?"

Mark: Eventually, you give up, right? You simply give up, because there's an infinite chain you could go down, but you will feel either exhausted or happy to receive a particular answer, and that's when you stop. That's your go-to explanation. The ultimate resolution of logic is emotional, and that's why I think logic alone will never be the basis for intelligence. It's why this multi-scale approach matters, with this approximate, very fast System 1 alongside System 2, in Daniel Kahneman's language. This fast resolution, this emotional resolution, is in fact the ultimate arbiter of decisions, in spite of all the possible semantic pathways that may be selected by our neocortical reasoning processes. That's my long answer to your simple question.

Jim: Yeah, I love it, because it's actually something I am a strong believer in, and in my own work I use this: even in System 2, it turns out that emotion is the final adjudicator. You build multiple competing models, and it's frankly your emotions that tip you to choose A or B or C. We like to think of it as well-thought-out, carefully articulated thinking, but it's not really true. I would point people to one of the, I think, fundamental books for people thinking about consciousness and how it relates to the organism, the ape with clothes that we really are. That's called The Feeling of What Happens: Body and Emotion in the Making of Consciousness by Antonio Damasio. The book's a little dated, from 2000, and in the field of consciousness studies that's old, but it's a book that I would really point people to who are interested in understanding quite deeply how the body and emotion are utterly involved in our consciousness in every single possible way.

Jim: I think we're on the same page there. Let's move on a little bit. We could talk about consciousness forever, but my list is still too long. I want to turn to the next big topic. Some of your thinking has returned to your roots in physics, and you saw an isomorphism, more or less, between promise theory and spacetime, which you developed into semantic spacetime. In fact, you wrote a cool book called Smart Spacetime. A quick look at that book convinced me that Daniel Mezick was right: this was a smart guy worth talking to. Tell us about Smart Spacetime.

Mark: Yeah. I mean, this was another one of those stories that actually came out of that digression through cities and networks. Over the years, I've discovered that more and more of these phenomena that we're seeing in the cloud and in cities and so on implicate a kind of reasoning which is based on the notion of things being next to one another, being directly adjacent, being able to communicate, or being in sequence to one another in time, where time is in fact just the changes that happen. Basically, the sequence of changes that happen is what we mean by a clock, in Einstein's sense. This idea of space and time was always in the back of my mind as being conspicuously present in many of these explanations, in an abstract way.

Mark: Being a physicist, I recognize these things. We are taught to think about space and time these days in terms of symmetry. In physics, symmetry plays a large role, and we talk about the symmetry of translations, or linear motion, and the symmetry of rotations about different axes, like the earth turning on its axis. If you rotate something and it looks the same, like a ball, we say that's a symmetry of rotation. If you move something and you can't really see that it's moved, because there's nothing to compare it to, you'd say that's a symmetry of translation. The whole of modern physics is based on this notion, this shift in thinking towards symmetry in space and time.

Mark: But what I realized in these discrete networks that I deal with every day in computer science is that that idea is not going to fly there, first of all because at the scale of the networks there is no large-scale order, what we call long-range order in physics; there's no large-scale translational invariance. In other words, on a long, long scale, you don't see any uniformity. Things are pretty muddled up and messy still. But as the cloud gets bigger and bigger and bigger, we eventually find these virtual forms of uniformity.

Mark: You can move a virtual machine from one physical machine to another and you can't tell the difference. That's a form of translation invariance in the cloud. You can reorganize, or reconnect to different servers. We were just talking about eventual consistency: you could flip your server from Asia to America, for instance, and you would not see any difference, and that would be a kind of rotational invariance. There are these analogues between the invariances, or covariances, that we see in physics and the changes that we can apply in computer science. I wondered if it might be possible to formalize some of that using promise theory.

Mark: I set about doing this. I sat down with my little pen and paper (I literally do work with a fountain pen in a Moleskine notebook when I do my work) and started trying to rediscover spacetime by using networks and promise theory, thinking of the points in space as individual agents and the connections between them as based on the promises that they make to one another. This connected immediately with Shannon's notion of the communications channel, sender and receiver, which only takes you halfway from A to B. This was already extremely interesting to me because, again, in physics we have this notion that if A is next to B, then obviously B is next to A.

Mark: But in information theory and in promise theory, that doesn't have to be true. There's an additional promise that needs to happen in order to get back again, to go from B back to A after going from A to B. The natural, most primitive state of a spacetime, if you will, in other words of points being next to one another in a network, is for them to be in a kind of directed graph, what we call a directed network in math. To reconstruct everything that we know or take for granted in physics using a promise-theoretic network actually takes a lot of work. It takes a lot of promises that we simply take for granted in physics, and that is very interesting, because it means that there's a whole bunch of things that we ignore in physics that we should be paying attention to.
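
The asymmetry Mark is pointing at, that promise-based adjacency is directed so A being next to B does not automatically make B next to A, can be sketched in a few lines. This is my own illustration of the idea, not his formal notation.

```python
# Adjacency built from promises is directed: each direction needs its own promise.
from collections import defaultdict

channels = defaultdict(set)  # promiser -> set of agents it promises a channel to

def promise_channel(giver, receiver):
    channels[giver].add(receiver)

def adjacent(a, b):
    """True only if a has promised a channel toward b (one direction only)."""
    return b in channels[a]

promise_channel("A", "B")
print(adjacent("A", "B"))  # True
print(adjacent("B", "A"))  # False; the return path needs a separate promise
```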

Mark: Now, make a leap of faith, if you will, to one of your earlier guests, Lee Smolin, who talked a bit about quantum gravity and some of his work on it. He leaned on graph theory, some of his work with Carlo Rovelli and others on loop quantum gravity. This is a model in which spacetime, at its most fundamental level, may be built on points or agents connected together into a network, and in his language it's spin networks, like Roger Penrose's spin networks. But forgetting about that stuff, just think about it as a promise network, a bunch of agents connected with information channels in the sense of Shannon or the sense of promise theory, and then try to reconstruct spacetime. You find a lot of very interesting things pop naturally out at you, like some of the structures in quantum mechanics around the emergent structures of predictability and how that might emerge from a smaller-level picture.

Mark: The reason for singularities, a whole bunch of topics that we don't have time to go into, but which I found extremely interesting and tried to write a little bit about in my book Smart Spacetime. Then, right at the opposite end of that, I read a fascinating book by Peter Wohlleben, The Hidden Life of Trees, about the complex networks in forests between trees as agents and the communication channels they have between them, using fungi and various other microorganisms, and the interactions they have. I find it absolutely fascinating to go from the very tiniest things in nature to the very largest things in nature and see essentially a kind of spacetime structure in both, where the notion of space as memory and time as change allows you to construct a simple model of cognitive agents that are able to think and remember stuff about each other simply by forming cooperative networks. That's essentially the short version of my very long book, Smart Spacetime.

Jim: You know what? I just realized that your perspective is very, very close to one of the five research agendas at the Santa Fe Institute, called Complex Time. I'd recommend you take a look at that on their website, and if you feel the same kind of congruence that I do, I'd be happy to introduce you to the director of that project. I think you guys would have something very interesting to talk about.

Mark: I would really love to.

Jim: I'll send you a link to it. By the way, listeners, we're going to put links up to all the many books that we've talked about here, including of course Mark's; all the various other ones will be up on the page for Mark's episode as well. Just go to jimruttshow.com and you'll be able to find links to all these very interesting books. I'm going to do a little bit more chopping on my topic list.

Jim: One of the areas I personally know a lot about and am utterly fascinated with is money, but we do not have time to talk about money. But if you're willing, I'd like to get you back on again and talk about money.

Mark: Sure.

Jim: But let's switch to a couple of other smaller topics. One that listeners of the Jim Rutt Show will know is a regular recurring feature, which I work in with my guests whenever I can find any excuse: the Fermi paradox, the Drake equation, and where the hell are all the space aliens. Mark made the big mistake, in an email back-and-forth, of indicating that that was a topic he was willing to talk about. Mark, what do you think about space aliens?

Mark: I love this topic. I first thought about this because Isaac Asimov got me interested with his book Extraterrestrial Civilizations. This is a book I got as a teenager, and I've had the topic on my mind for years and years. But I thought about it again recently because I wrote about artificial intelligence in a blog post on my homepage; I forget the name of the post, but it was basically about scale.

Mark: Again, my favorite topic of scale, timescales and spatial scales. The point I wanted to make was this: I'd been reading all these books about how the AIs are going to take over, they're going to read all our stuff and become evil and steal our money and run away with civilization and try to kill us. I thought, "Hang on, hang on a second. How do we even know that a life form, whether it's a real life form in outer space or an artificial life form as an AI, would have any sensory operators that respond on the same timescale that we do? How would we know that its processes are in any way comparable to ours, in order for it to notice that we even exist?"

Mark: The obvious example of that is the case I just gave of The Hidden Life of Trees, Peter Wohlleben's book, in which we look at a forest, and this is a set of life forms that are thinking, reasoning, changing, adapting on a timescale that is so much longer than ours that we literally can't see any of it happening. It looks totally frozen in time to us, and yet there is this living, changing, dynamical network happening over years, decades, centuries, possibly even millennia, involving organisms at all kinds of scales. You could make the same argument about AI.

Mark: How do you know that the data an AI got interested in, if it evolved, and the sensors it was connected to, explicitly or implicitly, that might lead to an emergent intelligence, would be related to our scales, the scales on which we are interested in things? How would its understanding of the world essentially overlap with ours? I think that may be the key to why we are unable to perceive or see not only life but dynamical phenomena in the universe: because we are simply ill-equipped on a sensory level, on a reasoning level, to think about them on the same timescale, or spatial scale for that matter.

Jim: Of course, that leads to two possibilities, both of which could be true. There could be things that are way faster than we can perceive: suppose there is something like life somehow in the corona of the sun, with this tremendous energy flux going through it; it may be operating at quantum mechanical time speeds, right? On the other extreme, take your example of trees and fungus networks and scale it up to some life that lives in interstellar space and reacts on a million-year time frame. We would miss both of those. Does that make sense?

Mark: Yeah. I mean we know the answer to the first one because we saw that in Star Trek. If they’re going really fast, they sound like bees. You may remember that episode.

Jim: I don't recall that. Well, that must be from one of them goddamn later ones. I'm only a first-generation Star Trek man. I do not watch the later ones. I would-

Mark: No, no, no, this was Captain Kirk. He fell in love with that bee. At the end of this, I'll try and dig out that name.

Jim: You have to find it.

Mark: Yeah. The second one is definitely interesting. In my book I actually also make the point that if you observe phenomena over long, long times, you may extract a kind of spacetime picture that looks a little bit like Newtonian mechanics, with things like velocities, and accelerations, and masses explained in terms of the nearest-neighbor interactions between those things. Actually, scaling also has an explanation for how Newtonian ideas can emerge from the quantum level, which again is one of those problems that we have no good answer for currently in physics, but it could well be that it's a figment of scale.

Jim: Very interesting. Well, I think with that very interesting insight, I'm going to call it an end here, even though I could go on for hours. I'm going to take you up on your agreement to do another session later, mostly about money, though I know we're going to get into other stuff too. I'd really like to thank you, Mark, for one of the most interesting and most wide-ranging episodes we've had so far.

Mark: Thanks, Jim. Really my pleasure to do it. It's been a huge privilege for me, and I love your show. The guests have been amazing. I've listened to as many of the podcasts as I possibly can, and I will continue to do so.

Jim: Well, thank you very much. Production services and audio editing by Jared Janes Consulting, music by Tom Mueller at modernspacemusic.com.