Transcript of Episode 11 – Dave Snowden and Systems Thinking

The following is a rough transcript which has not been revised by The Jim Rutt Show or by Dave Snowden. Please check with us before using any quotations from this transcript. Thank you.

Jim Rutt: Howdy! This is Jim Rutt, and this is The Jim Rutt Show.

Jim Rutt: Today’s guest is Dave Snowden.

Dave Snowden: Hi, Jim. Good to be with you.

Jim Rutt: Dave is founder and chief scientific officer of Cognitive Edge. His work is international in nature and covers government and industry, looking at complex issues related to strategy, organizational design and decision-making. He has pioneered a science-based approach, drawing on anthropology, neuroscience, and complex adaptive systems theory. He is a popular and passionate keynote speaker on a range of subjects.

Jim Rutt: This is my favorite part. According to his documentation, he is well known for his pragmatic cynicism and iconoclastic style. I like that. My assistants once added to the end of my mini bio, “And he has a delightful potty mouth.” I like people that stand out a little bit from the drab corporate norm.

Jim Rutt: Dave is also affiliated with the University of Pretoria, Hong Kong Polytechnic University, and the University of Warwick. His various other awards and publications are too numerous to enumerate. You should look him up online.

Jim Rutt: He’s probably best known as the inventor of the Cynefin framework, spelled C-Y-N-E-F-I-N. Until I did my research for this podcast, I always pronounced it something like Cynefin. Dave, how do we actually say that?

Dave Snowden: It’s Cynefin. It’s Welsh. It’s phonetic, but we have a different alphabet.

Jim Rutt: Cynefin. I like it. Okay. Previously, you were at IBM doing knowledge management, which I remember back in the day in the 1990s as a senior technology executive, it was pretty much an oversized shit show sold to dumb companies by slickster consultants. What did you learn from that that led you to Cynefin?

Dave Snowden: It goes back a way then. I think what you actually saw, and obviously your experience was with people who focused on technology, was that there were two main schools of thought around then. One was that it was all about codification and dumping stuff into databases. The other group, which included myself, Prusak, and the like, basically argued that decision support was a lot more complex than that.

Dave Snowden: You had to take into account cognitive neuroscience. You had to take into account the way people made decisions. That was kind of where we were. That fairly soon morphed into work on narrative and complexity theory, and then that got me heavily involved in DARPA programs before and after 9/11, looking at weak signal detection and supporting decision-making in complex policy environments.

Dave Snowden: I think you may have had some bad experiences there, and you may have been spending too much time with the Big Six consultancies rather than people who understood the subject.

Jim Rutt: Yeah, fortunately, I never got called into doing such projects, but I know lots of my contemporaries who did. And of course, the tech publications of the era were full of the miracles of knowledge management. As you say, it was all very naïve attempts to over-codify what is nothing like an easily codified process.

Jim Rutt: Which actually brings me to another one of my comments. I saw in your original Harvard Business Review article you talk about what I call naïve Newtonianism: people who never get beyond the 13-year-old nerd’s belief that with all the data about the position and velocity of everything in the universe, we could predict the future. I certainly saw that again and again in corporate America, a lack of knowledge of even basic concepts such as deterministic chaos. What’s it like out there today in the world of business and government?

Dave Snowden: You wouldn’t see much difference. People are still taking a linear approach to causality, and to be honest lots of the systems thinking guys do as well. The assumption is that if you get the input right, you can define the output, or that you should be able to forecast or backcast to a future state.

Dave Snowden: The work we’ve done is primarily based on complexity theory, which is different from deterministic chaos. Complexity theory in human systems is sometimes known as the science of uncertainty: you can understand the present and you can map the coherent pathways from the present, but you can’t define an outcome.

Jim Rutt: Indeed. As a complexity science guy myself, I’m very clear about the distinction. I mentioned deterministic chaos because I call that the baby first step. Once somebody understands that deterministic chaos is real, they ought to throw out their naïve Newtonianism. It’s amazing to me today that people could still hang on to it in positions of authority, yet they clearly do.

Dave Snowden: I think there are areas where it still applies. The point about the Cynefin framework is to say that human beings have learned, for example in traffic management or operating theaters, to create highly predictable Newtonian systems. There’s nothing wrong with them provided we don’t think they’re universal.

Dave Snowden: I think deterministic chaos is not necessarily the best next step, because it tempts people to think that they can use agent-based models or simulation or AI when they’re really dealing with highly interconnected systems. I prefer to get people to a fundamental distinction between complicated and complex, between systems where you can define the future states and systems where you can only understand the present, and then worry about the other stuff later.

Jim Rutt: Yeah, we’ll get to that in just a minute. That’s actually going to be the heart of what I’m going to be asking you about. But before we go there, complexity science clearly informs your work. How did you come to get exposed to the complexity literature? And I’ve seen references in your essays to Stuart Kauffman, Prigogine, who else was an influence on you from the complexity science school?

Dave Snowden: Brian Arthur. To some extent, people like Ralph Stacey were among the early ones to popularize it. Alicia Juarrero in particular and her sort of work, and then a whole range of material in essays and other formats. I think the unique thing I did was to combine it with cognitive neuroscience and basically say that human beings aren’t ants.

Dave Snowden: I think quite early on... I remember a big debate with Walter Freeman, Stu Kauffman, and Brian Arthur over dinner in San Jose many years ago, and what all of us were saying is that the study of complexity in human systems is different from the study of complexity in termite nests. That requires us to take a more transdisciplinary approach. So at that point you’re talking about people like Freeman for decision-making. You’re talking about [Chalmers 00:06:50]. There’s a whole body of academic and other influences.

Jim Rutt: Chalmers. You mean David Chalmers the philosopher?

Dave Snowden: Slight confusion. No, I’m talking about David Chandler.

Jim Rutt: Oh, Chandler. Sorry. Not Chalmers. Okay.

Dave Snowden: I’m a philosopher, so the hard problem is one that complexity gives us different insights on.

Jim Rutt: It’s one that’s hard, right?

Dave Snowden: Well, the nice thing about complexity is it allows mutually ontologically diverse systems to coexist, so there are a few issues like free will you can deal with.

Dave Snowden: Chandler’s latest book is Ontopolitics in the Anthropocene. He’s talking about how the sort of world we now live in is radically different in terms of the way we make decisions, and he’s coming from a political science background.

Dave Snowden: I’ve drawn on the biological end of anthropology, the sociological end of anthropology, a whole bunch of cognitive stuff, and complexity theory.

Jim Rutt: Very good. With that as a little bit of groundwork in place, maybe you could dig in now and really tell our audience about the Cynefin framework. Keep in mind our audience is smart. Hey, you morons get out of here. But most will not have a background in complexity science, so please take it slowly.

Dave Snowden: Okay. Cynefin is based on a fundamental divide into three types of system: ordered systems, complex systems, and chaotic systems. It comes from the principle that there’s a phase shift between those types. It’s not a gradation; it’s a phase shift. It’s important here that chaos tends to be defined differently in social science than in physics, so I’ll run through my definitions and make sure we’re using the same language.

Dave Snowden: An ordered system is one which has a very high level of constraint, to the point where everything is predictable. In the UK we drive on the left. In the U.S. you drive on the right. Human beings have an ability to use constraints to produce predictability. The drive-on-the-left, drive-on-the-right rule is an example of what Cynefin calls the obvious domain of order. The relationship between cause and effect is self-evident. Everybody understands it. Everybody buys into it. That’s the domain of best practice. There’s a single right way of doing things. We sense, categorize, respond. We have rigid constraints.

Dave Snowden: The other type of order is complicated. That’s where for experts it may be obvious, but for the decision-maker it isn’t. It’s not self-evident, so you have to carry out some sort of investigation, bring in expertise. You effectively sense, analyze, respond. But there is a right answer and it can be discovered. You may discover the answer within a range of possibilities; it doesn’t have to be precise, which is why we talk about that as the domain of good practice, not best practice. For example, a medical practitioner should be allowed a degree of flexibility in the decisions they make about patients. They shouldn’t be forced down a single route. That would be an example of complicated.

Dave Snowden: Then there’s what happens if you over-constrain an ordered system. We had a lot of this at IBM: the expense system was so ridiculous that people found workarounds, for example claiming a taxi fare and using it to cover food for staff working late at night. That was a classic case. If you over-constrain a system which isn’t naturally constrainable, sooner or later it breaks or fragments into chaos. That’s called a catastrophic fold; at the bottom of Cynefin it’s shown as a fold. It’s a reference to René Thom.

Dave Snowden: So that’s disastrous. If you fall into chaos accidentally, then the model is act, sense, respond. You have to recover very quickly. You see that, for example, when whole industries collapse almost overnight, because of what Clayton Christensen called competence-induced failure. They’re so good at the old paradigm they don’t see the change coming, so when the change happens it’s catastrophic.

Dave Snowden: A complex system on the other hand is one which has what are called enabling constraints. Everything is somehow or other connected with everything else, but the connections aren’t fully known. One of the concepts I created is called a dark constraint, a reference to dark energy in cosmology. We can see the impact of something but we can’t actually see where the impact is coming from.

Dave Snowden: In a complex adaptive system the only way to understand it is to probe it, to experiment in it. But critically you have to experiment in parallel, which by the way gives us a significant conflict resolution device. One of my definitions of complexity is if the evidence supports conflicting hypotheses of action, and you can’t resolve those hypotheses within the timeframe for decision on an evidence base, then the situation is complex.

Dave Snowden: So in Cynefin you don’t try and resolve it, you construct a safe-to-fail micro experiment around each coherent hypothesis. You run them in parallel, that changes the dynamics of the space, and then the solution starts to emerge.
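[Editor’s note: the parallel safe-to-fail loop described above can be sketched in a few lines of Python. This is an illustrative toy, not Cognitive Edge’s method; the hypothesis names, signal model, and thresholds are all invented for the example.]

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

def run_probe(hypothesis, env_bias):
    """One safe-to-fail micro-experiment: a cheap probe returning a
    noisy observed signal, not a definitive answer."""
    return env_bias[hypothesis] + random.gauss(0, 0.1)

def portfolio_step(hypotheses, env_bias):
    """Run all coherent hypotheses in parallel, then amplify what
    worked and dampen what failed, rather than picking a single
    winner up front on an evidence base you don't have."""
    signals = {h: run_probe(h, env_bias) for h in hypotheses}
    amplify = [h for h, s in signals.items() if s > 0.05]
    dampen = [h for h, s in signals.items() if s < -0.05]
    return signals, amplify, dampen

# Invented example: three conflicting hypotheses about the same evidence.
env = {"lower-price": 0.5, "add-feature": -0.5, "new-channel": 0.0}
signals, amplify, dampen = portfolio_step(list(env), env)
```

Running the probes in parallel rather than sequentially is the point: each probe perturbs the space, and the portfolio surfaces which directions to amplify and which to shut down cheaply.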

Dave Snowden: The final domain is the domain of disorder, which is a central space. That’s kind of like the state of not knowing which of the other systems you’re in. You might enter that accidentally or you might enter it deliberately. But in fact it’s a type of inauthenticity. If your natural tendency is to bureaucracy, you’re likely to impose order when it’s inappropriate. If your natural tendency is towards complexity and emergence, you may not impose order when it’s appropriate and so on.

Dave Snowden: The essence of Cynefin is basically to say context is key. It also comes from one of my drivers: I got fed up with management fads. Business process re-engineering was universal; the learning organization was universal. None of these were universal. They all work within a specific context. Part of the function of Cynefin is to decide what context you’re in before you decide what method to use.

Dave Snowden: As such, it’s been used to understand the role of religion in the Bush White House. It’s been used in epidemiology. It’s been used for decision-making in not-for-profits. It’s been used for decision-making in companies and the police. It’s taught to anybody who wants to become a colonel within most of the U.S. Armed Forces, because that concept of the contextual appropriateness of decision type is of increasing importance in an uncertain world.

Dave Snowden: That’s a very high level summary.

Jim Rutt: Very useful. Let’s maybe drill in a little bit on the distinction between just the complicated and the complex. I believe I read in one of your essays that you describe the complicated as something that could in principle be taken apart and put back together again, while for the complex that could never be true.

Dave Snowden: Yeah, a complicated system is the sum of its parts. You can solve problems by breaking things down and solving them separately. In a complex system, the properties of the whole are the result of interaction between the parts and the linkages and the constraints. In fact, in a complex system how things connect is more important than what they are. So the properties of that emergent pattern can never be decomposed to the original parts.
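[Editor’s note: that decomposition test can be made concrete with a toy example. This is purely illustrative, not Snowden’s formalism: a "complicated" whole equals the sum of its parts, while a whole with interaction terms loses those couplings when you analyze the parts one at a time.]

```python
def complicated_whole(parts):
    # Complicated: the whole is literally the sum of its parts, so
    # component-by-component analysis reassembles it perfectly.
    return sum(parts)

def coupled_whole(parts):
    # Toy 'complex' stand-in: pairwise interactions contribute too,
    # so how things connect matters as much as what they are.
    pairwise = sum(parts[i] * parts[j]
                   for i in range(len(parts))
                   for j in range(i + 1, len(parts)))
    return sum(parts) + pairwise

parts = [1.0, 2.0, 3.0]
# Decomposing and reassembling works in the complicated case...
reassembled = sum(complicated_whole([p]) for p in parts)
# ...but the same move silently drops the interaction terms here.
decomposed = sum(coupled_whole([p]) for p in parts)
whole = coupled_whole(parts)
```

A single part has no pairs, so analyzing the parts in isolation recovers 6.0 either way, while the coupled whole is 17.0: the missing 11.0 lives entirely in the linkages, which is the point of the emergent-pattern claim.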

Jim Rutt: Got you. As I was thinking about this, I thought of another distinction I’d run by you and see what you think: at least in a purely complicated system, one generally assumes that the components are not antagonistically adaptive. I.e., one never has to worry that your carburetor, and I’m showing my age with that reference, is going to perversely develop better and better ways to keep the engine from running. You can assume that the carburetor is a relatively static device. It’s not a strategic or adaptive element. Is that a useful distinction about the complicated world?

Dave Snowden: Yeah, like a car engine. Complicated systems tend to be engineered. You don’t tend to see them so much in nature, unless you get highly static, constrained relationships. In a complex adaptive system, something which was beneficial one day can be something else the next. A classic example is symbiosis. All symbiotes actually start off as parasites, and then they coevolve and coadapt with their host. They end up symbiotic, and you’re totally dependent on them.

Dave Snowden: The stomach bacteria you rely on to handle your digestion actually killed off a lot of your ancestors before they coevolved.

Jim Rutt: Of course it goes the other way too, right? The bacteria can’t live without us now either. They’ve given up a fairly big part of their genetic code, so they can’t do the processing of raw material anymore; they have to have our chemistry do it for them. So it’s really a mutualism.

Dave Snowden: It’s quite interesting if you look at Ebola at the moment, because one of the reasons it’s so dangerous is that it’s not killing everybody anymore. It’s starting to adapt.

Jim Rutt: When it kills everybody it’s not going to spread, right?

Dave Snowden: Mm-hmm (affirmative). You’ve got it.

Jim Rutt: Another thought about complicated and complex, which is talked about in the circles of some of the people I work with, is that in general a complicated system will be embedded in a complex system. Take for instance a 1950s-style command-and-control assembly-line factory. That’s a prime example of a complicated artifact, and yet for it to be meaningful it has to be embedded in a marketplace of suppliers, customers, competitors, and substitutes, all interlocking in a coevolutionary network which is certainly complex.

Jim Rutt: One of the things I didn’t see in your writings, and maybe I missed it... I’ll confess to not having read them all, but I’ve read a fair amount... was the concept of complicatedness embedded in complexity.

Dave Snowden: I think it’s there if you look for it. I would say it’s self-evident, so I probably wouldn’t point it out. Any highly structured engineered system works within human dynamics and human interaction. I think it’s actually possible for it to be the other way round too: complex systems can be embedded in complicated systems. We actually see that in politics. One of the dangers of the growth of populism at the moment, for example, is that it starts to create a level of constraint which perverts the system, so you effectively have micro complexity within an overarching political framework which is fairly easily managed. I think it’s a little bit of both, to be honest. The key thing is to understand which type of system you’re in.

Jim Rutt: I’d love for you to drill down a little bit more into that example on populism and how it represents a pocket of complexity.

Dave Snowden: Okay, this ties in with one of the other main frameworks I invented, which is called apex predator theory. It basically says that when anything gets commoditized, you lose requisite variety in the system. The system becomes perverse, and that’s where something new can come into play. I’ll give you an industry example and a political one. IBM dominates the early history of computing because it repurposes punch card technology; that’s called exaptation in biology. That gives it first-mover advantage. It then doesn’t see, because it dominates the industry, that hardware is becoming a commodity. When it finally realizes, it’s too late, so it suffers almost catastrophic failure, because all hardware is similarly priced and similar in quality. It’s become a commodity rather than value-add. Then Microsoft takes over, and that’s repeated until software becomes a commodity.

Dave Snowden: What we’ve seen politically is that neoliberalism effectively homogenized the left and the right. With that homogenization, people didn’t feel they had a choice anymore, so the energy cost of extremism on the left and the right goes radically down. We saw the same in the Weimar Republic. If you go back to the ’20s and ’30s, it’s scarily similar.

Dave Snowden: The trouble is what then happens: whichever of the new predators effectively stabilizes the new ecosystem is impossible to disrupt for a significant period of time, and that’s the worry I’ve got with populism. It’s not that we ever lived in a fact-free society; it’s just that we used to delegate facts to experts, and now we’re delegating them to populists for a change.

Dave Snowden: This is the bridge between destabilization and stabilization that you see in ecosystems.

Jim Rutt: I would point out, though, that this breakdown of the monoculture, neoliberalism, also provides an opportunity for new inventions to come forth. To my mind, it’s damned time that happened. Neoliberalism, I would argue, is forcing the human race on a march to the cliff of ecocide, right? We need an alternative. Yes, we’ll have some bad things erupt: neo-Nazis, anarchists, Antifa, et cetera. But doesn’t that same space provide room for new theories of social operating systems?

Dave Snowden: I’m going to say something negative and something positive here, and remember I was heavily into liberation theology in the ’70s, so I’m definitely opposed to neoliberalism. I probably know more about Just War theory from that period than most.

Dave Snowden: I think the problem is that it’s far easier for the populist right to control the system, particularly when you have social media, which is what we call an unbuffered feedback loop. A friend of mine in the Mounties in Canada expressed this really well. He said, “It used to be that every village had an idiot, and it didn’t matter because we knew who the idiots were.” But now the idiots have banded together on the internet to legitimize idiocy and elect the President of the United States. That last bit is my addition to his original. The danger is that that sort of perverse feedback loop, and you can see this in the manipulation of social media, means that people have very little freedom.

Dave Snowden: The right will nearly always win on commoditization, if you look at history. But I’m not pessimistic on this. Some of the work we’re doing at the moment, for example, is to make children ethnographers of their own communities at a micro level. We introduce human agency, which is both horizontally mediated and bottom-up mediated, into social media, and we increase the chances of empathy and human interaction.

Dave Snowden: So I think there are a lot of things like that we can do, but it won’t be done by people who gather together on hillsides in California and talk about how life will be better if only people loved each other. It’s actually going to be done by low-grade, low-level human interaction and systems which actually support that.

Jim Rutt: I think that’s absolutely right. I think if there are new theories on living, people actually have to start applying them, not just talk about them. Certainly not just talk about them on social media.

Dave Snowden: There’s a related point on global warming. We’re about to launch a big website sometime next week looking at making global warming a micro issue, because at the moment people are basically totally disabled by global warming. It’s just too big, too problematic.

Dave Snowden: We did a big project on plastics at festivals this year, for example. That was one of a series of projects we did to make people feel they could take control of their environment and make a difference. Until people feel they’ve got control at a local level, international initiatives won’t work.

Dave Snowden: I think what we’re trying to do is to use social media, but critically with human mediation, not algorithmic mediation, to actually change the nature of human interaction at a micro-community level and start to get rid of the so-and-sos with the grandiose schemes and grandiose ideas. I think that’s the solution to the growth of populism.

Jim Rutt: Sometimes the grandiose ideas work, right? One of the things I’m following is Extinction Rebellion over in the UK. In fact, I have predicted, or suggested, that the Extinction Rebellion folks exapt the tactic from Hong Kong of closing down airports, right? Do that on a worldwide basis and you’d suddenly get people’s attention in a major way.

Dave Snowden: I think… but you see… We’ve actually got a project going with those guys. They’re dealing with things at a local level still. They’re basically arising. If they get too serious, the state will start to move on them. Hong Kong is really worrying at the moment. The day the Chinese move the troop carriers over the border it’s all over, and one of these days they may well do that.

Jim Rutt: Might be over for China though. As we both know, in a complex system it is very difficult to project very far out.

Dave Snowden: Possibly, but at the moment China’s strategy is to control natural resources and to give in when they need to give in. I need to be careful because I’m working on some projects on this and they’re under NDA.

Dave Snowden: I think there is... If you actually look at the history of humanity, you have periods of tyrannical control before you get liberation. The problem is the planet can’t afford a period of tyrannical control which ignores global warming; we won’t come out the other end this time. I think we’ve got to think about a whole range of things. Extinction Rebellion is one, but we’ve got to create a lot more things as well.

Jim Rutt: Okay. Let me jump back again, because this is perhaps where my interest in complex systems originated, and I’d really like to engage you on it: the idea of complicated entities embedded in complex seas. I first discovered complex systems thinking, looking at my Amazon log, in what appears to be 1996, when I was at Thomson Corporation, now Thomson Reuters, and relatively quickly developed from complexity thinking the idea of using coevolutionary fitness landscapes, and coevolutionary is the key, for M&A.

Jim Rutt: We did 80 acquisitions a year, mostly tactical ones, but regularly big ones up to $3.5 billion. I didn’t have the complicated-complex language at the time, but I found it extraordinarily useful to think of an entity like Thomson, now Thomson Reuters, operating in a sea of complexity over which it had by no means great control, but which it did have the ability to understand to some degree, and to outcompete its major competitors by understanding that it was living in a coevolutionary fitness landscape rather than in some static game, which the competitors seemed to think it was.

Dave Snowden: We use fitness landscapes extensively, but I think the thing we developed and pioneered is to get human metadata on the raw data, rather than just algorithmic metadata. For example, in one of the projects we did in Pakistan, we pulled in 50,000 self-interpreted micro-narratives within a week. From that we were able to draw fitness landscapes which showed underlying cultural attitudes and beliefs. We still do a lot of that work.

Dave Snowden: If we’re looking at a merger these days, for example, we’re often mapping the culture of the two organizations to see what’s in common and where there’s overlap. We’re doing a lot of that work in terms of distributed decision support. You present a complex infographic, say, to 3,000 employees. They all self-interpret it within the same hour, and we draw fitness landscapes which show the dominant views but, critically, also the outlier views.

Dave Snowden: From an executive point of view, you go and hunt down the outliers, because those are the people thinking differently about the problem. So I think fitness landscapes have come on a way; we always prefer to call them narrative landscapes. We’re doing a huge amount of work to generate those in larger and larger volumes with more and more data. The work we’re doing on children, which is [inaudible 00:26:20], and we’ve done this now in Wales, Colombia, Malmö, Singapore, and three or four pilot projects in the Middle East, is that I want every 16-year-old in every school in the world to become a journalist for their own community on a weekly basis.

Dave Snowden: We create a human interpretation of people’s day-to-day lives which allows horizontal integration of ideas, and which informs policy. That to me is key, because we’re introducing the human metadata element into the system rather than just relying on algorithms, and we’re giving people agency in their own conditions.

Jim Rutt: Very interesting. That’s certainly a novel approach; at least I’m not familiar with other people doing it at anything like the scale you are. In fact, later I’m going to drill down into your SenseMaker software and methods, where we’ll get into that more deeply.

Jim Rutt: But before we go there, what about other tools for exploring the complex realm? One that’s popular in complexity science, and you mentioned it earlier, it sounded like with a little disdain, is agent-based modeling and other forms of simulation with stochasticity.

Dave Snowden: I think they’re very useful provided you’ve got single agency and you’ve got rules. The trouble is most human systems are dealing with multiple identities and pattern-based, not rule-based, decisions. So I think simulation is powerful, but you tend to get the confusion of simulation with prediction, like their predecessors confused correlation with causation. Some of the AI people are now actually arguing that correlation is causation, which is deeply worrying.

Dave Snowden: I think they’re useful tools, but Murray Gell-Mann famously said... And it was interesting. I turned up at a European complexity conference, and they asked would I give a keynote on why you couldn’t use agent-based modeling in human systems. I agreed to do it, but I didn’t know that everybody in the audience produced agent-based models, so it was a bit like Daniel going into the lion’s den.

Dave Snowden: But I got backed up by the other keynote, who was Murray Gell-Mann, and he famously said that the only valid model of a human system is the system itself. So I think you can use agent-based modeling to understand aspects of the system and to give you insight and clues, but it doesn’t provide the sort of predictive element that people claim for it. And it’s quite interesting: a lot of the simulations are very good historically, but they’ve been very problematic in terms of foresight.

Jim Rutt: I would strongly agree with you there. Something I tell people about agent-based modeling in social systems is that they should never take any given trajectory of the system as an exemplar for prediction. However, one of the things I do find very interesting is the statistics on the ensemble of trajectories. They tell you whether you’re in a Gaussian space or a fat-tailed space, what Nassim Taleb would call Mediocristan or Extremistan.

Jim Rutt: There, agent-based models can be very, very powerful and it’s clear that we have to manage in Gaussian distributed spaces very differently than we do in fat-tail spaces.

Dave Snowden: Yeah, and I use the Gaussian-Pareto distinction in terms of what we call the switch from anticipation to anticipatory triggers. If you’re in a Pareto world, the best you can do is trigger human beings to a heightened state of alert when they need to look at something. That’s what we did on counterterrorism for DARPA.

Dave Snowden: You can’t actually predict or model a terrorist outrage, but you can use AI to trigger human beings to a heightened state of alert when a terrorist outrage is more likely. We’re currently moving that technology across intact to create a trigger for when the plausibility of abuse in an elderly care home is high. That’s the switch from anticipation to anticipatory triggers.
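[Editor’s note: mechanically, an anticipatory trigger is closer to a tripwire than a forecast. A minimal sketch, with invented signal names, scores, and threshold:]

```python
def alert_level(signals, threshold=0.7):
    """Anticipatory trigger: don't try to predict the event itself,
    just move humans to a heightened state of alert when the
    plausibility score crosses a threshold."""
    score = max(signals.values(), default=0.0)
    return "heightened-alert" if score >= threshold else "routine-monitoring"

# Invented weak-signal scores for an elderly-care-home setting.
calm = {"staff-turnover": 0.2, "complaint-rate": 0.3}
worrying = {"staff-turnover": 0.2, "complaint-rate": 0.85}
```

The output is deliberately a state for a human to act on, not a prediction; the judgement stays with the person put on alert.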

Dave Snowden: I’ve spent a lot of time trying to explain the Pareto-Gaussian distinction to executives, and we’re not helped by Taleb on this, by the way. I think Taleb has blocked more people with Nobel Prizes than anybody else. He calls me a fucking idiot.

Jim Rutt: Okay.

Dave Snowden: I’m honored to be blocked by Taleb, because he’s got this very narrow perspective in which he’s almost like a demonic Old Testament preacher.

Jim Rutt: He would take that as a compliment, by the way, right?

Dave Snowden: I know he would. I don’t, because I actually think the world needs people who are thinking differently to collaborate and talk with each other and recognize differences, rather than trying to claim one method is uniquely right. At the moment, as far as I can see, Taleb is part of the problem, not part of the solution.

Dave Snowden: And you’ve got people like [inaudible 00:30:48] at the University of Zurich who are actually much better in that field. Taleb’s got into the trick of producing a popular book every 18 months, and there are too many people doing that in management science.

Jim Rutt: Even worse, actually. But I do like some of his thinking. It’s very important, particularly from his earliest book, Fooled by Randomness.

Dave Snowden: The earliest book is brilliant.

Jim Rutt: I tell everyone to read that. Don’t read the rest. Worse than putting out a popular book every 18 months, he’s now a 50-tweets-a-day Twitter personality. The attractor for that particular role is not good.

Dave Snowden: He’s also got several fake personalities, which jump on you if you dare criticize him. It’s quite amusing, really.

Jim Rutt: And he also has some non-fake but pack-dog people that follow him along. I like the Fattest Fat Tony. That’s obviously him, for instance. That’s quite interesting.

Jim Rutt: Another item I’d like to hop into on something that I saw in some of your writings is the chaotic regime. I think you gave 9/11 as an example.

Dave Snowden: Briefly.

Jim Rutt: Okay, good. I think we’re in agreement there, because I would say it was very briefly a chaotic state. We learned very, very quickly, for instance, that 10,000 terrorists had not slipped past our screening, that maybe there were 20 or maybe there were 100. So this was not an existential risk to the American way of life, despite the fact that that narrative was sold politically, which took us down a bunch of bad roads.

Dave Snowden: I think there were some fundamental mistakes made. If you contrast 9/11, and I’ve spent a lot of my time studying this, including time with Clinton’s original Al-Qaeda team on the DARPA programs, part of the problem was that there was no need to ground all the aircraft. That caused an economic disaster. If you look at the contrast with 7/7, because Britain has had The Blitz and multiple terrorist bombings, everybody went on the Underground the next day to make a point.

Dave Snowden: That’s one of the ways… I’ve done a lot of work on this and it’s almost like another podcast to talk about how you reverse Ashby’s Law by using your own population to give you asymmetric advantage against terrorism. But coming back to the chaos point, chaos was temporarily there and I think chaos is always temporary. That’s where people get it wrong. It’s a state of the absence of constraints.

Dave Snowden: Ironically, in physics that’s a low energy gradient. In human systems it’s a high energy gradient, because human systems are open and so constraints happen very quickly and very naturally. Some of the stuff we do on distributed decision support, for example, is to deliberately remove constraints, but that takes a lot of energy to do, so that we can actually get the statistical framing that you can get with fitness landscapes, for example.

Dave Snowden: So chaos is temporary. It’s not permanent. You don’t want to fall into it. You want to enter it deliberately. Most of the time we’re dealing with gradations of complexity.

Jim Rutt: Do you have some guidance… Let’s go back to the 9/11 example, though there will be other examples. I suspect, for instance, that the next big financial reverse will be bigger than 2008. It’s going to look chaotic for a while. Do you have any guidance for executives, business decision-makers, et cetera, for how to deal with a truly, deeply chaotic system?

Dave Snowden: Create constraints. That’s what we’re building, distributed decision support systems. We just ran one on Korea, which I can’t talk about too much. It’s where you present an ambiguous assessment of the situation, literally, to 5000 or 6000 people who interpret it and you look at the fitness landscapes within seconds. You look at dominant patterns and minority patterns, so you can find new ways of creating constraint for new forms of exit.

Dave Snowden: But that requires companies to build those systems before the crisis comes. I’d say you build networks for ordinary purpose you can activate for extraordinary need.

Jim Rutt: Very interesting. Could you maybe retrofit that to 9/11? What could society have had in place to deal with something like 9/11?

Dave Snowden: First of all, they could’ve not played politics. There was actually a presidential order, which Gore would possibly have signed, to have F-14s on permanent patrol above Washington and New York. We kind of knew the strategy of 9/11. We just didn’t know when, but it was considered Democratic paranoia about Al-Qaeda, if you go back to the time.

Dave Snowden: I think one of the issues is we need more continuity on civil issues and less political influence on civil issues in that respect. But distributed networks of people from multiple backgrounds are key. There was a big row on this when the 9/11 report came out. I went on the Hill with Dennis and other people around it. I remember we said the last thing you should do is combine the agencies, because it will reduce cognitive diversity in the system. What you need is the ability to summarize agency perspectives and non-agency perspectives in real time, not through consultation. That’s where we’re building systems.

Dave Snowden: But to do that you have to have people habituated to the use of the system, so when the crisis comes you can get the fitness landscapes out more or less instantly.

Jim Rutt: That’s good. You probably know the no free lunch theorem of David Wolpert from the Santa Fe Institute, which basically says there’s no possibility of one best search algorithm, that you always have to understand the domain you’re in to craft a search algorithm. This strikes me as a very good lesson on how to think about a chaotic regime.

Dave Snowden: It does. We use multiple human agents as well as algorithms and that’s key. Humans understand abstractions. Computers find that very difficult. Art comes before language in human evolution, and we think the evolutionary function of art was to allow us to see novel or unexpected connections, but also to assess the relative plausibilities. So human beings are actually good in chaos. We evolved for it.

Dave Snowden: But you have to deploy… And we evolved for collective decision-making, not for individual decision-making.

Jim Rutt: Very good. That’s very true, and still a human superpower with a very large gap between what AIs can do and what humans can do.

Dave Snowden: I actually think it’s a permanent gap, unless… I did a big lecture on this recently in Singapore. I said the problem at the moment is that they may exceed us in intelligence, because we’re currently working on meeting them halfway.

Dave Snowden: We’re reducing human intelligence to rigid processes and structured approaches, and AI will always be better at that than humans.

Jim Rutt: I do a fair amount of work in the AI space, and I do think that it’s quite likely, though not certain, that eventually within 50 years, say, we’ll have artificial general intelligences which will have those same broad integrative capabilities that humans do, and then that’s when things get very interesting.

Dave Snowden: Yeah, and unless you can build in… You got to build in sense, and you got to build in abstraction, and I don’t see any sign of that at the moment.

Jim Rutt: Not in the deep learning world. This is something that I come back to again and again in my talks: much of the public has now been convinced that artificial intelligence basically equals deep learning, and deep learning is just one school of AI, currently the dominant one. But there are others. Symbolic AI still exists, for instance. Guys like Gary Marcus, guys like Ben Goertzel, Josh Tenenbaum at MIT. There are many people still working on much more abstract and tractable and transparent approaches to AI.

Jim Rutt: I believe that they in the end are likely to go further than the deep learning guys.

Dave Snowden: I think they’ll go further. A key part of some of our work at the moment is actually focusing on building the training datasets, which I know is a type of deep learning, but it’s also linked to some symbolic stuff. I think people don’t pay enough attention to that, or they work with limited sets. I think this is a developing field, the silicon ultimately using the carbon when it comes down to it.

Jim Rutt: The other thing, of course, that some of my own experimental work is on, is that humans learn way faster than computers do today. To learn to play Go well took hundreds of billions of games played. The very top chess players have probably only played 30,000 games in their whole lives.

Dave Snowden: You could wipe out the Google Go player simply by changing the rules. It would take too long to accommodate, whereas a human being would accommodate instantly.

Jim Rutt: Yep. You could change the size of the board. Classically, humans play Go on different-sized boards, but AlphaGo had to be trained on one specific board size, and if you changed the board size even by one, its capabilities fell way off.

Dave Snowden: See, I think that comes back to the fact humans are analog, not digital. There are three things AI people need at the moment. One is they should be trained in ethics. We shouldn’t be allowing anybody to be a software engineer these days without basic training in ethics from quite a young age, because they’re scary. The other one is aesthetics. If you don’t understand the role of aesthetics in human evolution, you’re never going to get an AI system which really helps much.

Jim Rutt: Aesthetics. That’s interesting. I have never heard that… Ethics of course. Lots of discussion about how that should become mandatory and more deeply baked in. I’d love to hear you say more about why you think aesthetics is important.

Dave Snowden: Because aesthetics is about abstractions. Music and art come before language in human evolution, so human language evolves from abstractions. The evolutionary argument for this is that it allows rapid exaptive thinking. The ability to rapidly repurpose things actually comes from abstraction. You also get higher empathy in abstraction than you do in the material.

Dave Snowden: Things like parable-form stories, for example, provide better moral guidance than values or principles, partly because they define the negative, not the positive. So an ability to understand or appreciate beauty is actually going to make you a much more effective decision-maker as a human than if you just confine yourself to the material.

Jim Rutt: I wonder if aesthetics is a subset of a broader category of abstraction and metaphor. As we know from the work of Lakoff and others, our language and our thinking is way more metaphorical than we often realize.

Dave Snowden: It is, and Deacon’s The Symbolic Species finally killed off Chomsky’s views of language, thank God.

Jim Rutt: One of my favorite books. I love Deacon’s The Symbolic Species. It’s not as well appreciated as it should be, people. Go out and read that book.

Dave Snowden: Pity about the subsequent books where he partly plagiarized other people’s stuff, but Symbolic Species is brilliant.

Jim Rutt: Yeah, the book after was unfortunately just not to my taste at all.

Dave Snowden: You can read all the material in two other books. I can give you the references.

Jim Rutt: I’m not going to bother, because I didn’t even like it in that form, tell you the truth.

Dave Snowden: Okay. But I think that’s the important one. Look at the modern findings in epigenetics: we now know that Lamarckism is right, so we know the mechanism for cultural inheritance within a single generation. All of that matters for the people designing tools. There’s a thing in archeology called material engagement theory, which has identified the way in which tools have actually triggered significant cognitive and physical changes in humans. The danger is, if we continue down our current route in AI, we’re actually going to reduce human intelligence and capability rather than augment it.

Jim Rutt: We probably already are, certainly in many areas.

Dave Snowden: [inaudible 00:42:32] plague in the States is a good example of that. As I say, you know… I’m being really satirical on this. I know this is unfair, but I’ll make the extreme point. Most of our modern AI work is being done by people who live on the West Coast who are misogynist programmers who take Ayn Rand seriously after puberty. That’s quite scary when you start to think about it.

Jim Rutt: And also a disproportionate number, not a majority but way higher than in the general population, are on the autistic spectrum as well.

Dave Snowden: Yeah, they are and we’re increasing that.

Jim Rutt: Sometimes when I’m being as cynical as I can be… Well, not quite as cynical as I can be, because I can be very cynical. My description of the West Coast culture, where it’s trending to, is armies of autistics led by psychopaths.

Dave Snowden: Yeah. That’s why you need aesthetics as well as ethics. Ethics can be gamed. I think there are basics of evolutionary biology we should be teaching engineers.

Jim Rutt: Yes, and I think, fortunately, because of the congruence between deep neural learning nets and biology, there is actually a significant increase in learning about biology.

Dave Snowden: [inaudible 00:43:40] of mind is coming back together with cognitive neuroscience, but it’s also scary. I won’t say… I’ve been working with one medical group on epidemiology in the States, and I’m not allowed to mention evolution because it’s a controversial theory.

Jim Rutt: Oh, dear.

Dave Snowden: And there’s a very high percentage of Young Earth Creationists in the IT community, because a lot of them actually think we’re the simulation, and they don’t worry about global warming because they think we can reboot it. I’m not joking. They genuinely think that.

Jim Rutt: I believe there’s more simulationists than there are Young Earthers in the technology world.

Dave Snowden: Yeah, but there’s very little difference between the two, to be honest.

Jim Rutt: That’d be a whole other podcast, I’d have to say, because I’ve got very nuanced theories about the simulation hypothesis. In general, because of my minimalist metaphysical program, I reject it. However, we can’t reject it as logically impossible, unfortunately.

Dave Snowden: As a concerned realist and a good Catholic, I’m going to argue against you on that one.

Jim Rutt: That’s why I say I’m a metaphysical minimalist of the naïve realist variety, so I think we probably touch ground there. But on the other hand, I also look at low, low, low probabilities and can’t quite rule it out.

Jim Rutt: I, on the other hand, believe you can rule out Bostrom, et cetera, who says it’s almost certain we’re in a simulation. I think that’s total and pernicious horseshit, and as you say, it leads people to think about some very bad things.

Jim Rutt: We’ve talked about some very esoteric things here. This has been a very interesting and rich discussion. I’d like to turn it a little bit back towards the more hands on and practical. Some of our audience are managers and executives and entrepreneurs. What could you say about some takeaways for people who are in management positions about different management styles and ways of dealing with their organization that your Cynefin-oriented approach might suggest?

Dave Snowden: Two things. One is the value of Cynefin is it basically says there isn’t one style of leadership. So servant leadership doesn’t work universally. Draconian leadership works in some contexts. So you need to be multi-faceted in the way that you lead, and you need to distribute the leadership.

Dave Snowden: The other thing is that attitudes matter more, because attitudes are lead indicators. I’ll give an example. We’re currently working in cybersecurity, where we’re measuring attitudes to cybersecurity, not compliance. We do that by presenting an infographic of a major cybersecurity breach, and we get everybody in the workforce to interpret it in real time, including their own situation assessments and their own micro-scenario about the future. From that, we draw a fitness landscape, which identifies the attitudes of your employees to cybersecurity and allows you to nudge the system. That’s called “more stories like this, fewer stories like that,” in real time.

Dave Snowden: The whole point about Cynefin is that different styles work in different domains, and that you need to recognize attitudes, whether to security, to ethics, to customer purchasing behavior, virtually anything. It’s more important to get the early weak signals of a dispositional state, an attitudinal state, than it is to try to infer causality or to define an outcome.

Jim Rutt: Basically, expand the search space first.

Dave Snowden: Yeah. It’s interesting. There’s forecasting and backcasting. Forecasting predicts, projects forward. Backcasting says what would we like to get and close the gaps. What we do is called sidecasting, which gives me wonderful visual metaphors with [inaudible 00:47:19] fishermen. You cast around in the present to see what’s plausible before you risk the future.

Jim Rutt: Very interesting. Another theme that I, at least, took out of what I read of your work is that you certainly encourage dissent and diversity, which I strongly agree with. In fact, in corporate America it’s amazing I never got thrown out, because my role was often bomb thrower and protector of dissenters. But at the same time, if one’s trying to make an organization work, you have to manage the signal to the noise. Not all opinions are equal.

Jim Rutt: How do you encourage dissent and diversity without being overwhelmed by crankery?

Dave Snowden: I think there are two or three things. First of all, I say we need to shift from homogeneity, which is associated with things like the learning organization and most of Agile. I don’t believe everybody should have the same values, the same goals, and the same objectives, because that makes systems non-resilient. We need to shift into what I call coherent heterogeneity. You have to have differences which can come together in different ways. For example, I’m Welsh. If you meet anybody from Wales, the first thing they’ll ask you is where you come from, because we have to establish some way in which we can have a fight with you.

Dave Snowden: We’ll get down to a valley eventually. And there’s those bastards in [inaudible 00:48:33] who cheat at rugby with the referees they brought. I’m sorry, I’m from Cardiff but when the English come we’re Welsh.

Jim Rutt: Of course.

Dave Snowden: That’s an example of coherent heterogeneity. Now what we can do with the attitude map that I mentioned earlier is we can measure the level of cognitive and behavioral diversity in your organization, and we can identify outlier groups that you should pay attention to, rather than them getting drowned out by middle management.

Dave Snowden: That’s the key thing that comes out of the attitudinal mapping. You have to maintain diversity in the system, but you have to maintain diversity which isn’t freakish. I survived in IBM by top cover. When my top cover went, it was all over. You actually need to get around those sorts of problems by providing interactions between dissonant groups.

Jim Rutt: And yet there still has to be some way to sort out what is useful diversity and what is just crankery or stubbornness or-

Dave Snowden: That’s what we do in the complex space. I present a situation and get everybody in the workforce to interpret it. I’ll then get dominant views, but I’ll also get 15 or 16 clusters of outlier views. I let the clusters run small safe-to-fail experiments and we see what works. That’s a key conflict resolution device.

Dave Snowden: Effectively, I do what’s called a shallow dive into chaos. I move into an unconstrained system to statistically map it onto a landscape, and I know which ideas are coherent enough to explore even though they’re different, and which are actually nonsense.

Dave Snowden: Testing for coherence is something we do a lot of, and that’s a key concept in philosophy of science. Going back to an earlier interview, we know that most evolutionary theory is wrong because we keep discovering new things, but it is coherent to the facts, whereas Young Earth Creationism is incoherent to the facts, so it’s not worth exploring.

Dave Snowden: A lot of our work has been to produce objective quantifiable measures of coherence within an organization, so you know which dissonants are worth talking to and which aren’t. That’s actually a relatively simple process.

Jim Rutt: I would love to hear more about that, in thinking about operating systems of the future with some collaborators of mine. Some have been on our podcast recently. Coherence is a key concept. I’d love to hear what you can tell us about how to measure coherence.

Dave Snowden: Okay. Let’s take the example of a policy situation. You’re thinking of taking over a company or moving into a new area. You put that together as an infographic, like a Facebook news page, where you’ve got dissenting views and common views, the sort of thing people are used to assimilating. You present that to your whole workforce within one hour, without the chance for collusion, and ask them to write a micro situational assessment. Then we do the human metadata, which is non-gameable human interpretation, which gives us quant data from which we can draw fitness landscapes, and we finish off with a micro-scenario: how do you think this will develop?

Dave Snowden: That means the minute the hour is over we can present the executive with a fitness landscape which shows clusters of effectively coherent ideas, and then we can go through those and give them cash to do a small experiment. That’s actually relatively simple.

Jim Rutt: So you can essentially do a statistical analysis on the cluster statistics of the narratives, and if there isn’t much clustering, then you can say your coherence is low. If it’s tightly clustered around a single or a small number of points, you can say it’s highly coherent, maybe overly coherent-

Dave Snowden: But also very dangerous because you haven’t got enough diversity. We’ve got metrics on this. There’s a level of dissent you want to have permanently present within the organization. The thing [inaudible 00:52:10] and I did before he died, was to measure the degree of inefficiency a system needed in order to be effective.
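
One crude way to operationalize the clustering intuition, purely as a sketch (this is not SenseMaker’s actual metric, and the function name and sample points are invented for illustration), is to score coherence as the inverse of how spread out the interpretation points are on the landscape:

```python
from math import dist

def coherence(points):
    """Crude coherence proxy: inverse of the mean pairwise distance
    between interpretation points. Tight clusters score near 1."""
    n = len(points)
    pairs = [(points[i], points[j])
             for i in range(n) for j in range(i + 1, n)]
    spread = sum(dist(a, b) for a, b in pairs) / len(pairs)
    return 1.0 / (1.0 + spread)

tight = [(0.50, 0.50), (0.52, 0.49), (0.49, 0.51)]        # near-consensus
loose = [(0.1, 0.9), (0.9, 0.1), (0.5, 0.2), (0.2, 0.5)]  # dispersed views

print(coherence(tight))  # close to 1: tightly clustered, highly coherent
print(coherence(loose))  # noticeably lower: little clustering
```

A real system would also, as noted above, flag a score near 1 as a warning sign of too little diversity, not only as a success.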

Jim Rutt: Do you have anything you can explicate on how one would think about what’s the right amount of diversity? I suppose it’s situationally dependent.

Dave Snowden: That links in with apex predators. If you’ve got a stable ecosystem you don’t need so much diversity. If the system is suddenly destabilized you need to increase diversity very quickly.

Jim Rutt: That’s in evolutionary computation, which is my home academic field, what we call the difference between exploitation and exploration. In stable times you want to do more exploitation. In unstable times you must do more exploration.
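
The exploitation/exploration trade-off has a standard minimal form in evolutionary computation and bandit problems, epsilon-greedy selection. The sketch below is illustrative only (the option values and epsilon settings are invented): a low epsilon suits stable times, a high epsilon suits destabilized ones.

```python
import random

def choose(values, epsilon, rng):
    """Epsilon-greedy selection: exploit the best-known option with
    probability 1 - epsilon, otherwise explore a random one."""
    if rng.random() < epsilon:
        return rng.randrange(len(values))                   # explore
    return max(range(len(values)), key=values.__getitem__)  # exploit

rng = random.Random(1)
values = [0.2, 0.9, 0.4]  # invented payoff estimates for three options

stable_pick = choose(values, epsilon=0.05, rng=rng)   # stable: mostly exploit
unstable_pick = choose(values, epsilon=0.9, rng=rng)  # unstable: mostly explore
```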

Dave Snowden: And then the secondary work we did, which was originally counterterrorism work (and this is where we produce human-mediated training datasets, because we need executive buy-in), is that you need to use AI to trigger when you need to switch between the two, because by the time you see it normally, it will be too late.

Jim Rutt: You say you use AIs to determine that. Could you say more about that?

Dave Snowden: What we do is we build training datasets based on, for example, multiple counterterrorist examples in the past, using fragmented observations which were available beforehand. We use those to produce a “similar things are now happening and you need to pay attention” trigger. But because the executives have been involved in the construction of the training datasets, they actually believe it; it is not a black box. I’ve been in decision support all my life. Black boxes generally don’t work when there’s a lot at risk politically.

Dave Snowden: Traceability of understanding the mechanism of decision-making is one of the big missing things in AI at the moment, and we’re doing a lot on that.

Jim Rutt: Of course, hopping back to our earlier discussion, that’s where symbolic AI has a very significant advantage over neural AI. It’s not necessarily a black box. The ability to look at whatever level of detail you want on the reasoning, or even to have it generate scripts that explain its reasoning, is feasible, while that’s not feasible, at least not yet, with neural AI.

Dave Snowden: Yeah, I think you need the anticipatory triggers. I’ve had these debates. I had them in Boston, or sorry, in Cambridge, not so long ago. I still think the AI industry is trying to reduce rather than increase human agency. We think the really fertile thing is to increase human agency; then the AI will get better.

Jim Rutt: Could you give an example of that?

Dave Snowden: Well, I’ve just given you an example, where we use a whole workforce to assess the situation. That creates training datasets that you can then feed into an AI system. That’s one way. The other is traceability, and several things like that.

Jim Rutt: Okay. I’m going to hop to another concept that I believe I took out of your writings, which is that in an unordered system causality cannot be determined. I’d like to point out there’s an interesting and somewhat slippery concept of downward causality in complex systems. Bring it down to a tangible example, the chemicals in the cells of a body are highly constrained by the fact that they’re evolved to live in a tightly coupled package of chemical homeostasis, which has to exist on a second by second basis and this tremendously constrains the degrees of freedom of those chemicals.

Jim Rutt: It seems to me that if one thinks about this concept of downward causality, there’s a very large amount of constraint that one can basically extract about any given entity within the complex system.

Dave Snowden: That was the point we were making earlier, my point about populism. The danger about populism is it increases the downward constraints and reduces freedom of action, until it breaks catastrophically. That means effectively the system has become ordered, not complex, because the constraint level is so high.

Dave Snowden: And it’s also the problem with granularity. You can’t go from a chemical reaction to a human system, because a human system is an order of magnitude greater, several orders of magnitude greater, in terms of uncertainty, linkages, constraints, and everything else.

Jim Rutt: Certainly there’s multiple levels of emergence between chemicals and a human body. But nonetheless the behavior of those chemicals is very highly constrained by being part of this real time chemical homeostasis, which is used to support the entity itself, and the result is actually quite powerful.

Dave Snowden: The chemicals restrict the way we can make decisions in the brain individually. You can’t take an aggregative approach to this. By the time you reach a human system, you’ve got so many micro constraints and macro enabling constraints that the system actually has a much higher level of unpredictability than the low-level stuff.

Jim Rutt: That’s good. I like that. Next topic. You’ve suggested multiple hypotheses, but in the limited amount of your material that I read, I didn’t see any reference to red teams. That was always one of my favorite management techniques. Does that fit into your methods at all?

Dave Snowden: Yeah, some of the work we did in Singapore was to create an alternative to red teams. Part of the problem with red teams… If red teams are truly independent, they work. If they come from the same cultural background, it can be problematic. So we have a technique called ritual dissent, in which we’ll fragment into multiple small micro red teams looking at decision-making, but overall red teams have a lot of value.

Dave Snowden: You can’t do it, say, just with an S2 challenge in military terms, because S2 has been passed in the construction of the plan.

Jim Rutt: I’ve always thought there ought to be a consulting firm that all they do is do red teams for companies. Does such a thing exist?

Dave Snowden: There’s quite a few. We work with a couple, and I’m doing a project this week, I can’t say who with, where we’re actually going to effectively red team a corporate purpose statement by finding multiple ways it can be perverted in practice. I’m going to really enjoy doing that.

Jim Rutt: Once I conceptualize the concept of a red team consulting firm, I said damn I wish I’d taken that road in life. That might’ve been a shitload of fun.

Dave Snowden: The other thing we’re doing there is working with narrative experts, so if you understand the way narratives can be corrupted, it’s fairly easy to introduce retroviruses into a narrative, by which people’s stories destroy themselves.

Jim Rutt: That brings me to my next topic actually, narrative. Narrative appears to be central to at least your later work. Could you talk about what you mean by narrative and what is its position in your work?

Dave Snowden: There’s a quote by [Polanyi 00:58:32]. He said we always know more than we can say, and I extended that to say we can always say more than we can write down. If you look at narrative in humans, if you want to look at what’s really going on, what you want is the water cooler stories, the stories of the school gate, the stories of the checkout queue in the supermarket, because those actually determine people’s attitudes rather than responses in focus groups to questionnaires or in polling where you’re asking explicit questions.

Dave Snowden: The work we do is to actually gather those day-to-day micro-narratives, but critically, we give people the power to interpret their own narratives, rather than have them interpreted by text search, by algorithm, or by experts, which means we can scale to very high volume very quickly. These days we tend to use the word observations, micro-observations, micro-narratives. We try to avoid the story word.

Dave Snowden: The key thing about narrative is it carries ambiguity with it. It’s a halfway house between, say, the black cab driver in London, who just knows where to drive because he’s spent two and a half years learning every route, and the map user. The map user has got explicit data but is outcompeted by the taxi driver; narrative sits in a halfway house between the two.

Dave Snowden: For example, some of the work we did in the U.S. was on narrative-enhanced doctrine. If you put HTML links to real stories into best practice documents, people actually have a richer context. They know better how to interpret the data. Narrative-based search will give you better access to documents, and so on. That’s kind of what our work is: micro-narratives, micro-scenarios, macro-narratives to understand culture. But critically, self-interpretation of that narrative, not machine or expert interpretation.

Jim Rutt: As I was doing my research, I dug into your SenseMaker software platform. I’ve got to tell you, your website sucks at describing it; there was essentially nothing that gave me much of an idea of what it was. But fortunately, being a good Googler, I did find some great resources written elsewhere. One is called Making Sense of Complexity: Using SenseMaker as a Research Tool, by Dave Snowden and some others. I don’t think it’s been published yet, but a draft is available on the internet. I’d recommend that to people who want to know about SenseMaker. Also, a company called Veco, V-E-C-O, with an inclusive business scan product, actually had some pretty nice descriptions from a practical perspective of how one might use SenseMaker, and it describes it well in a less academic way than the Making Sense of Complexity paper.

Jim Rutt: With all that said as preamble, tell us about SenseMaker.

Dave Snowden: Okay. We know we need to improve the website, by the way. We’re just creating a new website with what we call pulses. Those are easy entries into the field, looking at cybersecurity, culture, wellbeing, Agile, as very rapid uses of SenseMaker to understand attitudes in a domain.

Dave Snowden: The best way of explaining it is to think about an employee satisfaction survey. We used to get these all the time in IBM, and you’d get this question which says, “Does your manager consult you on a regular basis?” on a scale of zero (not at all) to ten (all the time), and you knew exactly what answer they wanted.

Dave Snowden: It was a big annual thing, people wrote reports, consultants analyzed it, and nobody knew what to do. We take a very different approach. We’ll go to, say, 10% of the workforce every month and we’ll ask them a non-hypothesis question. For example: what story would you tell your best friend if they were offered a job in your workplace? The individual, having either spoken that story, taken a picture, or typed it, or some combination, then self-interprets that story onto a series of triangles. One of the triangles, for example, asks whether in this story the manager’s behavior was altruistic, assertive, or analytical, so we’re balancing off three positive qualities.

Dave Snowden: Now we wire people up for this. What it does is it triggers a shift from what’s called autonomic to novelty-receptive processing, what [Kahneman 01:02:37] called moving from thinking fast to thinking slow. You don’t know what answer is expected of you, so it forces you to think more deeply about the subject, and if you position dots on six triangles, you’ve added 18 metadata points to the original narrative. That’s what we analyze, and then the original narrative gets carried with the statistical data, to effectively find an explanation of what the statistical patterns mean. It is hard. That’s what SenseMaker does.
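[Editor’s note: a way to picture the arithmetic Dave describes — a dot placed inside a triangle can be read as three barycentric weights over the triangle’s labelled corners, so six triads yield 6 × 3 = 18 numbers per story. This is an illustrative sketch only; the function and variable names are hypothetical and are not the actual SenseMaker software.]

```python
def triad_weights(x, y, vertices):
    """Convert a 2-D dot position inside a triangle into barycentric
    weights over its three labelled corners (the weights sum to 1)."""
    (x1, y1), (x2, y2), (x3, y3) = vertices
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return w1, w2, 1.0 - w1 - w2

# One story signified on six identical triads: 6 triangles x 3 weights
# gives the 18 metadata points attached to the narrative.
triads = [((0.0, 0.0), (1.0, 0.0), (0.5, 1.0))] * 6
dots = [(0.5, 1 / 3)] * 6  # dots at the centroid: roughly equal weights
metadata = [w for (x, y), tri in zip(dots, triads)
            for w in triad_weights(x, y, tri)]
assert len(metadata) == 18
```

A dot near a corner pushes its weight toward 1 for that quality; a centroid dot balances all three, which is why the placement forces a considered judgment rather than a yes/no answer.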

Jim Rutt: I loved it. I spent about three hours digging into it, particularly the triads. What you described as the triangles are, I think, actually called triads in the software, and we’re not talking about Chinese organized crime here, people. We’re talking about a three-dimensional way of positioning your view about something in a constrained three-space. It certainly is extraordinarily useful and might even be a transformational tool in many domains. One use that came to mind: it could actually be a great replacement for political polling.

Dave Snowden: Yeah, we need to be careful what we talk about here, but we’ve actually done some work on that. We may even create a new company for it, because we need to keep that separate.

Jim Rutt: I was thinking that if one wanted to create a de novo political party, which somebody ought to do, doing a lot of work with something like SenseMaker might be a very interesting way to find out what shared values there really are in our society.

Dave Snowden: If somebody wants to approach us, we’re already doing projects on that and we’re very keen to extend that work. So we’re trying to find… this is the Extinction Rebellion and other stuff. What have people got in common beneath the surface presentation? That’s what SenseMaker pulls out. Certainly in political use it can be used by, for example, field workers to gather stories from people, and then people self-interpret their own stories, so you’re not reliant on expert interpretation.

Dave Snowden: What we’ve done in social work, for example, is social workers capture stories before and after a visit, but then the people who are their clients can tell and interpret their own story, so they’re not reliant on the experts to mediate it. That was the point I was making earlier, on the children stuff, about SenseMaker: 16-year-olds act as ethnographers to their community. We gather quant data in large volume very quickly, which is almost impossible for people to manipulate the way they can manipulate Facebook and Twitter.

Jim Rutt: The other thing that I believe I read in some of the materials, or if not it comes by extrapolation, is that if you use self-scoring the turnaround can be very rapid. Going back to your description of those goddamn corporate surveys: everybody wastes two hours filling the damn things out, then the stupid-ass company that the company you work for hired to do it spends three or four months tabulating the data and running some bogus statistics, and management doesn’t even hear about it for six months. Whereas presumably, using SenseMaker and self-scoring, you could have the results in hours.

Dave Snowden: You can have the results instantly, actually. Another example is 360s. The 360 is a major pain every year; it’s evaluative. [inaudible 01:05:37] a website in IBM where you could nominate to be somebody’s 360 responder, which HR got pissed off with me over. We do something different. You’re a leader, you nominate X number of people. Every time they interact with you, they record the interaction. They index it onto six triads, which all have positive leadership qualities, and then you see how they see you. It’s a real-time mechanism, and it’s descriptive, not evaluative.

Dave Snowden: So you can sit down with your guys and say, look, I can see I’m all analytical and assertive, with very little altruism. How do I get more of this? Rather than this evaluative thing at the end of every year.

Jim Rutt: Great, and I’ve got to say, people, I endorse SenseMaker, at least in principle, based on what I could extract about it, and I am pretty good at understanding these kinds of complex tools. I would love to get my hands on it at some point and use it on a real project.

Dave Snowden: The pulses are designed to make it really easy. It’s [inaudible 01:06:26] preconfigured; just go run them.

Jim Rutt: That would be cool. We talked about what I know so far of your work up until now. What have I missed? Let’s start with that.

Dave Snowden: I think really the work is about how people make better decisions. It’s kind of where I came from; that was my original thesis years back. The big stuff we’re now doing includes the children of the world project that I talked about, the work we’re about to start on global warming, to understand how we can make people aware of the issue at a local level, and a big project on understanding abuse in partnerships, using the narrative as a therapeutic device to help people out of narcissistic control.

Dave Snowden: My own direction now is focused on democracy, on education, on how we create a more humane society under conditions of constraint. That means a lot of the cognitive side of this is coming forward. We just did this big retreat in Whistler. Last year we created a new approach to design thinking based on complexity, which is about to launch; that was done over three retreats. This year we’re working on decisions and perception; that will come out in May. The year after that we’re looking at meaning and identity.

Dave Snowden: So what I’m really doing is taking a natural-science approach to social systems, rather than a case-based inductive approach, and for me that’s critical. My original degree was physics and philosophy, which taught me contempt for social science from two completely different disciplines, and I haven’t really shaken it off since.

Dave Snowden: So using natural science to inform human polity is where we are.

Jim Rutt: Sounds like you’re moving away from the focus on corporate America and global corporations, and moving more into the social and political sphere.

Dave Snowden: It’s a mixture. Cognitive Edge just got new investors, so the commercial side of that is building, and that’s a big element of what goes on, and I inform that. I’m the R&D hub for that now, rather than the execution. You actually find the R&D in the big social projects has commercial application.

Dave Snowden: The stuff we did on counterterrorism has huge commercial application, but I’m more interested in creating a… where I used to say for my grandchildren, then I thought my children. Now I think I may be having to create a safer space for me.

Jim Rutt: Isn’t that ridiculous? When I was 30, I used to say there was a 1% chance of a social collapse in my lifetime. Now I’m 65 and it’s 20%, and instead of a remaining life expectancy of 65 years, I have a life expectancy of 25 years. What is wrong with this story?

Dave Snowden: We’re on the same wavelength, and we’re the same age.

Jim Rutt: Pretty much. Now, between the two, between de novo work on social systems and corporate work, there’s the world of existing social science. Have you found interest in your approach among practicing social scientists in a university context?

Dave Snowden: They’re starting to switch. Complexity was ignored. I think part of the problem with social science is that a lot of them are locked in very tight debates, say, between modernism and postmodernism, or between social constructivism and critical realism. To my mind, those debates are fairly meaningless.

Jim Rutt: I call them horseshit.

Dave Snowden: Well, I think there’s some interest in them, but you’ve got to be careful. I was at a Cultural Evolution Society thing in [Edinburgh 01:09:54] recently, and I raised some research outside the field of the people there. I said, do you know Andy Clark’s work on extended cognition, bang, bang, bang, and this lecturer said, “Yes, I know about that, but if it’s true it would invalidate 50 years of experimental psychology, so I’m not taking it seriously.”

Dave Snowden: I think that’s the trouble. Social science has become highly self-referential, and highly focused on paper production, because you’ve got to produce X papers per year to get tenure, so exploring new ideas is something you really have to do outside of academic life at the moment.

Jim Rutt: Or have tenure.

Dave Snowden: Yeah, and even with tenure they’re still under huge pressure on that.

Jim Rutt: And then, as you… you didn’t quite say it, but I’ll say it: altogether too much of social science today is politics by other means.

Dave Snowden: Yeah, and I think… I wrote a blog post on this. I said natural science gives us explanation and prediction; social science can give us explanation but not prediction. If you actually say your social science gives you a predictive capacity, which is what most management science claims, then you’re a pseudoscience. Then there’s a fourth alternative, which is what we do: use natural science as a constraint on what you can do in social systems.

Jim Rutt: That’s very interesting actually. Could you elaborate on that a little bit? Let’s bring it back to the more mundane executive running a company.

Dave Snowden: I teach this to senior executives. There are some basic facts we know. We know, for example, about inattentional blindness. If you give a bunch of radiologists a bunch of X-rays and on one of the X-rays you put a picture of a gorilla, which is 48 times the size of a cancer nodule, 83% of radiologists won’t see it even though their eyes scan it.

Dave Snowden: A natural-science approach says, right, that’s the case, so we’ve got to find the 17%. We can’t train people not to do that. We need to create systems which find the 17% before they talk to the 83%. That’s what I mentioned earlier about mass sensing: ping something out that the whole workforce does, you find outlier communities, and you go talk with them. That’s an example.

Dave Snowden: We know that in a complex adaptive system you haven’t got linear causality, therefore you stop talking about the future and you focus on mapping the present. So what natural science gives you is this tight focus on what’s actually possible, as opposed to what most management science has been doing for the past 40 years, which is a series of fads, each of which claims to have found the solution to the problem of life, the universe, and everything. Each lasts three or four years and then dies, because there’s no scientific base to any of them. They’re all fictional constructs.

Jim Rutt: Indeed, and if you go back and back-test them, most of them don’t back-test at all.

Dave Snowden: And people love it. Myers-Briggs, for example, is used all over corporate America. It’s been proved to be a pseudoscience more times than I care to imagine, but people carry on using it.

Jim Rutt: It’s so profitable for the consultants. That’s the driver, to my mind, of most of this management faddishness. In fact, when people ask me what I read, I generally say almost anything except books about management.

Dave Snowden: I saw a Forbes article earlier today about Agile, saying Agile was the future of everything, and I basically added a comment saying I could’ve written exactly the same thing about every other management movement in the last 20 years, and the author actually wrote some of them. So why should we believe this? It’s another fad; it’s not based in anything we know from science.

Jim Rutt: Each of these fads has some applicability. There are definitely some situations where right-sizing was a very valid strategy, but the no free lunch theorem-

Dave Snowden: Made universal. There was nothing wrong with business process re-engineering until it became Six Sigma.

Jim Rutt: Exactly. That’s the no free lunch theorem: there’s no one answer to every problem. You have to understand your domain first.

Dave Snowden: I ran a program in IBM; it was actually a three-month experiment. We proved astrology was more accurate than Myers-Briggs in predicting team behavior. For some reason, they got upset with me. The point I made is that if you get people to think about themselves as different it has utility, so you might as well use astrology, because then nobody will take it seriously, but they’ll use it properly. Either way, that didn’t go down well.

Jim Rutt: Hey, you get 12 tribes for free, right?

Dave Snowden: You do.

Jim Rutt: Maybe that’s all that matters, that you have 12 tribes, not what they are.

Dave Snowden: I’ll tell you something really scary as well. If you go to core astrology, then Virgos work very well as deputies for Aries, but Taureans don’t, and that’s been true all my life. I’m really scared by that one.

Jim Rutt: Love it. It’s total horseshit, but sometimes it works, right? A broken clock is right twice a day.

Dave Snowden: People don’t know their statistics. They don’t know how likely coincidence is.

Jim Rutt: Indeed. Well, David, Dave I should say, this has been a wonderfully informative, interesting, stimulating, and fun conversation. Glad you could come on The Jim Rutt Show, and I’d love to have you back on again sometime in the future.

Dave Snowden: Always happy to come.

Jim Rutt: Production services and audio editing by Stanton Media Lab. Music by Tom Muller at