EP9 Joe Norman: Applied Complexity

Joe Norman

Joe Norman is an applied complexity scientist with a focus on transforming insights gleaned from complex systems science into practical and implementable strategies and tactics for grappling with an increasingly uncertain and dynamic world. Joe is an Affiliate at the New England Complex Systems Institute in Cambridge, MA, an instructor at the Real World Risk Institute, and founder of Applied Complexity Science, LLC. He lives in New Hampshire with his wife, where they are focusing their energy on homesteading and local agriculture on an old mill property that has been an actively running homestead for over 130 years.

  1. Introduction to Joe Norman and Complex Systems 5 minutes
  2. Complexity Science 4 minutes
  3. Wholes and Their Components 11 minutes
  4. Irreducibility 3 minutes
  5. Emergence 5 minutes
  6. JJ Gibson, Conscious Cognition and Perception Learning 9 minutes
  7. Complex Systems and Ensembles 7 minutes
  8. Climate Science, Freeman Dyson and Methane Ice 7 minutes
  9. GMOs and the Precautionary Principle 15 minutes

Transcripts for The Jim Rutt Show featuring Joe Norman

9 thoughts on “EP9 Joe Norman: Applied Complexity”

  1. 1:00…note that monoculture in previous ages was local, not global…it is the interaction of the monoculture with a truly global distribution and procurement system that poses the unacceptable systemic risk.

  2. 57:00 exactly, now where else do these kinds of absurd systemic risks show up…I’m sure Joe has thought about it, and quite clearly Daniel Schmachtenberger and Jordan Hall have? The attention wars? Weaponized “AI”, nanotechnology, CRISPR, fracking, education paradigms that select for understandings in terms of linear response and binary choice…Essentially, from the dynamical generators of universal competition,
    the paradigm of mechanistic design, and the technological program of control (local and ephemeral), everything about the way of life we have coevolved is a systemically unacceptable existential risk…thus Game B may be our only viable option. I happen to think that we may not, writ large, be ready for it, and that some operating-system-level changes (incentive structures and legal definitions) to our current system may buy us time and pave the way for a Game B transition…so both approaches are essential. Jim…where do you sit on this?

  3. 33:00…exactly, now let’s go a bit farther along this line and ask the question (as Deacon does in Incomplete Nature) of whether a digital system that does not immanently do work on its own behalf for its very persistence, but merely harnesses external asymmetries, can even in principle emulate a living system that is endogenously purposeful by its very existence. I conjecture the answer is no.

  4. 16:45…wow Jim, I’m not personally super familiar with evolutionary programming, and as you may have surmised, I don’t buy the abstract idea of a gene at all as currently understood, for some of the reasons already discussed here by Joe (great podcast, by the way). The research on GAs you describe here is pretty stunning. Given that my position on DNA is that it is a digital storage code, and viewed in isolation from the ambient analogue code in which it’s immersed, it should be expected to be reasonably well modelled by formal systems (Turing machines). As such, this is a pretty valuable insight for those studying even classical evolution, social systems, and economics.

  5. A couple of papers that Joe might find interesting…I have sent them along to you at one time or another as well, Jim…the first also includes something Joe alludes to in terms of functional components (process-relational) rather than inert parts in an organism. The other distinguishes the intractability of the organism, as a constraint-constructing system, from the dynamical systems approach. http://www.people.vcu.edu/~mikuleck/PPRISS3.html


  6. Very interesting show with many things I could comment on. I will choose one: evolution and coevolution.

    Using very general, abstract terms, I see two ways to view evolution: View1 is that an organism evolves to adapt to an environment, and View2 is that an organism-environment system evolves. I suggest that View1 is traditional. I prefer View2.

    Coevolution is a concept within View1 in which the evolution of two or more organisms is mutually influenced in a substantial way by their interactions. In contrast, View2 might take the ecosystem as the basic object of evolution. Because evolution of a system necessarily implies evolution of its component subsystems, considering “coevolution” is simply considering a restriction of the system evolution to a small subset of component subsystems.

  7. Just when I start thinking “these guys are incredibly bright” you drop such a clunker I have to reassess completely. To wit, you take seriously the silly and specious “precautionary” principle (the PC for short). Its logic is: “X might lead to a terrible outcome, so we need to do A to minimize the chances of X”. According to some versions, minimizing the chances of X must come before anything else, so virtually unlimited resources need to be put into A. It reflects the engineer’s view that there’s an engineering solution to everything.

    The problem? Doing A also carries risks, and one can apply the PC to that as well. For example, the common assumption is that we should put trillions of dollars into reducing carbon emissions to mitigate global warming. (Let’s put aside for the moment who “we” is.) Well, putting all those resources into replacing carbon will, among other things, reduce economic growth. One can “speculate” about all sorts of risks that could create. Perhaps it would increase global tensions, raising the chances of catastrophic war. However uncertain that might be, you can’t run the risk of a catastrophic outcome! Maybe those resources should be spent on identifying and heading off asteroids headed for earth. One of those could wipe out 99.999% of life on earth, like the Chicxulub meteor. It’s a small chance, but it would be catastrophic. Spare no expense!

    Second, every action has an opportunity cost. Trillions spent on doing A means those trillions aren’t spent on B, C, or D, which could achieve the desired outcome faster or more cheaply. Say, R&D that could create technology to mitigate the risks far more efficiently, or geoengineering, which could do it at much lower cost. Also, resources spent on action A are resources not used for eradicating malaria, or reducing childhood mortality, or maternal deaths, or improving education, and a thousand other desirable goals.

    But the first objection is already fatal to the precautionary principle. No action or inaction is without risk. An action to reduce a risk creates other risks. Sorry, there’s no logical shortcut to weighing costs and benefits of alternative courses of action. Invoking the “precautionary principle” to justify action A on the grounds that an outcome would be catastrophic is intellectual laziness, not a logical tour-de-force.

  8. On downward causation, see Donald Campbell’s 1974 paper:

    Campbell, Donald T. 1974. ‘Downward causation’ in hierarchically organised biological systems. In Studies in the Philosophy of Biology, eds. Ayala, Francisco J. and Theodosius Dobzhansky, pp. 179-186. London: Palgrave.

    He writes: “All processes at the higher levels are restrained by and act in conformity to the laws of lower levels, including the levels of subatomic physics.” And yet, “all processes at the lower levels of a hierarchy are restrained by and act in conformity to the laws of the higher levels.”

    Or, as Howard Pattee asks: “How do structures that have only common physical properties as individuals achieve special functions in a collection?”
