Transcript of EP 175 – Nick Chater and Morten Christiansen on The Language Game

The following is a rough transcript which has not been revised by The Jim Rutt Show, Nick Chater, or Morten Christiansen. Please check with us before using any quotations from this transcript. Thank you.

Jim: Today's guests are Nick Chater and Morten Christiansen. Nick is professor of behavioral science at the University of Warwick School of Business and Morten is the William R. Kenan Professor of Psychology at Cornell University, and a professor in the cognitive science of language at Aarhus University in Denmark, if I didn't mangle that too badly. Nick was on the show in EP57 to talk about his book, The Mind is Flat, which radically debunks depth psychology and lots more. I got to tell you, this is one of my favorite all-time episodes. So if you like some of the things you hear here, check it out. This is a really mind-bending episode. Morten and I interacted a bit when Morten was a sabbatical visitor at the Santa Fe Institute. And I've been reading the papers that Nick and Morten have been writing together for almost 25 years, so it's really great to have them on the show together. Welcome, Nick and Morten.

Nick: Well, it’s great to be here.

Morten: Yeah, wonderful to be here.

Jim: Yeah, this should be a lot of fun. I must say, I really like the book and for the audience, this book is quite readable. You don’t need a background in cognitive science or computational linguistics or anything like that. The highly intelligent listeners of the Jim Rutt Show ought to be able to handle it just fine and it’s actually a lot of fun. The title of the book is The Language Game, How Improvisation Created Language and Changed the World. That’s what we’re going to talk about today. Before we get down into the book, let’s drill into an analogy that y’all use a whole lot and that is the game of charades.

As it turns out, you probably don't even know this, maybe you do, but you use the term 197 times. And of course, it's a small-group social game, and one I am familiar with from popular culture, but maybe somewhat surprisingly, one I've never actually played. Not a thing in my hometown, not a thing in my adult face-to-face communities, and I expect there's probably some others out there. So because you use it so much and it's so central to your argument, maybe one of you could take a minute or so and describe in a little detail how the game of charades works.

Nick: Well, shall I take that? Yeah. So charades, which is something that’s quite a big part of my family culture, actually the kind of thing I used to play with my grandparents when I was a small child, it’s a game where you try to convey either a book or a movie or a play or an opera or a song and you do it purely by gesture. So you can’t speak and you can’t make sounds. So you think of something like, say… I think we’re talking, in the book, about King Kong and you think, “Well, King Kong, how are we going to do that?” And you might think, “Well, I’ll imitate some sort of a gorilla.” And if you’re thinking a bit more about it, you might think, “Oh, and if I can get the Empire State Building in, that would be good. And maybe I can even try to swat at some airplanes.”

Of course, the person who's watching this has got to figure out what on earth is this, or what is this thing you're getting at? And sometimes, it works and it's very satisfying and sometimes it's hilariously disastrous. You think, "It's so clear, it's King Kong," and there's all kinds of crazy stuff about, I don't know, slasher films or Psycho. There's all kinds of wild guesses. But the miraculous thing about charades is that we actually quite quickly get attuned to each other. So one of the things that's so interesting, and I'm sure we'll talk about this later, is that if we played charades a lot with each other, there are certain things we start to do repeatedly.

So for example, if we did King Kong successfully, then if you needed to do some charade about it, something with an ape in it or something with a big building in it for that matter, you might well find yourself starting with that charade and then sort of making a variation. So you'd be saying, "Oh, this is the gorilla, we've got that straight. And now let's make some variation." And then if you had another thing, you might draw on several of your previous charades and gradually you're building up this kind of little library just for the two of you, sometimes just in the space of sort of 20 minutes or half an hour. But that's a very interesting phenomenon of course, very interesting and relevant for the way communication develops, and that's our idea.
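A toy sketch of the conventionalization Nick describes here: once a gesture sequence succeeds, the pair stores it and can fall back on an abbreviated form the next time the same concept comes up. The gestures, the data structure, and the abbreviation rule below are all invented purely for illustration.

```python
# Toy model of two charades players building a private shorthand:
# a successful improvised routine is stored, and an abbreviated form
# is reused the next time the same concept needs to be conveyed.
# Gestures and the abbreviation rule are invented for illustration.

shared_library = {}  # concept -> agreed (abbreviated) gesture sequence

def convey(concept, improvised_gestures):
    """Use the shared shorthand if one exists; otherwise improvise in full."""
    if concept in shared_library:
        return shared_library[concept]
    shared_library[concept] = improvised_gestures[:2]  # keep a short form for next time
    return improvised_gestures

print(convey("King Kong", ["beat chest", "climb building", "swat planes"]))
# First time: the whole improvised routine.
print(convey("King Kong", ["beat chest", "climb building", "swat planes"]))
# Second time: just the agreed two-gesture shorthand.
```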

Jim: Yeah, we’ll relate to that again a few times I’m sure, but I think it’ll be very useful just so people understand with some precision what it is. Now let’s hop into the book itself and you start off talking about the significance of language to what it means to be human.

Morten: One of the things that's often underappreciated, because we use language all the time and oftentimes we don't really think about it, but it is crucial for almost everything we do. It's crucial for using the internet, it's crucial for establishing laws, society more generally, norms that we live by, and so on. So without language, we wouldn't really be able to be where we are today, for better or worse. In evolutionary terms, having evolved the ability to use language really changes the game because now we can start to develop a cumulative culture, that is, we can build on what other generations sort of created before, and that allows us to have improved skills, improved facilities throughout societies, and just in the way we live with each other.

Nick: And I think the thing is, it is so hard to imagine life without language. And I know at the beginning of the book we talk about how it would be if language suddenly disappeared, and obviously we'd be desperately struggling to maintain the lives we have now because almost all communication would be extremely painful. We'd be using charades in fact the entire time for everything. But if you'd never had language in the first place, going back to what Morten was saying about accumulation of knowledge, every generation of learners pretty much would have to learn from scratch. So they might have to face the world as a kind of alone, isolated being, thinking, "Well, lots of weird stuff's happening, the world's very complicated. I've got to find food, I've got to find shelter, I've got to avoid being eaten."

And I can watch other people and they might try to give me hints, but since we have language, we can say, “This is where that particular type of prey animal tends to be. This ice is dangerous, don’t walk on it. This is how to build a shelter.” We can instruct each other. As Morten was saying, we can create norms and rules for each other. We can say, “We mustn’t do this,” or, “You did that and that’s wrong.” So we can start to build cohesive patterns of rules and laws that bind us together as teams and ultimately as societies. And that sort of accumulation of knowledge, I mean we tend to think about it in terms of libraries and of course that’s also super important. The fact we write stuff down requires language, but without language at all, there’s no question of writing something down or even an oral tradition. Simply every discovery is sort of immediately lost. So it really is fundamental to building a complex, rich society such as humans have. No other creature has language and no other creature has a society remotely as flexible and complicated as ours.

Jim: See, it's always seemed to me that language is one of those fairly rare, bright lines, or at least language of our sort. And we're not quite sure what the whales and dolphins and such are doing, but our sort of language compared to the other primates. I mean, Mr. Chimp and us share 98 and a half percent of the same DNA. And he's not a dumb guy. In some ways, he's smarter than we are, interestingly, in certain kinds of psychological tests, but I don't see him flying around in a 787. And so that bright line, it's made a gigantic difference.

Now when I’m reading books for podcasts, I often look for a single, fairly concise theme in the book. And I think I found one surprisingly early and I’m going to run it by you guys and get you to react to it. Now, this is Rutt’s hypothesis of the deep theme of the book. The fundamental misconception that the rough and tumble of everyday language is a pale shadow of an ideal language where words have clear meanings and are put together following well-defined grammatical rules. But this traditional story has things exactly backward. Real languages are not slightly mangled variants of a pure, more orderly linguistic system. Instead, actual language is always a matter of improvisation, of finding an effective way to meet the communication demands of the moment.

Nick: Yeah, I mean I think all I can say is spot on. Absolutely. When we were thinking about the early drafts of the book, one of the ways into it, which we didn't ultimately major on, was starting with the idea of a perfect language. So I mean we thought a lot about the history and looked quite a lot at the history of the idea that there is some kind of ideal language of which real languages are, as it were, sort of pale and broken copies. And of course, the Tower of Babel story in the Bible is just an example of this. All these sort of mangled tongues that we actually speak, and the diversity and the sort of misunderstandings we have through the variety of languages, is assumed to have been visited upon us as a punishment for overweening hubris.

But prior to that, the assumption was there was a single and presumably perfected archetypal language, but that's a very broad idea. It pops up just about everywhere. And one sense of the perfect language, one viewpoint of the perfect language, is historical. So the idea is that once upon a time, there was this ur-language which we all spoke and, as decline set in, it just scattered and broke into a million pieces. But another perspective is to think that the real language, the real sort of ultimate ground language, is the language in which we think. So the assumption, going back to some extent to Chomsky, but certainly to Jerry Fodor especially, is that…

Jerry Fodor was a very significant philosopher and psychologist of the sixties and seventies, and a very brilliant person, but with a very different perspective on the world to ours. And his view was that there's a language of thought, and that language of thought would be much more unambiguous, clear cut, generally cohesive than the languages we actually speak, and that human languages are kind of… I'd put it like this, they are kind of a mangled version of this rather precisely defined, logically coherent, inner language, which we could only, as it were, glimpse through the veil of the rather mangled natural languages we use.

Jim: An awfully Platonic view of reality. And I must say, I always put my flag down with the Aristotelians, right? I remember talking with Ray Jackendoff about this when he was at the Santa Fe Institute back in the double aughts, and he was, at least at that time, still somewhat of an adherent to mentalese. And I must say I wasn't buying it at all, even though he's a way smarter guy than I am, but even really smart guys can be wrong.

Morten: Yeah, the idea of the perfect language has been around for, as Nick was saying, a really long time. And there's actually a wonderful book by Umberto Eco called The Search for the Perfect Language, where he goes through all the different kinds of ideas people have had throughout the centuries about the perfect language. And the notion is that there has to be something essential to language that's captured in some way, whether divine, I don't know, if it's by some sort of God, or whether it's sort of built into our genes as a universal grammar as suggested by Chomsky, or something like that, but it's not taking the messiness of actual, real language into account. And this is where some of the inspiration actually came from, what you just mentioned, from being at the Santa Fe Institute, this notion that language is really more like a self-organizing system, which is of course a key topic at the Santa Fe Institute.

Jim: All right, let’s hop into the first story that you tell. And for the listeners, this book is full of stories. It’s actually very enjoyable to read and you’ll get some really good ideas from the stories. And that’s the encounter between Captain Cook and the crew of the HMS Endeavor in 1769 with some indigenous folks. As best I can tell, we’re in the south end of Latin America, South America someplace.

Morten: Yeah.

Jim: Tell us that story and how it relates.

Morten: It's a wonderful sort of part of history in that, I think it was around January of 1769, HMS Endeavor was on its way to Tahiti, where they were going to record the transit of Venus, and before they rounded the southern tip of South America, they needed to get some water and some wood. And so they put in at the Bay of Good Success on Tierra del Fuego, the southern tip of South America. Then when they came ashore, they saw some indigenous people, most likely Tehuelche. We don't know exactly who they were, but it was most likely Tehuelche. And at first, they disappeared again, but then two men from Cook's party sort of stood aside, and then two men from the Tehuelche party came towards them. This is sort of the incredible thing.

They had some sticks and they held them out in front of them and then threw them aside, and Cook and his men took that as an indication they had peaceful intentions, and indeed that was the case, because soon they were trading gifts and they were sharing food, and some of the Tehuelche even came aboard the Endeavor, and subsequently the Tehuelche helped Cook find water and so on. So it was an amazing interaction even though of course they had absolutely no language in common. And in fact, the Tehuelche would probably look very odd from the viewpoint of the Europeans, but also the Europeans would look incredibly weird from the viewpoint of the Tehuelche. Yet very quickly they were able to communicate with one another in what we in the book call a sort of high-stakes, cross-cultural charades.

Jim: Okay, great. And then as we talk about how, maybe not in this case, but in other cases where one society contacts the other, particularly where they're very separate linguistic groups and have nothing in common in vocabulary or syntax, pidgins often emerge. Could you tell us what pidgins are, and how they can then lead to creoles?

Nick: Yeah, so the idea of a pidgin… Pidgins are languages which don't really have a very coherent syntax. So they take vocabulary items from one, or primarily one, but sometimes a mix of both languages, the languages of the two sides. And in reality, this is very frequently documented in the case of empire. So one powerful nation is subjugating a people and essentially forcing them to work at their behest. And so it's not usually a symmetrical relationship at all; it's usually pretty grim. But the requirement to communicate with people you're wanting to work for you requires that you have a common set of nouns and verbs for actions and things. So initially, as it were on first contact, and fairly soon after that, a fairly ad hoc set of conventions gets picked up, usually deriving, as I say, from the vocabulary of one or both languages, and the syntax is very minimal.

So you don't have the complicated structure of a familiar language like English. However, interestingly, the children of groups where a pidgin is spoken start to produce something much richer. So they start to develop complicated ways of putting words together. So they don't just allow you to spew out nouns and verbs in whatever order gets your message across. You suddenly find word order starts to appear, and then all kinds of other things like morphology might start to appear. And so gradually the language gets more and more complicated. That's known as a creole. So after a generation or two, something real-language-like is emerging from something which was originally just a bit like a salad of words. So this seems to imply a kind of remarkable natural human ability to create, to organize linguistic structure where there wasn't one. So you're creating a new linguistic entity out of rather unpromising beginnings.

Jim: Yeah. Really interesting. It does seem to show the power of a language of some sort, even though, as we'll talk about later, you guys don't buy the Pinker or Chomsky-style story. Actually, when I was reading that, I had a curious question to self: have there ever been any examples where cultures ended up staying at the pidgin level and that became the language of a people? Has that happened anywhere?

Morten: I believe there are. Some languages that are pidgins actually become more complex than they were initially, but there are some languages that are still referred to as pidgins and have a substantial number of speakers. So these days, some of the distinctions between creoles and pidgins are a bit more fluid than people used to think. But I think Tok Pisin is an example of what is still referred to as a pidgin.

Jim: Interesting. I was just curious about it. It doesn’t really fit into the rest of your story. I say, “Hey, I got these guys here, might as well ask them.” But one of the things I noticed throughout the book is that you often balance oral language and sign language. In fact, you use the word sign or signed 91 times, mostly to refer to signing, a few times in the semiotics sense, but usually for signing. Michael Tomasello is well known for his theories that perhaps gestures came before language. What do y’all think about that?

Morten: Well, I think when it comes to the origin, it's probably hard to know unless we can get hold of a time machine or something like that. But in the literature it has been suggested that, "Oh, language originated in gestures," or others have suggested that it originated in vocalizations. But what Tomasello in particular was suggesting, he presented this thought experiment where he imagined that we take two groups of kids and we put them on different islands, and then on one island, they're only able to use gestures to communicate, no sounds, no speech, no nothing. Let's call the first island Gesture Island. And then on the other island, let's call that Vocalization Island, they're only allowed to use vocalizations, not speech as such, but just vocalization, and no gestures. He was suggesting that only on Gesture Island would these children be able to communicate with each other to any degree, because his intuition was that sound itself doesn't carry much meaning as such.

It's hard to get it, independently of words, to carry any meaning. However, there's a very smart psychologist, Marcus Perlman from Birmingham University in the UK, who did an interesting study where he had all sorts of people interested in language evolution create, essentially, sounds that should correspond to something like a tiger or cutting or water or something like that. And then these sounds were entered into a contest, and the people who could best produce non-speech sounds that would capture some of these concepts like cutting or tiger would win a $1,000 prize. And they took the best examples and then presented them to a bunch of people who knew nothing about what this was about.

And then they had to guess, when they heard a sound like, “Rawr,” what that referred to and there were sort of a bunch of pictures. They would then click on the tiger. And it turns out that people could do this quite well. And they even did it cross-culturally and cross-linguistically across a number of different countries around the world, including indigenous people. And they were able to find that sound can carry some aspects of meaning even in the absence of words and gestures.

Nick: I think that's very interesting when you think about charades, because when you're playing plain charades, you get a sense of just how amazingly rich a repertoire of gestures is, and how you can imitate a very elderly person or a bear or somebody doing some cutting or an umbrella or all these things. You can create sort of an image, a visual picture in the mind of the viewer, which can conjure up the right concept. And it's easy to think from that that sounds would be hopeless. And this is Tomasello's intuition, and mine as well. Language is great when you have words, but without words, these noises will be no good. So a kind of noise charades would not be nearly as effective. And what Perlman's showing is that noise charades is much more approachable than you might imagine. So it's not really so clear that… If it were the case that noise charades was very poor, then it would be a pretty powerful and direct argument that communication most likely would've started gesturally.

Jim: Well, that's an interesting data point that points in the other direction, makes it even more confusing. Yeah, right? Yeah, that's funny. Another SFI guy, Walter Fontana, one night over a bottle of whiskey, we were doing what you're not supposed to do, which is talk about the origin of language. One of the French research institutes banned the topic for a hundred years or something like that. But anyway, we were talking about it and we came up with our crackpot theory of the night, which was it may have evolved from multi-part tools, because creating and assembling a multi-part tool has a syntax and a grammar to it, and a semantics.

And so he said the mental machinery for building multi-part tools was the bootstrapper, and obviously there's an evolutionary payoff. And if you want genes to pull forward, let's have a payoff: I can make multi-part tools, I kill more, reproduce better, my genes survive. And so that was the bootstrap to rudimentary syntax, grammar, and semantics. And then we also took a leap into the void and said, "Gestures on how to make tools might have been the precursor to language." God, I'm not sure if anybody else has ever come up with that crackpot theory, but for the investment of one bottle of cheap whiskey, that's what we came up with.

Nick: That's quite a good payoff. I mean, this relates much more to Morten's line of research than mine. But I think it is very interesting that the sort of hierarchical structure of sequences of actions in general, multi-part tools being a case where you have to generate particularly complex sequences of actions, but human action sequences in general have a hierarchical form. So something like making dinner is a multi-stage task with all these different sub-components which have to be interleaved in complicated ways, and you have to go backwards and forwards. And they have different levels of complexity. Slice onions, but then there's individual slices, and then there's move onions into frying pan and wiggle the frying pan, and so on it goes. And that necessity to be able to build those complicated, highly hierarchical structures, it is interesting to think how analogous that is to the ability to create complex, structured sentences. And I know, Morten, you've looked at this kind of question, how much the hierarchical structure of non-linguistic sequences and linguistic sequences may connect, so you might want to say a little bit about that.

Morten: Yes. I mean, also just to note, researchers have suggested that language is sort of piggybacking on top of abilities for tool construction. So there's Dietrich Stout at Emory University who has done some work on that, including neural imaging. But I think more generally, there does seem to be some overlap between action sequences and human language, at least the structure of language. But it's also important to keep in mind that, for example, in the production of language as well as in action sequences, it's always very flexible. So you're not really planning necessarily very far ahead, because if you're reaching for something, you have to adjust your reach in some way if it's heavier or lighter than you had expected.

Likewise with language, oftentimes we feel like we are sort of speaking into the void as it were, that we have some idea of what we want to say, but the specific structure or the specific combination of words that we use are often something that we sort of fill in on the fly. And also going back to the notion of gesture versus vocalization, I think perhaps the most clear answer might be that it’s actually a combination of the two, because there’s disadvantages and advantages with both. So by combining them, it would seem that our ancestors, when they started communicating in more complex ways, that they very likely recruited both sound as well as gestures, just because you can make yourself more easily understood that way.

Jim: Yeah. In terms of charades, just think how much better you could be if one team was allowed to use words and the other wasn't, right? So "both" is, as so often in the social sciences, the seemingly obvious answer. Let's move on here, and this is into the history of ideas a little bit. In the post-World War II era, we started seeing, first slowly, then more rapidly, the proliferation of computers and computer networking.

And then soon thereafter, the development of a cognitive science revolution that in some sense tried to apply network and computational models to human cognition. You guys argue that there's something wrong with that, as it turns out, at least with respect to language. So why don't you tell that story a little bit and where it connects to your work?

Nick: Yeah, well, just to kick off, the starting point, from the point of view of the understanding of language that you have if you think about it as communication between computers over a network, is you think that the objective is to take the information in one computer and to package it up into a digital form, and digital could be a binary stream or it could be a stream of sounds. And to send that, in one case across a network, in the other case as an acoustic signal, to another machine, another person or another computer, and then it decodes at the other end. You take some information you want to transmit, an email or a text file or a movie, and you turn it into a message in some kind of compressed form and send it across the channel. And then the opposite process of decoding goes on at the other end.

And that seems like a very natural way to think. And indeed there's a beautiful theory of how that works, how rapidly it can be done and so on, developed by the mathematical and engineering genius Claude Shannon. Now, Shannon himself actually was very skeptical that this was a particularly good analogy for language. In fact, he particularly stressed that his theory, communication theory, is about all the aspects of communication which don't have anything to do with meaning. So he wasn't himself at all arguing that this was the right way to think about how meaning is conveyed between one person and another. Nonetheless, as a field I think we have imbibed this message. So the idea that you get in your mind if you take this viewpoint is you think, well, if we're going to have a way of coding up thoughts and turning them into a language and decoding them again, what we want is a clear, precise, formal system for doing this.
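To make the picture concrete, here is a minimal sketch in Python of the textbook encode-transmit-decode model Nick is describing, the one the book argues against as a model of language. The tiny codebook is invented purely for illustration; the point is just that the mapping is fixed, invertible, and applied the same way in every context.

```python
# The classic encode -> channel -> decode picture of communication
# (the model the speakers are contrasting with real language use).
# The codebook below is invented purely for illustration.

CODEBOOK = {"mallet": "00", "chisel": "01", "saw": "10"}
DECODEBOOK = {code: word for word, code in CODEBOOK.items()}

def encode(message: str) -> str:
    """Map each word to its fixed code, independent of context."""
    return "".join(CODEBOOK[word] for word in message.split())

def decode(signal: str) -> str:
    """Invert the mapping exactly; no inference, no shared background needed."""
    return " ".join(DECODEBOOK[signal[i:i + 2]] for i in range(0, len(signal), 2))

assert decode(encode("mallet saw")) == "mallet saw"
```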

So we need to have some mathematically well-defined mapping. It needs to be invertible, obviously. You need to be able to go in both directions, and it can't afford to have too many inherent ambiguities and vagueness, to the extent that they're being pushed around by context and continually reshaped by the particular situation you're in. That's a bug. That's not a feature, that's a bug, because you want the thing to be as stable as possible. Now human communication, well, this is where we go back to charades again. It seems to be really different from that. So first of all, it's not really clear that there's a crystalline thought in my mentalese, my language of thought, that I'm trying to get into your mentalese. That's the starting point of that view: there's a kind of file, as it were, in my brain, and I want to get that file into your brain.

That itself is probably a poor model of what's actually happening. But what we're doing when we're using communication, and language emerges from this process, is we are playing this charades-like game. So I'm thinking, I want you to think about King Kong. Now you will have a different conception of King Kong to me, different things about the movie and the background and the… was there a novel? I don't even know. So we'll have different background knowledge, but all I want you to do is to pick out this name, just as I will, and I'm going to use whatever tricks I can. So I'm going to try waving my… doing some swatting at planes. I'm going to try to mime the Empire State Building shape. And I'm going to do all kinds of stuff, and I'm hoping one of these things is going to get across to you the concept of King Kong.

But of course I'm not doing it in some standardized way. I haven't got a standardized method saying this is how I encode King Kong in all situations. And in fact, the method I use might depend crucially, as we were saying before, on what we were just chatting about before. It will depend crucially on what I think you know, and indeed, for that matter, on what I know. So if I know you're an old movie buff, then I think, ah, this is going to be easy. But if you're doing charades with a small child, I may think they don't quite actually know who King Kong is, or they don't know about the Empire State Building; this is going to be tough. So the message is being continually reshaped based on the particular context, the particular communicative interaction. So it's inherently, moment by moment, a local thing.

I'm trying to solve a communicative problem right here, right now, with you, to communicate this thing. And it doesn't have to be very precise. I just have to get across enough. So going back to your example of multi-part tool construction, if we're making a tool together, I don't have to have a code that's precise enough to convey structures for all possible tools. And if I say, pass me the mallet, say, it doesn't have to be an answer to the question of what exactly is a mallet among all possible objects I might give you: is this a mallet, what about that one? All that matters is there's only three things on the table. I want that one. That's all I care about. So it's the context-specific, moment-by-moment solving of the communicative problem that's right in front of me now, using all the intelligence and creativity that both of us have together, that's the trick.

So there's no code in the gestures, in the waving, in the noises I make, including using everyday language. This is not equivalent to converting my message into a kind of logical, crystal clear language which has a stable interpretation. Instead, it's charades all the way down. It's charades which get more and more standardized and conventionalized the more we talk to each other, the more it becomes similar. And of course we're acculturated into a lot of other charade users from birth. So the language is continually becoming, in many ways, more refined and more precise. But it's always a process of flexible, moment by moment interaction.
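As a rough counterpart to the codebook sketch above, here is a toy illustration of the context-bound interpretation Nick describes with the mallet: the cue doesn't need a general definition, it only needs to single out one object in the scene at hand. The scene, the objects, and the cue are invented for illustration.

```python
# Toy contrast with the fixed-codebook model: interpretation that leans on
# the shared situation rather than on a context-free definition.
# The scene and the cue are invented purely for illustration.

def interpret(cue, objects_in_scene):
    """Return whatever in the current scene is compatible with the cue."""
    candidates = [obj for obj in objects_in_scene if cue(obj)]
    return candidates[0] if len(candidates) == 1 else candidates

table = [
    {"name": "mallet", "weight_kg": 1.2},
    {"name": "pencil", "weight_kg": 0.01},
    {"name": "mug",    "weight_kg": 0.3},
]

# "Pass me the heavy one" works here without any general theory of "heavy":
# in this particular scene, only one object plausibly fits.
print(interpret(lambda obj: obj["weight_kg"] > 1.0, table))  # -> the mallet entry
```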

Jim: Yeah, I thought a very nice metaphor that you guys used, which brought forth the charade-ishness of language, was the iceberg, right? The iceberg, meaning you have words, phrases, and sentences above the water line, and below it is all of our experiences, our memories, our empathy, our culture, our factual knowledge, and some rudimentary ability at logic; we know humans aren't particularly good at that particular topic, but at least we have modest skills at it. You gave a very interesting thought piece of three short sentences: For sale. Baby shoes. Never worn. Tie that back to the metaphor of the iceberg.

Morten: That is a sort of very evocative piece of what's called flash fiction. And so when you first hear those six words, it's hard not to sort of concoct some sort of story about devastated parents having lost their child due to illness or something else. And then they're poor, so they have to sell these baby shoes they so lovingly bought for this little child. But of course, none of this information is in the six words that you just read aloud: for sale, baby shoes, never worn. And with regard to the communicative iceberg, the idea is that language is a collaborative endeavor.

So when we are communicating with one another, we are sort of collaborating to create a common understanding between us. And where the iceberg comes in, the notion is that without our ability to have empathy, the knowledge of common norms, a shared social base, general knowledge and so on, and knowledge about how we interact with one another, we would not be able to understand one another. We certainly would not be able to make any sense of this little six-word story. And so from the viewpoint of the communication iceberg, the idea is that what allows us to use words, phrases, sentences to communicate is really having all this hidden knowledge, the knowledge that's under the surface, so to speak. And without it, our words wouldn't make any sense, and we'd just sort of sink into unintelligibility.

Jim: Nick do you have anything to add to that?

Nick: Well, I suppose that highlights how different this communication-across-the-network perspective is from the way language works. If you naturally think you're going to communicate across the network, then you've got to have some kind of standardized protocol and it's all precisely defined. And every packet of information has a well-defined way to be decoded, and the message that you're pulling out is, as it were, bottled and passed down the wire. So there's no magic kind of creative license on the part of the decoder. The decoder's not supposed to think, well, you didn't actually say a ton of stuff here, but I'm going to infer all that stuff anyway. But real communication's all about that inferring process, and going back to our charades with King Kong, to even work out what we're even talking about, you have to do all this inferring, all this reasoning and creative thinking.

Jim: Yeah, there's some obvious proof. Oh God, I was just thinking about this as you were saying it. How could anybody not notice this? Think of the concept of marital misunderstandings. We don't get it right.

Nick: Yeah, I mean human misunderstandings, in that context and every other, I mean, they are the bane of our lives, aren't they? And in fact, in a way, communication is endlessly difficult. I think it's something… if you spend your life as we all do, spending a lot of time creating documents, creating podcasts, trying to communicate clearly, in a way a very large part of the entire human experience is devoted to the struggle to get one's ideas across clearly. And if one imagines that language is simply a matter of, a bit like, a programming language or some sort of code, then you think, well, once you've got the code down, it should be easy. You just think the thoughts and you just translate them into the code.

Well, why is it so difficult? But of course it is an endless challenge, because it's a fundamentally creative process and a collaborative process. And of course disharmony arises in close relationships where it fails. And I think one of the reasons it particularly causes disharmony is that in a close relationship, the assumption is you know each other so well, surely there couldn't be any misunderstanding. The fact that there are misunderstandings is particularly annoying. How could you have thought this? Do you know me at all? Et cetera, et cetera.

Jim: Yeah, Stephen King wrote a really good book. Stephen King, the horror writer who’s actually a damn good writer, despite what anybody might say. He wrote a great book called On Writing where he talks about this, but he uses different language, where he says you essentially have to mind read the other person. And I think what he meant was you have to make some guess on what the underwater part of their iceberg looks like and then in a relatively [inaudible 00:34:12] fashion project into their brain, what you have in your brain and hope that it lands and sometimes it doesn’t.

And so let's get on to the next reason that makes this even harder. And I love this part because it just makes so much sense and it ties in closely to well-supported laboratory research, which is the now-or-never bottleneck. You guys talk about the working memory limits, which are sort of popularly known as 7 plus or minus 2, 9 is Einstein and 5 is George W. Bush. And in reality, I think the more recent research says the number's more like 4 or 5, something like that. But anyway, this turns out to be an amazing bottleneck for this Stephen King experiment of having only a little porthole to send these signals across to activate the underwater part of the iceberg. Take it away.

Nick: Yeah. Well, shall I start off? This is something we both have lots of thoughts on. But just to start off, one of the amazing psychology experiments that really struck us was that if you give people sequences of random sounds, all beats and clunks and bangs, and you give people say 4 or 5 of these sounds over a period of, say, seconds, people have real trouble remembering even what order they occurred in. I mean also what they were, but what order they were in. So you think, wow, this is very strange. So you present sounds at roughly the rate that we say speech sounds, and it seems to be just an uninterpretable jumble. And if you made that sequence more than 4 or 5, you've completely lost it. You've got no idea what the sounds even were. So that's really weird, because it seems to imply that if we tried to communicate with each other by sending these arbitrary sounds, just noises which happened to be speech noises, we'd be baffled. We wouldn't be able to hold onto them. We wouldn't know what order they were in. We wouldn't know what they were.

So the thought is that the trick to get around this is that the brain has to take those sequences and make sense of them as larger sequences. It has to see them as chunks. It has to think, hang on, I've seen this sequence of noises before. This is the word tiger, say. It isn't just a random set of noises. And it also has to realize that particular parts of the speech sound can be interpreted as a tuh sound and a guh sound and so on. So it has to do that quickly, because if it doesn't do that quickly, more sounds are coming. And because of this limitation of four or five, we're maxed out with four or five random noises unless we can interpret as we go.

So after four or five speech sounds have been encountered, I've got to have grouped them together by then and thought, I know what that is, that's the word tiger. If I can't do it by then, more speech sounds are arriving and I'm done for. And of course we have that experience when we listen to languages we don't know, or don't know very well; we have that sense of, ah, it's just a pile of sound coming at me, help. Where one word popped out at me and then, oh, I've lost it again. So you have that sort of sense of zooming in and out of understanding, because it's just coming too quickly for you to decode it. Now the thing that is very interesting there is that this process of creating chunks has to happen immediately. If it doesn't happen immediately, the onrushing torrent will wipe you out.

But it's also a recursive thing. So you have to take the speech sounds and turn them into parts of syllables or morphemes or whole words. But then the same story applies with words. So if I give you a sequence of four or five words quickly, you can hang onto that, but if I give you seven or eight, you can't, if they're arbitrary words. And so again, you've got to do the same thing. You've got to put these words into some kind of logical connection, and you've got to do that just as fast, because otherwise you're wiped out when the new lot arrives. So processing "the cat sat on the mat," you've got to think, well, the cat, right? That's a thing. It's doing some sitting, on the mat, right? Okay, I've got an object and it's sitting on it, and you've got to interpret this as the sentence is coming in.

Because if you don't, your representation of those sounds is going to be obliterated. So there's a kind of chunking all the way up, as it were. You start with small units, you have to chunk them quickly, it's fast; if you don't do it immediately, you're done for. That's the now-or-never bottleneck: it's now or never. To work out what was said there, if I don't do it now, I'm never going to know, because the onrushing speech stream will hit me. But I've got to do that at the level of sounds to words, or parts of words to phrases, and up to whole sentences or even larger units of discourse as well. So this hierarchy of complexity, which obviously characterizes every language (they all have this same hierarchical structure), sort of comes automatically from the limitations of our memories.
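A toy version of the eager, level-by-level chunking Nick is describing: incoming units get compressed into larger chunks as soon as they are recognized, so the buffer never has to hold more than a handful of raw items. The little lexicon and the capacity of four are illustrative assumptions, not parameters from the book.

```python
# Toy "now-or-never" chunker: incoming units are compressed into larger
# chunks as soon as a known pattern is recognized, so raw material never
# piles up beyond a small buffer. Lexicon and capacity are illustrative only.

LEXICON = {
    ("t", "ai", "g", "er"): "tiger",   # sounds -> word
    ("the", "cat"): "NP",              # words  -> phrase
    ("sat",): "V",
}
CAPACITY = 4  # rough limit on unchunked items held at once

def chunk_stream(stream):
    buffer, lost = [], []
    for unit in stream:
        buffer.append(unit)
        # Greedily replace any recognized suffix of the buffer with its chunk.
        for size in range(len(buffer), 0, -1):
            if tuple(buffer[-size:]) in LEXICON:
                buffer[-size:] = [LEXICON[tuple(buffer[-size:])]]
                break
        if len(buffer) > CAPACITY:
            lost.append(buffer.pop(0))  # unchunked material gets overwritten
    return buffer, lost

print(chunk_stream(["t", "ai", "g", "er", "the", "cat", "sat"]))
# -> (['tiger', 'NP', 'V'], [])  everything survives because it was chunked in time
```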

Jim: Yeah. Morten, what do you have to say about language through the now-or-never bottleneck?

Morten: Well, one thing, actually, the bottleneck is even more severe than it may appear, because oftentimes we might feel that our minds are a little bit like digital recording devices. But it actually turns out that the sound that we hear, for example when I'm talking right now, disappears; the actual sensory memory of it, the echoic memory, disappears within about a tenth of a second, which is incredibly fast. So if you don't do something with that, the sound is just gone. And of course we experience that all the time. Somebody's talking and you get disrupted for a second, then suddenly, what on earth are they saying? But then you might say, well, okay, maybe we are compensating for that, we are speaking relatively slowly to accommodate those kinds of limitations. But actually it turns out that's not the case either, because on average any speaker of English will produce about 150 words per minute, which of course means many more individual sounds that we have to deal with.
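Some back-of-the-envelope arithmetic on the figures Morten mentions. The phonemes-per-word average is an assumption for illustration, not a number from the book, but it shows why the numbers bite: individual speech sounds arrive roughly as fast as echoic memory fades.

```python
# Rough arithmetic on the bottleneck Morten describes.
# The phonemes-per-word figure is an assumed average, used only for illustration.
words_per_minute = 150
phonemes_per_word = 4            # assumption: ballpark average for English words
echoic_memory_ms = 100           # ~a tenth of a second, per the discussion

sounds_per_second = (words_per_minute / 60) * phonemes_per_word   # 10.0
ms_per_sound = 1000 / sounds_per_second                           # 100.0

print(f"Roughly one speech sound every {ms_per_sound:.0f} ms,")
print(f"about as fast as the ~{echoic_memory_ms} ms echoic memory fades.")
```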

So all of that together creates the now-or-never bottleneck. And as Nick was saying, to get around it, we really have to sort of push language through that bottleneck as we are trying to make sense of it. And that influences how we learn language. So one of the key aspects of learning a language then becomes how we can do this as quickly as possible so that we can deal with the input, and of course learning a second language, or coming across different languages as Nick was mentioning. One of the hardest things about learning a second language is trying to make sense of the sound before it disappears. And the same is true also for sign language, which, as you noted, we refer to a lot in the book; of course there are many sign languages. From a visual processing perspective we have to do exactly the same thing, otherwise the input is just gone.

Jim: Yeah, it is amazing, right, when you think about it, that we're processing 150 words a minute, and very seldom, at least on the right side of 15 beers, do we just sort of break down so it's all incoherent, right? It's quite remarkable. And yet it all has to go through this little bottleneck, the perceptual memory bottleneck of 80 milliseconds and the working memory bottleneck of maybe a second or two for certain things and 250 milliseconds for other things. And it's funny, I do work on some projects that are trying to do language understanding, and not a single one even thinks about this idea of a series of bottlenecks that might actually be useful in producing a language that's efficient. Now it may just be that we have done the best we can and that our language is actually terrible. But at least it's an existence proof that a pretty powerful language can get through these little straws and yet still be useful.

And probably a big part of that is this concept of the iceberg, kind of like a code book. You have a five-letter arbitrary code on one end that actually expands to a paragraph on the other, something like that. So let's move now to just-in-time language production.

Back in 2002 I read a research paper; a buddy had done a bunch of work with the Web of Science, I think, that was before Google Scholar was widely available, and they estimated that 95% of the research in computational linguistics was on language understanding and only 5% on production. As a noob to the field I'd gone, that's interesting, I wonder what that's all about. Presumably that's changed some with the rise of LLMs and such. It has always struck me as spooky that we don't know how we make our utterances; they somehow just pop out. So how does this just-in-time language production fit in with your theory, and, just as a personal interest, what's the current state of play of research or interest in language production versus language understanding?

Morten: Well, so interestingly, well, perhaps not surprisingly, you actually see a similar kind of asymmetry within the study of language as well. So probably about 90-plus percent of all studies are actually about language understanding, language comprehension, and much less is about language production. So in a sense we know less about language production than we do about language understanding. And part of that has to do with it just being much harder to study. So if I want to try to figure out how you understand language, how you understand sentences, I can either have you read sentences or play sentences for you, and I can test you in a variety of ways to see what you understand. But with production, if we want to generalize across different people, it's hard to control things in the way that we'd like to control them in psychology.

But what we are suggesting with regard to just-in-time production is the notion that, as I mentioned earlier, we oftentimes feel like we are sort of speaking into the void, that we don't really know exactly where our sentence is going to end when we start it. Nonetheless, we end up saying something that is typically grammatically correct, or reasonably correct, and to do that, we do the opposite of what Nick was describing for comprehension. So we start out with a general idea of what we want to say. We break that down into subparts, intonational phrases roughly, that will fit within sort of a breath of air. Then we take those parts and we break them down into subparts, which could be either small phrases or individual words, and then we continue that process until we're essentially moving things around inside our mouth in order to produce the sounds.

And we took an analogy from the car production industry. So the Japanese revolutionized how you make cars essentially by not building up any inventory, because having lots of inventory is expensive. So what they would do is, as something was needed to go into a car, it would arrive just before that. So it's just-in-time production, and the suggestion is the same for language. We don't want to have too many parts of what we want to say lying around, because then they can interfere with one another, because of the limitations of our memory processes. So we generate the different parts of what we want to say just as we need them, rather than generating a full sentence and then starting to say it. We do it as we are saying it.

Nick: Yeah. And if we did the opposite, if we had the thought in its entirety formulated, how we're going to say it at the level of speech sounds, then we might say, well, now there's a string of 80 speech sounds and now I'm ready to say them. But unfortunately that string of 80 is just way too much for our memories. So it'd be useless; it's completely pointless to be storing that because I can't deal with it. So what I can say is, I'm going generally over here, and then a couple of levels down, the next three sounds are these ones, here they go. You've got multiple levels running at the same time, but each level is only jumping ahead of you three or four items, because otherwise it's just going to exceed our memory. So yes, if you have a big inventory, you're going to fail, you won't be able to manage it, you'll just have total chaos.
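A minimal sketch of the just-in-time idea as lazy, top-down expansion: each level only yields its next pieces as they are reached, so nothing like a full string of 80 speech sounds is ever held at once. The miniature plan and symbols are invented for illustration.

```python
# Toy just-in-time producer: a lazy, top-down expansion in which each level
# only produces its next pieces when they are needed, rather than building
# the whole utterance in advance. The miniature plan is invented for illustration.

PLAN = {
    "MESSAGE": ["CLAUSE"],
    "CLAUSE":  ["NP", "VP"],
    "NP":      ["the", "cat"],
    "VP":      ["sat", "PP"],
    "PP":      ["on", "the", "mat"],
}

def produce(symbol):
    """Yield words one at a time, expanding sub-plans only as we reach them."""
    for part in PLAN.get(symbol, [symbol]):
        if part in PLAN:
            yield from produce(part)   # expand the sub-plan lazily, just in time
        else:
            yield part                 # a word: articulate it and move on

print(" ".join(produce("MESSAGE")))    # -> the cat sat on the mat
```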

Jim: It's the old 10 pounds of shit in a five-pound bag problem. The thing that has always amazed me, ever since I started thinking about the whole issue of language around 2002, is that the production side is almost entirely unconscious. If you actually try to pick your words one at a time, it's like you've been punched in the head a few times. And I wonder what does that say about innateness? I mean, that smells an awful lot like a programmed capability, to be able to do this chunking at different scales, get it through the bottleneck, and yet still have the whole thing coherent. That's a damn tricky problem. Pure speculation obviously. What do you think, do we have some genetic capacity in that area?

Nick: Well, I think we do, but not necessarily a capacity that's specific to language. So the ability to do these complicated action sequence tasks, like for example cooking, is similar. If you planned your entire protocol for a meal, apart from the fact that, as Morten was saying earlier, you'd have to keep making adjustments because pans will be heavier than you expected and your onions will be burnt on the bottom and all these adjustments you have to make. But even if you could plan it all out, you'd have hundreds of actions and you'd never be able to remember what any of them were. So you wouldn't be able to actually work that way. What we actually do is we have this vague sense of: now to the sauce, back to the oven, was the oven off? And within that there are all these sub-components. So I think the ability to manage that is remarkable, and it does seem to be, not necessarily uniquely human, but something that humans are especially good at, and probably quite a lot better at than most non-human creatures.

But that's not necessarily, and I think we would strongly argue that it isn't, in fact, specific to language. So it's a crucial precursor to language. If you didn't have that machinery, that ability to build these complicated hierarchical sequences of actions, then language could never have emerged. Just to say a tiny bit about your point about the strangeness, the true weirdness, of speaking into the void: I think that's absolutely right, and from a now-or-never bottleneck point of view, or a just-in-time production point of view, it's sort of inevitable, because you really can't see far ahead, and it's not as if your mind has plotted the whole sentence and you're just waiting for it to emerge.

Inevitably, you are making it up as you go along. So, that sense of, "Gosh, words just seem to be appearing. I think we're just pulling this stuff out of nowhere." In some sense you are engaged in a creative process. It's not that you are, as it were, downloading or decoding something in your mind that you could somehow look at from the outside and say, "Yeah, I'm halfway through the file now. Oh, there's a little bit more to go." You're engaged in an improvised activity, and it's a bit like playing the saxophone or something: you can't explain what you're going to do before you've done it. You've just got to go with it.

Jim: Cool.

Morten: And also, the notion of just-in-time production allows you enough flexibility so that you can react to the kind of feedback that you're getting from the people you're speaking with. And one of the things that adds to the pressure from the now-or-never bottleneck is that during normal conversation, the time between when I say something and when the person I'm talking to says something is on average about 250 milliseconds, which is an incredibly short time. And this has been shown to hold across cultures and across languages.

And what this means is that you actually have to be very attentive to what the other person is saying. One of the things we know from experimental psychology is that if I were to show you a picture and you had to say whether it's a cat or a dog or something, that takes you about 600 milliseconds to program and be able to say. So, the 250-millisecond pause between when somebody finishes their turn and another starts theirs means that we have to try to figure out what the other person is saying, of course while they're saying it, but we also actually have to figure out what we are going to say, or at least begin to, before they've finished their turn, and therefore we need maximum flexibility in order to be able to do that.
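A quick worked version of the timing argument Morten is making, using the two figures he cites: if even simple naming takes about 600 milliseconds to prepare, but the gap between turns averages about 250 milliseconds, a listener has to start planning well before the other speaker stops.

```python
# The turn-taking arithmetic Morten describes, using the figures from the conversation.
planning_time_ms = 600   # ~time to prepare even a one-word utterance (picture naming)
average_gap_ms = 250     # ~average silence between turns, across languages

head_start_ms = planning_time_ms - average_gap_ms
print(f"Listeners must begin planning roughly {head_start_ms} ms "
      "before the current speaker stops talking.")   # -> roughly 350 ms
```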

Jim: Yeah, that's again, how does that happen? That's pretty cool. As you guys point out, the speed of the language dance is remarkably high. And you give some interesting examples showing that it's nowhere near as coherent as we might think, right? There's lots of ellipsis, there's repetition and other kinds of things that have to be filled in by inference. I mean, if you actually look at a recorded conversation, it's like, "What the hell. Are those people crazy?" And that's probably another adaptation to these bottleneck issues. So, the bottleneck on one side and the use of the iceberg on the other, at least that strikes me as a reasonable explanation for how it is that this weird, crazy-looking language dance actually works.

Nick: Yeah, and of course that also picks up on the point, going back to the perfect language point, that we tend so often to think, at least in the academic study of language, that written language is primary, which it sort of obviously isn't. Written language is clearly quite recent. None of us learn to write before we learn to speak. But nonetheless, it's often viewed as the kind of canonical form of language. And it's obviously more stable and regular and has better grammar, at least in sort of high literary culture, than the sort of conversations that we have, with all these stumbles and restarts. But of course, taking this charades perspective, we should absolutely see it the other way. The more natural perspective, which of course is the way natural language has evolved and the way we use it all the time, is this chaotic jumble that allows us to get our thoughts across to other people we're interacting with face-to-face, or over the internet or the telephone these days.

And the fact that you can then codify that and formalize it and make it more precise, and develop a kind of more standardized written language, is a very interesting thing. But thinking that that is the ultimate object of study, the essence of language, is a fundamental error. But actually linguists have often done that. So, at least post Noam Chomsky, there's been, implicitly, a great focus on what's a proper grammatical sentence, spoken by, or not necessarily spoken actually, really written and read by, a literate speaker of the language. But the fact that we speak very differently, in a jumbled, inchoate way, is usually viewed as sort of performance error. This is just sort of marginal stuff. It doesn't really matter. What matters is the perfect language, which is somehow embodied in a kind of ideal written text, but also ultimately embodied, supposedly, in some sort of universal grammar in your mind. And the chaos is just purely kind of noise.

But we want to see that chaotic improvisation as the absolute core of language. And the fact that it self-organizes to become somewhat structured is an astonishing thing. But it starts from chaos and becomes orderly, rather than starting from some kind of perfection and degrading.

Jim: Yeah. Let's jump ahead to something I was going to do in the next section, and we'll come back to what I was going to do next, which is: you guys go right after the harrumphers, who talk about language degrading, kids today, they're destroying English, French, Russian, Greek, Chinese, whatever. Go after the harrumphers, give them a full shot.

Morten: Well, for centuries people have been complaining that the young people today are ruining the language and so on. But really, what that’s just a reflection of is that language continuously changes. There are new words coming into play, new ways of saying things and so on. And that’s because we’re creating language in the moment. We keep on improvising, coming up with new ways of saying things. And of course there’s always going to be someone who sees this as, “Oh, this is not the language that I learned or grew up speaking, so there must be something wrong with it,” and that there must be an ideal essence to the language that these new ways of saying things are a deviation from. But what that’s really missing is that language is changing all the time. And in many ways we don’t realize that even within ourselves, our own language changes throughout our lives.

And there’s actually a really neat study of Queen Elizabeth and her vowels. We all know that Queen Elizabeth, before she passed away, sounded very posh to many people; her way of producing vowels and consonants was quite specific and would sound posh to many. But it turns out that over the many years she was the queen of England, she changed her vowels. What a set of linguists did was analyze her Christmas speeches, which she had been delivering since sometime in the mid-1960s, I think. They recorded her vowels, and they could measure the specific kind of vowels that she produced. And it turns out that after 40 years, she no longer spoke the Queen’s English, at least when it comes to vowels, because her vowels had changed. They actually ended up sounding more like the vowels the general population had used 40 years earlier. But because the general population’s vowels had changed too, she still sounded posh; she had shifted toward where they had been, and they had moved on as well.

Jim: Yeah, this is the essence of language: it’s always evolving, right?

Morten: Exactly.

Nick: Yeah, and…

Jim: Not necessarily devolving. You guys made a good point, which is that if you took the harrumphers seriously, then you’d have to look at the descendant languages of Old Norse, like Danish, Swedish, Norwegian, and Icelandic, as barely functional and degenerate. And you’d expect people to be going around Reykjavik on all fours, right? That’s just not the way the world is.

Nick: Yeah, I think English would be in a pretty bad state, wouldn’t it? Because it’s a strange mixture of French and German, sort of Romance and Germanic languages mushed together, with influences all over the place. So, yes, I think the history of languages is a history of unwanted and frustrating change for the older generation. And we’re all subject to it; it’s hard not to suffer this. You hear people using a word in a way it didn’t use to be used when you were a boy, and it’s hard not to think, “Well, something’s being lost here. The language is suffering a terrible blow.” But possibly what one’s forgetting is that for every lost distinction, there’s a new distinction. When people have to communicate something, they’ll find a way to do it… If it’s…

Jim: By the way, in a world that’s getting increasingly complex, it’s probably not one for one, it’s one and a half for one or two for one. It’s interesting. Being a geezer, I try hard not to say “kids today,” though I still do it sometimes, because you remember they’ve been complaining about kids today at least since the time of Socrates. And if we were actually getting worse with every generation, we would be back on all fours by now. So, I think language is a great case; you guys make a really good case against the harrumphers, one of the best I’ve ever seen. Anyway, let’s get back to where we were, which is another one of the chall… in some ways it’s a challenge of language, but it’s also evidence, I suspect, of your hypothesis: the meanings of many of the tokens in the language. And you guys do a really nice job, a little light dance around the phrase “the unbearable lightness of meaning.” Take it away.

Nick: Yeah. Right. So, many people will know the book The Unbearable Lightness of Being, by Milan Kundera. And we’re taking that phrase, which actually a colleague of ours, a fellow PhD student of ours at Edinburgh, George Dunbar, picked up. We have never found this in writing, but he generated this lovely phrase, the unbearable lightness of meaning, which is wonderful. What George was interested in was the amazing flexibility with which people’s categories jump around based on really very ephemeral cues. So, we riffed on this in our discussion of meaning. And I think one of the things that’s rather nice is that if you take lightness as your starting point, well, lightness is a pretty basic kind of thing. It just refers to the opposite of heaviness. I mean, what’s complex about that? But take lightness itself and you find, of course, it has an enormous number of meanings.

So, you can talk about light infantry, light brigades. You can talk about light loads, you can talk about lighthouses in a slightly different context. You can talk about light reds, you can talk about light music. And you see, what’s going on here? It’s the same single word, and not an exceptional, unusual word, and it has this enormous variety of uses. Now, in the charades world, this is what you’d expect, because as long as you can build some kind of conceptual, analogical link between light as in, say, a light stone versus a heavy stone, and a light versus a heavy voice, which it appears we can, this is something psychologists are quite interested in: the fact that we can make these connections between quite different modalities and quite different kinds of thing, but in a stable way.

So, it’s not the case that some people think a light stone corresponds to a gruff voice; it’s pretty stable that the heavy stone is the gruff thing and the light stone is the high-pitched thing. If those things are stable, then we can use light in all these different contexts, and it’s incredibly generative. So, it’s like King Kong. You’ve got a charade component: you’ve got a token, light, and say I’ve got some wines and I want to pick out the one with the lighter flavor, maybe. And I think, well, I’ve got a token for this, light, let’s give it a go. And if it’s the case that you can make that mapping, which it seems we can in a fairly reliable way, then I can now use it to describe wines, which I wasn’t doing before. And so you’ve got this astonishing flexibility, which is only really limited by our collective, collaborative creativity.

I mean, we have to be creative in the same way. So, it goes back to your Stephen King point. If one person is trying to conjure up something in the mind of another, the conjuring-up process does require that we’re aligned with each other, so that my connection to what a light red or a light wine might be has to have some link to yours. But it doesn’t have to be that precise, because it only has to do the job of the moment. So if I say, “Oh, could you pass that light red cup?” we don’t have to have complete agreement on exactly what a light red means in all possible circumstances. We just have to know it’s not that one; there are only two, one’s dark and one’s light, it’s that one. So, this tendency for us to continually reuse and evolve and spread out the meanings of our words is very basic.

So, Wittgenstein, who’s a great philosopher: the language game, the title, really comes from his discussions of language. I can certainly guarantee we don’t reach a profound, Wittgensteinian level in our book; he’s a very deep thinker indeed. But the idea of language and communication operating in these kinds of momentary games, that’s something that came from Wittgenstein. And Wittgenstein was also very interested in the idea that meanings are not bound together by some common core. They’re bound together by what he calls family resemblance. And family resemblance, in Wittgenstein’s sense, means that each member of the family, each component, is just tied somehow to some other one. It doesn’t have to be the same tie each time; there’s no common essence, no common core. And so once you’ve got the idea of a light red, you can probably do a light blue and a light yellow.

If you ask how all those light colors relate, which may have something to do with whiteness or translucency, and then how they relate to the different kinds of, I don’t know, brigades in the military or frigates in the navy, you might say, “Well, there’s no real connection, but light gets me from one domain to another. And then in that domain we can play around, and we can go to some other domain and play around. And this sort of network of connections can take us in any direction.” But what you don’t get, and what psychologists, linguists, and philosophers have often imagined you must have, going back to the perfect language perspective, is some kind of common meaning that answers the question, “Yes, but what do the words mean? What’s the core meaning of light?” And the answer, for Wittgenstein and for us, would be, “Well, it’s just the wrong question. There’s just this…”
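
One way to picture this family resemblance idea is as a small graph of word senses, where every sense is linked to at least one neighbor but no single sense is linked to all of them. The senses and links below are invented for illustration; nothing here comes from the book.

```python
# A toy "family resemblance" network for senses of "light": the network hangs
# together, yet no single core sense connects directly to every other.
from collections import deque

senses = {
    "light (not heavy)":  {"light meal", "light infantry"},
    "light meal":         {"light (not heavy)", "light beer"},
    "light beer":         {"light meal", "light red (pale)"},
    "light red (pale)":   {"light beer", "light (brightness)"},
    "light (brightness)": {"light red (pale)", "lighthouse"},
    "lighthouse":         {"light (brightness)"},
    "light infantry":     {"light (not heavy)"},
}

def connected(graph):
    """Breadth-first search: can every sense be reached from any other?"""
    start = next(iter(graph))
    seen, queue = {start}, deque([start])
    while queue:
        for neighbor in graph[queue.popleft()]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen == set(graph)

# One family, linked by local resemblances...
print("connected as one family:", connected(senses))
# ...but no sense is adjacent to all the others (no common core).
print("has a common core sense:", any(len(nbrs) == len(senses) - 1 for nbrs in senses.values()))
```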

Jim: Does it work? Is it useful? Right? People who listen to my podcast know that I love the word useful as a lens on things, and I don’t really care whether things are good or true or beautiful; are they useful? We’ll get a little bit later into the evolutionary ideas of language, but I’m going to put those words in your mouth. When I was reading that section in the book, it reminded me of my father’s generation, who, toward the end of the stretch before payday, would talk about their wallet being light.

That was kind of an almost literary usage of the term. And obviously the difference in the weight of their wallet was probably insignificant, but metaphorically it worked, kind of in the Lakoff and Johnson sense, where you take the physical and project it into some other domain. You guys also talk a little bit about how this is the way kids, toddlers, use language. And I’m very focused on this because I’ve got a two-and-a-half-year-old granddaughter who is very verbally adroit and will be arriving in about an hour and a half. I haven’t seen her in a month and I expect she’s made another quantum leap in her use of language. So, how do toddlers use these tokens that are highly overloaded?

Morten: Well, oftentimes what happens is that it might seem they know the meaning of a word in the same way that we might think you and I know it, or at least an approximation of it. But actually what they’re doing is just trying to make sense of what you’re saying in the moment. And if you ask them later on, they might actually have a very different view of it. So, oftentimes kids might think that dog, for example, refers to the dog in the house. But actually, if you ask them whether a cow is a dog or a cat is a dog, they might say yes to that too, because for them dog just refers to four-legged animals. And there’s a nice study that looked at whether children who were exposed to some new words in the laboratory could actually remember them.

It turns out that although they could use the words in the moment, because they essentially just treated it as a kind of problem solving, when they were tested later they just didn’t know the meanings. But they could use them in the moment because the context helped them figure out what the word meant. So, kids gradually, over time, try to figure out what we are trying to say to them from the context in which they hear the words. And of course they pick up on, say, “Okay, these words are typically used in this context, and this word in that context,” and they become more and more amazing at that. But oftentimes, if you query them a little more deeply, you’ll find they don’t really have all the nuances of meaning that a word has. But they’ll get there. Definitely.

Nick: Yeah. Just to add something to that, which is an anecdote from my time at Edinburgh, because both Morten and I were graduate students at Edinburgh University. So, we do go back a long way, back to our PhDs. And one of the talks I went to as a graduate student, which I think was probably something like 1988 or 1987, was by Susan Carey, at Harvard and still active, who was giving a talk about concepts in young children and how different they are from what we imagine, exactly in the way Morten was talking about. And the example I remember she talked about was alive and dead: what children think is alive, what they think is dead. I can’t remember how old the kids were, sort of five or six or something like that. And she was talking about how incredibly metaphorical and analogical, exactly in the Lakoff and Johnson kind of way, their usage of alive and dead was.

And I remember my friend and frequent collaborator, Mike Oaksford, who was in the same place as Morten and I, another graduate student at the same centre in Edinburgh. He had a five- or six-year-old daughter. So the next time we were with her, we said, “Okay, let’s try this. This can’t be true. This stuff about alive and dead is crazy, it’s just ridiculous. Well, let’s just ask Julia,” who’s now of course a 40-year-old woman or something. How does life go by so fast? And it was truly astounding. We discovered that if you asked her, “Is a car alive?” she’d say, “No, a car’s not alive.” Well, not if it’s just sitting there; when it’s going along, it’s alive of course, but when it’s static, no, it’s not alive. And we discovered that the TV’s alive when it’s actually on, but obviously not otherwise. It was just incredible. And the sun turns out to be alive as well, at least according to her. But if you asked, well, what about at nighttime? That was a tricky one; she wasn’t sure about that.

Jim: Probably sleeping, right?

Nick: Sleeping. Yeah, that’s probably right. She was very lucid and could give good stories about the meanings of these words, but it was just totally not what we imagined at all.

Jim: And yet the kid could communicate just fine.

Nick: Absolutely. Yeah, absolutely.

Jim: I think that was my takeaway from this section, is the stuff is way sloppier than we think, and yet it still works.

Nick: That is the miracle of human communication and human intelligence, isn’t it? The miracle isn’t that we’re so precise that we can get everything down pat. The miracle is that we can be so sloppy and succeed, and we have to be, because the world is way too complex for us to understand. What’s inevitable when you’re a small child is that you don’t understand the detailed meanings of dog and cat, and alive and dead, and all these things, because the world’s way too complex to comprehend at that age. But of course the world is always too complex to understand. I mean, we’d never be able to start speaking to anybody if we had to understand the world around us, because it’s way more complex than we can possibly imagine. And yet we can still communicate very effectively. So, that sloppiness is inherent to the miracle of communication: we can talk about a world without really knowing how the world works.

Jim: All right, let’s dig in and get a little bit more theoretical and nerd-like. Let’s… This I think is a phrase from the book, the Forces of Order and Disorder. And this is where you start to go Santa Fe Institute on us a little bit. And I can actually see it, the order and the disorder. Take it away, gentlemen.

Nick: Well, shall I start with a few words on that and then hand over to Morten? I think the starting point for most theorists of language, as we’ve talked about with the perfect language dream, is the idea that order must be primary. So, for example, the idea of universal grammar, which has been basic to Noam Chomsky’s position, at least for a long time; it’s not quite clear what his position is now. It’s the idea that the basic structure of language, the basic grammar, is genetically wired into you. Quite where that genetic wiring comes from is not completely clear; the wiring of the brain comes from the genes, but where that comes from isn’t completely clear either. But the idea is that it specifies the space of possible languages in a really formal way, in a way you could write down with the kind of symbolism you’d use to specify the structure of a programming language.

And the thought is that human languages can only inhabit a kind of small space, a tiny fraction of all possible languages. And at least in one of the best-known variations of this idea, which was Chomsky’s thinking, the idea is that you have a set of parameters, and those parameters define a space of possible ways a language can be. The child’s struggle in learning a language is just to figure out which of these languages it is: I’ve got a finite set of parameters, I’ve got all these possible languages, and all the really complex stuff is just wired into me. The structure’s all basically in my head. But it got me…

Jim: You’ve got to set the knobs on the universal grammar regime. That’s how they tried to explain it to us when I was at MIT taking philosophy in 1977.

Nick: Yeah, exactly that. It’s exactly that. So all the complexity’s built in, and you’ve just got to set the knobs. So, the child is thinking, am I in one of those languages where you can drop pronouns or not? “Can I do that or is that not okay?” Whatever it is. There are all kinds of specific things that differentiate one language from another. And then the alternative view, our view, is a sort of disorder-first, order-later perspective. The idea here is that you’re learning quite isolated little patterns of a language, which work in particular contexts, and gradually those become more entrenched and more connected to other bits of the language. So as you’re acquiring a language, you’re learning it piece by piece, kind of unit by unit, but you’re gradually building more of a system, and that system becomes more and more complex, but it never becomes fully coherent.
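
For readers who like to see the “knob-setting” picture spelled out, here is a toy sketch of what the order-first view amounts to. The parameters and observations are invented for illustration; this is the position being argued against, reduced to a few lines of code.

```python
# A toy sketch of the parameter-setting view of universal grammar:
# a finite set of binary "knobs" defines the whole space of possible
# languages, and learning is just eliminating the settings that don't
# fit what the child hears.
from itertools import product

PARAMETERS = ["pro_drop", "verb_final", "null_subjects_in_questions"]

# The space of "possible languages" is every combination of knob settings.
possible_languages = [
    dict(zip(PARAMETERS, bits))
    for bits in product([False, True], repeat=len(PARAMETERS))
]
print(len(possible_languages), "possible languages")  # 2**3 = 8

def consistent(language, observation):
    """A language survives if it agrees with every parameter the data reveal."""
    return all(language[param] == value for param, value in observation.items())

observations = [{"pro_drop": True}, {"verb_final": False}]
remaining = [
    lang for lang in possible_languages
    if all(consistent(lang, obs) for obs in observations)
]
print(len(remaining), "languages remain after two observations")  # 2
```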

So, it’s not that you ever get to the point where you say, “Ah, now we can describe that perfectly with a mathematical system.” It never gets to that point, because essentially it’s a self-organizing system where the different bits are self-organizing independently, and they don’t quite cohere. So, language is actually riddled with strange exceptions and misconnections right through it. In every aspect of language, from the sound patterns to the grammar to the meanings, there are exceptions and irregularities and peculiarities everywhere. Now, to the order-first perspective, this is a bit of a mystery. I mean, we’ve got this orderly thing in our head, we have the parameters, the knobs, and we twiddle the knobs. But then, strangely, there are all kinds of exceptions; language just seems to have exceptions everywhere, and quirks and peculiarities. Where are they all coming from?

Whereas from the disorder-first perspective, of course, it’s chaotic to begin with and a lot of the chaos is still there. The order is emerging, but it’s not complete. So, just to give one example before I hand over to Morten, the kind of thing you see in language all over the place is something I was thinking about today. This is not necessarily the best example, but it was just wandering through my mind, because it’s quite cold here at the moment in Oxford. In British English you can say it’s nippy today, or you can also say it’s chilly today, and that’s fine. And I can say, if I want to, about myself, I’m chilly, but I certainly cannot say I’m nippy. Now, what’s going on?

Jim: Well, you can, but people will think you’re a little weird.

Nick: They would think I’m very weird. Yeah. So, what’s going on there? It seems very weird, but that’s everywhere; language is just like that. Peter Culicover, in particular, a very notable linguist, works a lot with Ray Jackendoff, the very brilliant linguist you mentioned earlier, and both of them, brilliant as they are, have looked a lot at these kinds of quirky aspects of language. There’s a wonderful book called Simpler Syntax by the two of them, and also Culicover’s book Syntactic Nuts; these are just amazing. They’re full of demonstrations of the crazy semi-regularity of language. Every place they look, they find more of this kind of weird irregularity. But, so I’ll hand over to Morten.

Jim: Morten.

Morten: Yeah.

Jim: Order and disorder.

Morten: Well, mostly disorder, but out of that comes order. And I think one of the things that’s interesting, from our notion of language being fundamentally about improvisation and collaboration, is that we have these recurring patterns that are used over and over again, and over time they can even change in their nature. So, there’s an interesting part of linguistics called grammaticalization, where certain words or word combinations that are used over and over again can actually lose some of their normal meaning and take on an additional grammatical function. For example, the word go in English primarily just referred to walking, as in “I’m going to Paris,” which would mean I’m walking to Paris. But over time, because when we go somewhere we oftentimes do something, that’s generally why we go somewhere, it took on a more abstract meaning describing something we want to do in the future, such as “I’m going to eat at seven.” Many other words have undergone similar kinds of processes, and so this begins to create some…

Morten: So this begins to create some patterns of order in the disorder. On the flip side, because we are essentially trying to communicate in a way that’s good enough for present purposes, it allows for all sorts of flexibility and variation, so that it doesn’t need to be hammered down into a strict order based on rules and so on. And as you were referring to earlier, when you look at an actual transcription of speech, it’s incredibly messy, yet it does have quasi-regular patterns, the kind of patterns we also see emerging in self-organizing systems, which they still like to talk a lot about at the Santa Fe Institute.

Jim: Indeed. And you guys give some examples of how this stuff is less stable than you might think. Even something that seems like a bedrock of language, like subject-verb-object order, can, as we would say in Santa Fe English, drift in its basin until it reaches a transition point and then flip over, right? Maybe talk a little bit about that.

Nick: Yeah. So I think if you are a speaker of many European languages, the idea that every language must have default subject verb object order seems very natural. But actually, subject verb object isn’t even the most common word order for languages of the world. So I believe VSO, verb subject object, is the most common. Is that right, Morten? Double check that.

Morten: No, it’s SOV. VOS is definitely not. SOV.

Nick: That’s right, that would be bonkers, yeah. So it’s actually more common to have the verb at the end of the three. So something you’d think would be very standardized in fact isn’t. And interestingly, a lot of languages are actually very free in the word order they use. Famously Latin, where you have cases, and the cases basically tell you who’s the subject and who’s the object. The cases are actually doing the work of the word order. So in Latin you can play quite freely with the word order. Not completely freely, I think; not that I’m any expert, but pretty freely.

And as time goes by, it turns out that in languages descended from Latin, a particular word order, SVO, becomes established, and then you can start to throw away all that case information. You don’t need the noun endings telling you, “I’m the subject” or “I’m the object.” You don’t need that anymore, because now you’ve got word order to do that for you.
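
To make the point about case doing the work of word order concrete, here is a toy parser. The Latin sentence and the two-suffix rule are drastically simplified for illustration; real Latin morphology is far messier, which is rather the point being made about language generally.

```python
# Toy illustration: with case endings, "who did what to whom" can be read off
# the word forms themselves, so the words can appear in any order.

ROLE_BY_SUFFIX = {"us": "subject", "um": "object"}  # nominative vs. accusative, crudely

def roles(sentence):
    """Assign roles from endings alone, ignoring word position entirely."""
    assigned = {}
    for word in sentence.split():
        for suffix, role in ROLE_BY_SUFFIX.items():
            if word.endswith(suffix):
                assigned[role] = word
    return assigned

# Same words, scrambled order, same interpretation ("the master sees the slave"):
print(roles("dominus servum videt"))
print(roles("servum videt dominus"))
```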

So it turns out that word order as a crucial clue to who’s doing what to whom is actually something that doesn’t always arise in languages at all, and is often rather a late entry. But also, indeed, as you say, Jim, languages can change their word order as well.

So I’m no expert on this, but I think with German there was a major shift at some point in the Middle Ages. I can’t reconstruct the nature of it, I’m afraid, but these shifts do occur. So yeah, even something that seems incredibly natural if you’re a speaker of one language, so natural you wonder how it could be different, is in fact much more labile than you think.

Jim: And probably some of them were frozen accidents. Some particularly verbal nine-year-olds started doing it one way and it stuck. One useful lens from the Santa Fe Institute way of thinking is how much of our world is frozen accidents of one sort or another. Well, let’s move on to a big topic we’ll spend some time on, which is what you guys call language evolution without biological evolution. And I guess that strongly contrasts with people like Pinker and Bloom. I don’t know if they still do, but they used to claim that Prometheus, the man of language, suddenly appeared magically out of the head of somebody a hundred thousand years ago. You don’t think that’s true. So tell us how we can get language evolution without biological evolution.

Morten: Well, the Prometheus story was actually Chomsky’s idea that about a hundred thousand years ago, suddenly, in one of our ancestors, an ability to form structure in a particular way would have emerged, and then been carried down through their descendants. Pinker and Bloom suggested instead that we would gradually have had changes in our brains to accommodate a genetic endowment for language. Now, we don’t find either of these stories very likely, for a number of reasons. One of them is that when we think about biological evolution and cultural evolution, language changing as quickly as we talked about is part of cultural evolution, but biological evolution is much, much slower, on the order of hundreds of thousands of years or more. And so what happens is that because language changes so quickly, it provides essentially a moving target for the genes to follow, so to speak. And in fact, we’ve conducted all sorts of computer simulations in order to underline that.

But we think they’ve gotten the question the wrong way around. The question that Chomsky and Pinker and Bloom are asking is: how come the brain is so well adapted for language? What we are suggesting is that, given the speed of cultural evolution, we should instead ask: why is language so well adapted to the human brain? And here we have this idea of language as a self-organizing adaptive system that is adapting to the human brain, to the way we interact, to the human body, and so on. So over time, the reason we see this reasonably good fit between our brains and the languages we speak is not that our brains have adapted for language, but rather that languages themselves have adapted, almost like little organisms, to our brains.
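
Morten mentions computer simulations underlining the moving-target point. The authors’ actual models are more sophisticated; the following is only a minimal toy in the same spirit, with made-up parameters, showing that a population of “genomes” under selection can track a slowly drifting linguistic target but not a fast-moving one.

```python
# Toy moving-target sketch: genomes evolve toward a target "language" that
# itself drifts. Slow cultural change lets the genes catch up; fast cultural
# change leaves them chasing a moving target and never well adapted.
import random

random.seed(0)
GENOME_LEN, POP, GENERATIONS = 20, 100, 2000

def fitness(genome, language):
    """Fraction of positions where the genome matches the current language."""
    return sum(g == l for g, l in zip(genome, language)) / GENOME_LEN

def run(language_change_rate):
    language = [random.randint(0, 1) for _ in range(GENOME_LEN)]
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        # Selection: keep the better-matching half, refill with mutated copies.
        population.sort(key=lambda g: fitness(g, language), reverse=True)
        parents = population[: POP // 2]
        offspring = [
            [bit ^ (random.random() < 0.01) for bit in random.choice(parents)]
            for _ in range(POP - len(parents))
        ]
        population = parents + offspring
        # Cultural change: the language drifts each generation at its own rate.
        language = [bit ^ (random.random() < language_change_rate) for bit in language]
    return sum(fitness(g, language) for g in population) / POP

print("slow-changing language:", round(run(0.0001), 2))  # genes become well adapted
print("fast-changing language:", round(run(0.1), 2))     # fit stays markedly lower
```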

Jim: That was a powerful lens when I read that. I go ah-ha, that’s a really big idea.

Nick: Yeah, and just to amplify that, think about the divergence of human populations since we left Africa. We have been spreading out in a rather chaotic way across the globe for a long time, and there are many populations which have been dispersed and had little contact for 50,000 years or more. So if you take people who are native to Australasia and people who are native to Europe, their common ancestry would go back 40 or 50,000 years at least. Now, if you imagine there was some kind of coevolution going on between culture and biology, which is the Pinker and Bloom story, some kind of incremental, ratchet-like interplay between the development of language and the genes, where we’re adapting to the language we’re speaking, the language changes more, and then we adapt to that, that’s the co-evolution story.

If that were right, then we would expect it to be really tricky, for example, for people with native Australasian heritage to learn English, and vice versa: people with European backgrounds should find it really difficult to learn Aboriginal languages. Well, that’s just totally wrong. Basically, anyone can learn any language. Any baby brought up anywhere seems to be pretty much identically good at learning any language. So it seems like if there ever was this biological and cultural interplay, it mysteriously stopped, and in fact it stopped as soon as human populations started to diverge. Because if it had happened after any divergence in human populations, we’d find these weird difficulties in learning different kinds of languages, because the different biological populations would have evolved to fit those specific languages. And that’s just completely not the case.

So it seems much more credible, in fact the only real story that makes sense, I think, from an evolutionary point of view, that there really hasn’t been substantial biological evolution specific to the way languages have been created. If we had evolved to fit language as a cultural object, then we’d all have evolved to different things around the world. But we’re not; we’re all able to speak any language. So basically the biology comes first: the biological precursors and foundations on which language is built come first. They’re not adapted to language; they emerged before language appeared. But then we can build this cultural machinery on top, in the same way we obviously can, for example, with writing. We can obviously all read and write, but nobody thinks, ah, reading and writing is so amazing, there must be some kind of gene for it.

But obviously there can’t be, because writing was only invented a few thousand years ago, and hardly anybody in human history wrote or read until a few hundred years ago. So clearly that’s wrong. And you could tell the same story for mathematics or music or any other cultural form, and I think the same story is true for language. These cultural forms are built on biology; the biology is not being shaped around them. There’s no genetic endowment for language or for writing or for music or anything else.

Morten: And actually reading, our ability to read, is a good example for thinking about some of the potential counterarguments to this notion that we don’t have a genetic or brain-based endowment specifically for language. Because one of the arguments sometimes made is that we see language breakdown, for example, following strokes in particular areas like Broca’s area and so on. And we also see that certain genes, like FOXP2, seem to affect language ability.

But interestingly, we see the same thing for reading. In the case of reading, there are genes that have been suggested to play a role in affecting reading ability, leading to dyslexia, and we also see that damage to certain parts of the brain can lead to dyslexia as well. Yet we also know, as Nick was saying, that reading has only been around for about 7,000 years or so, and during most of that time only a very small subset of the human population has been able to read. And so we are arguing that something similar is true for language: because language is a cultural product, like reading, the fact that we can see specific patterns of language breakdown, for example from damage to certain parts of the brain, or that we can see certain genetic conditions affecting language ability, doesn’t necessarily mean that these genes or these areas of the brain evolved specifically for language.

Jim: There are some great arguments in the book on how the FOXP2 story and the mutations on it are a pretty strong argument that there isn’t much current evolution, at least, going on to support language, and especially not the different languages. And I think you guys did a great job of laying that research out. It convinced me, and I’m a skeptical son of a bitch, right?

There are some other very interesting topics here which, unfortunately, we’re not going to have time for: N-learning versus C-learning, Schelling points, why Danish is so goddamn hard for people to learn (I didn’t know that it was, but apparently it is), and some riffing on Sapir-Whorf, everybody’s favorite topic. But we have to leave a few minutes for the hot topic of the day. So anyway, read the book and you can see all those good things. We’re going to take the last 10 minutes here to talk a little bit about the hot topic of the day, which is: what does what you guys know about language tell us about things like ChatGPT and other approaches to AI meets language?

Nick: Yeah, I mean this is really interesting, and obviously it was breaking news almost as the book was coming out. We had some discussion of GPT3, which is the machinery inside ChatGPT. It’s an incredible piece of technology. It’s just amazing that it’s possible to train a neural network on what is essentially pretty close to the whole of the web and that it’s actually able to distill anything useful out of that. And to the extent that it can: it can write bits of computer code, it can write stories, and, as we’ll talk about in a moment, poems. It can do all of this stuff at really quite a high level. It can, to some extent, translate from one language to another. It’s just an incredible technical achievement.

And it’s quite remarkable. It’s sort of the opposite of what used to happen in AI, where there’d be a huge amount of overpromising and nothing would actually happen. Just when everyone was thinking, well, these neural networks are pretty cool but fairly limited, suddenly a breakthrough piece of technology is created which does all kinds of things that no one really expected it could, I suspect including its own creators. So it’s just astounding. But it’s worth remembering that what it is doing is something like an incredibly clever and incredibly sophisticated cutting and pasting and mutating of language that’s already lying around on the web.

So if you want a story in the style of Jerome K Jerome, which is an example we talk about in the book, and you want it to be on the subject of Twitter, it will, amazingly, write you a story. And the first paragraph at least (it starts to degenerate a bit after that) is actually pretty funny and pretty well done. But what it’s doing when it does that purely involves reprocessing, repackaging, and re-synthesizing chunks of material that are already lying around. So there’ll be bits of Jerome K Jerome, there’ll be lots of discussion of Twitter, and it’s all being re-utilized and blended and melded together in a really remarkable way. It’s astounding how it’s doing it; literally, the creators won’t know either. It’s mysterious and it’s astounding, but it’s doing that without any understanding of the underlying world at all.

It doesn’t know what Twitter is. It doesn’t know who Jerome K Jerome is. It doesn’t know the topics it talks about; it starts to say things like Twitter is the talk of all London, and on one of my recent trips to the coast I heard everyone tweeting like starlings in cages, or whatever it was. So it produces this lovely story, but it doesn’t have any understanding of any of this.

And we can see that when clever computer scientists start to probe what it does when you give it nonsense. If you start to ask it questions like how many eyes does a spider have (I think the answer is eight), it tells you eight, no problem. If you ask how many eyes does, for example, a sock have, or a foot, or a star, or whatever else you like, it’ll give you an answer to that too. It’ll have a go. It doesn’t think, well, that’s nonsense, isn’t it? A foot doesn’t have any eyes at all; what are you talking about? It doesn’t know that. It’ll look around in the corpus and pull together the best it can. And similarly, you can ask it questions like, and this isn’t quite the right phrase, but something like how many harrumphs does it take you to get to Hawaii? And it’ll say 15 or whatever; it may give you a nutty sentence saying it takes about 15 harrumphs to get to Hawaii. This is gibberish. But again, it makes sense when you see what it’s doing: it’s melding language in very clever ways.

Jim: Do a predictive sequence, right?

Nick: Yes.
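
Jim’s “predictive sequence” remark can be made concrete with a toy next-word predictor. This is only a bigram lookup table over a few invented sentences; GPT-style models use huge neural networks trained on vast corpora, but the basic move is the same: keep predicting a plausible next token from patterns in the training text, with no model of the world behind it.

```python
# A toy next-word predictor: record which words follow which in the training
# text, then generate by repeatedly sampling an observed continuation.
import random
from collections import defaultdict

random.seed(1)
training_text = (
    "the spider has eight eyes . the dog has two eyes . "
    "the sock has no eyes at all . king kong climbs the building ."
)

follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start, length=10):
    """Repeatedly sample a word that has been seen following the last word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Fluent-looking, sometimes sensible, sometimes nonsense, never "understood":
print(generate("the"))
print(generate("king"))
```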

Jim: It’s amazing what it does, but it’s also amazing what it doesn’t do, because it’s not grounded in any actual knowledge of the world. It doesn’t have any logic. It can basically only do arithmetic by accident; it can do two-digit addition but not three or four digits. It’s very interesting. But it’s funny, when I was thinking about this show, I figured there’s probably something in what LLMs are doing that’s related to your ideas from The Mind is Flat. Have you given that any thought?

Nick: Yeah, I mean, I’ll just say a tiny bit on that. I feel I also should give some space for Morten to tell us about the GPT3 poems because that’s so cool, but on The Mind is Flat, yes, I think you’re absolutely right, Jim. The idea of The Mind is Flat is that you should be suspicious of the idea that we have deep models of the world. And in fact, a lot of the time we are improvising in a very superficial way. And I think what these large language models like GPT3, ChatGPT, are doing is they’re showing how far you can get with a really shallow approach.

Now, we are not that shallow. We are reasoners, we do understand aspects of the world. But we’re pretty shallow in the sense that, certainly, if you ask me to explain what I mean by a cup, what distinguishes a cup from a mug, or what forces are, or what gravity is, I don’t really know the answer to any of these things. I can somehow blunder around my daily life, but if you prod anywhere, things collapse pretty quickly. I think ChatGPT is a kind of illustration of how far you can get with remarkable flatness. It’s much, much flatter than we are, but-

Jim: Morten, take it away. Talk about large language models and related AI language technologies and what you know about language.

Morten: Well, it’s completely true, what Nick is saying, that they don’t really understand what they’re saying, and I think that is one of their fundamental shortcomings. Yet they can produce very human-like text. One of the things that has really struck me with these models is that we can construe them, or view them, as a super learner: something that learns from experience alone, but without the other constraints that humans have from living in the world, understanding how it works and so on. And one of the things I’ve noticed is that when you look at almost any example from GPT3 or ChatGPT, what they say is always grammatical. So what we have here is an existence proof, I think, that it’s possible to learn grammatical language on a par with human abilities from experience alone.

Now, clearly it gets loads and loads of experience, much more than people normally get. Nonetheless, it’s, in the limit, an existence proof of that. Another thing that we’ve been playing around with, some of my students and colleagues and I, is whether we can get it to produce poetry, and we actually got a little grant to look at this. Initially, when we started out, we were very disappointed, because it turned out that GPT3 couldn’t rhyme, and that limits the poetry you can do. We tried all sorts of manipulations to get it to rhyme, and it didn’t work. But then they created a new version, called InstructGPT, that came out in February last year or so, and in that model they were trying to fix some of the problems GPT3 has.

Because it’s trained on the internet, it would oftentimes say things that were racist or sexist or in other ways upsetting to some people, and in fact to a lot of people. And so what the company behind GPT3 tried to do was have humans actually give it feedback on its productions and then try to correct for that. Now, in what I can only imagine is an unintended consequence, it learned how to rhyme. I have no idea why that is; I was completely blown away. Suddenly it could rhyme. So suddenly what we could do is give it some examples of Shakespeare, then ask it to continue in the style of Shakespeare, and it would do so, including rhymes. We could get it to do Emily Dickinson, Lord Byron, and a number of other poets. And so what we’ve tried to do is a poetry Turing test, where we show some of these completions, and I’ve done that in several presentations, and ask the audience: now, is this robo-Shakespeare or Shakespeare?

And people are essentially at chance; they can’t really distinguish between the two. Now, that of course makes sense if it’s a general audience of, say, computer scientists or psychologists, but even when I’ve presented it to people in the humanities, you oftentimes find that a particular fragment they’re pretty sure is by the author is not. And so what we’re interested in is trying to see whether, by using all sorts of computational tools, we can find any differences between the kind of completions GPT3 produces and the human authors, and whether that can tell us something about what may be unique to human poetry production compared to what we see in GPT3. GPT3 is an amazing statistical learner, picking up on all sorts of patterns, and a lot of the way we produce language is based on such patterns too. But we have the advantage, of course, that we have meaning. We understand what we are saying.
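
To spell out what “essentially at chance” means in the poetry Turing test Morten describes, here is a small sketch. The judgment counts are invented; the code just compares a hypothetical panel’s accuracy with the spread you would get from pure guessing.

```python
# Toy check of "at chance": compare a hypothetical panel's score against the
# range of scores produced by judges who simply guess on every trial.
import random

random.seed(2)
trials = 200
observed_correct = 104  # hypothetical panel result: 104 of 200 correct guesses

# Simulate many panels that purely guess which of two stanzas is the human one.
guessing_runs = sorted(
    sum(random.random() < 0.5 for _ in range(trials)) for _ in range(10_000)
)
low, high = guessing_runs[250], guessing_runs[-251]  # middle 95% of guessing outcomes

print(f"observed: {observed_correct}/{trials} correct")
print(f"pure guessing typically yields between {low} and {high} correct")
print("indistinguishable from chance" if low <= observed_correct <= high else "better than chance")
```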

Jim: Yep. Yeah. Well, while you were saying that, just for fun, I fired up ChatGPT and said, write a limerick that uses the names Chater and Christensen. “There once was a man named Chater, whose wit was both quick and greater. He met a fair lass, Christensen was her class, together they made quite the pair, later.”

Nick: Better than I could have done.

Jim: Well, yeah, I was hoping it would be better and fouler than that, but it wasn’t. I probably should have asked it to make it as foul as possible, but it did rhyme-

Morten: It did rhyme.

Jim: With brute force and total disregard for logic in the last line.

Morten: Yeah, it can say nonsensical things, but it can still rhyme and it tends to be grammatical when it just writes in general, which is really amazing.

Jim: Now, you mentioned rhyming and Shakespeare. When I hear Shakespeare, I think of prosody, right? He famously writes his plays in iambic pentameter. Why would somebody do that? Can ChatGPT do prosody as well?

Morten: To a certain degree it can pick up on some of it, not all of it. My colleague who is working with me on this, who is a poet himself, will certainly think, no, this is bad, and so on. For me, because I’m not a big connoisseur of poetry, I can’t always see it. But, for example, if you look at somebody like Emily Dickinson, she had a particular way of using punctuation and so on, and it’s doing that. It was actually doing that even before it could rhyme, which is quite interesting. So it has definitely picked up on some of these things. It’s not always getting it right, and sometimes it’s getting it wrong, but it seems quite reasonable to the untrained observer, as it were.

Jim: Got it. Well, I want to thank Nick Chater and Morten Christiansen for a really interesting conversation. If you want to go deeper, go get their book, The Language Game: How Improvisation Created Language and Changed the World. That one’s got a Jim Rutt recommendation next to it. So thanks, guys.

Nick: Thanks so much, Jim. It’s been a real pleasure.

Morten: Thank you so much. It was real fun.