The following is a rough transcript which has not been revised by The Jim Rutt Show or by Charles Hoskinson. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today's guest is Charles Hoskinson, founder of Cardano and co-founder of Ethereum, two of the world's most popular cryptocurrency networks. I mean, that's an amazing double hit. He's also founder and CEO of IOHK, a company building out the technologies around Cardano and doing other interesting work in the whole domain of cryptocurrencies and crypto protocols and things, that sort of stuff. Welcome, Charles.
Charles: It's a pleasure to be here, Jim. Thank you so much for having me on.
Jim: Yeah. It should be an interesting conversation. I think we got connected because somebody on your team heard my recent podcast with my good friend Ben Goertzel, where Ben brought up Cardano and we talked about it a little bit. They said, “Hey, why don’t we have Charles on?” I said, “Hell yes.” I’d love to hear more about what’s going on. As I told Charles in the pregame, I was somewhat involved in the crypto world back in 2017, 2018, helped give some advice at least to people designing a couple of the successful projects. I haven’t really followed very closely what’s been going on in the last 18 months or so, but it sounds like Cardano is an important part of that story, based on the research I did yesterday and today.
Jim: So maybe you could do a real quick recap for our audience, who by the way are mostly not crypto experts at all, but I'm going to assume that they're smart folks, kind of like a Scientific American-type readership, so you don't need to water it down too much. Maybe give just a very thumbnail sketch of the history of cryptocurrency. Start with Satoshi and Bitcoin, and take her up to Cardano and the third generation protocols.
Charles: All right, that's going to be fun. Actually, it's one of my favorite topics, and so I could talk forever. But I think I have a concise way of getting through it. So to really understand the cryptocurrency space you kind of have to understand what the internet did. So pre-internet, for those of us old enough to remember those days, it took time and effort to move information. I remember when I was a kid I loved the library, and so I'd actually have to get on a bicycle and ride all the way over to the library to be able to get books, and then ride all the way back, and I grew up in Hawaii, so it rained a lot too. So you'd have to think, god, is it really worth going all this distance, getting rained on, just to be able to get some new books? It was one of those cost-benefit things.
Charles: Then this internet comes around and then suddenly you could move information instantaneously anywhere in the world. Initially it was text and then it became interactive, and then we had videos, and pictures, and all these other things. It inspired generations of entrepreneurs to start asking, wait a minute, why can’t we teleport value and other concepts like property, identity, these types of things instantaneously around the world? Because if you think about a wire transfer, it takes days to move that money. Some cases it’s a little faster but it’s still days for international transfers and it’s very expensive to do that.
Charles: So people said, "We should do digital currencies." That's probably a good starting point. So this started in the 1980s, and the cypherpunk movement got very interested in these things, and you got the usual suspects like the David Chaums, and the Nick Szabos, and the Hal Finneys. Basically they were asking, can we create some sort of open-source protocol that's kind of like TCP/IP, a backbone information protocol that can represent value and teleport that value like an email anywhere in the world, to any counterparty, with no middleman involved?
Charles: So there were a lot of attempts, things like DigiCash in the 1990s, that was David Chaum's venture. Then you had things like b-money and Bit Gold and so forth. Then Adam Back created something called Hashcash, and that was mostly for DDoS protection with emails, to try to get rid of spam. But none of them really came together until in 2009 a pseudonymous author named Satoshi Nakamoto published the paper called Bitcoin and some companion code with it. What he basically did in that paper was achieve the holy grail they had been searching for over the last 20 years, which was how do you create a decentralized system that allows you to teleport units of value anywhere in the world through a single transaction.
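Hashcash's core trick, which Bitcoin mining later reused, was to make the sender do a small brute-force search: find a nonce so that the hash of the message plus that nonce falls below a difficulty target (equivalently, starts with a run of zero bits). A minimal sketch in Python, with illustrative function names rather than Adam Back's actual implementation:

```python
import hashlib

def mint_stamp(message: str, difficulty_bits: int = 12) -> int:
    """Brute-force a nonce so sha256(message + nonce) falls below a
    difficulty target. Costly to produce, cheap to verify."""
    target = 1 << (256 - difficulty_bits)  # hashes below this value qualify
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def check_stamp(message: str, nonce: int, difficulty_bits: int = 12) -> bool:
    """Verification is a single hash, no matter how hard minting was."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# A spammer must pay this cost per recipient; a legitimate sender barely notices.
stamp = mint_stamp("to:jim@example.com")
assert check_stamp("to:jim@example.com", stamp)
```

Each extra difficulty bit doubles the expected minting work, which is the same knob Bitcoin turns to keep block times steady.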
Charles: Now, the instrument that was being transported around was also named after the protocol, Bitcoin. For a long time they were useless and worthless, nobody would accept them for anything. There was high volatility, you'd have no liquidity, there were no exchanges, any of these types of things. So 2013 was the breakthrough year for Bitcoin. That was the year that Bitcoin actually grew up. People started taking it seriously and it went from nothing to about a billion dollars in capitalization for the entire market, and that resulted in the creation of an entire industry.
Charles: Then here’s what happens when people say, “Wait a minute, this is a really good idea.” Then suddenly you have competition and innovation. So the first thing that people said is, “Wait a minute, Bitcoins are blind, deaf and dumb.” They don’t really understand the outside world, they can’t see it, they can’t interact with it, and they can only do one thing, you can only teleport that value around. But what if you wanted to put terms and conditions in your transaction? For example, what if you wanted to do something like say hey, I’m only going to send you this if you mow my lawn, or hey, you only get the money if grandma dies because it’s an estate contract or something like that, an inheritance, okay. Well, then suddenly Bitcoin can’t do that, you have to build something beyond Bitcoin to do that.
Charles: So that was the story as of 2015. But then people said, "Hang on a second here, these systems have three very specific problems, and if we don't solve those problems these systems will never grow beyond a small group of adherents who are very evangelistic, they won't go mainstream." So first there was this issue of scalability. Basically these are replicated systems right now. The easiest way of visualizing that is to imagine that you have a dinner at a table. Right now cryptocurrencies have a finite dinner. Okay, so there are some ribs, and there's some steak, and some chicken, and if there are vegans maybe there's some tofu, whatever. But what you're doing is you're adding more chairs around the table when you add more users. You're not making a larger dinner, you're just getting smaller portions for everybody, and that's where we're at with Bitcoin and Ethereum and these other first and second generation systems: they simply don't gain resources as they gain users, and as a consequence you run into bottlenecks very quickly.
Charles: Now, an example of a scalable protocol that's the opposite is BitTorrent. So we know we can create protocols like that. With BitTorrent, the more people downloading a movie, the faster you get it, so it's the opposite relationship. It's almost like a potluck, where when people show up they bring more food than they eat, so the more people you have coming, the more food you have, eventually too much to eat.
Charles: So that's the first class of problems: how do you go from these replicated systems to distributed systems that gain resources, or at least keep the same resource capacity per user, as you gain users? Then you can go to millions and eventually billions of users. Second, there's this issue of interoperability. There are over 8,000 cryptocurrencies and there are tons of legacy financial systems. Could you imagine Wi-Fi if your phone would only connect to your branded Wi-Fi router? So for example your Apple phone would only connect to an Apple router, your Samsung phone would only connect to a Samsung router. Wi-Fi just simply wouldn't work, nor would Bluetooth or any of these other protocols. So we really need a Bluetooth or Wi-Fi moment for the movement of information, value, and people between all of these financial systems that exist; that's more important than the systems themselves.
Charles: So there's a big interoperability demand, and you kind of have to build that into your protocol design because you need to have decentralization. It's very easy to achieve interoperability if you trust somebody. It's a lot harder to do that when you don't trust anybody. Then finally there's a governance problem, a sustainability problem. This is the who pays and who decides. So when you think of products and protocols, generally you think of a custodian of the product or protocol, like Windows to Microsoft, or Android to Google. Even when it's federated like Android is, with Samsung working on it and Google working on it, you kind of know the set of actors in the room and they don't change. Well, the whole point of a cryptocurrency is to say, "Wait a minute, there's no one in charge." So how do you pay for stuff, really expensive stuff? How do you make these things resistant to quantum computers? Or how do we launch a bunch of satellites to build a second internet, so we have a censorship resistant transmission layer for our transactions? Or how do we do next generation protocols that can scale to a billion people? How do you pay for that stuff if you don't have custodial entities? Where does that money come from, and then how do you decide how to upgrade the system? It's nontrivial.
Charles: For example, what if you do come up with a way to make your system immune to quantum computers? There are probably five good ways to do that. So which one of the five do you pick, because you can't pick all of them, and who gets to decide that? The second and first generation cryptocurrencies, Ethereum and Bitcoin respectively, don't really have good solutions to those particular problems.
Charles: There is a class of third generation cryptocurrencies designed around solving some or all of these particular problems. So we have things like Tezos, things like Cardano, things like Polkadot, things like Algorand, and also Ethereum is upgrading itself and creating Ethereum 2, huzzah. In all these cases there are new protocols being brought to bear, and each and every one of them has some trade-offs and some goals about how deep down the sustainability, scalability and interoperability pipeline they want to get.
Charles: Now, the really exciting thing is that this is the last generation before mainstream adoption, and at that point whoever wins the third generation will wake up and have, like Facebook, hundreds of millions to billions of users, and these aren't just platforms for the movement of information, these are ultimately platforms for running a government. It is like a financial operating system for a nation state. You can run your stock market on it, you can run your property ledger on it, you can run your voting system on it, you can do social networks on it, you can do payment systems on this. You can do pretty much anything you want to, and these protocols have values baked in; they go from don't be evil to can't be evil, meaning that even if it's inconvenient to a particular government or corporation, they can't step up and change the terms of service and rules to deplatform or remove people from the system. So in other words, the system is equally fair to the greatest amongst us as it is to the least amongst us. So Bill Gates will have the same access as the poor farmer from Senegal. There's really never been a time in human history where we've built technology and systems where that actually is the case by design. So it's a really exciting time to be in the industry.
Jim: Yeah, it really is. As I mentioned earlier, I had my nose into it deeper a few years ago, and I noticed some of these bottlenecks, and I said, “That’s going to have some fairly serious limitations.” Because I recall Bitcoin had remarkably low transaction rates, somewhere between four and seven transactions a second as I recall. I go, “Wait a minute, this can’t possibly be used for everyday commerce.” Maybe you can use it to replace the equivalent of a gold backing reserve or something like that, but this damn thing is never going to be used to order fries at McDonald’s.
Jim: Then when Ethereum came on I dug into it and learned it, did a little programming in Solidity and what have you, and I go, "This is really an amazingly deep idea, the ability to write a smart contract that lives on a blockchain and has around it a currency and all this stuff. But wait a minute, it's basically a one-core computer for the whole world. How could that possibly work?" Right?
Charles: Right, it’s a great proof of concept, but definitely not the horse you’re going to ride home on or build your economy on.
Jim: Yeah. I've been talking to folks that are still out there playing in that world, and one of the big choke points right now is that the cost of doing transactions on this single-core computer for the whole world has gotten ridiculous. I think last time I talked to Ben Goertzel, who was on our show very recently and we talked about some of these topics, it was like a dollar to do a single transaction. I go, "What?" It's 2021, how can it possibly cost a dollar?
Charles: That’s a cheap transaction. It’s actually more expensive in many cases.
Jim: I go, "This is clearly broken." I was helping folks think about the BigchainDB project, and our aim was to get the price down to a millionth of a cent. I think that was unrealistic, but at least get it down to a thousandth of a cent, right? There's no excuse for a transaction to cost a dollar today. What the hell? So those things were obvious to me years ago. I will say, I am kind of surprised how far those projects have come nonetheless. It shows you how much demand there is for this class of functionality, that hey, it's better than nothing.
Jim: It's kind of like the internet. I was one of those guys who was there long before the internet. I actually worked for a company called The Source, which was the first consumer online service. We had most of what's on the web today in 1980 at 300 baud and $10 an hour. I think back about that, and it's kind of like Bitcoin and Ethereum. Why the hell would you use The Source for $10 an hour in 1980 dollars, more like $20 now? The reason was because there was no alternative. If you wanted to do things of this sort, you used The Source, then a little later CompuServe.
Jim: So to a degree, if you wanted to be able to do these things at all, you had to use amazingly inefficient things like Bitcoin, or things with very, very minimal throughput like Ethereum, but people did amazingly creative stuff on top of these. So you're now telling me that people have learned from this and there are these third generation protocols. So let me ask you some specifics. We talked about transaction rates on Bitcoin. On Ethereum they're a bit higher, I don't know what they are these days, but they're still not super high. Then there's the single-threaded computational model. Have you guys been able to find good workarounds for those? Cardano, how many transactions a second can you do across a mature Cardano network?
Charles: Yeah. So first, I'm not really convinced that TPS is the best way of measuring performance of a system, because what is a transaction? That's the first thing that you really have to start thinking about. So, if you break it down, transactions carry value, and they're between people, but they also carry data payloads, like metadata for example, and then usually there's a commercial or governmental understanding of why you're moving it. So there's some notion of a contract. Then these transactions usually live within a regulated environment, implicit or explicit.
Charles: So here’s an example of a tale of two transactions. You could be building a micro-tipping application and you’re on Twitter and you see a tweet you really like, and you click a button to tip that person a penny. Well, that’s technically a transaction. You could be doing a multinational deal where you’re buying a goldmine and setting up something, building a hospital and investing in some infrastructure somewhere and there’s nation states and Fortune 500 companies involved. When you finally get to a contractual understanding and sign the contract there’s going to be value flows, and that’s kind of a transaction too. Well, these are two completely different transactions and they have two completely different sets of requirements, and value at risk, and actors and so forth.
Charles: So the first thing our industry needs to do is take a step back and really break it down of how do you quantify value? How do you quantify the users? How do you have a meaningful discussion about identity with those users? How do you have a discussion about the metadata that’s embedded? What metadata is on-chain versus off-chain? Because by the way, that’s a hugely important part of any transaction.
Charles: For example, let’s say you withdraw 200 euros from an ATM. If it’s next to an Italian restaurant or next to a brothel, there are probably different contexts as to why you’re doing that. Well, so metadata is very, very important because it tells you for example the location of where the transaction happened, not just the raw value, these types of things. Then that contractual understanding, is that machine understandable? For example, does your system have the ability to understand what the nature of a contract is and what that means? Nick Szabo writes a lot for example about wet code and dry code, and it’s kind of what he’s talking about in those articles he’s written.
Charles: Then the regulation is super complex too, because what happens if your transaction is touching 55 nations? Think of a large Zoom call where you are inviting a lot of people to come, and let's say you end up having 53 participants and there are 41 countries represented. You're implicitly agreeing to follow the privacy policies and broadcast policies of 41 countries from that one event. So transactions could touch many, many, many different things. So that's one thing where I think the jury is still out. It's kind of a pet peeve of mine, because what the industry does is create vanity metrics. Like oh, we can do 25,000 TPS, and what they're really saying is, well, those micro-tipping transactions, we can do 25,000 of those, and when you go into complex transactions that have lots of rules and conditions and big smart contracts, oh, suddenly things are a little different.
Charles: Now, to answer your question, for an average Bitcoin-like transaction our system could do about 150 to 200 TPS in its current [inaudible 00:19:30], and with the design of the protocol we have, Ouroboros, the engine of Cardano, I think we can get that at the base layer up to about 1,000. Then we have something called Hydra, where we can run special channels in the system, and each channel can actually parallel-process transactions. It's almost like a multi-threaded processor, and these channels can communicate with each other and they can kind of batch things, and they can load balance and so forth.
Charles: We think within the next three, four years, as this system rolls out, eventually we can get to a few thousand of these channels, at least 1,000. The max throughput of a channel is about 1,000 TPS. So for a Bitcoin-like transaction, there is definitely a possibility over a long arc, a three to five year time horizon, that we could see about a million TPS of throughput capacity. But then there's a question of okay, is history an economic actor or not? Right now it's subsidized in the cryptocurrency space. So when you do a Bitcoin transaction it stays forever on the Bitcoin blockchain, and that kind of makes sense if you're very narrow in your scope and you have kind of predictable growth. It makes no sense at all if you're inviting nation states and whole economies to join your system, because suddenly your blockchain is going to grow to exabytes of scale, and then either Google or the NSA is going to be the one who has your blockchain. Just pick which one you want to download from. So that's a problem too. So how do you make data an economic agent, where if it doesn't pay its rent, at some point it gets pruned out of the system?
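The headline figures here compose by simple multiplication; a back-of-the-envelope check, keeping in mind these are Charles's projections rather than measurements:

```python
# Projected figures quoted in the conversation, not benchmarks.
base_layer_tps = 1_000      # Ouroboros base-layer ceiling
hydra_channels = 1_000      # "at least a 1,000" Hydra channels
tps_per_channel = 1_000     # stated max throughput per channel

# Channels process in parallel, so peak capacity multiplies.
projected_peak_tps = hydra_channels * tps_per_channel
assert projected_peak_tps == 1_000_000  # the "million TPS" figure

# At that rate, one day of minimal ~300-byte transactions is already huge:
bytes_per_day = projected_peak_tps * 300 * 86_400
print(f"{bytes_per_day / 1e12:.0f} TB/day of history at peak")
```

The last line is why the "is history an economic actor" question follows immediately: sustained peak load would add tens of terabytes of history per day, which motivates rent-or-prune schemes.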
Charles: So there are dozens of consequences of scale above and beyond theoretical peak throughput, and they come down to what you define as a transaction. What does that payload look like? How long-lived is a transaction? Is it final from the time you make it, or do you want to do things like contingent settlement? For example, imagine you run a not-for-profit and you have an address that you put out there, and you say, "Okay, anybody can just push payments to me. However, they have to sign the donation agreement before they do it." So they send a transaction, the transaction gets rejected because they didn't sign the contract, so they have to go and find the contract, hash it, sign it, and if it's a valid hash and a valid signature, they put it in the transaction payload, and then it settles. So do you count that as one transaction, the settled one? Or do you count that as three transactions: the transaction that failed, the transaction to request the contract, then the transaction to send it? You see how it gets really nuanced as you start parsing it, and these are the kinds of things the industry still has to settle on.
Jim: It's just like the old computer industry, right? We used to talk about read/write rates on disk, right? And then, well, is that a direct access block transfer? Is that hopping around a few times? Then, well, what about a relational database transaction? Then they eventually standardized: all right, this is the standard one we're going to use to measure relational database throughput, and it has the following attributes. I'm sure this industry will get the same. So it sounds like the bottom line is you're an order of magnitude better than the second generation today, and you've got maybe three orders of magnitude of headroom above that within the constraints of the breadth of the architecture, and that gets you up probably to the micro-tipping model or just below it.
Charles: Yeah, it gets you to global scale. Then you can do a lot of things off-chain as well. So for example, how are you using the system? Are you using it for auditability, transparency, integrity and immutability? In that case it's a ledger for auditing. So for example, a lot of stuff happens, you take all that stuff, you represent it with some data structure, then you hash it and store the hash on the chain. Now, regardless of whether that data structure is gigabytes or terabytes of history, the hash is always going to be constant size, like 256 bits, okay? So that's a whole different thing, because that's logarithmic growth compared to using the blockchain as the actual database, which some people are proposing, and that's pretty crazy.
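The "hash it and store the hash on the chain" pattern is easy to see in code: however large the off-chain record grows, the on-chain commitment stays 256 bits. A sketch using Python's standard library, with an illustrative function name:

```python
import hashlib

def anchor(off_chain_record: bytes) -> str:
    """Return the constant-size commitment you would store on-chain."""
    return hashlib.sha256(off_chain_record).hexdigest()

small = anchor(b"one donation receipt")
large = anchor(b"x" * 10_000_000)  # ten megabytes of history

# Both commitments are 256 bits (64 hex chars), regardless of input size.
assert len(small) == len(large) == 64

# Later, anyone holding the full record can prove it matches the anchor:
assert anchor(b"one donation receipt") == small
```

The chain only ever stores the digest; the bulky data lives on ordinary infrastructure, and tampering with it breaks the match against the anchored hash.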
Charles: So this is a service layer, and there's a great hybridization where you're taking existing infrastructure, existing servers, and what blockchain lets you do is act as a kind of meta layer that lives above your infrastructure and keeps everybody honest. Even if you don't fully trust people, it gives you guarantees that they've done things right, and you can do that even with computation, like smart contracts. There's a topic in computer science called outsourceable computation. So basically, do you really need to do the computation in an open and transparent way on a trustless system if you can generate a proof that the computation was done correctly?
Charles: So for example, right now with Ethereum all of the miners have to do that computation, that batch computing problem, in a very open way. So it creates a replicated system, and it's very throughput-limited as a consequence. But what if you could turn it into a distributed computing problem, first come first served? Any person can solve it. Whoever solves it creates a proof that they did it right, and the only thing the miners are doing is validating the proofs, not the underlying computation. So if that's the case, you have massive scale, because suddenly you can get millions of computers working together and they're all doing different things, and it's like BitTorrent in that case. It becomes a distributed computing protocol instead.
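The asymmetry being described is that checking a result can be far cheaper than producing it. A toy illustration of that verify-don't-recompute idea (not a real SNARK): factoring a number takes a long search, but checking claimed factors is one multiplication.

```python
def solve(n: int) -> tuple[int, int]:
    """The expensive, outsourceable work: trial-division factoring."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("n is prime; no nontrivial factors")

def verify(n: int, proof: tuple[int, int]) -> bool:
    """What the validators would do: check the claimed answer in O(1),
    never redoing the search."""
    p, q = proof
    return 1 < p < n and p * q == n

# Hypothetical workload: the product of two primes near 10**5.
n = 104_723 * 104_729
proof = solve(n)     # one worker does the slow part once
assert verify(n, proof)  # everyone else just checks the short proof
```

Real zero-knowledge proof systems generalize this to arbitrary programs, but the economics are the same: one party pays the cost of computing, everyone else pays only the cost of verifying.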
Charles: So there are some protocols out there that Microsoft played with, like Pinocchio and Geppetto, which came out of Microsoft Research in 2014. There's this whole zero-knowledge SNARK movement that's kind of pushing its way through. I think in some combination of these concepts, zero-knowledge computing, outsourceable computation, and proof-carrying code, there's something that will fundamentally change the way we do smart contracts. I think economics will guide this. Whatever you do, if it becomes too expensive to do it with that finite pool of resources on a blockchain, it'll be outsourced. Then the more outsourcing we have to do, the more tools we'll build to do outsourcing in a safe way.
Jim: You mentioned in passing miners. One of the distinguishing characteristics that at least the write-ups of Cardano and some of the other later generation blockchains talk about is that they no longer require the crazy proof of work. I mean, I read Satoshi's paper, I don't know, two months after it came out, and I got to say I was amazed. I slapped my forehead and go, "Damn, that's brilliant." And it's not even that hard, right? I mean, if you really understand it, the essence of a Bitcoin-style blockchain is pretty simple, but it stands or falls on the one-way nature of a certain class of cryptography, and the discovery of these coins comes from finding hashes where the leading digits are all zeros, except the targets get longer and longer, harder and harder.
Jim: I remember back in 2015, I guess that was the last time I did a real deep dive into Bitcoin in particular, I discovered, sort of by the interaction between how the Bitcoin numbers worked and how the hardness algorithm worked and all that sort of stuff, that at that point in time Bitcoin was kind of stuck in a range of five or 6% of its market cap per year just in electricity to do the mining, right? I go, "What?" As one of my friends at that conference where I spoke put it, "Well, it might be that the main role of Bitcoin is to hasten the heat death of the universe." Now, from what I've read, Cardano's found a way to get away from proof of work and this tremendous amount of electricity being spent to this day in the mining of Bitcoin, Ethereum and other proof of work systems. Talk a little bit about proof of stake. How does it differ, and is it as robust and as secure?
Charles: Yeah, that's a really good question, and actually the answer to that question is we now know yes, in fact even more so in my view, but when we started Cardano and the research agenda there, the answer was we didn't know. So Cardano was a first-principles, kind of clean-room approach to looking at things, kind of like the Clean Slate initiative that Stanford started when they were asking, hey, if we built the internet today, what would we do differently? Okay, that's a pretty noble goal, let's go chase that for a little bit, let's see what we can do with that. Well, it's the same for Cardano. We said, "Okay, if we could build a cryptocurrency today, knowing what we know, what would we do differently and where does the science take us?" And the engine of a cryptocurrency is the consensus protocol. It's basically how we reach a collective agreement on how to do things. There are really three stages to a consensus protocol.
Charles: First off, you have to decide who gets to advance the network. That's some process: it's either given by fiat, where you just pick a quorum and say, "Okay, it's this person, then this person, then this person," and you kind of rotate through, round-robin, or it's some other process where you prove something. For example, Intel created a consensus algorithm called proof of elapsed time, where everybody who is eligible generates a random number and whoever has the lowest number is the winner, and then they get to go and advance the round. The only reason that's secure is because they do it through trusted hardware, Intel SGX, so you can't rig the random number generator. Well, okay, great. So you have different models of assumptions. Is it static and federated? Is it dynamic and decentralized? Is it Byzantine resistant? That's a term we have for whether it can admit actors who are dishonest, or is it not Byzantine tolerant, as in you own all the hardware, these are your computers, you're not going to go screw them up, you're just trying to wire them together, like the Googleplex or something like that.
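The proof-of-elapsed-time election described here is simple to sketch. In Intel's actual system the random draw happens inside SGX trusted hardware so nodes cannot rig it; plain Python obviously provides no such guarantee, so this only shows the election logic:

```python
import random

def elect_leader(participants: list[str], rng: random.Random) -> str:
    """Each eligible node draws a random wait time; the lowest draw
    wins the round. In PoET the draw happens inside trusted hardware."""
    draws = {node: rng.random() for node in participants}
    return min(draws, key=draws.get)

rng = random.Random(42)  # seeded so the illustration is reproducible
leader = elect_leader(["alice", "bob", "carol"], rng)
assert leader in {"alice", "bob", "carol"}
```

Each node is equally likely to draw the lowest number, so over many rounds leadership rotates fairly, which is the property the trusted hardware exists to protect.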
Charles: In the case of proof of work, that first stage is mining, and the mining is meritocratic. So whoever does the most work has the highest probability of getting selected. Then once you're selected you make a block, and you have to follow a protocol for how to make that block, and then you broadcast it, and the final stage is acceptance of that. So assuming that your election was legitimate, assuming that the block you've constructed follows the protocol rules, and assuming that the network accepts it, you have completed that round and then you begin the next round. Basically what you're doing in a round is ordering transactions and approving transactions. That's what those confirmations are about.
Charles: Okay, so when we look at proof of stake, proof of stake does the last two things basically the same as Bitcoin. No big difference, there are lots of ways you could optimize things, but it's still that somebody is making a block and somebody is broadcasting it. But instead of a horrendously expensive digging-holes-and-filling-them-back-up lottery system for selecting a leader, you create a synthetic lottery in proportion to the amount of stake you have in the system. So if you have 25% of the tokens, then 25% of the time, on average, you should be selected eligible to go make a block and broadcast it. If you don't want to do it, you can delegate that to some other actor.
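The synthetic lottery can be sketched in a few lines: pick the slot leader with probability proportional to stake held. This is only a toy illustration of the proportionality, not Ouroboros itself, which uses a verifiable random function so the election is unpredictable in advance yet checkable afterward:

```python
import random
from collections import Counter

def pick_slot_leader(stake: dict[str, float], rng: random.Random) -> str:
    """Select a leader with probability proportional to stake held."""
    holders = list(stake)
    return rng.choices(holders, weights=[stake[h] for h in holders])[0]

stake = {"alice": 25.0, "bob": 50.0, "carol": 25.0}
rng = random.Random(7)
wins = Counter(pick_slot_leader(stake, rng) for _ in range(10_000))

# Over many slots, alice (25% of stake) wins roughly 25% of the time.
assert 0.20 < wins["alice"] / 10_000 < 0.30
```

Delegation, as mentioned above, just means pointing your weight at someone else's key before the draw; the proportionality is unchanged.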
Charles: So the beauty of this system is that you're directly aligning the economic incentives of the people who are making blocks and broadcasting blocks, advancing the network, running the engine of the system, with the people who are financially entangled with the system. This is not the case with proof of work. With proof of work, if you have two cryptocurrencies that have the same engine, the same consensus algorithm, roughly the same rate of work and roughly the same price point, then you actually have an incentive as a miner to destroy one and then go work on the other one. Why? Because if you do bad stuff and behave dishonestly and kind of shut that network down, you can short sell it, because your actions are going to damage the economic value of the network, and then you can reuse your mining hardware to go mine a different cryptocurrency. This is called a Goldfinger attack, and it's pretty pernicious, and this is why there's a lot of maximalism in the Bitcoin space, because if there were something like Bitcoin with roughly the same hash rate and value, we would see 51% attacks all the time, those nasty things that miners can do. Whereas with proof of stake you can't do that, because your mining hardware is effectively your stake in the system, and that stake is fundamentally incompatible with other systems.
Charles: So for example, Cardano’s market cap at the time of this recording is about the same as Polkadot’s, we’re right in the same neighborhood, but that has no bearing on our security in that respect. So there’s this question of, well, that synthetic lottery that we’re doing, is it a good idea, is it secure? And that actually was a very big open question in the cryptocurrency space for a long time.
Charles: So how we approached it was that we went and looked at it from first principles, academically. We first asked, “What is a blockchain, and can we derive a security definition for a blockchain?” And that’s exactly what we did. We wrote a paper called GKL15 and it appeared in Eurocrypt, a very prominent academic cryptography conference. Basically what the paper did is rigorously build a model to describe how one should look at a blockchain. Then we proved that proof of work creates a secure blockchain. So we had a security target, we defined what you want to achieve, and then we looked at the incumbent and said, “Well, does it meet these security properties?” That’s a good starting point. Then we asked, could we construct this synthetic lottery and these other things as kind of a straw man protocol, just to demonstrate that there is a way to be provably secure? And that was the original Ouroboros protocol. It didn’t have all the properties that you really need in practice.
Charles: It was synchronous, and it made some assumptions about random number generation which were counterproductive to scale, but the point was just to show that it’s possible in general, because if it’s not, what you can do is the opposite: you can create something called an impossibility theorem, and these are quite common in computer science. You have things like the Fischer-Lynch-Paterson impossibility theorem, which makes some statements about the limitations of distributed systems and the level of security you can get in the presence of an asynchronous network. There are dozens of things like this in distributed systems.
Charles: Okay, so we said, “Hey, not only can you do it, we actually created a roadmap for how to transform a straw man theory-based protocol into a practical protocol that you could run in a real-life network.” So we systematically attacked everything: the need for an external clock, going from the synchronous model to a semi-synchronous model, adaptive security, which means you don’t have to know who is going to win ahead of time, much faster random number generation, going from an MPC to a VRF. Every step of the way we did this through papers, and every paper we wrote was submitted for peer review, and they appeared at major conferences like Crypto and Eurocrypt and CCS. So the whole of the academic cryptographic community showed up to basically validate that the work we’ve done is right. It took six years to do this. 10,000 citations, six years, a lot of papers, and it was a lot of fun. It was just pure intellectual creativity, but the proof is in the pudding. It’s not good enough to just write papers. Anybody can write a paper. Every now and then you can even get a person to read your paper. That’s a huge achievement in academia.
Charles: So the next step is to actually implement it, build it. We chose to build it in a very rigorous way. We said, “Hey, if we’re going to do the science right, we should do the engineering right.” So we asked: how do we know the protocol is correct and unambiguous, that it’s semantically correct? Well, there’s a way of doing that, it’s called formal methods, and boy, that’s hard. It’s usually only applied in things like braking software for trains, aerospace software, rocket software, medical software like pacemakers, these types of things. So we had to go and talk to those weird people from Cambridge and Oxford and other places who fell in love with terms like dependent types and crazy languages like Coq, and Isabelle, and Idris, and Agda, and we had to build a formal methods group in addition to an academic group. We wrote formal specifications for our protocols and our system, and then we were able to prove properties about those systems and actually show that the code we’re writing and deploying follows those specifications, through property-based testing, model checking, refinement, and these types of things.
Charles: So it’s been a hell of a lot of work over the last six years, because we had to be very interdisciplinary. We had to be commercial actors, but at the same time we had to be good engineers, and then at the same time we had to be formal methods engineers, which is a very uncommon skillset; there are like 5,000 of those guys among the 22 million engineers you have floating around. Then we also had to be good academics and have papers survive peer review, which is not an easy process, especially for top-tier conferences. They only accept about 10 to 15% of the papers submitted to them, and rightfully so, because they have to have high standards.
Charles: So the short answer is yeah, we believe we solved it. The long-term benefit is we’re 1.6 million times more energy efficient than Bitcoin at the moment, and we’ll continue to grow in energy efficiency. Second, as the price of Bitcoin goes up, its energy consumption increases. These are directly connected to each other. In our case, when the price of Ada goes up, the energy consumption does not. However, we do get more decentralized, by the way that we structured the parameters, whereas Bitcoin has the opposite relationship. As the price of Bitcoin goes up, Bitcoin actually gets more centralized because of economies of scale. You have a situation where you get subsidized power dominating the system, so those winners take all, and then also private mining pools with proprietary ASICs that are patent protected. That’s a barrier to entry, and if you have patent-protected proprietary hardware there’s simply no incentive to sell it on the open market. It’s better to mine with it, because you make more money mining than selling the hardware. It’s kind of like trading software. I get instantly skeptical of somebody who is trying to sell me trading software that they insist is going to make me a lot of money. I say, “Why don’t you just trade with your own software instead of selling it to me?”
Jim: Yeah, I came out of that industry, the Wall Street trading systems and related things, and that was always the thing. The good stuff was never for sale, right?
Charles: Yeah, you had to pay top shelf prices for that. I guess that’s true with women and trading software.
Jim: Well it’s interesting, you in passing gave me a hint at the answer to one of my other questions, when you mentioned you went down the funnel of formal methods as how you decided to produce some provability about your software. That’s probably what led you to the other seemingly peculiar decision, which was to choose Haskell as your main base language. I mean, that’s a beautiful language. I teach myself a new computer language every year. I don’t know, four or five years ago my friend Ben made a suggestion, and I wrote the equivalent of a little bit more than Hello, World! in Haskell. I said, “That’s beautiful, but man, it’s head-hurtingly difficult, and it leads you into the dangerous territory of relying heavily on recursion.” In my days as a corporate CTO I would generally say, “Unless you have a goddamn good reason, don’t be going down the recursion road, because none of you are smart enough to understand what you’re building.” Right? I would talk about people that would use Lisp in corporate America, and I’d say, “Sorry, to really do Lisp well you need an IQ of 130.” And those are kind of thin on the ground in corporate IT departments. Apologies to all my employees from those days.
Jim: So, presumably it was the reliance on formal methods that led you to the implementation decision to use Haskell.
Charles: Yeah, I mean, the two industrial languages that really work well with formal methods are Haskell and OCaml. I’m not French, so Haskell was an easier decision. There’s a huge OCaml community in the French world. There’s another cryptocurrency that does formal methods work, and they happen to have been founded by a French guy, and lo and behold, they’re using OCaml. But there are other languages you can use as well, like Clojure, Scala, F#; all of these have a path to some degree of formalism. To your point, it is definitely a different development paradigm. I would argue, however, that when you’re talking about massively concurrent systems where you have emergent bugs, it actually is a good idea to make as much of your system as possible stateless and all your variables immutable, these types of things, because you can reason about and compose those systems with so much more certainty, and the kind of testability you gain from that coding paradigm is, in my view, much, much better than what you would see in an imperative, object-oriented world. That’s one of the reasons why I chose Haskell as the initial implementation language, but there was a lot of housekeeping we had to do.
Charles: The other thing is I’m a big fan of QuickCheck. I think it’s a great way of approaching testing of distributed systems, because you can find all kinds of weird, gnarly bugs of the sort we’re still finding in Bitcoin 12 years after the damn thing came out. So obviously their testing framework is not sufficient if you’re still finding bugs a decade later in the same protocol design.
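QuickCheck itself is a Haskell library, but the idea translates anywhere: generate many random inputs and check that an invariant holds for all of them. Here is a minimal hand-rolled sketch in Python; the encoding and the round-trip property are invented for illustration and are not from any real protocol:

```python
import random

def encode(values):
    """Toy length-prefixed encoding of a list of small non-negative ints."""
    return [len(values)] + list(values)

def decode(stream):
    """Inverse of encode: read the length prefix, then that many items."""
    n = stream[0]
    return stream[1:1 + n]

def check_roundtrip(trials=1_000, seed=0):
    """Property-based test in the QuickCheck spirit: for many random
    inputs, decode(encode(xs)) must equal xs. Returns a counterexample
    if one is found (real QuickCheck would also shrink it), else None."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randrange(256) for _ in range(rng.randrange(20))]
        if decode(encode(xs)) != xs:
            return xs
    return None
```

The payoff is exactly what Charles describes: instead of a handful of hand-picked cases, the harness hammers the code with inputs no human would think to write down.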
Charles: So it’s actually nice to have math-based specifications, because then not only can you perform all kinds of cool tests on them, they’re also much easier to reason about with the scientists, because the scientists are not going to be so keen on your code, and a lot of your code will end up being very hard for non-experts to understand because it has optimization built in, which obfuscates the clarity of what you’re trying to do. Whereas a specification is just what you want to do and what you intend. So that should be a lingua franca between the engineers and the scientists, which was very important for us, because semantic correctness really matters when you’re talking about implementing these protocols. It’s entirely possible to be provably secure on the science end but actually have a broken protocol on the engineering end, because an engineer simply misimplemented it, and that bug persists for a long time and in some cases is very difficult to correct.
Jim: On the other hand, and again this comes back from my old CTO days and then later advising early stage companies, ecosystems matter, and ecosystems include people. How many people are there skilled with technology X, or who can learn technology Y in Z amount of time? Choosing something like Haskell puts some pressure on the ecosystem, to say the least. In fact, I was talking with the CEO of an interesting company the other day, and they use Ethereum-based coins for the work they do, incorporating their own token in their social media platform actually. I said, “Have you thought about Cardano?” Because I had just talked to Ben. And he said, “Well, we tried, but man, it’s impossible.”
Charles: Yeah. Well, so let’s get into that, Jim. That’s a fair point, but I mean, the counterargument is there is a world of difference between the people who write the microkernel for the iPhone and the people who write iPhone applications. So it’s different boats for different floats. The core protocol, because you’re talking about distributed systems, demands very, very high attention to detail on semantic correctness and cryptography, so it’s probably a good idea that the people working on it are rockstar-level programmers and really bright, because if they’re not, they’re going to make mistakes that have a catastrophic impact on everybody who builds above it. It’s like a plane: you probably want the people who are designing your flight control software and your engine software to be really good at what they do, or else you have a Boeing 737 MAX situation.
Charles: Now, on the other hand, these are platforms, and people need to be able to build on those platforms. So we’ve really put a lot of attention to detail into building a smart contract model that not only allows you to write high assurance code and be a formal methods person, but also to write code in normal programming languages, because the paradigm is not just what you put on the blockchain. As I mentioned before, there’s a lot of off-chain infrastructure. There are over 3,800 DApps that have been deployed in the cryptocurrency ecosystem, and the vast majority of the successful ones have an off-chain component, which means that they’re calling a central server. So first, in our paradigm, all that off-chain code you can write in Node, C++, Java, whatever language you care to use. So you don’t have to throw away React, you don’t have to throw away all of the libraries and tools that you’re used to as a professional developer.
Charles: Then for the on-chain code we actually have three different paths that you can take to deploying stuff on Cardano. All of these will be in market by the second half of 2021. We call this the ocean, the island and the pond. So, there’s this amazing experimental pond that Vitalik has created with Ethereum: that is Solidity and all of the Solidity ecosystem. There are all these cool wonky tools like Truffle and Infura, and great libraries, and all these development patterns that people have developed. They should still do that, so we’re backward compatible with the EVM. We actually have a specific sidechain we’ve connected to Cardano, and right now it’s in the testnet phase, but it will go live for mainnet applications this year, and that is going to forever be EVM compatible. So wherever Ethereum goes with Ethereum 2, your Solidity code and EVM bytecode will run on that system, just better, faster, and cheaper.
Charles: Okay, then we have the ocean. That is IELE, and we’ve been building that with the University of Illinois Urbana-Champaign and a company called Runtime Verification. They work with NASA and DARPA. We said, “Why don’t we build a much more understandable virtual machine? Not this weirdo stack-based virtual machine from Ethereum, but let’s build something based on LLVM, because that’s an industry standard. It came out of the University of Illinois and Apple adopted it, it’s been around for over 18 years, and there are a lot of compilers that target it. This is an easy thing to understand.” Okay, so we went and did that. We created IELE with them, and then what we have is the ability, using something called the K Framework and a process called semantics-based compilation, to write down the semantics of a programming language, or to have sophisticated people do that. Once you’ve written the semantics, it can automatically build a compiler for that language and keep that compiler up to date through version changes of your virtual machine.
Charles: All right, so the island is basically Plutus and Marlowe, and that’s for high assurance code. That’s where the jet engines and the medical software go, because what happens when medical records go blockchain? What happens when IoT goes blockchain and you have self-driving cars, and drones, and these things? If these things get screwed up, people die. So that’s not about move fast and break things and let’s be a Silicon Valley guy and iterate. This is about I’m a Fortune 500 company and I really don’t like class action lawsuits, and I’m going to pay extra money to make sure it’s secure and it’s correct and it doesn’t fail. So you need tools to be able to do that, and that’s our mystical island with Calypso, kind of like the paradise island Odysseus gets shipwrecked on where she tries to convince him to stay. That’s what Plutus was built for, which is a variant of Haskell. We got the guys who actually created the Haskell programming language to come back and do it again, and they created Plutus. Phil Wadler did that with Manuel Chakravarty. So, we have three different paradigms there. We can do what the incumbents do, we can bring the mainstream developers in and let them write in the languages they already write in, and then finally we have a beautiful sandbox for high assurance applications, which are very important.
Charles: As a final point, we put special attention into the communication between on-chain and off-chain code, and our model was built in a way that makes that very easy to reason about. So it’s a lot easier on the programmer and there’s a lot lower overhead for writing code in that paradigm. Ultimately, that lends itself much better to a service-oriented architecture than most blockchains, and I think that’s where the industry is going.
Jim: You guys have put a lot of thought into it. Time will tell if you get the ecosystem right. Another project I stuck my nose into a little bit last summer was Holochain. I bought one of the little Holochain boxes and fooled around a little, and I had Art Brock on the podcast a couple of times. One of the things I learned was they may have killed their project by a decision midstream to switch from Go to Rust. Maybe not, it may have been the most brilliant thing they did, but it certainly set them back a long way. Rust is an interesting language. I taught myself Rust a couple of years ago, but it’s an annoying-ass language in many ways, and there are not nearly as many people who know it as know Go. So one never knows. It sounds like you guys have thought it through extraordinarily deeply, and I commend the quality and depth of your thinking, but these ecosystem questions frankly come down to will the dogs eat the food. Not only will they eat the food, but will they eat the food fast enough to keep you ahead, so you’re one of the top two or three when this stuff finally does go mainstream.
Charles: Rust was created by Mozilla, and they said, “Hey, we really want to build a better C, C as if it was made today.” And it’s really great for systems and infrastructure programming and cryptography. So you get the same performance profile as C or C++, but you get a lot more tools to make sure that your code is running correctly, and that’s super, super important when you’re really worried about protocol-level correctness. One thing that Mozilla did super well with Rust was portability. It’s very easy to get Rust code to run in the browser. In fact, we’ve done that with a Cardano product, Yoroi, where all the crypto code runs in the browser. It’s really easy to get it to work on cellphones and in low-resource environments like Arduino microcontrollers or Raspberry Pis and so forth. So there are definitely some huge values there, but to your point, it’s a heavier-duty language than Go, and it’s harder to keep all of Rust in your brain than it is to keep Go in your brain. That is a big challenge, and it’s not as well supported as Go, although I’d say Rust is pretty damn good now.
Charles: If I was starting a project from scratch and you asked me, do you want to do C++ or Rust, I would say Rust 10 times out of 10, if those were the only two options I had. But my view is that when you’re building the protocols for the first time it ought not be about optimization, it ought to be about correctness, and it ought to be about a language that allows you to think about architecture well. That’s why I think a functional approach does make a lot more sense, and there are plenty of options in that space: Clojure, Scala, F#, certainly Haskell and OCaml, all kinds of things you can do there. We’ve actually built a cryptocurrency in Scala as well. We did the Mantis project for Ethereum Classic, and that was very enjoyable: 14,000 lines of code for an Ethereum full node, where the Go client was over 60,000 lines of code. Never underestimate the value of being concise when you’re writing code, because code is often read dozens of times for every line you write. So the less of it, the easier it is to manage. Then the other thing is the more modular the things you construct are, the easier it is to break them into pieces and really reason about those pieces carefully.
Charles: So good code should be able to avoid those smells of technical debt and architectural monolithism, for lack of a better term, and certain languages aid that while other languages hinder it, like PHP for example.
Jim: PHP, the worst possible language, maybe Perl, maybe Perl.
Charles: Well, but then again, their domain was you’re going to write a quick script and you’re done with it, but then people decided to go off the reservation and off-label and start using it to build full-scale applications. Facebook was the worst offender with PHP. They even created Hack as an attempt to try to unify their PHP stack, because it was so bad.
Jim: It’s the most nauseating. It’s just like, who came up with this? What the fuck, right? Oh dear, yeah. My 2020 programming language was actually F#, which is, as you know, a derivative of OCaml, and I really enjoyed F#, I must say. I got farther into it than I did into Haskell, probably because it runs in my old favorite Visual Studio. So I actually wrote a few things in F#, and next time I have a greenfield project that I’m just dicking around with, like a new kind of learning algorithm for a neural net or something, I may well do it in F#. So I’m coming closer to being a believer in these functional programming languages.
Charles: Oh yeah. F# is just absolutely great, and there’s a companion specification language with F# called F*, and Microsoft’s been systematically rewriting the entire web stack with it through a project called Project Everest, and they’ve had some great success there. What’s really cool about F# is that you do the functional stuff that you need to do in F#, and then the nonfunctional stuff you can do in anything else in the .NET ecosystem, and that kind of flexibility works really well for a real application, that polyglot approach. Being able to say, okay, it’s very clear that a functional approach makes sense here, but maybe for a GUI or network code we’re going to go with a more standard object-oriented approach, write all that in C#, and you just roll it all together and it just works. F# has come a long way. It’s in its 5.0 release. It’s been crazy to see how quickly that language has evolved.
Jim: [inaudible 00:58:17] use these things for one man projects where being polyglot is an advantage, right? Oh, can I do my UI, can I do my database stuff, can I do my network stuff and my deep algorithmic stuff all in the same environment? If the answer is yes, that’s a big plus.
Jim: Let’s go back to something we talked about in passing. Again, another area where Cardano seems to have made some substantial steps up. That’s getting around the single threading of computation and actually executing the contracts. How have you guys done on that in terms of being able to get past these ridiculous bottlenecks that we’re seeing now in Ethereum?
Charles: Yeah. So the partial solution is change your accounting model. So Bitcoin actually got it right, but they didn’t go far enough. So the UTXO model is super easy to shard because you only have to have a local view to be able to process things. The accounts model you have to have this concept of global state. The minute you hear global state, oh god, could you imagine a system where you have a global variable and everybody has access to it?
Jim: Not just global in your source file but it’s literally global, right?
Charles: Yeah, it’s distributed global. It’s like global got herpes, it’s a bad deal. So, that’s a problem. UTXO is definitely the way to go. We created something called extended UTXO, where you can carry state with the UTXO from one transition to another, and then you can model the whole thing with a state machine. So it’s like oh, okay, that’s a good step forward. That’s kind of step one. Then after you’ve done that, you introduce this idea of state channels, which the Ethereum community has been thinking about for a while; they have a project called Plasma. We have an equivalent project called Hydra, but because our accounting model and the way we run code are different, you can very easily take those batches of work and then distribute them through different channels, and each channel is running a different program in a certain respect. Usually batches of them.
Charles: So that’s probably the best way of doing it in the short term, but then longer term you kind of break it down into different classes of transactions, so you separate your micro transactions from very complex, interactive, stateful programs, and then there’s some combination of on-chain, off-chain interaction where you do that, and then you introduce more complex computational primitives, like outsourceable computation with zero-knowledge proofs, for example, these types of things.
Charles: So for the time being it’s just make sure you have really high throughput at the base layer. That’s a good way of temporarily solving it, and then make sure you have a path to do things off chain through state channels that allow you to turn it into a distributed problem, and make sure your accounting model and your programming model are quite amenable to that. We did that at every component in the system, not just for Ada. When you issue a token on Cardano it’s not a smart contract, it’s actually what we call a native asset, so it’s treated the exact same way that Ada is treated in the system. It’s kind of like what the Colored Coins guys were trying to do with Bitcoin, but the protocol didn’t really understand or support it. It’s really a shame, so we took inspiration from that and took it to the next level, and now because of that those native assets have the same level of shardability and off-chain interaction as Ada itself does.
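The “local view” property Charles attributes to UTXO can be shown in a few lines. This is a toy model with amounts only: no scripts, no extended-UTXO datum, and nothing resembling Cardano’s actual ledger rules; all names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UTXO:
    tx_id: str   # id of the transaction that created this output
    index: int   # position among that transaction's outputs
    owner: str
    amount: int

def apply_tx(utxo_set, tx_id, inputs, outputs):
    """Validate and apply a transaction. Note the local view: validation
    only needs the specific outputs being spent, never any global account
    state, which is what makes the model comparatively easy to shard."""
    spent = set(inputs)
    if not spent <= utxo_set:
        raise ValueError("input missing from UTXO set (spent or never existed)")
    if sum(u.amount for u in spent) != sum(amt for _, amt in outputs):
        raise ValueError("inputs and outputs must balance")
    created = {UTXO(tx_id, i, owner, amt)
               for i, (owner, amt) in enumerate(outputs)}
    return (utxo_set - spent) | created

# Alice's genesis output is split between Bob and Alice.
genesis = UTXO("genesis", 0, "alice", 100)
ledger = apply_tx({genesis}, "tx1", [genesis], [("bob", 60), ("alice", 40)])
```

Trying to spend `genesis` a second time raises immediately, and total value is conserved across every transition, and both checks needed nothing beyond the transaction’s own inputs.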
Charles: So that’s kind of the best high-level answer I can give you, but there are certainly really deep topics that you can go into that take quite a bit of time to dissect. Then of course you can even formally model these things. There’s a whole field of formal methods called process calculus that allows you to talk about how concurrency works in a distributed system. It was created by a guy named Robin Milner, and we’ve certainly had our fair share of pi-calculus discussions internally. In fact, I think we even created our own, and it was bisimilar to a canonical form. A guy named Wolfgang does that.
Charles: So there’s a lot of science and theory behind it, and you get an answer that’s about 80% satisfying, and then you have to go through a swamp full of razor blades and cobras for the other 20%, and you have to ask, do we really care to go through that swamp? It’s a matter of use and utility. We’re not quite at the level where that makes sense, but we do have teams of people that think about that stuff.
Jim: Bottom line, you believe you can run a lot more contract processing through your architecture?
Charles: Yeah, and we should be able to empirically show that apples to apples just by migration of applications like Celsius and other things.
Jim: One last platform question, and then I want to turn to an application question, just one, because we’re getting late here on time. We’ve talked about the three generations: Bitcoin, Ethereum, and Cardano and some of its peers. You also talked about how maybe the time has come for this to go mainstream and into mass production. Is generation three going to be good enough, or is it going to take a generation four?
Charles: Well, I mean, you start working on generation four after you get frustrated with generation three. It’s kind of funny, in technology it’s like the old is new, the new is old. You get used to this idea of breakneck upgrading, but yeah. I used to live in Japan, I lived in Osaka, and there are things in Japan, I mean, in the rural areas of Japan they’re still counting with an abacus. It’s just crazy. So sometimes if it’s good enough you just kind of go with it, so I don’t know. I mean, there will be a generation four attempt, because people like money and they want to convince you that generation four is better than generation three, but we’re still using TCP/IP in the computer industry. They tried OSI, they tried different things. Obviously it didn’t work as well as they hoped, and there are really good reasons to get rid of TCP/IP, but we still don’t.
Jim: [inaudible 01:03:57] is how TCP/IP beat OSI and what a fucking cost we’re still paying for it today.
Charles: Oh my god. Google is trying to correct that with QUIC over UDP, and they’ve made some progress. There are actually some really cool protocols, like ExpressVPN created Lightway, and that’s really nifty. It’s like a VPN that connects in half a second and it’s more secure than traditional protocols. So there’s some progress on the edges, but you have the great ossified middle. The thing is that once one of these protocols gets a network effect it’s going to be exceedingly difficult to compete with it, because the network effect is very sticky. Ethereum is like Myspace in that it’s got momentum but it’s not sticky. In fact, only 31% of new DApps created in 2020 were created on Ethereum; the other 69% were created outside of Ethereum. The other thing is that if you look at the transaction volume and economic value, most of it’s still on Ethereum, but the vast majority of the leaders there are exploring multi-chain. So they’re not staying loyal to that infrastructure, they’re migrating off. So, that’s much more Myspace than it is Android in that respect.
Charles: So I think the third generation will get very, very sticky and it’s going to be really, really difficult to migrate off of it after that because it’s going to do what you need and it’ll evolve at the pace that people are just enough comfortable and once enough infrastructure gets built you’ll have a great ossified middle, like TCP/IP.
Jim: So many things we could talk about, I’ve got quite the topic list here. But one we talked about in our little pregame discussion is how technologies like Cardano could help solve the distributed social network problem. There’s been so much talk of late about the relatively sudden turn of the big social network and social media companies into very aggressive and frankly arbitrary, capricious and incompetent moderators and censors of their networks. There are lots of people who are really fed up with this. There’s plenty of libido now for some form of distributed, difficult-to-choke social networks and social media. What can Cardano do to help this revolution occur?
Charles: I was just imagining distributed libido. Okay. That’s like a whole new level of Austin Powers. Okay, so that’s a great question. It’s a very deep topic and it’s one that is definitely near and dear to my heart, because I’m a Libertarian by political ideology. I firmly believe in free speech, and I hate this whole anti-hate-speech movement that we have floating around. It’s like, oh, we have to be against hate speech. You say, well, no, what you’re really saying is you would like to ban speech that you consider to be hate speech, but the people saying it obviously don’t think it is, so who gets to decide? There’s always that lingering question of who is the arbiter of obscenity and decency and truthiness. That becomes very difficult, because the minute you hand that to a governing body, whether it be a social media platform or the government itself, what happens is that governing body gets very quickly co-opted by other actors that you probably didn’t want to be involved in the conversation.
Charles: For example, I listened to an interview not too long ago with Ira Glasser, the former head of the American Civil Liberties Union. He was on the Joe Rogan podcast, and he mentioned this great example where a student union that was mostly Zionist, mostly composed of Orthodox Jews, decided to create a very strong code of conduct and anti-hate-speech provisions. They got all this stuff into the canon of the university. Then the next group of people who came in were very anti-Israel, so they started using those same policies to prevent Israelis and Jews from speaking, saying that they were purveyors of hate speech because they didn’t support a free Palestine. So the people who create these very laws may end up becoming victims of them with the next wave of the political pendulum.
Charles: So in general I think you have to combat two things at the same time if you want to create a legitimate competitor to what we have. One, we have to recognize the algorithms of social media are built to radicalize, not to unify, because they found out they can make more money through radicalization, siloing and isolation than they could from unity. There are movies on this, and there are books on this like The Age of Surveillance Capitalism and so forth, but the long and the short of it is that you get more clicks with more controversy, and mass media is also set up this way. Nobody goes on CNN to say, "The world is great today. Yeah, there are some problems but all things considered, people are doing good work and our leaders are great." No, it's the sky is falling, we're all going to die. The lizard people will be riding scorpions and asteroids will come from the sky. Anything they can say to get you panicked, they'll try, because ratings, ratings, ratings, ratings. That's how you make the money.
Charles: So first, you have to find a way to rejigger the algorithms so that they provide unification instead of siloing. They bring people together, even people who are diametrically opposed in viewpoint. By the way, that is possible. People say, "Oh, it's not possible." It is. Look at Daryl Davis. He's a black musician who has made a profession out of going around to Ku Klux Klan members and convincing them to leave the Klan, and he's convinced hundreds of them to do that. As a consequence he's got a whole room filled with Ku Klux Klan uniforms, because when they leave they give him their uniforms. It's kind of a rite of passage. So this 57-year-old black guy's got a whole room of Klan gear. It's a crazy thought, but he's been very successful, and cult deprogramming exists. So yeah, you can deradicalize people. So that's one dimension of it: what level of incentives and interactions and algorithms are required to bring people together instead of siloing them, to create good information flow, to use an Alex Pentland term.
Charles: Then, on the other side, information needs to be properly curated, and you need to really understand what you're looking at, what you're sharing. For example, I'm a big aficionado of vaccines and I've spent a lot of time thinking about them and researching them, mostly because mass vaccination is how we get back to work and can travel again, and I miss the rest of the world. Okay, so the other day I read an article in which a German paper was saying the AstraZeneca vaccine was only 8% effective for people over the age of 65. Oh my god, is that true? Well, the German government and the media said, "No, no, no, no, that's not true. It's fake news." Oh okay, but then Macron, the president of France, comes out and says, "We have concerns about AstraZeneca's vaccine efficacy for those over the age of 65." So definitely there's more to that story, and there's some debate.
Charles: So when you see something like that, it would be so good to have a platform that massages and transforms and creates all kinds of new ways of analyzing information, gives you different perspectives on it, so you can separate what you think is objectively true from the subjective interpretations of those things. That's an AI problem and an incentives problem. Also, it would be nice if when people share things they don't just click share, they actually vouch and vet, and are willing to do things like maybe solve a computational problem to share something, or maybe bond something when they share it.
Charles: Could you imagine media where, when a journalist writes something, the organization they belong to has to put up a veracity bond for the article? So what does that mean? It means that if somebody on the outside proves that what they wrote was false, misleading or biased, an independent board selected as part of this veracity bond process would be able to say yes, you violated it, and actually take the bond behind the article. So how much fake news would we have? How many unverified articles would we have? And if an article doesn't have a veracity bond, there's no economic value behind the story, so we don't take it seriously. You say, "If you were really serious about your reporting, why didn't you put money on the table to back up what you've said?"
Charles: So these are examples of different ways of looking at the transformation, the annotation, the metadata around the information that you're sharing. Now, if you could accomplish both those things, then you put that backend on a blockchain, because then you guarantee that people can never be deplatformed and it's resilient, and that's just great. You can still do curation, because curation is the interface to the blockchain. For certain ecosystems it's almost like a boat on a river: your boat is Twitter, and the river is what the boat sails on, so even if you get deplatformed from Twitter you still have access to the river. You can find another boat to sail, or build and bring your own boat, and that's great. The problem right now is the social media platforms own both the boats and the river. So when you get deplatformed from Twitter you lose access to the river, meaning you can't go and talk to your social network. You've been totally deplatformed and kind of shut out of everything. That's a big, big problem.
Charles: So I think if you solve those two problems, the problem of radicalization and the problem of information curation, and you create an incentives layer for it, then you put it onto the backend of a cryptocurrency, then you can come up with all kinds of beautiful interfaces that allow you to do microtargeted ads and enjoy value, like what [Vats 01:13:51] is doing, or all kinds of things. You could still keep your existing social media companies, but you don't have to worry about the existential risk, and you also don't have to worry about the continued radicalization of people. It's a super complex, very deep topic, but I think it's coming to a head.
Charles: They deplatformed the president of the United States, and that's probably fine, it's a private company in that respect. Deplatforming Parler is a different matter entirely, it's cartel behavior, because the whole argument is well, if you don't like it, create your own company. Sure, so they did. Then what happened? Google and Apple took them out of their app stores, and Amazon stopped hosting them. So it basically says that a cartel has formed to control a commodity, information, the flow of information, and they get to decide the market dynamics and who is legitimate there or not. The last time we had that occur was with oil in the United States, with Standard Oil and their cartel, and we created the Sherman Antitrust Act to get rid of these actors, because we recognized the damage to society of having a small group of unelected people basically get to control an entire marketplace.
Charles: So the separation of the interface from the protocol is essential, and building useful and meaningful protocols I think is equally essential. That's a hard job, but it's something we think a lot about. Now, I warn people, don't just go say blockchain solves social media, because it doesn't. In many ways, if you don't do it right, it's actually going to amplify the worst situations. If you don't solve the radicalization problem, you're creating a situation where not only are people going to stay radicalized, they're going to continue to fracture and isolate, and they can never be deplatformed because they're on a resilient backbone. So you end up having these cancerous communities that continue to cause problems and get more and more damaging to society, and you can't even entertain the thought of getting rid of them.
Jim: Well, that’s a good motivation. It’s good thinking about some ways to go. I really like the idea of the validity bond, verification bond. What did you call that?
Charles: Yeah, I call it a veracity bond, for truthiness, right.
Jim: I like that, and of course there are lots of questions about who decides and how, and all of that, but that would be great. As you say, don't make it mandatory, but if a press platform puts out stories and doesn't have it, we can make our own inferences from that, right?
Charles: Right, exactly. It can even become an insurance policy: you can only get liability insurance to protect you against lawsuits as a journalistic organization if you do veracity bonding according to a community standard. It's like, why do warehouses have fences, and guard dogs, and armed security guards? It's not a government mandate, it's the insurance company saying, "We're only going to insure your warehouse if you have these security measures." Well, similarly, if you have these veracity measures to guarantee your journalism has checks and balances, then you can get additional layers of protection in society. For example, if self-regulation and community standards emerge, you could mandate that you only get whistleblower protection if you engage in veracity bonding. That could be an example of something.
Jim: Very good. Well Charles, that’s Charles Hoskinson, founder of Cardano and CEO and founder of IOHK. I’d like to thank you for an incredibly interesting conversation. My 18 months or two years of not paying too much attention to the cryptocurrency and blockchain world certainly got filled in at least in part in today’s conversation.
Charles: It was a lot of fun, Jim. Thank you so much for having me on. I appreciate it.
Jim: That was great.
Production services and audio editing by Jared Janes Consulting, Music by Tom Muller at modernspacemusic.com.