Subscribe: Google Podcasts | Spotify | Stitcher | RSS | More
AI expert Roman Yampolskiy & Jim have a wide-ranging talk about simulation theory, types of intelligence, AI research & safety, the singularity, and much more…
This conversation between Jim and Dr. Roman V. Yampolskiy (author, tenured associate professor, and founding director of the Cyber Security Lab) starts by covering the vast variance of possible minds. They then go on to talk about Boltzmann brains, the implications of an infinite universe, simulation theory's limits & whether we could find its glitches, symbolic vs deep learning & the role of language understanding in AI, the Turing test, limitations of human intelligence, limits of AI safety, the singularity & whether it would happen fast or slow, the paper clip maximizer, impacts of narrow AI, pros & cons of open-source AI development, game theory applied to AI, AGI timeframes, and the Fermi paradox.
Mentions & Recommendations
- The Space of Possible Mind Designs
- Types of Boltzmann Brains
- Glitch in the Matrix: Urban Legend or Evidence of the Simulation?
- Leakproofing the Singularity
- Predicting future AI failures from historic examples
- OpenAI
- Artificial Intelligence Safety and Security
- Artificial Superintelligence: A Futuristic Approach
- Roman’s Google Scholar Page
Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of many books, including Artificial Superintelligence: A Futuristic Approach. Dr. Yampolskiy is a Senior Member of IEEE and AGI, a Member of the Kentucky Academy of Science, a Research Advisor for MIRI, and an Associate of GCRI. His main areas of interest are AI Safety, Artificial Intelligence, Behavioral Biometrics, Cybersecurity, Digital Forensics, Games, Genetic Algorithms, and Pattern Recognition.