Jim talks with Shivanshu Purohit about the world of open-source AI models and a significant open-source LLM coming soon from Stability AI and EleutherAI. They discuss the reasons for creating open-source models, the release of Facebook’s LLaMA model, the black box nature of current models, the scientific mystery of how they really work, an opportunity for liberal arts majors, OpenAI’s new plugin architecture, the analogy of the PC business around 1981, creating GPT-Neo & GPT-NeoX, the balance between data & architecture, the number of parameters in GPT-4, order of training’s non-effect on memorization, phase changes due to scaling, Stability AI and EleutherAI’s new collaboration & its specs, tradeoffs in price & size, the question of guardrails, reinforcement learning from human feedback, the missing economic model of generative AI, necessary hardware for the new suite, OpenAI’s decreasing openness, Jim’s commitment to help fund an open-source reinforcement learning dataset, the status of GPT-5 & other coming developments, and much more.
- Episode Transcript
- JRS Currents 038: Connor Leahy on Artificial Intelligence
- JRS Currents 033: Connor Leahy on Deep Learning
- ChatGPT Plugins Documentation
Shivanshu Purohit is head of engineering at EleutherAI and a research engineer at Stability AI, the creators of Stable Diffusion.