
Doom Debates
A podcast by Liron Shapira
It's time to talk about the end of the world! lironshapira.substack.com
All episodes
86 episodes
Amjad Masad is the founder and CEO of Replit, a full-featured AI-powered software development platform whose revenue reportedly just shot up from $10M/yr to $100M/yr+. Last week, he went on Joe Rogan [https://www.youtube.com/watch?v=WfmrEa0L08E] to share his vision that "everyone will become an entrepreneur" as AI automates away traditional jobs.

In this episode, I break down why Amjad's optimistic predictions rely on abstract hand-waving rather than concrete reasoning. While Replit is genuinely impressive, his claims about AI limitations—that they can only "remix" and do "statistics" but can't "generalize" or create "paradigm shifts"—fall apart when applied to specific examples. We explore the entrepreneurial bias problem, why most people can't actually become successful entrepreneurs, and how Amjad's own success stories (like quality assurance automation) actually undermine his thesis. Plus: Roger Penrose's dubious consciousness theories, the "Duplo vs. Lego" problem in abstract thinking, and why Joe Rogan invited an AI doomer the very next day.

00:00 - Opening and introduction to Amjad Masad
03:15 - "Everyone will become an entrepreneur" - the core claim
08:45 - Entrepreneurial bias: Why successful people think everyone can do what they do
15:20 - The brainstorming challenge: Human vs. AI idea generation
22:10 - "Statistical machines" and the remixing framework
28:30 - The abstraction problem: Duplos vs. Legos in reasoning
35:50 - Quantum mechanics and paradigm shifts: Why bring up Heisenberg?
42:15 - Roger Penrose, Gödel's theorem, and consciousness theories
52:30 - Creativity definitions and the moving goalposts
58:45 - The consciousness non-sequitur and Silicon Valley "hubris"
01:07:20 - Ahmad George success story: The best case for Replit
01:12:40 - Job automation and the 50% reskilling assumption
01:18:15 - Quality assurance jobs: Accidentally undermining your own thesis
01:23:30 - Online learning and the contradiction in AI capabilities
01:29:45 - Superintelligence definitions and learning in new environments
01:35:20 - Self-play limitations and literature vs. programming
01:41:10 - Marketing creativity and the Think Different campaign
01:45:45 - Human-machine collaboration and the prompting bottleneck
01:50:30 - Final analysis: Why this reasoning fails at specificity
01:58:45 - Joe Rogan's real opinion: The Roman Yampolskiy follow-up
02:02:30 - Closing thoughts

Show Notes

Source video: Amjad Masad on Joe Rogan - July 2, 2025 [https://www.youtube.com/watch?v=WfmrEa0L08E]
Roman Yampolskiy on Joe Rogan - https://www.youtube.com/watch?v=j2i9D24KQ5k
Replit - https://replit.com
Amjad’s Twitter - https://x.com/amasad
Doom Debates episode where I react to Emmett Shear’s Softmax - https://www.youtube.com/watch?v=CBN1E1fvh2g
Doom Debates episode where I react to Roger Penrose - https://www.youtube.com/watch?v=CBN1E1fvh2g

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com [https://doomdebates.com/] and to youtube.com/@DoomDebates [https://youtube.com/@DoomDebates]

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com [https://lironshapira.substack.com?utm_medium=podcast&utm_campaign=CTA_1]

Liam Robins is a math major at George Washington University who recently had his own "AGI awakening" after reading Leopold Aschenbrenner's Situational Awareness. I met him at my Manifest 2025 talk about stops on the Doom Train.

In this episode, Liam confirms what many of us suspected: pretty much everyone in college is cheating with AI now, and they're completely shameless about it. We dive into what college looks like today: how many students are still "rawdogging" lectures, how professors are coping with widespread cheating, how the social life has changed, and what students think they’ll do when they graduate.

* 00:00 - Opening
* 00:50 - Introducing Liam Robins
* 05:27 - The reality of college today: Do they still have lectures?
* 07:20 - The rise of AI-enabled cheating in assignments
* 14:00 - College as a credentialing regime vs. actual learning
* 19:50 - "Everyone is cheating their way through college" - the epidemic
* 26:00 - College social life: "It's just pure social life"
* 31:00 - Dating apps, social media, and Gen Z behavior
* 36:21 - Do students understand the singularity is near?

Show Notes

Guest:
* Liam Robins [https://substack.com/profile/161471959-liam-robins] on Substack - https://thelimestack.substack.com/
* Liam's Doom Train post - https://thelimestack.substack.com/p/my-pdoom-is-276-heres-why
* Liam’s Twitter - @liamrobins [https://x.com/liamrobins]

Key References:
* Leopold Aschenbrenner - "Situational Awareness" [https://situational-awareness.ai]
* Bryan Caplan - "The Case Against Education" [https://press.princeton.edu/books/hardcover/9780691174075/the-case-against-education]
* Scott Alexander - Astral Codex Ten [https://astralcodexten.substack.com]
* Jeffrey Ding - ChinAI Newsletter [https://chinai.substack.com]
* New York Magazine - "Everyone Is Cheating Their Way Through College" [https://nymag.com/intelligencer/article/ai-cheating-college-students.html]

Events & Communities:
* Manifest Conference [https://manifest.is]
* LessWrong [https://lesswrong.com]
* Eliezer Yudkowsky - "Harry Potter and the Methods of Rationality" [https://hpmor.com]

Previous Episodes:
* Doom Debates Live at Manifest 2025 - https://www.youtube.com/watch?v=detjIyxWG8M

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com [https://doomdebates.com/] and to youtube.com/@DoomDebates [https://youtube.com/@DoomDebates]

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com [https://lironshapira.substack.com?utm_medium=podcast&utm_campaign=CTA_1]

Carl Feynman got his Master’s in Computer Science and B.S. in Philosophy from MIT, followed by a four-decade career in AI engineering. He’s known Eliezer Yudkowsky since the ‘90s, and witnessed Eliezer’s AI doom argument taking shape before most of us were paying any attention! He agreed to come on the show because he supports Doom Debates’ mission of raising awareness of imminent existential risk from superintelligent AI.

00:00 - Teaser
00:34 - Carl Feynman’s Background
02:40 - Early Concerns About AI Doom
03:46 - Eliezer Yudkowsky and the Early AGI Community
05:10 - Accelerationist vs. Doomer Perspectives
06:03 - Mainline Doom Scenarios: Gradual Disempowerment vs. Foom
07:47 - Timeline to Doom: Point of No Return
08:45 - What’s Your P(Doom)™
09:44 - Public Perception and Political Awareness of AI Risk
11:09 - AI Morality, Alignment, and Chatbots Today
13:05 - The Alignment Problem and Competing Values
15:03 - Can AI Truly Understand and Value Morality?
16:43 - Multiple Competing AIs and Resource Competition
18:42 - Alignment: Wanting vs. Being Able to Help Humanity
19:24 - Scenarios of Doom and Odds of Success
19:53 - Mainline Good Scenario: Non-Doom Outcomes
20:27 - Heaven, Utopia, and Post-Human Vision
22:19 - Gradual Disempowerment Paper and Economic Displacement
23:31 - How Humans Get Edged Out by AIs
25:07 - Can We Gaslight Superintelligent AIs?
26:38 - AI Persuasion & Social Influence as Doom Pathways
27:44 - Riding the Doom Train: Headroom Above Human Intelligence
29:46 - Orthogonality Thesis and AI Motivation
32:48 - Alignment Difficulties and Deception in AIs
34:46 - Elon Musk, Maximal Curiosity & Mike Israetel’s Arguments
36:26 - Beauty and Value in a Post-Human Universe
38:12 - Multiple AIs Competing
39:31 - Space Colonization, Dyson Spheres & Hanson’s “Alien Descendants”
41:13 - What Counts as Doom vs. Not Doom?
43:29 - Post-Human Civilizations and Value Function
44:49 - Expertise, Rationality, and Doomer Credibility
46:09 - Communicating Doom: Missing Mood & Public Receptiveness
47:41 - Personal Preparation vs. Belief in Imminent Doom
48:56 - Why Can't We Just Hit the Off Switch?
50:26 - The Treacherous Turn and Redundancy in AI
51:56 - Doom by Persuasion or Entertainment
53:43 - Differences with Eliezer Yudkowsky: Singleton vs. Multipolar Doom
55:22 - Why Carl Chose Doom Debates
56:18 - Liron’s Outro

Show Notes

Carl’s Twitter — https://x.com/carl_feynman
Carl’s LessWrong — https://www.lesswrong.com/users/carl-feynman
Gradual Disempowerment — https://gradual-disempowerment.ai
The Intelligence Curse — https://intelligence-curse.ai
AI 2027 — https://ai-2027.com
Alcor cryonics — https://www.alcor.org
The LessOnline Conference — https://less.online

Watch the Lethal Intelligence Guide [https://www.youtube.com/watch?v=9CUFbqh16Fg], the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com [https://doomdebates.com/] and to youtube.com/@DoomDebates [https://youtube.com/@DoomDebates]

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com [https://lironshapira.substack.com?utm_medium=podcast&utm_campaign=CTA_1]

Richard Hanania is the President of the Center for the Study of Partisanship and Ideology. His work has been praised by Vice President JD Vance, Tyler Cowen, and Bryan Caplan, among others. In his influential newsletter [https://richardhanania.com], he’s written about why he finds AI doom arguments unconvincing [https://www.richardhanania.com/p/ai-doomerism-as-science-fiction]. He was gracious enough to debate me on this topic. Let’s see if one of us can change the other’s P(Doom)!

0:00 Intro
1:53 Richard's politics
2:24 The state of political discourse
3:30 What's your P(Doom)?™
6:38 How to stop the doom train
8:27 Statement on AI risk
9:31 Intellectual influences
11:15 Base rates for AI doom
15:43 Intelligence as optimization power
31:26 AI capabilities progress
53:46 Why isn't AI yet a top blogger?
58:02 Diving into Richard's Doom Train
58:47 Diminishing Returns on Intelligence
1:06:36 Alignment will be relatively trivial
1:15:14 Power-seeking must be programmed
1:21:27 AI will simply be benevolent
1:27:17 Superintelligent AI will negotiate with humans
1:33:00 Super AIs will check and balance each other out
1:36:54 We're mistaken about the nature of intelligence
1:41:46 Summarizing Richard's AI doom position
1:43:22 Jobpocalypse and gradual disempowerment
1:49:46 Ad hominem attacks in AI discourse

Show Notes

Subscribe to Richard Hanania's Newsletter: https://richardhanania.com
Richard's blog post laying out where he gets off the AI "doom train": https://www.richardhanania.com/p/ai-doomerism-as-science-fiction
Richard's interview with Steven Pinker: https://www.richardhanania.com/p/pinker-on-alignment-and-intelligence
Richard's interview with Robin Hanson: https://www.richardhanania.com/p/robin-hanson-says-youre-going-to
My Doom Debate with Robin Hanson: https://www.youtube.com/watch?v=dTQb6N3_zu8
My reaction to Steven Pinker's AI doom position, and why his arguments are shallow: https://www.youtube.com/watch?v=-tIq6kbrF-4
"The Betterness Explosion" by Robin Hanson: https://www.overcomingbias.com/p/the-betterness-explosionhtml

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/watch?v=9CUFbqh16Fg

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com [https://lironshapira.substack.com?utm_medium=podcast&utm_campaign=CTA_1]

Emmett Shear is the cofounder and ex-CEO of Twitch, ex-interim-CEO of OpenAI, and a former Y Combinator partner. He recently announced Softmax, a new company researching a novel solution to AI alignment. In his recent interview [https://www.youtube.com/watch?v=_3m2cpZqvdw], Emmett explained “organic alignment”, drawing comparisons to biological systems and advocating for AI to be raised in a community-like setting with humans. Let’s go through his talk, point by point, to see if Emmett’s alignment plan makes sense…

00:00 Episode Highlights
00:36 Introducing Softmax and its Founders
01:33 Research Collaborators and Ken Stanley's Influence
02:16 Softmax's Mission and Organic Alignment
03:13 Critique of Organic Alignment
05:29 Emmett’s Perspective on AI Alignment
14:36 Human Morality and Cognitive Submodules
38:25 Top-Down vs. Emergent Morality in AI
44:56 Raising AI to Grow Up with Humanity
48:43 Softmax's Incremental Approach to AI Alignment
52:22 Convergence vs. Divergence in AI Learning
55:49 Multi-Agent Reinforcement Learning
01:12:28 The Importance of Storytelling in AI Development
01:16:34 Living With AI As It Grows
01:20:19 Species Giving Birth to Species
01:23:23 The Plan for AI's Adolescence
01:26:53 Emmett's Views on Superintelligence
01:31:00 The Future of AI Alignment
01:35:10 Final Thoughts and Criticisms
01:44:07 Conclusion and Call to Action

Show Notes

Emmett Shear’s interview on BuzzRobot with Sophia Aryan (source material) — https://www.youtube.com/watch?v=_3m2cpZqvdw
BuzzRobot’s YouTube channel — https://www.youtube.com/@BuzzRobot
BuzzRobot’s Twitter — https://x.com/buZZrobot/
Softmax’s website — https://softmax.com
My Doom Debate with Ken Stanley (Softmax advisor) — https://www.youtube.com/watch?v=GdthPZwU1Co
My Doom Debate with Gil Mark on whether aligning AIs in groups is a more solvable problem — https://www.youtube.com/watch?v=72LnKW_jae8

Watch the Lethal Intelligence Guide [https://www.youtube.com/watch?v=9CUFbqh16Fg], the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com [https://doomdebates.com/] and to youtube.com/@DoomDebates [https://youtube.com/@DoomDebates]

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com [https://lironshapira.substack.com?utm_medium=podcast&utm_campaign=CTA_1]