
Doom Debates

Podcast by Liron Shapira

English

True crime

Limited-time offer

2 months for € 1

Then € 9.99 / month. Cancel anytime.

  • 20 hours of audiobooks per month
  • Podcasts you can only hear on Podimo
  • Free podcasts
Start here

About Doom Debates

It's time to talk about the end of the world! lironshapira.substack.com

All episodes

127 episodes

Liron Enters Bannon's War Room to Explain Why AI Could End Humanity

I joined Steve Bannon’s War Room Battleground to talk about AI doom. Hosted by Joe Allen, we cover AGI timelines, raising kids with a high p(doom), and why improving our survival odds requires a global wake-up call.

00:00:00 — Episode Preview
00:01:17 — Joe Allen opens the show and introduces Liron Shapira
00:04:06 — Liron: What’s Your P(Doom)?
00:05:37 — How Would an AI Take Over?
00:07:20 — The Timeline to AGI
00:08:17 — Benchmarks & AI Passing the Turing Test
00:14:43 — Liron Is Typically a Techno-Optimist
00:18:00 — Raising a Family with a High P(Doom)
00:23:48 — Mobilizing a Grassroots AI Survival Campaign
00:26:45 — Final Message: A Wake-Up Call
00:29:23 — Joe Allen’s Closing Message to the War Room Posse

Links:
Joe’s Substack — https://substack.com/@joebot
Joe’s Twitter — https://x.com/JOEBOTxyz
Bannon’s War Room Twitter — https://x.com/Bannons_WarRoom
WarRoom Battleground EP 922: AI Doom Debates with Liron Shapira on Rumble — https://rumble.com/v742oo4-warroom-battleground-ep-922-ai-doom-debates-with-liron-shapira.html

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate [https://doomdebates.com/donate] 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe

13 Jan 2026 - 30 min

Noah Smith vs. Liron Shapira — Will AI spare our lives AND our jobs?

Economist Noah Smith is the author of Noahpinion [https://www.noahpinion.blog/], one of the most popular Substacks in the world. Far from worrying about human extinction from superintelligent AI, Noah is optimistic AI will create a world where humans still have plentiful, high-paying jobs! In this debate, I stress-test his rosy outlook. Let’s see if Noah can instill us with more confidence about humanity’s rapidly approaching AI future.

Timestamps
00:00:00 - Episode Preview
00:01:41 - Introducing Noah Smith
00:03:19 - What’s Your P(Doom)™
00:04:40 - Good vs. Bad Transhumanist Outcomes
00:15:17 - Catastrophe vs. Total Extinction
00:17:15 - Mechanisms of Doom
00:27:16 - The AI Persuasion Risk
00:36:20 - Instrumental Convergence vs. Peace
00:53:08 - The “One AI” Breakout Scenario
01:01:18 - The “Stoner AI” Theory
01:08:49 - Importance of Reflective Stability
01:14:50 - Orthogonality & The Waymo Argument
01:21:18 - Comparative Advantage & Jobs
01:27:43 - Wealth Distribution & Robot Lords
01:34:34 - Supply Curves & Resource Constraints
01:43:38 - Policy of Reserving Human Resources
01:48:28 - Closing: The Case for Optimism

Links
Noah’s Substack — https://noahpinion.blog
“Plentiful, high-paying jobs in the age of AI” — https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-the
“My thoughts on AI safety” — https://www.noahpinion.blog/p/my-thoughts-on-ai-safety
Noah’s Twitter — https://x.com/noahpinion

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate [https://doomdebates.com/donate] 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe

05 Jan 2026 - 1 h 55 min

I Debated Beff Jezos and His "e/acc" Army

In September of 2023, when OpenAI’s GPT-4 was still a fresh innovation and people were just beginning to wrap their heads around large language models, I was invited to debate Beff Jezos, Bayeslord, and other prominent “effective accelerationists” a.k.a. “e/acc” folks on an X Space. E/accs think building artificial superintelligence is unlikely to disempower humanity and doom the future, because that’d be an illegal exception to the rule that accelerating new technology is always the highest-expected-value choice for humanity. As you know, I disagree — I think doom is an extremely likely and imminent possibility.

This debate took place 9 months before I started Doom Debates, and was one of the experiences that made me realize debating AI doom was my calling. It’s also the only time Beff Jezos has ever not been too chicken to debate me.

Timestamps
00:00:00 — Liron’s New Intro
00:04:15 — Debate Starts Here: Litigating FOOM
00:06:18 — Defining the Recursive Feedback Loop
00:15:05 — The Two-Part Doomer Thesis
00:26:00 — When Does a Tool Become an Agent?
00:44:02 — The Argument for Convergent Architecture
00:46:20 — Mathematical Objections: Ergodicity and Eigenvalues
01:03:46 — Bayeslord Enters: Why Speed Doesn’t Matter
01:12:40 — Beff Jezos Enters: Physical Priors vs. Internet Data
01:13:49 — The 5% Probability of Doom by GPT-5
01:20:09 — Chaos Theory and Prediction Limits
01:27:56 — Algorithms vs. Hardware Constraints
01:35:20 — Galactic Resources vs. Human Extermination
01:54:13 — The Intelligence Bootstrapping Script Scenario
02:02:13 — The 10-Megabyte AI Virus Debate
02:11:54 — The Nuclear Analogy: Noise Canceling vs. Rubble
02:37:39 — Controlling Intelligence: The Roman Empire Analogy
02:44:53 — Real-World Latency and API Rate Limits
03:03:11 — The Difficulty of the Off Button
03:24:47 — Why Liron is “e/acc at Heart”

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate [https://doomdebates.com/donate] 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe

30 Dec 2025 - 3 h 52 min

Doom Debates LIVE Call-In Show! Listener Q&A about AGI, evolution vs. engineering, shoggoths & more

AGI timelines, offense/defense balance, evolution vs. engineering, how to lower P(Doom), Eliezer Yudkowsky, and much more!

Timestamps:
00:00 Trailer
03:10 Is My P(Doom) Lowering?
11:29 First Caller: AI Offense vs Defense Balance
16:50 Superintelligence Skepticism
25:05 Agency and AI Goals
29:06 Communicating AI Risk
36:35 Attack vs Defense Equilibrium
38:22 Can We Solve Outer Alignment?
54:47 What is Your P(Pocket Nukes)?
1:00:05 The “Shoggoth” Metaphor Is Outdated
1:06:23 Should I Reframe the P(Doom) Question?
1:12:22 How YOU Can Make a Difference
1:24:43 Can AGI Beat Biology?
1:39:22 Agency and Convergent Goals
1:59:56 Viewer Poll: What Content Should I Make?
2:26:15 AI Warning Shots
2:32:12 More Listener Questions: Debate Tactics, Getting a PhD, Specificity
2:53:53 Closing Thoughts

Links:
Support PauseAI — https://pauseai.info/
Support PauseAI US — https://www.pauseai-us.org/
Support LessWrong / Lightcone Infrastructure — LessWrong is fundraising! [https://www.lesswrong.com/posts/eKGdCNdKjvTBG9i6y]
Support MIRI — MIRI’s 2025 Fundraiser [https://intelligence.org/2025/12/01/miris-2025-fundraiser/]

About the show: Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate [https://doomdebates.com/donate] 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe

24 Dec 2025 - 2 h 54 min

DOOMER vs. BUILDER — AI Doom Debate with Devin Elliot, Software Engineer & Retired Pro Snowboarder

Devin Elliot is a former pro snowboarder turned software engineer who has logged thousands of hours building AI systems. His P(Doom) is a flat ⚫. He argues that worrying about an AI takeover is as irrational as fearing your car will sprout wings and fly away.

We spar over the hard limits of current models: Devin insists LLMs are hitting a wall, relying entirely on external software “wrappers” to feign intelligence. I push back, arguing that raw models are already demonstrating native reasoning and algorithmic capabilities. Devin also argues for decentralization, claiming that nuclear proliferation is safer than centralized control. We end on a massive timeline split: I see superintelligence in a decade, while he believes we’re a thousand years away from being able to “grow” computers that are truly intelligent.

Timestamps
00:00:00 Episode Preview
00:01:03 Intro: Snowboarder to Coder
00:03:30 "I Do Not Have a P(Doom)"
00:06:47 Nuclear Proliferation & Centralized Control
00:10:11 The "Spotify Quality" House Analogy
00:17:15 Ideal Geopolitics: Decentralized Power
00:25:22 Why AI Can't "Fly Away"
00:28:20 The Long Addition Test: Native or Tool?
00:38:26 Is Non-Determinism a Feature or a Bug?
00:52:01 The Impossibility of Mind Uploading
00:57:46 "Growing" Computers from Cells
01:02:52 Timelines: 10 Years vs. 1,000 Years
01:11:40 "Plastic Bag Ghosts" & Builder Intuition
01:13:17 Summary of the Debate
01:15:30 Closing Thoughts

Links
Devin’s Twitter — https://x.com/devinjelliot

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate [https://doomdebates.com/donate] 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe

17 Dec 2025 - 1 h 17 min
