Cover image for the show Doom Debates

Doom Debates

Podcast by Liron Shapira

English

True crime & mystery

Start a 7-day free trial

$99 / month after the trial. Cancel anytime.

  • 20 hours of audiobooks per month
  • Podcasts only on Podimo
  • Free podcasts
Free trial

About Doom Debates

It's time to talk about the end of the world! lironshapira.substack.com

All episodes

122 episodes

PhD AI Researcher Says P(Doom) is TINY — Debate with Michael Timothy Bennett

Dr. Michael Timothy Bennett is an award-winning young researcher who has developed a new formal framework for understanding intelligence. He has a TINY P(Doom) because he claims superintelligence will be resource-constrained and tend toward cooperation. In this lively debate, I stress-test Michael's framework and debate whether its theorized constraints will actually hold back superintelligent AI.

Timestamps
  • 00:00 Trailer
  • 01:41 Introducing Michael Timothy Bennett
  • 04:33 What's Your P(Doom)?™
  • 10:51 Michael's Thesis on Intelligence: "Abstraction Layers", "Adaptation", "Resource Efficiency"
  • 25:36 Debate: Is Einstein Smarter Than a Rock?
  • 39:07 "Embodiment": Michael's Unconventional Computation Theory vs Standard Computation
  • 48:28 "W-Maxing": Michael's Intelligence Framework vs. a Goal-Oriented Framework
  • 59:47 Debating AI Doom
  • 1:09:49 Debating Instrumental Convergence
  • 1:24:00 Where Do You Get Off The Doom Train™ — Identifying The Cruxes of Disagreement
  • 1:44:13 Debating AGI Timelines
  • 1:49:10 Final Recap

Links
  • Michael's website — https://michaeltimothybennett.com
  • Michael's Twitter — https://x.com/MiTiBennett
  • Michael's latest paper, "How To Build Conscious Machines" — https://osf.io/preprints/thesiscommons/wehmg_v1?view_only

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate (https://doomdebates.com/donate) 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe

Yesterday - 1 h 52 min

Nobel Prizewinner SWAYED by My AI Doom Argument — Prof. Michael Levitt, Stanford University

My guest today achieved something EXTREMELY rare and impressive: coming onto my show with an AI optimist position, then admitting he hadn't thought of my counterarguments before, and updating his beliefs in real time! Also, he won the 2013 Nobel Prize in Chemistry for his work in computational biology. I'm thrilled that Prof. Levitt understands the value of raising awareness about imminent extinction risk from superintelligent AI, and the value of debate as a tool to uncover the truth — the dual missions of Doom Debates!

Timestamps
  • 0:00 — Trailer
  • 1:18 — Introducing Michael Levitt
  • 4:20 — The Evolution of Computing and AI
  • 12:42 — Measuring Intelligence: Humans vs. AI
  • 23:11 — The AI Doom Argument: Steering the Future
  • 25:01 — Optimism, Pessimism, and Other Existential Risks
  • 34:15 — What's Your P(Doom)™
  • 36:16 — Warning Shots and Global Regulation
  • 55:28 — Comparing AI Risk to Pandemics and Nuclear War
  • 1:01:49 — Wrap-Up
  • 1:06:11 — Outro + New AI safety resource

Show Notes
  • Michael Levitt's Twitter — https://x.com/MLevitt_NP2013

Get full access to Doom Debates at lironshapira.substack.com/subscribe

05 Dec 2025 - 1 h 11 min

Facing AI Doom, Lessons from Daniel Ellsberg (Pentagon Papers) — Michael Ellsberg

Michael Ellsberg, son of the legendary Pentagon Papers leaker Daniel Ellsberg, joins me to discuss the chilling parallels between his father's nuclear war warnings and today's race to AGI. We discuss Michael's 99% probability of doom, his personal experience being "obsoleted" by AI, and the urgent moral duty for insiders to blow the whistle on AI's outsize risks.

Timestamps
  • 0:00 Intro
  • 1:29 Introducing Michael Ellsberg, His Father Daniel Ellsberg, and The Pentagon Papers
  • 5:49 Vietnam War Parallels to AI: Lies and Escalation
  • 25:23 The Doomsday Machine & Nuclear Insanity
  • 48:49 Mutually Assured Destruction vs. Superintelligence Risk
  • 55:10 Evolutionary Dynamics: Replicators and the End of the "Dream Time"
  • 1:10:17 What's Your P(doom)?™
  • 1:14:49 Debating P(Doom) Disagreements
  • 1:26:18 AI Unemployment Doom
  • 1:39:14 Doom Psychology: How to Cope with Existential Risk
  • 1:50:56 The "Joyless Singularity": Aligned AI Might Still Freeze Humanity
  • 2:09:00 A Call to Action for AI Insiders

Show Notes
  • Michael Ellsberg's website — https://www.ellsberg.com/
  • Michael's Twitter — https://x.com/MichaelEllsberg
  • Daniel Ellsberg's website — https://www.ellsberg.net/
  • The upcoming book, "Truth and Consequence" — https://geni.us/truthandconsequence
  • Michael's AI-related Substack, "Mammalian Wetware" — https://mammalianwetware.substack.com/
  • Daniel's debate with Bill Kristol in the run-up to the Iraq war — https://www.youtube.com/watch?v=HyvsDR3xnAg

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate (https://doomdebates.com/donate) 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe

29 Nov 2025 - 2 h 15 min

Max Tegmark vs. Dean Ball: Should We BAN Superintelligence?

Today's Debate: Should we ban the development of artificial superintelligence until scientists agree it is safe and controllable?

Arguing FOR banning superintelligence until there's a scientific consensus that it'll be done safely and controllably, and with strong public buy-in: Max Tegmark. He is an MIT professor, bestselling author, and co-founder of the Future of Life Institute whose research has focused on artificial intelligence for the past 8 years.

Arguing AGAINST banning superintelligent AI development: Dean Ball. He is a Senior Fellow at the Foundation for American Innovation who served as a Senior Policy Advisor at the White House Office of Science and Technology Policy under President Trump, where he helped craft America's AI Action Plan.

Two of the leading voices on AI policy engaged in a high-quality, high-stakes debate for the benefit of the public! This is why I got into the podcast game — because I believe debate is an essential tool for humanity to reckon with the creation of superhuman thinking machines.

Timestamps
  • 0:00 - Episode Preview
  • 1:41 - Introducing The Debate
  • 3:38 - Max Tegmark's Opening Statement
  • 5:20 - Dean Ball's Opening Statement
  • 9:01 - Designing an "FDA for AI" and Safety Standards
  • 21:10 - Liability, Tail Risk, and Biosecurity
  • 29:11 - Incremental Regulation, Timelines, and AI Capabilities
  • 54:01 - Max's Nightmare Scenario
  • 57:36 - The Risks of Recursive Self-Improvement
  • 1:08:24 - What's Your P(Doom)?™
  • 1:13:42 - National Security, China, and the AI Race
  • 1:32:35 - Closing Statements
  • 1:44:00 - Post-Debate Recap and Call to Action

Show Notes
  • Statement on Superintelligence released by Max's organization, the Future of Life Institute — https://superintelligence-statement.org/
  • Dean's reaction to the Statement on Superintelligence — https://x.com/deanwball/status/1980975802570174831
  • America's AI Action Plan — https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/
  • "A Definition of AGI" by Dan Hendrycks, Max Tegmark, et al. — https://www.agidefinition.ai/
  • Max Tegmark's Twitter — https://x.com/tegmark
  • Dean Ball's Twitter — https://x.com/deanwball

Get full access to Doom Debates at lironshapira.substack.com/subscribe

21 Nov 2025 - 1 h 50 min

The AI Corrigibility Debate: MIRI Researchers Max Harms vs. Jeremy Gillen

Max Harms and Jeremy Gillen are current and former MIRI researchers who both see superintelligent AI as an imminent extinction threat. But they disagree on whether it's worthwhile to aim for obedient, "corrigible" AI as a singular target for current alignment efforts. Max thinks corrigibility is the most plausible path to building ASI without losing control and dying, while Jeremy is skeptical that this research path will yield better superintelligent AI behavior on a sufficiently early try. By listening to this debate, you'll find out whether AI corrigibility is a relatively promising effort that might prevent imminent human extinction, or an over-optimistic pipe dream.

Timestamps
  • 0:00 — Episode Preview
  • 1:18 — Debate Kickoff
  • 3:22 — What is Corrigibility?
  • 9:57 — Why Corrigibility Matters
  • 11:41 — What's Your P(Doom)™
  • 16:10 — Max's Case for Corrigibility
  • 19:28 — Jeremy's Case Against Corrigibility
  • 21:57 — Max's Mainline AI Scenario
  • 28:51 — 4 Strategies: Alignment, Control, Corrigibility, Don't Build It
  • 37:00 — Corrigibility vs HHH ("Helpful, Harmless, Honest")
  • 44:43 — Asimov's 3 Laws of Robotics
  • 52:05 — Is Corrigibility a Coherent Concept?
  • 1:03:32 — Corrigibility vs Shutdown-ability
  • 1:09:59 — CAST: Corrigibility as Singular Target, Near Misses, Iterations
  • 1:20:18 — Debating if Max is Over-Optimistic
  • 1:34:06 — Debating if Corrigibility is the Best Target
  • 1:38:57 — Would Max Work for Anthropic?
  • 1:41:26 — Max's Modest Hopes
  • 1:58:00 — Max's New Book: Red Heart
  • 2:16:08 — Outro

Show Notes
  • Max's book Red Heart — https://www.amazon.com/Red-Heart-Max-Harms/dp/108822119X
  • Learn more about CAST: Corrigibility as Singular Target — https://www.lesswrong.com/s/KfCjeconYRdFbMxsy/p/NQK8KHSrZRF5erTba
  • Max's Twitter — https://x.com/raelifin
  • Jeremy's Twitter — https://x.com/jeremygillen1

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate (https://doomdebates.com/donate) 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe

14 Nov 2025 - 2 h 17 min
Very good podcasts, entertaining, with educational and fun stories depending on what each person is looking for. I usually use it at work, since I'm there for many hours and need to cancel out the surrounding noise. Headphones on and enjoy!
Fantastic app. I only use it for the podcasts. For a modest price you get a wide variety, and more all the time.
I love the app; it gathers the best podcasts, and it was about time these content creators got paid.
