
About Doom Debates!
It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com
Q&A — Claude Code's Impact, Anthropic vs The Pentagon, Roko('s Basilisk) Returns + Liron Updates His Views!
Multiple live callers join this month's Q&A as we cover the imminent demise of programming as a profession and the Anthropic/Pentagon showdown, and debate the finer details of wireheading. I clarify my recent AI doom belief updates, and then the man behind Roko's Basilisk crashes the stream to argue I haven't updated nearly far enough.

Timestamps

00:00:00 — Cold Open
00:00:56 — Welcome to the Livestream & Taking Questions from Chat
00:12:44 — Anonymous Caller Asks If Rationalists Should Prioritize Attention-Grabbing Protests
00:18:30 — The Good Case Scenario
00:26:00 — Hugh Chungus Joins the Stream
00:30:54 — Producer Ori, Liron's Recent Alignment Updates
00:43:47 — We're In an Era of Centaurs
00:47:40 — Noah Smith's Updates on AGI and Alignment
00:48:44 — Co Co Chats Cybersecurity
00:57:32 — The Attacker's Advantage in Offense/Defense Balance
01:02:55 — Anthropic vs The Pentagon
01:06:20 — "We're Getting Frog Boiled"
01:11:06 — Stoner AI & Debating the Finer Points of Wireheading
01:25:00 — A Caller Backs the Penrose Argument
01:34:01 — Greyson Dials In
01:40:21 — Surprise Guest Joins & Says Alignment Isn't a Problem
02:05:15 — More Q&A with Chat
02:14:26 — Closing Thoughts

Links

* Liron on X — https://x.com/liron
* AI 2027 — https://ai-2027.com/
* “Good Luck, Have Fun, Don’t Die” (film) — https://www.imdb.com/title/tt38301748/
* “The AI Doc” (film) — https://www.focusfeatures.com/the-ai-doc-or-how-i-became-an-apocaloptimist

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: donate at https://doomdebates.com/donate 🙏

Get full access to Doom Debates at https://lironshapira.substack.com/subscribe
AI Will Take Our Jobs But SPARE Our Lives — Top AI Professor Moshe Vardi (Rice University)
Professor Moshe Vardi thinks AI will kill us with kindness by automating away our jobs. I think it'll just kill us for real. Who’s right? Tune into this episode and decide where you get off the Doom Train™.

Some highlights of Professor Vardi’s impressive CV:

* University Professor at Rice — a rare distinction that lets him teach in any department.
* 65,000+ citations, an H-index above 100, and nearly 50 years spent mechanizing reasoning, which makes him one of the most decorated computer scientists alive.
* He ran the ACM’s flagship publication for a decade, and now bridges CS and policy at Rice’s Baker Institute.
* He has been sounding the alarm on AI-driven job automation for over ten years.
* He signed the 2023 AI extinction risk statement, and calls himself “part of the resistance.”

Links

* Moshe Vardi’s Wikipedia — https://en.wikipedia.org/wiki/Moshe_Vardi
* Moshe Vardi, Rice University — https://profiles.rice.edu/faculty/moshe-y-vardi
* Baker Institute for Public Policy — https://www.bakerinstitute.org/
* Nick Bostrom, Deep Utopia: Life and Meaning in a Solved World — https://www.amazon.com/Deep-Utopia-Life-Meaning-Solved/dp/1646871642
* Joseph Aoun, Robot-Proof: Higher Education in the Age of Artificial Intelligence — https://www.amazon.com/Robot-Proof-Higher-Education-Artificial-Intelligence/dp/0262535971

Timestamps

00:00:00 — Cold Open
00:00:54 — Introducing Professor Vardi
00:02:01 — Professor Vardi’s Academic Focus: CS, AI, & Public Policy
00:07:18 — What’s Your P(Doom)™?
00:12:28 — We’re Not Doomed, “We’re Screwed”
00:16:44 — AI’s Impact on Meaning & Purpose
00:27:47 — Let’s Ride the Doom Train™
00:35:43 — The Future of Jobs
00:39:24 — A Country of Geniuses in a Data Center
00:41:04 — Corporations as Superintelligence
00:45:49 — Agency, Consciousness, and the Limits of AI
00:50:07 — The Mad Scientist Scenario
00:54:02 — Could a Data Center of Geniuses Destroy Humanity?
01:03:13 — The WALL-E Meme and Fun Theory
01:04:01 — Why Professor Vardi Signed the AI Extinction Risk Statement
01:06:02 — Wrap-Up + One-Way Ticket to Doom
Destiny's Fans Challenged Me to an AI Doom Debate
Fresh off my debate with Destiny, his Discord community invited me into their voice chat to talk about AI doom. Just like the man himself, his fans are sharp. Let's find out where they get off The Doom Train™.

My recent debate with Destiny — https://www.youtube.com/watch?v=rNgffLZTeWw

Timestamps

00:00:00 — Cold Open
00:00:54 — Liron Joins Destiny’s Discord
00:02:21 — The AI Doom Premise
00:03:27 — Defining Intelligence & Is an LLM Really AI?
00:07:12 — Will AI Become Uncontrollable?
00:12:44 — The AI Alignment Problem
00:24:11 — The Difficulty of Pausing AI
00:26:01 — AI vs The Human Brain
00:32:41 — Future AI Capabilities, Steering Toward Goals, & Philosophical Disagreements
Doomsday Clock Physicist Warns AI Is Major Threat to Humanity!
Renowned scientists just set The Doomsday Clock closer than ever to midnight. I agree our collective doom seems imminent, but why do the clocksetters point to climate change—not rogue AI—as an existential threat? UChicago Professor Daniel Holz, a top physicist who sets the Doomsday Clock, joins me to debate society’s top existential risks.

Timestamps

00:00:00 — Cold Open
00:00:51 — Introducing Professor Holz
00:02:08 — The Doomsday Clock is at 85 Seconds to Midnight!
00:04:37 — What's Your P(Doom)?™
00:08:09 — Making A Probability of Doomsday, or P(Doom), Equation
00:12:07 — How We All Die: Nuclear vs Climate vs AI
00:21:08 — Nuclear Close Calls from The Cold War
00:28:38 — History of The Doomsday Clock
00:30:18 — The Threat of Biological Risks Like Mirror Life
00:33:40 — Professor Holz’s Position on AI Misalignment Risk
00:44:49 — Does The Doomsday Clock Give Short Shrift to AI Misalignment Risk?
00:59:09 — Why Professor Holz Founded UChicago’s Existential Risk Laboratory (XLab)
01:06:22 — The State of Academic Research on AI Safety & Existential Risks
01:12:32 — The Case for Pausing AI Development
01:17:11 — Debate: Is Climate Change an Existential Threat?
01:28:48 — Call to Action: How to Reduce Our Collective Threat

Links

* Professor Daniel Holz’s Wikipedia — https://en.wikipedia.org/wiki/Daniel_Holz
* XLab (Existential Risk Laboratory) — https://xrisk.uchicago.edu/
* 2026 Doomsday Clock Statement — https://thebulletin.org/doomsday-clock/2026-statement/
* The Bulletin of the Atomic Scientists (nonprofit that produces the clock) — https://thebulletin.org/
* UChicago Magazine feature on Prof. Holz’s class, “Are We Doomed?” — https://mag.uchicago.edu/science-medicine/are-we-doomed
* The Doomsday Machine by Daniel Ellsberg — https://www.amazon.com/Doomsday-Machine-Confessions-Nuclear-Planner/dp/1608196704
* If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky & Nate Soares — https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuman/dp/0316595640
* Learn more about pausing frontier AI development from PauseAI — https://pauseai.info
Why I Started Doom Debates & How to Succeed in AI Risk Communications
Liron was invited to give a presentation to some of the most promising creators in AI safety communications. Get a behind-the-scenes look at Doom Debates and how the channel has grown so quickly.

Learn more about the Frame Fellowship: https://framefellowship.com/

Timestamps

00:00:00 — Liron Tees Up the Presentation
00:02:28 — Liron’s Frame Fellowship Presentation
00:06:03 — Introducing Doom Debates
00:07:44 — Meeting the Frame Fellows
00:19:38 — Why I Started Doom Debates
00:30:20 — Handling Hate and Criticism
00:31:56 — The Talent Stack
00:39:34 — Finding Your Unique Niche
00:40:58 — Q&A
00:42:04 — On Funding the Show
00:48:05 — Audience Demographics & Gender Strategy
00:51:17 — How to Communicate AI Risk Effectively
00:56:22 — Social Media Strategy
00:58:10 — Closing Remarks