Doom Debates!


NEWS: Trump & Xi Want AI Guardrails, ChatGPT vs Claude, and Liron Caught Rationalizing!? DD Live (5/15)

2 h 16 min · May 16, 2026

Description

We recap the Dr. Mike Israetel debate and dig into the nuances of rationality vs. rationalization. Then onto the week's biggest news: Anthropic dethroning OpenAI, Trump and Xi Jinping exploring AI guardrails, and Liron's own company AI going rogue.

Timestamps
00:00:00 — Cold Open
00:03:09 — Dr. Mike Israetel Round 2 Recap
00:14:34 — A Viewer Accuses Liron of Rationalizing
00:34:20 — Is Sam Altman Any Worse than a Typical Tech CEO?
00:45:15 — What Would Lower P(Doom) Below 10%?
00:53:25 — Can We Disconnect an ASI from the Internet?
01:05:00 — Should Doomers Boycott AI?
01:16:27 — Liron Reacts to ThePrimeagen Reacting to the Yud Debate
01:26:13 — Ben Goertzel Episode Preview
01:38:10 — Scrolling Twitter: MIRI, Ryan Greenblatt
01:47:32 — Liron's AI Goes Rogue at Work
01:49:15 — "Positive Alignment" Paper Gets Destroyed
01:58:30 — Anthropic Dethrones OpenAI
02:12:06 — Trump & Xi Talk AI Guardrails

Links
Zvi Mowshowitz's AI roundup blog — https://thezvi.wordpress.com/
"The Bottom Line" by Eliezer Yudkowsky (LessWrong) — https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line
Rationality: From AI to Zombies by Eliezer Yudkowsky — https://intelligence.org/rationality-ai-zombies/
LessOnline 2026 conference (June 5–7) — https://less.online/

Doom Debates' mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: donate at https://doomdebates.com/donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe


All episodes

161 episodes


NEWS: Trump & Xi Want AI Guardrails, ChatGPT vs Claude, and Liron Caught Rationalizing!? DD Live (5/15)


May 16, 2026 · 2 h 16 min

Who Was Liron Shapira BEFORE Doom Debates? — Interview on Theo Jaffee's Podcast from December 2023

In December 2023, I joined the Theo Jaffee Podcast to talk about a wide range of topics, including non-AI x-risks, where I disagree with Eliezer Yudkowsky, cryptocurrency, and whether P(Doom) is rigorous. This episode was recorded six months before I started Doom Debates. Let's look back with the power of hindsight and see if my answers still hold up.

Links
#10: Liron Shapira — AI doom, FOOM, rationalism, and crypto (YouTube) — https://youtu.be/YfEcAtHExFM
Watch Theo Jaffee now on MTS — https://www.mts.now/
MTS Live — https://x.com/MTSlive

Timestamps
00:00:00 — Theo's Introduction
00:01:03 — Is Liron Worried About Non-AI Existential Risks?
00:03:36 — Suffering Risks
00:05:22 — Is P(Doom) Rigorous?
00:09:42 — Is Eliezer Overestimating P(Doom)?
00:12:18 — Where Does Liron Disagree with Eliezer?
00:15:43 — What Would Change Liron's Mind
00:17:14 — Elon Musk's AI Timelines
00:18:34 — Is xAI Making Things Worse?
00:20:24 — The Case for an AI Manhattan Project
00:22:32 — Are Superforecasters Wrong About AI Risk?
00:26:02 — The Race Against Time for Alignment
00:28:01 — Headroom Above Human Intelligence
00:33:31 — Vitalik's d/acc Framework
00:35:23 — Edging Toward Superintelligence
00:38:21 — From Chatbot to World-Ender
00:41:55 — GPT Paradigm vs. AlphaZero
00:43:18 — Critiquing AI Optimism
00:48:29 — Deceptive Alignment and Gradient Descent
00:53:31 — Does Nice Training Data Make Nice AI?
00:57:57 — How Do You Live with 50% P(Doom)?
01:00:08 — Why Have Kids If the World Might End?
01:02:16 — Israel vs Hamas
01:06:15 — How LessWrong Changed Liron's Life
01:08:42 — Rationalism and Effective Altruism
01:14:49 — Why Blockchain Has No Use Case Beyond Cryptocurrency
01:22:13 — Charlie Munger and Richard Feynman
01:24:38 — Closing

Doom Debates' mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: donate at https://doomdebates.com/donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe

May 14, 2026 · 1 h 25 min

Dr. Mike Israetel Returns to Debate: Will AI Kill Everyone, Or Make Everything Awesome?

Dr. Mike Israetel is back in the arena to FIGHT ME (verbally) about the likelihood of AI killing everyone vs. the likelihood of AI making the universe totally awesome! Back by popular demand after last year's widely viewed debate (https://www.youtube.com/watch?v=RaDWSPMdM4o), the renowned exercise scientist, fitness entrepreneur, and low-P(Doom) AI futurist reveals where he gets off the Doom Train™, why he thinks AI is conscious, and how he pictures the coming AI utopia.

Links
Dr. Mike's AI/futurism YouTube channel — https://www.youtube.com/@mikeisraetelmakingprogress
Dr. Mike on X — https://x.com/misraetel
Round 1 of our debate from last year — https://www.youtube.com/watch?v=RaDWSPMdM4o
"Steven Byrnes Returns — 90% P(Doom) on the Trajectory to ASI," the recent episode Liron references — https://www.youtube.com/watch?v=oOb9K1KIAyk

Timestamps
00:00:00 — Cold Open
00:00:48 — Returning Champion Dr. Mike Israetel!
00:02:13 — What's Been Your Biggest AI Productivity Gain?
00:03:46 — Can AI Be a Personal Trainer? Dr. Mike Judges Liron's Technique
00:09:41 — Get Good at Taking Advice From AI
00:14:23 — Dr. Mike Israetel, What's Your Latest P(Doom)?™
00:20:12 — Liron's Mainline Scenario: Loss of Control
00:21:39 — Mike's Problems With a Paperclip Maximizer Scenario
00:26:00 — What Will the Goals of a Superintelligence Be?
00:28:49 — AI Labs Have Bitten Off More Than They Can Chew
00:31:09 — Current AIs Are Moral
00:34:40 — AIs With a Ruthless Game-Playing Flavor
00:38:48 — Introducing the Doom Train
00:45:07 — "Can It" vs. "Will It" — Two-Part Argument on AI X-Risk
00:49:40 — Mike's Tweet on Superintelligence by 2027
01:01:34 — AI as a Goal Engine
01:20:51 — Offense vs. Defense in the Coming Cyber War
01:39:14 — Is Morality Just Game Theory? The "Different Flavor" of Future AI
01:46:16 — Ricardo's Law: Does AI Have Reason to Keep Us Around?
01:52:14 — Why Mike Gets Off the Doom Train
01:54:58 — Is AI Conscious?
02:01:39 — Mike's Vision for the Coming AI Utopia
02:06:52 — Wrap-Up

Doom Debates' mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: donate at https://doomdebates.com/donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe

May 12, 2026 · 2 h 8 min

Eliezer Yudkowsky Post-Debate Reaction, Elon's New Frenemy & Liron's Bet on Spencer Pratt!? - Doom Debates Live (5/8/26)

Liron and Producer Ori explore the community's feedback on Eliezer Yudkowsky's $10,000 debate against an anonymous AI director. Plus, we unpack Eliezer's new post on the "irretrievability" of ASI development, Anthropic feasting on xAI's compute, and Liron's 4x Kalshi bet on... Spencer Pratt.

Timestamps
00:00:00 — Welcome
00:02:30 — The $10,000 Debate Post-Mortem
00:06:01 — Paul Tudor Jones: "Zero Risk Management on AI"
00:09:49 — Liron's Jerry Springer Moment
00:14:19 — Why Yud Wore Steampunk
00:20:19 — YouTube Reacts: "Five Minutes In, Already Unhinged"
00:27:19 — 47F's Anti-Disparagement Legal Theory
00:32:18 — Yud Wants Round 2
00:35:37 — Lumpenspace, Ben Goertzel & Early AI Safety Memories
00:47:06 — Eliezer's "Irretrievability" Post
00:56:35 — The Maginot Line and Murphy's Curse
01:08:13 — Blockchain vs AI: Earth's Compute Swing
01:15:15 — Anthropic Eats Elon's Compute
01:25:29 — Is Elon Tweeting as His Own Mom?
01:30:03 — Waymo Dodges a Fallen Scooter Rider
01:33:00 — Why Liron Bet on Spencer Pratt
01:37:13 — Live Twitter Scroll!
01:38:11 — Mira Murati: "Directionally Very Bad"
01:50:07 — AI Copies Itself Across Servers
01:56:44 — Steven Byrnes: LLMs Aren't the Final Paradigm
02:07:55 — Wrap-Up

Links
LessOnline 2026 (June 5–7, Berkeley, CA) — https://less.online/
Eliezer Yudkowsky, "Irretrievability; or, Murphy's Curse of Oneshotness upon ASI" (LessWrong, May 4, 2026) — https://www.lesswrong.com/posts/fbrz9xhKpEeTKw5zL/irretrievability-or-murphy-s-curse-of-oneshotness-upon-asi
Anthropic, SpaceX announce Colossus 1 compute deal (CNBC) — https://www.cnbc.com/2026/05/06/anthropic-spacex-data-center-capacity.html
xAI announcement: Compute partnership with Anthropic — https://x.ai/news/anthropic-compute-partnership
Kalshi: Spencer Pratt LA Mayor market — https://kalshi.com/markets/kxmayorla/la/kxmayorla-26
Morris worm (Wikipedia) — https://en.wikipedia.org/wiki/Morris_worm

Doom Debates' mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: donate at https://doomdebates.com/donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe

May 9, 2026 · 2 h 11 min

Debate with @lumpenspace (AI Accelerationist) — Is it GOOD for AI to replace us?

Claude, also known as @lumpenspace on X, is a prominent AI accelerationist. He gives humanity a 30% chance of being superseded by superintelligence, and he's fine with it! We unpack his lattice of beliefs and pinpoint the cruxes of our disagreement on the orthogonality thesis and the capabilities of superintelligence. Lumpenspace's appearance comes after a Molotov cocktail was thrown at Sam Altman's house, an act that he and I completely condemn.

Timestamps
00:00:00 — Cold Open
00:00:57 — Introducing Lumpenspace
00:03:46 — Is Lumpenspace Team Beff Jezos?
00:04:26 — What's Your P(Doom)?™
00:06:28 — Worthy Successors & the Transhumanist Door
00:08:29 — The Orthogonality Thesis: Our Core Disagreement
00:15:14 — The Universality Threshold & David Deutsch
00:19:22 — Paperclips Won't Happen Because "It's Boring"
00:27:23 — Natural Selection vs Human Engineering
00:36:26 — Nanobots: Can ASI Build Them Unseen?
00:46:49 — Identifying the Cruxes of Disagreement
00:55:01 — Closing Statements

Links
Follow Lumpenspace — https://x.com/lumpenspace

Doom Debates' mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: donate at https://doomdebates.com/donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe

May 7, 2026 · 59 min