
About Doom Debates
It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com
I'm Watching AI Take Everyone's Job | Liron on Robert Wright's NonZero Podcast
My new interview on Robert Wright's Nonzero Podcast, where we dive into the agentic AI explosion. Bob is an exceptionally sharp interviewer who connects the dots between job displacement and AI doom. I highly recommend becoming a premium subscriber to Bob’s Nonzero Newsletter so you can watch the Overtime segment in every interview he does: https://nonzero.org

Our discussion continues in Overtime for premium subscribers: https://www.nonzero.org/p/early-access-the-allure-and-danger

Links
Nonzero Podcast on YouTube: https://www.youtube.com/@nonzero
Robert Wright, The God Test (book, Amazon): https://www.amazon.com/God-Test-Artificial-Intelligence-Reckoning/dp/1668061651

Timestamps
00:00:00 — Introduction and Today's Topics
00:03:22 — Vibe Coding and the Agentic Revolution
00:08:57 — The Future of Employment
00:17:57 — Agents and What They Can Do
00:27:59 — The "Can It" and "Will It" Framework for AI Doom
00:30:27 — OpenClaw and Liron's Experience with AI Agents
00:36:45 — The Case for Slowing Down AI Development
00:43:28 — Anthropic, the Pentagon, and AI Politics
00:48:37 — AI Safety Leadership Concerns
00:52:06 — Closing and Overtime Tease

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level, donate at https://doomdebates.com/donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
This Top Economist's P(Doom) Just Shot Up 10x! Noah Smith Returns To Explain His Update
Noah Smith is an economist and author of Noahpinion, one of the most popular Substacks in the world. He returns to Doom Debates to share a massive update to his P(Doom), and things get a little heated.

Timestamps
00:00:00 — Cold Open
00:00:41 — Welcome Back Noah Smith!
00:01:40 — Noah's P(Doom) Update
00:03:57 — The Chatbot-Genie-God Framework
00:05:14 — What's Your P(Doom)™
00:09:59 — Unpacking Noah's Update
00:16:56 — Why Incidents of Rogue AI Lower P(Doom)
00:20:04 — Noah's Mainline Doom Scenario: Much Worse Than COVID-19
00:23:29 — Society Responds After Growing Pains
00:29:25 — Agentic AI Contributed to Noah's Position
00:31:35 — Should Yudkowsky Get Bayesian Credit?
00:33:59 — Are We Communicating the Right Way with Policymakers?
00:40:16 — Finding Common Ground on AI Policy
00:47:07 — Wrap-Up: People Need to Be More Scared

Links
Doom Debate with Noah Smith, Part 1: https://www.youtube.com/watch?v=AwmJ-OnK2I4
Noah’s Twitter: https://x.com/noahpinion
Noah’s Substack: https://noahpinion.blog
Talking AI Doom with Dr. Claire Berlinski & Friends
Dr. Claire Berlinski is a journalist, Oxford PhD, and author of The Cosmopolitan Globalist. She invited me to her weekly symposium to make the case for AI as an existential risk. Can we convince her sharp, skeptical audience that P(Doom) is high?

Links
Subscribe to The Cosmopolitan Globalist: https://claireberlinski.substack.com/
Follow Claire on X: https://x.com/ClaireBerlinski
“If Anyone Builds It, Everyone Dies” by Eliezer Yudkowsky & Nate Soares: https://ifanyonebuildsit.com

Timestamps
00:00:00 — Introduction
00:02:10 — Welcome and Setting the Stage
00:06:16 — Outcome Steering: The Magic of Intelligence
00:10:40 — Collective Intelligence and the Path to ASI
00:12:53 — The Five-Point Argument
00:14:56 — The Alignment Problem and Control
00:17:56 — The Genie Problem and Recursive Self-Improvement
00:20:38 — Timeline: Five Years or Fifty?
00:26:14 — Social Revolution and Pausing AI
00:28:54 — Energy Constraints and Resource Limits
00:31:23 — Morality, Empathy, and Superintelligence
00:37:45 — How AI Is Actually Built
00:38:31 — Computational Irreducibility and Co-Evolution
00:44:57 — Foom and the Discontinuity Question
00:46:44 — US-China Rivalry and the Arms Race
00:49:36 — The Co-Evolution Argument
00:55:36 — Alignment as Psychoanalysis
00:57:24 — Anthropic’s “Harmless Slop” Paper
01:00:33 — Policy Solutions: The Pause Button
01:04:47 — Military AI and the Singularity
01:07:10 — Cognitive Obstacles and Doom Fatigue
01:09:07 — Why People Don’t Act
01:13:00 — Reaching Representatives and Building a Platform
01:17:12 — Sam Altman and the Manhattan Project Parallel
01:19:14 — Community Building and Pause AI
01:22:07 — Call to Action and Closing
How Friendly AI Will Become Deadly — Dr. Steven Byrnes (AGI Safety Researcher, Harvard Physics Postdoc) Returns!
Fan favorite Dr. Steven Byrnes returns to discuss recent AI progress and the concerning paradigm shift to "ruthless sociopath AI" he sees on the horizon. Steven Byrnes, UC Berkeley physics PhD and Harvard physics postdoc, is an AI safety researcher at the Astera Institute and one of the most rigorous thinkers working on the technical AI alignment problem.

Timestamps
00:00:00 — Cold Open
00:00:48 — Welcoming Back the Returning Champion
00:02:38 — Research Update: What's New in the Last 6 Months
00:04:31 — The Rise of AI Agents
00:07:49 — What's Your P(Doom)?™
00:13:42 — "Brain-Like AGI": The Next Generation of AI
00:17:01 — Can LLMs Ever Match the Human Brain?
00:31:51 — Will AI Kill Us Before It Takes Our Jobs?
00:36:12 — Country of Geniuses in a Data Center
00:41:34 — Why We Should Expect "Ruthless Sociopathic" ASI
00:54:15 — Post-Training & RLVR: A "Thin Layer" of Real Intelligence
01:02:32 — Consequentialism and the Path to Superintelligence
01:17:02 — Airplanes vs. Rockets: An Analogy for AI
01:24:33 — FOOM and Recursive Self-Improvement

Links
Steven Byrnes’ Website & Research: https://sjbyrnes.com/
Steve’s X: https://x.com/steve47285
Astera Institute: https://astera.org/
“Why We Should Expect Ruthless Sociopath ASI”: https://www.lesswrong.com/posts/ZJZZEuPFKeEdkrRyf/why-we-should-expect-ruthless-sociopath-asi
Intro to Brain-Like-AGI Safety: https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8
Steve on LessWrong: https://www.lesswrong.com/users/steve2152
AI 2027 Scenario Timeline: https://ai-2027.com/
Part 1, “The Man Who Might SOLVE AI Alignment”: https://www.youtube.com/watch?v=_ZRUq3VEAc0
Q&A — Claude Code's Impact, Anthropic vs The Pentagon, Roko('s Basilisk) Returns + Liron Updates His Views!
Multiple live callers join this month's Q&A as we cover the imminent demise of programming as a profession and the Anthropic/Pentagon showdown, and debate the finer details of wireheading. I clarify my recent AI doom belief updates, and then the man behind Roko's Basilisk crashes the stream to argue I haven't updated nearly far enough.

Timestamps
00:00:00 — Cold Open
00:00:56 — Welcome to the Livestream & Taking Questions from Chat
00:12:44 — Anonymous Caller Asks If Rationalists Should Prioritize Attention-Grabbing Protests
00:18:30 — The Good Case Scenario
00:26:00 — Hugh Chungus Joins the Stream
00:30:54 — Producer Ori, Liron's Recent Alignment Updates
00:43:47 — We're In an Era of Centaurs
00:47:40 — Noah Smith's Updates on AGI and Alignment
00:48:44 — Co Co Chats Cybersecurity
00:57:32 — The Attacker's Advantage in Offense/Defense Balance
01:02:55 — Anthropic vs The Pentagon
01:06:20 — "We're Getting Frog Boiled"
01:11:06 — Stoner AI & Debating the Finer Points of Wireheading
01:25:00 — A Caller Backs the Penrose Argument
01:34:01 — Greyson Dials In
01:40:21 — Surprise Guest Joins & Says Alignment Isn't a Problem
02:05:15 — More Q&A with Chat
02:14:26 — Closing Thoughts

Links
* Liron on X: https://x.com/liron
* AI 2027: https://ai-2027.com/
* “Good Luck, Have Fun, Don’t Die” (film): https://www.imdb.com/title/tt38301748/
* “The AI Doc” (film): https://www.focusfeatures.com/the-ai-doc-or-how-i-became-an-apocaloptimist