About Doom Debates
It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com
154 episodes
Justin Helps (@Primer on YouTube) is Worried about AI Takeover
Justin Helps is the science educator behind Primer Learning with 2M subscribers. We cover how he got into AI safety, debate AGI timelines, and why he calculates his P(Doom) to be 70% by 2100 😱.

Timestamps

00:00:00 — Cold Open
00:00:38 — Introducing Justin Helps
00:02:03 — What's Your P(Doom)?™
00:03:38 — Justin's First Exposure to AI X-Risk
00:04:49 — Major Disagreements with Eliezer Yudkowsky
00:09:46 — Debating the Timeline to AGI
00:12:24 — Metaculus Prediction Market Estimates AGI by 2032
00:20:06 — Misguided Conceptions of AI's Limitations
00:25:23 — Only a 5% P(Doom) by 2040
00:28:40 — AIs Will Not Care About the Human Species
00:31:00 — Summarizing Justin's Position So Far
00:36:17 — High P(Doom), but We're Not Depressed
00:40:14 — Justin's "Computer Man" Thought Experiment
00:51:16 — Should We Pause AGI Development?
00:54:15 — AI Doom Is a Serious Concern

Links

Primer's Video on AI Doom — https://www.youtube.com/watch?v=Qg5QXY_qZuI
Primer on YouTube — https://youtube.com/@PrimerBlobs
Primer's Website — https://primerlearning.org/
Justin Helps on X — https://x.com/Helpsypoo
Harry Potter and the Methods of Rationality — https://hpmor.com/
Feeling Rational by Eliezer Yudkowsky — https://www.lesswrong.com/posts/SqF8cHjJv43mvJJzx/feeling-rational
Pause AI — https://pauseai.info

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level, donate at doomdebates.com/donate 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe
Live Q&A: Bernie Sanders Wakes Up to AI Doom, Dwarkesh's $20,000 Questions, Caller Debates the Alignment Problem!
Multiple live callers join this month's Q&A as I react to Dwarkesh Patel's $20K blog prize, debate the orthogonality thesis from first principles, and welcome Bernie Sanders aboard the Doom Train.

Timestamps

00:00:00 — Cold Open
00:01:00 — Welcome to Doom Debates Live!
00:01:30 — What Do You Think of Open Source Models Out-Benchmarking OpenAI and Anthropic?
00:04:55 — Michael Cheers Joins: What If We Don't Give AIs Full Situational Awareness?
00:11:55 — Thoughts on Mythos' Hacking Abilities?
00:15:43 — Liron Reacts to Dwarkesh Patel's $20K AI Questions
00:23:28 — Pretraining Goals vs RL Training Goals
00:28:58 — Mental Model of Yudkowsky-ians & the IABIED Claim
00:37:24 — You Can't Hide Reality from a Superintelligence (The Truman Show Analogy)
00:42:57 — Back to Dwarkesh's Questions: When Do AI Labs Start Making Money?
00:48:50 — Upcoming Guests Reveal!
00:51:35 — Will Lancer Joins: Is The Yudkowskian Thesis Credible?
01:27:03 — Back to Answering Questions from the Chat
01:33:28 — The Cameraman Always Survives Analogy
01:40:52 — Liron's Banger Response to Roon's Tweet
01:47:00 — Nuance About Pausing AI Development
01:50:57 — Capitalism Isn't Going to Steer Us to an Alignment Solution
01:53:10 — Is Optimization Equivalent to Intelligence?
01:57:21 — BREAKING: Bernie Sanders on the Existential Threat of AI
02:01:12 — Spoiler for the Upcoming Mike Israetel Episode
02:01:57 — $500 Bet on AI Unemployment
02:05:46 — Misuse, Surveillance, and the Real Costs of Pausing AI
02:11:04 — Wrap-Up

Links

Nick Bostrom, Deep Utopia: Life and Meaning in a Solved World (Amazon) — https://www.amazon.com/Deep-Utopia-Meaning-Solved-World/dp/1646871642
Yudkowsky & Soares, If Anyone Builds It, Everyone Dies (book) — https://www.amazon.com/If-Anyone-Builds-Everyone-Dies/dp/0316571253
Emad Mostaque Has A 50% P(Doom) & A Plan To Lower It
Emad Mostaque helped kick off the modern AI revolution as the head of Stability AI, the company behind Stable Diffusion. Unlike most AI CEOs, he doesn't sugarcoat the risks of AGI development. He explains his 50% P(Doom), why we have less than 1,000 days to get our act together, and how his new startup, Intelligent Internet, aims to be a countervailing force against doom of all kinds.

Timestamps

00:00:00 — Cold Open
00:00:39 — Introducing Emad Mostaque
00:02:08 — How Emad Got Involved in AI Development
00:05:46 — The 60-Second Pitch for Intelligent Internet
00:09:29 — What's Your P(Doom)?™
00:13:32 — Why ASI Doesn't Need Massive Compute
00:15:56 — AGI Timelines: Cognitive Labor Going to Zero
00:17:29 — Is There a Ceiling Above Human Intelligence?
00:41:22 — Corporations as Slow, Dumb AIs
00:50:19 — Emad's Mainline Doom Scenario
00:55:28 — Jailbreaks and "Mecha-Hitler" Latent Spaces
00:59:56 — The Last Economy: How to Navigate Economic Doom
01:08:57 — The Coming Unemployment Spike
01:15:13 — Why Isn't Google Stock Mooning?
01:25:05 — Intelligent Internet as a Solution: Bitcoin for the Intelligence Age
01:33:13 — Can an Aligned Network Stop a Rogue ASI?
01:36:20 — Is the Pause AI Proposal Too Late?
01:40:50 — Are We Facing Russian Roulette Odds with AI?
01:42:29 — Wrap-Up

Links

Emad Mostaque, The Last Economy (Amazon) — https://www.amazon.com/Last-Economy-Guide-Intelligent-Economics/dp/103693411X
Intelligent Internet (ii.inc) — https://ii.inc
Emad Mostaque, Wikipedia — https://en.wikipedia.org/wiki/Emad_Mostaque
Emad Mostaque on X — https://x.com/EMostaque
Pause Giant AI Experiments open letter (Future of Life Institute) — https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Did Eliezer Yudkowsky Really Call for VIOLENCE? — Debate with John Alioto
My guest, John Alioto, is an independent AI engineer with a computer science degree from UC Berkeley and 25 years of experience building real-world AI systems at companies like Microsoft and Google. In the wake of an attack on Sam Altman's property, John came on the show to argue that Yudkowsky's words are violent rhetoric that helped create this moment. Since I completely disagree with that characterization, we had a lot of fuel for a passionate debate.

For the record, here's my position on why AI doomers are NOT "calling for violence":

Are we acting like we actually think there's an urgent extinction risk? Yes.

Are we calling for lawless violence? Absolutely not. At least not me, the leaders of the movement, or anyone I've ever personally interacted with.

Are we calling for violence as a last resort if a government policy has been established and then egregiously violated? Yes… but that's just standard for any governance proposal! A proposal for a strictly enforced treaty isn't a call for violence — it's a call for doing everything we can to make sure no one breaks the treaty, with zero violence, unless rogue actors decide to break the treaty and bring the consequences on themselves.

Thanks to John for an extremely respectful and good-faith debate on this heated subject.

Timestamps

00:00:00 — Cold Open
00:00:37 — Introducing John Alioto
00:03:02 — Setting the Stage: Recent Acts of Violence & AI Discourse
00:05:53 — Eliezer Yudkowsky's 2023 TIME Article
00:11:16 — John's Two-Part Argument
00:14:37 — Conditional on High P(Doom), Is Eliezer's Policy Bad?
00:17:46 — Be Like Carl Sagan — Win in the Arena of Ideas
00:21:12 — No Carve-Outs for Non-Signatories
00:26:15 — Hypothetical: What If the UN Voted for a Treaty?
00:30:46 — What's the Correct Interpretation of Eliezer's TIME Article?
00:32:42 — Liron's Interpretation: Same Structure as Any International Law Proposal
00:42:23 — What Should Eliezer Have Written? "Airstrikes" vs "Strong Deterrent"
00:49:57 — How John Would Rewrite the TIME Piece
00:50:54 — Carve-Outs: Allies, Civilians, Consequences
00:52:52 — Debate Wrap-Up
00:56:27 — Last Q: Does High P(Doom) Imply Violence?
00:59:40 — Closing Thoughts

Links

John P. Alioto on X (Twitter) — https://x.com/jpalioto
Eliezer Yudkowsky, "Pausing AI Developments Isn't Enough. We Need to Shut It All Down" (TIME, March 2023) — https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Are AI Doomers “Calling for Violence”? Debate with Steven Balik
Are AI safety advocates like Eliezer Yudkowsky at fault for the recent attacks on Sam Altman because they are "calling for violence"? I invited Steven Balik to join me on this emergency episode to hash it out. Steven is an activist short seller and data engineering professional whose Substack is popular among Silicon Valley VCs and hedge funds.

Links

Steven Balik on X (Twitter) — https://x.com/laurenbalik
Steven Balik, "The Talmudic Stock Bubble, AI Psychosis, & Esoterrorism" (Substack, October 2025) — https://laurenbaliksalmanacandrevue.substack.com/p/the-talmudic-stock-bubble-ai-psychosis
Eliezer Yudkowsky, "Pausing AI Developments Isn't Enough. We Need to Shut It All Down" (TIME, March 2023) — https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

Timestamps

00:00:00 — Cold Open
00:00:52 — Introducing Steven Balik
00:01:24 — Setting the Stage: Molotov Cocktail Incident
00:03:31 — Steven's Opening Position
00:06:10 — Is Eliezer Yudkowsky "Calling for Violence"?
00:07:25 — Steven on AI, Yudkowsky, the Zizians & Escalating Rhetoric
00:12:16 — Focusing on the TIME Article
00:18:51 — Who's Responsible for the Violence?
00:25:33 — Debating the Key Quote in Yudkowsky's TIME Article
00:31:07 — Liron Passes the Ideological Turing Test
00:45:42 — Liron & Steven Find Common Ground
00:46:57 — Why Does Steven Call Eliezer Yudkowsky an "Esoterrorist"?
00:48:51 — Wrapping Up: Deescalating the Situation