
Doom Debates
Podcast by Liron Shapira

About Doom Debates
It's time to talk about the end of the world! lironshapira.substack.com
All episodes
112 episodes

Former Machine Intelligence Research Institute (MIRI) researcher Tsvi Benson-Tilsen [https://foresight.org/people/tsvi-benson-tilsen/] is championing an audacious path to prevent AI doom: engineering smarter humans to tackle AI alignment. I consider this one of the few genuinely viable alignment solutions, and Tsvi is at the forefront of the effort. After seven years at MIRI, he co-founded the Berkeley Genomics Project to advance the human germline engineering approach.

In this episode, Tsvi lays out how to lower P(doom), arguing we must stop AGI development and stigmatize it like gain-of-function virus research. We cover his AGI timelines, the mechanics of genomic intelligence enhancement, and whether super-babies can arrive fast enough to save us.

I’ll be releasing my full interview with Tsvi in 3 parts. Stay tuned for part 2 next week!

Timestamps

0:00 Episode Preview & Introducing Tsvi Benson-Tilsen
1:56 What’s Your P(Doom)™
4:18 Tsvi’s AGI Timeline Prediction
6:16 What’s Missing from Current AI Systems
10:05 The State of AI Alignment Research: 0% Progress
11:29 The Case for PauseAI
15:16 Debate on Shaming AGI Developers
25:37 Why Human Germline Engineering
31:37 Enhancing Intelligence: Chromosome Vs. Sperm Vs. Egg Selection
37:58 Pushing the Limits: Head Size, Height, Etc.
40:05 What About Human Cloning?
43:24 The End-to-End Plan for Germline Engineering
45:45 Will Germline Engineering Be Fast Enough?
48:28 Outro: How to Support Tsvi’s Work

Show Notes

Tsvi’s organization, the Berkeley Genomics Project — https://berkeleygenomics.org

If you’re interested in connecting with Tsvi about germline engineering, you can reach out to him at BerkeleyGenomicsProject@gmail.com.

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com [https://doomdebates.com/] and to youtube.com/@DoomDebates [https://youtube.com/@DoomDebates], or to really take things to the next level: Donate [https://doomdebates.com/donate] 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe [https://lironshapira.substack.com/subscribe?utm_medium=podcast&utm_campaign=CTA_4]
Today I'm sharing my interview on Robert Wright's Nonzero Podcast, where we unpack Eliezer Yudkowsky's AI doom arguments from his bestselling book, "If Anyone Builds It, Everyone Dies." Bob is an exceptionally thoughtful interviewer who asks sharp questions and pushes me to defend the Yudkowskian position, leading to a rich exploration of the AI doom perspective. I highly recommend getting a premium subscription to his podcast.

Timestamps

0:00 Episode Preview
2:43 Being a "Stochastic Parrot" for Eliezer Yudkowsky
5:38 Yudkowsky's Book: "If Anyone Builds It, Everyone Dies"
9:38 AI Has NEVER Been Aligned
12:46 Liron Explains "Intellidynamics"
15:05 Natural Selection Leads to Maladaptive Behaviors — AI Misalignment Foreshadowing
29:02 We Summon AI Without Knowing How to Tame It
32:03 The "First Try" Problem of AI Alignment
37:00 Headroom Above Human Capability
40:37 The PauseAI Movement: The Silent Majority
47:35 Going into Overtime

Get full access to Doom Debates at lironshapira.substack.com/subscribe [https://lironshapira.substack.com/subscribe?utm_medium=podcast&utm_campaign=CTA_4]
Today I’m taking a rare break from AI doom to cover the dumbest kind of doom humanity has ever created for itself: climate change. We’re talking about a problem that costs less than $2 billion per year to solve. For context, that’s what the US spent on COVID relief every 7 hours during the pandemic. Bill Gates could literally solve this himself.

My guest Andrew Song runs Make Sunsets [https://makesunsets.com/], which launches weather balloons filled with sulfur dioxide (SO₂) into the stratosphere to reflect sunlight and cool the planet. It’s the same mechanism volcanoes use—Mount Pinatubo cooled Earth by 0.5°C for a year in 1991. The physics is solid, the cost is trivial, and the coordination problem is nonexistent.

So why aren’t we doing it? Because people are squeamish about “playing God” with the atmosphere, even while we’re building superintelligent AI. Because environmentalists would rather scold you into turning off your lights than support a solution that actually works.

This conversation changed how I think about climate change. I went from viewing it as this intractable coordination problem to realizing it’s basically already solved—we’re just LARPing that it’s hard! 🙈 If you care about orders of magnitude, this episode will blow your mind. And if you feel guilty about your carbon footprint: you can offset an entire year of typical American energy usage for about 15 cents. Yes, cents.

Timestamps

* 00:00:00 - Introducing Andrew Song, Cofounder of Make Sunsets
* 00:03:08 - Why the company is called “Make Sunsets”
* 00:06:16 - What’s Your P(Doom)™ From Climate Change
* 00:10:24 - Explaining geoengineering and solar radiation management
* 00:16:01 - The SO₂ dial we can turn
* 00:22:00 - Where to get SO₂ (gas supply stores, sourcing from oil)
* 00:28:44 - Cost calculation: Just $1-2 billion per year
* 00:34:15 - “If everyone paid $3 per year”
* 00:42:38 - Counterarguments: moral hazard, termination shock
* 00:44:21 - Being an energy hog is totally fine
* 00:52:16 - What motivated Andrew (his kids, Luke Iseman)
* 00:59:09 - “The stupidest problem humanity has created”
* 01:11:26 - Offsetting CO₂ from OpenAI’s Stargate
* 01:13:38 - Playing God is good

Show Notes

Make Sunsets

* Website: https://makesunsets.com
* Tax-deductible donations (US): https://givebutter.com/makesunsets

People Mentioned

* Casey Handmer: https://caseyhandmer.wordpress.com/
* Emmett Shear: https://twitter.com/eshear
* Palmer Luckey: https://twitter.com/PalmerLuckey

Resources Referenced

* Book: Termination Shock by Neal Stephenson
* Book: The Rational Optimist by Matt Ridley
* Book: Enlightenment Now by Steven Pinker
* Harvard SCoPEx project (the Bill Gates-funded project that got blocked)
* Climeworks (direct air capture company): https://climeworks.com

Data/Monitoring

* NOAA (National Oceanic and Atmospheric Administration): https://www.noaa.gov
* ESA Sentinel-5P TROPOMI satellite data

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com [https://doomdebates.com/] and to youtube.com/@DoomDebates [https://youtube.com/@DoomDebates], or to really take things to the next level: Donate [https://doomdebates.com/donate] 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe [https://lironshapira.substack.com/subscribe?utm_medium=podcast&utm_campaign=CTA_4]
I’ve been puzzled by David Deutsch’s AI claims for years. Today I finally had the chance to hash it out: Brett Hall, one of the foremost educators of David Deutsch’s ideas around epistemology & science, was brave enough to debate me!

Brett has been immersed in Deutsch’s philosophy since 1997 and teaches it on his Theory of Knowledge podcast [https://www.bretthall.org/tokcast/tokcast], which has been praised by tech luminary Naval Ravikant. He agrees with Deutsch on 99.99% of issues, especially the dismissal of AI as an existential threat.

In this debate, I stress-test the Deutschian worldview, and along the way we unpack our diverging views on epistemology, the orthogonality thesis, and pessimism vs. optimism.

Timestamps

0:00 — Debate preview & introducing Brett Hall
4:24 — Brett’s opening statement on techno-optimism
13:44 — What’s Your P(Doom)?™
15:43 — We debate the merits of Bayesian probabilities
20:13 — Would Brett sign the AI risk statement?
24:44 — Liron declares his “damn good reason” for AI oversight
35:54 — Debate milestone: We identify our crux of disagreement!
37:29 — Prediction vs prophecy
44:28 — The David Deutsch CAPTCHA challenge
1:00:41 — What makes humans special?
1:15:16 — Reacting to David Deutsch’s recent statements on AGI
1:24:04 — Debating what makes humans special
1:40:25 — Brett reacts to Roger Penrose’s AI claims
1:48:13 — Debating the orthogonality thesis
1:56:34 — The powerful AI data center hypothetical
2:03:10 — “It is a dumb tool, easily thwarted”
2:12:18 — Clash of worldviews: goal-driven vs problem-solving
2:25:05 — Ideological Turing test: We summarize each other’s positions
2:30:44 — Are doomers just pessimists?

Show Notes

Brett’s website — https://www.bretthall.org
Brett’s Twitter — https://x.com/TokTeacher

The Deutsch Files by Brett Hall and Naval Ravikant

* https://nav.al/deutsch-files-i
* https://nav.al/deutsch-files-ii
* https://nav.al/deutsch-files-iii
* https://nav.al/deutsch-files-iv

Books:

* The Fabric of Reality [https://www.amazon.com/Fabric-Reality-Parallel-Universes-Implications/dp/014027541X] by David Deutsch
* The Beginning of Infinity [https://www.amazon.com/Beginning-Infinity-Explanations-Transform-World/dp/0143121359] by David Deutsch
* Superintelligence [https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111] by Nick Bostrom
* If Anyone Builds It, Everyone Dies [https://ifanyonebuildsit.com/] by Eliezer Yudkowsky and Nate Soares

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com [https://doomdebates.com/] and to youtube.com/@DoomDebates [https://youtube.com/@DoomDebates], or to really take things to the next level: Donate [https://doomdebates.com/donate] 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe [https://lironshapira.substack.com/subscribe?utm_medium=podcast&utm_campaign=CTA_4]
We took Eliezer Yudkowsky and Nate Soares’s new book, If Anyone Builds It, Everyone Dies, on the streets to see what regular people think. Do people think that artificial intelligence is a serious existential risk? Are they open to considering the argument before it’s too late? Are they hostile to the idea? Are they totally uninterested? Watch this episode to see the full spectrum of reactions from a representative slice of America!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com [https://doomdebates.com/] and to youtube.com/@DoomDebates [https://youtube.com/@DoomDebates], or to really take things to the next level: Donate [https://doomdebates.com/donate] 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe [https://lironshapira.substack.com/subscribe?utm_medium=podcast&utm_campaign=CTA_4]
