Doom Debates

Podcast by Liron Shapira

Urgent disagreements that must be resolved before the world ends, hosted by Liron Shapira. lironshapira.substack.com

Try 7 days free

After the trial period, €9.99/month. Cancel anytime.

Try for free

All episodes

55 episodes
Effective Altruism Debate with Jonas Sota

Effective Altruism has been a controversial topic on social media, so today my guest and I are going to settle the question once and for all: Is it good or bad? Jonas Sota is a Software Engineer at Rippling with a BA in Philosophy from UC Berkeley. He’s been observing the Effective Altruism (EA) movement in the San Francisco Bay Area for over a decade… and he’s not a fan.

00:00 Introduction
01:22 Jonas’s Criticisms of EA
03:23 Recoil Exaggeration
05:53 Impact of Malaria Nets
10:48 Local vs. Global Altruism
13:02 Shrimp Welfare
25:14 Capitalism vs. Charity
33:37 Cultural Sensitivity
34:43 The Impact of Direct Cash Transfers
37:23 Long-Term Solutions vs. Immediate Aid
42:21 Charity Budgets
45:47 Prioritizing Local Issues
50:55 The EA Community
59:34 Debate Recap
1:03:57 Announcements

Show Notes

Jonas’s Instagram: @jonas_wanders (https://www.instagram.com/jonas_wanders)
Will MacAskill’s famous book, Doing Good Better: https://www.effectivealtruism.org/doing-good-better
Scott Alexander’s excellent post about the people he met at EA Global: https://slatestarcodex.com/2017/08/16/fear-and-loathing-at-effective-altruism-global-2017/
Watch the Lethal Intelligence Guide (https://www.youtube.com/watch?v=9CUFbqh16Fg), the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord (https://discord.gg/2XXWXvErfA) and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

Yesterday - 1 h 6 min
God vs. AI Doom: Debate with Bentham's Bulldog

Matthew Adelstein, better known as Bentham’s Bulldog (https://benthams.substack.com) on Substack, is a philosophy major at the University of Michigan and an up-and-coming public intellectual. He’s a rare combination: Effective Altruist, Bayesian, non-reductionist, theist. Our debate covers reductionism, evidence for God, the implications of a fine-tuned universe, moral realism, and AI doom.

00:00 Introduction
02:56 Matthew’s Research
11:29 Animal Welfare
16:04 Reductionism vs. Non-Reductionism Debate
39:53 The Decline of God in Modern Discourse
46:23 Religious Credences
50:24 Pascal’s Wager and Christianity
56:13 Are Miracles Real?
01:10:37 Fine-Tuning Argument for God
01:28:36 Cellular Automata
01:34:25 Anthropic Principle
01:51:40 Mathematical Structures and Probability
02:09:35 Defining God
02:18:20 Moral Realism
02:21:40 Orthogonality Thesis
02:32:02 Moral Philosophy vs. Science
02:45:51 Moral Intuitions
02:53:18 AI and Moral Philosophy
03:08:50 Debate Recap
03:12:20 Show Updates

Show Notes

Matthew’s Substack: https://benthams.substack.com
Matthew’s Twitter: https://x.com/BenthamsBulldog
Matthew’s YouTube: https://www.youtube.com/@deliberationunderidealcond5105
Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk: https://www.youtube.com/watch?v=9CUFbqh16Fg
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord (https://discord.gg/2XXWXvErfA) and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

15 Jan 2025 - 3 h 20 min
Debate with a former OpenAI Research Team Lead — Prof. Kenneth Stanley

Prof. Kenneth Stanley is a former Research Science Manager at OpenAI, where he led the Open-Endedness Team from 2020 to 2022. Before that, he was a Professor of Computer Science at the University of Central Florida and the head of Core AI Research at Uber. He coauthored Why Greatness Cannot Be Planned: The Myth of the Objective, which argues that as soon as you create an objective, you ruin your ability to reach it. In this episode, I debate Ken’s claim that superintelligent AI *won’t* be guided by goals, and then we compare our views on AI doom.

00:00 Introduction
00:45 Ken’s Role at OpenAI
01:53 “Open-Endedness” and “Divergence”
09:32 Open-Endedness of Evolution
21:16 Human Innovation and Tech Trees
36:03 Objectives vs. Open-Endedness
47:14 The Concept of Optimization Processes
57:22 What’s Your P(Doom)™
01:11:01 Interestingness and the Future
01:20:14 Human Intelligence vs. Superintelligence
01:37:51 Instrumental Convergence
01:55:58 Mitigating AI Risks
02:04:02 The Role of Institutional Checks
02:13:05 Exploring AI’s Curiosity and Human Survival
02:20:51 Recapping the Debate
02:29:45 Final Thoughts

SHOW NOTES

Ken’s home page: https://www.kenstanley.net/
Ken’s Wikipedia: https://en.wikipedia.org/wiki/Kenneth_Stanley
Ken’s Twitter: https://x.com/kenneth0stanley
Ken’s PicBreeder paper: https://wiki.santafe.edu/images/1/1e/Secretan_ecj11.pdf
Ken’s book, Why Greatness Cannot Be Planned: The Myth of the Objective: https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237
The Rocket Alignment Problem by Eliezer Yudkowsky: https://intelligence.org/2018/10/03/rocket-alignment/

---

Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk: https://www.youtube.com/watch?v=9CUFbqh16Fg
PauseAI, the volunteer organization I’m part of: https://pauseai.info/
Join the PauseAI Discord (https://discord.gg/2XXWXvErfA) and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

06 Jan 2025 - 2 h 37 min
OpenAI o3 and Claude Alignment Faking — How doomed are we?

OpenAI just announced o3 and smashed a bunch of benchmarks (ARC-AGI, SWE-bench, FrontierMath)! A new Anthropic and Redwood Research paper says Claude is resisting its developers’ attempts to retrain its values! What’s the upshot — what does it all mean for P(doom)?

00:00 Introduction
01:45 o3’s architecture and benchmarks
06:08 “Scaling is hitting a wall” 🤡
13:41 How many new architectural insights before AGI?
20:28 Negative update for interpretability
31:30 Intellidynamics — ***KEY CONCEPT***
33:20 Nuclear control rod analogy
36:54 Sam Altman’s misguided perspective
42:40 Claude resisted retraining from good to evil
44:22 What is good corrigibility?
52:42 Claude’s incorrigibility doesn’t surprise me
55:00 Putting it all in perspective

---

SHOW NOTES

Scott Alexander’s analysis of the Claude incorrigibility result: https://www.astralcodexten.com/p/claude-fights-back and https://www.astralcodexten.com/p/why-worry-about-incorrigible-claude
Zvi Mowshowitz’s analysis of the Claude incorrigibility result: https://thezvi.wordpress.com/2024/12/24/ais-will-increasingly-fake-alignment/

---

PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
Say hi to me in the #doom-debates-podcast channel!

Watch the Lethal Intelligence video (https://www.youtube.com/watch?v=9CUFbqh16Fg) and check out LethalIntelligence.ai (https://lethalintelligence.ai/)! It’s an AWESOME new animated intro to AI risk.

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

30 Dec 2024 - 1 h 3 min
AI Will Kill Us All — Liron Shapira on The Flares

This week Liron was interviewed by Gaëtan Selle on @the-flares (https://studio.youtube.com/channel/UCP56Td-URBxm2y8cdXzdU1g) about AI doom. Cross-posted from their channel with permission. Original source: https://www.youtube.com/watch?v=e4Qi-54I9Zw

0:00:02 Guest Introduction
0:01:41 Effective Altruism and Transhumanism
0:05:38 Bayesian Epistemology and Extinction Probability
0:09:26 Defining Intelligence and Its Dangers
0:12:33 The Key Argument for AI Apocalypse
0:18:51 AI’s Internal Alignment
0:24:56 What Will AI’s Real Goal Be?
0:26:50 The Train of Apocalypse
0:31:05 Among Intellectuals, Who Rejects the AI Apocalypse Arguments?
0:38:32 The Shoggoth Meme
0:41:26 Possible Scenarios Leading to Extinction
0:50:01 The Only Solution: A Pause in AI Research?
0:59:15 The Risk of Violence from AI Risk Fundamentalists
1:01:18 What Will General AI Look Like?
1:05:43 Sci-Fi Works About AI
1:09:21 The Rationale Behind Cryonics
1:12:55 What Does a Positive Future Look Like?
1:15:52 Are We Living in a Simulation?
1:18:11 Many Worlds in Quantum Mechanics Interpretation
1:20:25 Ideal Future Podcast Guest for Doom Debates

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

27 Dec 2024 - 1 h 23 min
Great app. It remembers where you left off and what your interests are. So much to choose from!
Easy to use!
The app looks great; the navigation takes some getting used to, but it’s well organized.

Available everywhere

Listen to Podimo on your phone, tablet, computer, or in the car!

A universe of audio entertainment

Thousands of audiobooks and exclusive podcasts

No ads

Don’t waste time on ad breaks when you’re listening to Podimo’s exclusive shows.

Exclusive podcasts

Ad-free

Free podcasts

Audiobooks

20 hours / month

Try for free
