
About Doom Debates
It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com
URGENT — Someone Is Funding YOU To Help Lower P(Doom) Right Now!
Did you know there’s something called the Survival and Flourishing Fund [http://survivalandflourishing.fund/]? Right now it’s giving away $20 to $40 million in grants [https://survivalandflourishing.fund/2026/application], and the application deadline is April 22nd. 😱 I’m personally involved as a recommender in the 2026 round, and I’m here to make sure you don’t miss this. If you have a project that could help the world — particularly around AI existential risk — you should apply ASAP!

What Is the Survival and Flourishing Fund?

The Survival and Flourishing Fund (SFF) is a grantmaking program funded by Jaan Tallinn [https://en.wikipedia.org/wiki/Jaan_Tallinn], one of the most prolific funders of charitable causes in the AI safety and effective altruism ecosystem. Jaan was a core team member on Kazaa, then a founding engineer at Skype, then got into crypto early and did well. He started an investment arm called Metaplanet and led the Series A for a couple of companies you might have heard of: DeepMind and Anthropic. He’s one of the largest shareholders of Anthropic right now.

There’s a delicious irony to this whole funding round. As we watch Anthropic take off and generate enormous value in AI capabilities, some of that wealth is flowing back through one of their earliest investors into charitable causes — including existential risk reduction. You could, in a very real sense, take some of Anthropic’s money and use it for good.

What Gets Funded?

It all comes down to Jaan’s philanthropic priorities [https://jaan.info/philanthropy/].

AI Extinction Risk

This is priority number one — reducing humanity’s risk of destroying itself with AI.
As Jaan writes on his website: “I wish more people would wake up to this issue since literally everyone under the age of 60 is personally at risk.”

He breaks AI extinction risk work into two categories:

Restrictive efforts — things like certifications on large data centers, speed limits on training runs, liability laws, labeling requirements (disclosing whether you’re interacting with a human or a machine), veto committees for large-scale model deployments, and global off switches. There is so much technology and policy work to be done across all of these.

Constructive efforts — approaches that accept AI may be coming regardless and try to make it go as well as possible. This includes collective intelligence enhancement, AI health tech, “protective moralities” that could help an uncontrollable AI treat us well (even as a last-ditch effort, it’s better than nothing), guaranteed safe AI through human-legible quantitative safety guidelines, and hardware-level controls like automatic shutdown and reporting conditions.

Some of these constructive efforts look a lot like restrictive efforts, just resigned to AI already arriving — running behind the train instead of standing in front of it holding a stop sign. But the philosophy is simple: throw everything at the problem. Let a thousand flowers bloom.

The Tracks: Freedom and Fairness

Beyond the main track (where I’m a recommender, alongside five others), SFF runs two specialized tracks:

The Freedom Track asks: how can we avoid concentrations of authority and support uses of AI that strengthen freedom for humans and humanity? This includes protecting meaningful freedom of speech, ensuring the continuation of individual liberties like privacy and private property, and maintaining sovereignty for self-governing territories. (If you’ve heard Vitalik Buterin’s episode on Doom Debates about d/acc, this will resonate.)

The Fairness Track starts from the premise that AI is a force multiplier for those who wish to control others.
It asks: how can we support the use of AI to empower the disempowered? This means empowering the global majority with regard to AI technology, resisting monopolistic practices in AI development, diffusing conflicts and abuses of power, and fostering inclusivity in AI governance.

Each of these tracks has three dedicated recommenders evaluating applications. If you’ve ever thought, “Why is everyone focused on the technology of preventing AI from killing us? What about the scenario where we’re all still alive but dealing with massive unfairness?” — well, SFF has an entire track with $3 million-plus in funding just for that.

Theme Rounds

SFF also runs theme rounds in climate change, animal welfare, and human self-enhancement. Jaan has thought of everything — spreading his bets like a good investor across different possible futures and different mechanisms of impact.

How the Process Works

You submit an application. That’s it. It’s not a super hard application — you detail what you’re doing, how much money you need, and what you’ll spend it on. Once submitted, there’s a two-phase process:

Phase 1: Speculation Grants. About 40 speculators — including names like Eliezer Yudkowsky, Nate Soares, Andrew Critch, David Krueger, Oliver Habryka, Roman Yampolskiy, Tsvi Benson-Tilsen, and Zvi Mowshowitz — evaluate applications and may issue smaller initial grants. An individual speculator’s budget is on the order of $250,000, so these grants are smaller but fast: you could hear back as early as May 6th.

Phase 2: The S-Process. A dozen recommenders (including me) each direct roughly $3–4 million in funding recommendations. Getting a speculation grant in Phase 1 actually helps your case in Phase 2 — it’s a signal that someone knowledgeable found your project promising, and it reduces the amount you’re asking for. The bulk of the money from this phase arrives around late 2026.

The whole thing runs on an impact market mechanism.
Speculators and recommenders who identify great projects get rewarded with bigger budgets in future rounds. It’s a prediction-market-style incentive system designed by the kind of rational, technical people you’d expect — Jaan Tallinn, Andrew Critch, Oliver Habryka. But honestly, you don’t need to worry about these mechanics. Just do the application and get the money.

Who Can Apply?

For-profits, nonprofits, and even individuals with a fiscal sponsor can all apply. If you don’t have a charity or nonprofit, that’s not a blocker. Fiscal sponsors are easy to find. Doom Debates is just my own organization — a sole proprietorship, hardly even an organization — and I’m fiscally sponsored by Manifund. I went to manifund.org, signed up, got a quick approval, and now people can make tax-deductible donations through them. The whole thing took about an hour. There are plenty of organizations that will fiscally sponsor you. If the lack of an organizational structure has been the thing stopping you, that excuse just evaporated.

The Deadline Is April 22nd — and I Already Procrastinated for You

I’ll be blunt: this video is coming late. I already procrastinated most of the time you had to prepare your application, which means you don’t get to procrastinate. There’s no more procrastinating left. You need to get your application out now. If you miss the April 22nd deadline by even a few days, your chances of getting accepted drop dramatically. They’re really not doing the late application thing. And on the flip side, early applications get a bit more attention and slightly better odds.

👉 CLICK HERE TO APPLY FOR SFF FUNDING [https://survivalandflourishing.fund/2026/application] 👈

Everything else I’m saying is overkill compared to the advice to apply for funding.

One Tip for Your Application

Zvi Mowshowitz recently wrote about this, and it echoes what I’ve seen reviewing Y Combinator applications in the past: please make it crystal clear what you’re actually doing.
You’d be surprised how many applications are nearly impossible to understand from the first few sentences. Don’t bury your project description in jargon or vague framing. State what you’re doing, clearly, in the opening lines.

If you want, email me at liron@doomdebates.com and ask, “Is this a clear description of what we’re doing?” I’ll tell you yes or no, and you can tweak it before you submit. Happy to do that.

Why I Care About This

I think the Survival and Flourishing Fund is wildly under-hyped. This is human civilization actually getting its act together — deploying real money to real causes, evaluated by people who know what they’re talking about. These aren’t random, tangential causes. These are shots on goal for some of the biggest problems I can imagine, funded at a scale that can actually make the future go better.

Some of you watching this might be thinking: “Funding is great for people who already run charities, but I’m just a person with an idea.” I’m here to tell you — just do the idea. You can get the money. The money is one application away. Get a fiscal sponsor (one hour), fill out the application, hit submit. You’ll either get a grant or a polite rejection. That’s it. If your project is valuable to humanity, this fund exists to make it happen. Don’t let this deadline pass you by.

👉 CLICK HERE TO APPLY FOR SFF FUNDING [https://survivalandflourishing.fund/2026/application] 👈

Good luck!

Get full access to Doom Debates at lironshapira.substack.com/subscribe [https://lironshapira.substack.com/subscribe?utm_medium=podcast&utm_campaign=CTA_4]
I Challenged DON’T LOOK UP’s Screenwriter to Look Up At AGI
David Sirota helped create “Don’t Look Up,” and sometimes it feels like we’re living inside his movie. Does he share my belief that the looming planetary threat is rogue AI?

Sirota is an award-winning investigative journalist, bestselling author, and former speechwriter for Bernie Sanders. He was nominated for an Oscar for co-writing the story of Don’t Look Up. Find out more about David’s work at The Lever: https://www.levernews.com/

Timestamps

00:00:00 — Cold Open
00:01:20 — Introducing David Sirota
00:04:34 — Why David Fights Against Power and the Concentration of Power
00:13:46 — From NAFTA to AI: The Warnings We Ignored
00:22:05 — How Big Will the AI “Jobpocalypse” Be?
00:25:28 — Superintelligence & the Parallel to Don’t Look Up
00:28:37 — What’s Your P(Doom)™?
00:31:44 — The Speed of the AI Threat
00:36:26 — Society Is Losing a Collective Capacity to Focus
00:38:34 — Is Climate Change David’s Biggest Existential Concern?
00:45:01 — David Reacts to Bernie Sanders’ Data Center Moratorium Proposal
00:49:11 — Can We Build The “Off Button”?
00:52:08 — “Don’t Look Up” x AGI Mashup
00:54:35 — Why There’s Still Hope
00:58:14 — Living in “Don’t Look Up”
00:59:46 — Wrap-Up: Where to Follow Major AI News

Links

Watch Don’t Look Up — https://www.netflix.com/gb/title/81252357
The Lever, investigative news outlet — https://www.levernews.com/
David Sirota on X — https://x.com/davidsirota
David Sirota, Wikipedia — https://en.wikipedia.org/wiki/David_Sirota
Master Plan podcast — https://the.levernews.com/master-plan/
David Sirota, “Hostile Takeover” on Amazon — https://www.amazon.com/Hostile-Takeover-Corruption-Conquered-Government/dp/0307237354
The Three-Body Problem (novel), Wikipedia — https://en.wikipedia.org/wiki/The_Three-Body_Problem_(novel)
WarGames (1983 film), Wikipedia — https://en.wikipedia.org/wiki/WarGames
Adam McKay, Wikipedia — https://en.wikipedia.org/wiki/Adam_McKay
AI 2027 scenario — https://ai-2027.com/

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com [https://doomdebates.com/] and to youtube.com/@DoomDebates, or to really take things to the next level: Donate [https://doomdebates.com/donate] 🙏

Get full access to Doom Debates at lironshapira.substack.com/subscribe [https://lironshapira.substack.com/subscribe?utm_medium=podcast&utm_campaign=CTA_4]
I Went On Jubilee's Middle Ground To Warn About AI Extinction!
The popular debate show Middle Ground by Jubilee invited me on to take the "anti-AI" side back in April 2024. This highlight reel of the episode shows my experience of the discussion.

I'm a lifelong techno-optimist, and it's unnatural for me to represent an anti-tech position. It's just that our AI labs are admitting they're on the path to superintelligence they don't know how to control, which implies we're all about to die and the universe will be robbed of all value forever before our kids grow up. Other than that one little consideration, I'm normally pro-tech!

Timestamps

00:00:00 — Why Liron is Worried about AI
00:02:13 — The Nuclear Analogy
00:02:43 — Human Evolution and Neuralink
00:03:14 — The AI Labs’ Own Warnings

Links

Full episode on YouTube — https://www.youtube.com/watch?v=47fGrqzoFr8
Liron’s 700% Productivity Increase, Bernie & AOC’s Datacenter Ban, Are We In Full Takeoff? — Live Q&A
Multiple live callers join this month's Q&A as we react to Bernie Sanders and AOC's data center moratorium, the sudden shutdown of Sora 2, and the record-breaking "Stop the AI Race" protest. I explain why Claude Code has me claiming a 700% productivity boost and what that means for takeoff timelines, and we debate instrumental convergence.

Timestamps

00:00:00 — Cold Open
00:01:08 — Can AI Train on Its Own Data to Reach Superintelligence?
00:03:42 — Are We in the Takeoff? 700% Faster with Claude Code
00:04:27 — EJJ Joins: Is Instrumental Convergence Really That Dangerous?
00:16:44 — The Positive Feedback Loop Problem
00:20:09 — S-Risk, Consciousness, and Objective Morality
00:22:27 — Futarchy and Prediction Markets
00:24:31 — Low P(Doom) Arguments and Bayesian Updates
00:31:05 — Lee Cyrano Joins: Superintelligence Won’t Matter for Decades
01:02:45 — Lesaun Joins: Are There Adults in the Room?
01:17:39 — Connor Leahy: “There Are No Adults in the Room”
01:19:51 — Bernie Sanders Calls for a Data Center Moratorium
01:24:23 — Claude Code Anecdotes and Audience Q&A
01:35:49 — The Stop the AI Race Protest in San Francisco
01:41:38 — Known Unknowns and Risk Assessment
01:45:03 — From Waymo to Existential Risk
01:51:28 — Closing: The Road to One Million Subscribers

Links

Quintin Pope vs Liron Shapira debate on Doom Debates — https://lironshapira.substack.com/p/ai-alignment-is-solved-phd-researcher
CAP theorem, Wikipedia — https://en.wikipedia.org/wiki/CAP_theorem
Google Cloud Spanner, Wikipedia — https://en.wikipedia.org/wiki/Spanner_(database)
Newcomb’s problem, Wikipedia — https://en.wikipedia.org/wiki/Newcomb%27s_paradox
Gödel’s incompleteness theorems, Wikipedia — https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems
AI Alignment Is Solved?! PhD Researcher Quintin Pope vs Liron Shapira (2023 Twitter Debate)
Dr. Quintin Pope is one of the few critics of AI doomerism who is truly fluent in the concepts and arguments. In October 2023 he joined me for a debate in Twitter Spaces where he argued that AI alignment was basically already solved. His “inside view” on machine learning forced me to update my position, but could he knock me off the doom train?

Timestamps

00:00:00 — Cold Open
00:00:43 — Introductions
00:01:22 — Quintin's Opening Statement
00:02:32 — Liron's Opening Statement
00:05:10 — Has RLHF Solved the Alignment Problem?
00:07:52 — AI Capabilities Are Constrained by Training Data
00:10:52 — Defining ASI and Could RLHF Align a Superintelligence?
00:13:13 — Quintin Is More Optimistic Than OpenAI
00:14:16 — What Is ASI in Your Mind?
00:15:57 — AI in 5 Years (2028) & AI Coding Agents
00:19:05 — Continuous or Discontinuous Capability Gains?
00:19:39 — DEBATE: General Intelligence Algorithm in Humans
00:30:02 — The Only Coherent Explanation of Humans Going to the Moon
00:34:01 — Are We "Fully Cooked" as a General Optimizer?
00:35:53 — Common Mistake in Forecasting Superintelligence
00:42:22 — 'Neat' vs 'Scruffy': Will Interpretable Structure Emerge Inside Neural Nets?
00:48:57 — Does This Disagreement Actually Matter for P(Doom)?
00:54:33 — Thought Experiment: Could You Have Predicted a Species Would Go to the Moon?
00:57:26 — The Basin of Attraction for Superintelligence
00:59:35 — Does a Superintelligence Even Exist in Algorithm Space?
01:09:59 — Closing Statements
01:12:40 — Audience Q&A
01:19:35 — Wrap Up

Links

Original Twitter Spaces debate (Quintin Pope vs. Liron Shapira) — https://x.com/i/spaces/1YpJkwOzOqEJj/peek
Quintin Pope on Twitter/X — https://twitter.com/QuintinPope5
Quintin Pope, Alignment Forum profile — https://www.alignmentforum.org/users/quintin-pope
InstructGPT, Wikipedia — https://en.wikipedia.org/wiki/InstructGPT
AIXI, Wikipedia — https://en.wikipedia.org/wiki/AIXI
AlphaZero, Wikipedia — https://en.wikipedia.org/wiki/AlphaZero
MuZero, Wikipedia — https://en.wikipedia.org/wiki/MuZero
DeepMind AlphaZero and MuZero page — https://deepmind.google/research/alphazero-and-muzero/
Midjourney — https://www.midjourney.com/
DALL-E, Wikipedia — https://en.wikipedia.org/wiki/DALL-E
OpenAI Superalignment announcement — https://openai.com/index/introducing-superalignment/
Shard Theory sequence on LessWrong — https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX
“Evolution Provides No Evidence for the Sharp Left Turn” — https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn
“My Objections to ‘We’re All Gonna Die with Eliezer Yudkowsky’” — https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky
“AI is Centralizing by Default; Let’s Not Make It Worse” — https://forum.effectivealtruism.org/posts/zd5inbT4kYKivincm/ai-is-centralizing-by-default-let-s-not-make-it-worse
Singular Learning Theory, Alignment Forum sequence — https://www.alignmentforum.org/s/mqwA5FcL6SrHEQzox