Future of Life Institute Podcast

Podcast by Future of Life Institute

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

All episodes

231 episodes
Inside China's AI Strategy: Innovation, Diffusion, and US Relations (with Jeffrey Ding)

On this episode, Jeffrey Ding joins me to discuss diffusion of AI versus AI innovation, how US-China dynamics shape AI’s global trajectory, and whether there is an AI arms race between the two powers. We explore Chinese attitudes toward AI safety, the level of concentration of AI development, and lessons from historical technology diffusion. Jeffrey also shares insights from translating Chinese AI writings and the potential of automating translations to bridge knowledge gaps.

You can learn more about Jeffrey’s work at: https://jeffreyjding.github.io

Timestamps:
00:00:00 Preview and introduction
00:01:36 A US-China AI arms race?
00:10:58 Attitudes to AI safety in China
00:17:53 Diffusion of AI
00:25:13 Innovation without diffusion
00:34:29 AI development concentration
00:41:40 Learning from the history of technology
00:47:48 Translating Chinese AI writings
00:55:36 Automating translation of AI writings

25. Apr. 2025 - 1 h 2 min
How Will We Cooperate with AIs? (with Allison Duettmann)

On this episode, Allison Duettmann joins me to discuss centralized versus decentralized AI, how international governance could shape AI’s trajectory, how we might cooperate with future AIs, and the role of AI in improving human decision-making. We also explore which lessons from history apply to AI, the future of space law and property rights, whether technology is invented or discovered, and how AI will impact children.

You can learn more about Allison's work at: https://foresight.org

Timestamps:
00:00:00 Preview
00:01:07 Centralized AI versus decentralized AI
00:13:02 Risks from decentralized AI
00:25:39 International AI governance
00:39:52 Cooperation with future AIs
00:53:51 AI for decision-making
01:05:58 Capital intensity of AI
01:09:11 Lessons from history
01:15:50 Future space law and property rights
01:27:28 Is technology invented or discovered?
01:32:34 Children in the age of AI

11. Apr. 2025 - 1 h 36 min
Brain-like AGI and why it's Dangerous (with Steven Byrnes)

On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We discuss learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies.

You can learn more about Steven's work at: https://sjbyrnes.com/agi.html

Timestamps:
00:00 Preview
00:54 Brain-like AGI Safety
13:16 Controlled AGI versus Social-instinct AGI
19:12 Learning from the brain
28:36 Why is brain-like AI the most likely path to AGI?
39:23 Honesty in AI models
44:02 How to help with brain-like AGI safety
53:36 AI traits with both positive and negative effects
01:02:44 Different AI safety strategies

04. Apr. 2025 - 1 h 13 min
How Close Are We to AGI? Inside Epoch's GATE Model (with Ege Erdil)

On this episode, Ege Erdil from Epoch AI joins me to discuss their new GATE model of AI development, what evolution and brain efficiency tell us about AGI requirements, how AI might impact wages and labor markets, and what it takes to train models with long-term planning. Toward the end, we dig into Moravec’s Paradox, which jobs are most at risk of automation, and what could change Ege's current AI timelines.

You can learn more about Ege's work at https://epoch.ai

Timestamps:
00:00:00 Preview and introduction
00:02:59 Compute scaling and automation - GATE model
00:13:12 Evolution, Brain Efficiency, and AGI Compute Requirements
00:29:49 Broad Automation vs. R&D-Focused AI Deployment
00:47:19 AI, Wages, and Labor Market Transitions
00:59:54 Training Agentic Models and Long-Term Planning Capabilities
01:06:56 Moravec’s Paradox and Automation of Human Skills
01:13:59 Which Jobs Are Most Vulnerable to AI?
01:33:00 Timeline Extremes: What Could Change AI Forecasts?

28. Mar. 2025 - 1 h 34 min
Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)

In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini works as a security researcher at Google DeepMind and has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers and the challenges of ensuring neural network robustness. He examines the difficulties of defending against such attacks, the role of human intuition in his approach, open-source AI, and the potential for scaling AI security research.

Timestamps:
00:00 Nicholas Carlini's contributions to cybersecurity
08:19 Understanding attack strategies
29:39 High-dimensional spaces and attack intuitions
51:00 Challenges in open-source model safety
01:00:11 Unlearning and fact editing in models
01:10:55 Adversarial examples and human robustness
01:37:03 Cryptography and AI robustness
01:55:51 Scaling AI security research

21. März 2025 - 2 h 23 min
