
Future of Life Institute Podcast

Podcast by Future of Life Institute

English

Science & Technology


About Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

All episodes

485 episodes

Why the AI Race Undermines Safety (with Steven Adler)

Steven Adler is a former safety researcher at OpenAI. He joins the podcast to discuss how to govern increasingly capable AI systems. The conversation covers competitive races between AI companies, limits of current testing and alignment, mental health harms from chatbots, economic shifts from AI labor, and what international rules and audits might be needed before training superintelligent models.

LINKS:
* Steven Adler's Substack: https://stevenadler.substack.com

CHAPTERS:
(00:00) Episode Preview
(01:00) Race Dynamics And Safety
(18:03) Chatbots And Mental Health
(30:42) Models Outsmart Safety Tests
(41:01) AI Swarms And Work
(54:21) Human Bottlenecks And Oversight
(01:06:23) Animals And Superintelligence
(01:19:24) Safety Capabilities And Governance

PRODUCED BY: https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

12 Dec 2025 - 1 h 28 min

Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)

Tyler Johnston is Executive Director of the Midas Project. He joins the podcast to discuss AI transparency and accountability. We explore applying animal rights watchdog tactics to AI companies, the OpenAI Files investigation, and OpenAI's subpoenas against nonprofit critics. Tyler discusses why transparency is crucial when technical safety solutions remain elusive and how public pressure can effectively challenge much larger companies.

LINKS:
* The Midas Project: https://www.themidasproject.com
* Tyler Johnston's LinkedIn: https://www.linkedin.com/in/tyler-johnston-479672224
* Episode on YouTube: https://www.youtube.com/watch?v=jqPDc9JpOc0

CHAPTERS:
(00:00) Episode Preview
(01:06) Introducing the Midas Project
(05:01) Shining a Light on AI
(08:36) Industry Lockdown and Transparency
(13:45) The OpenAI Files
(20:55) Subpoenaed by OpenAI
(29:10) Responding to the Subpoena
(37:41) The Case for Transparency
(44:30) Pricing Risk and Regulation
(52:15) Measuring Transparency and Auditing
(57:50) Hope for the Future

PRODUCED BY: https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

27 Nov 2025 - 1 h 1 min

We're Not Ready for AGI (with Will MacAskill)

William MacAskill is a senior research fellow at Forethought. He joins the podcast to discuss his Better Futures essay series. We explore moral error risks, AI character design, space governance, and persistent path dependence. The conversation also covers risk-averse AI systems, moral trade between value systems, and improving model specifications for ethical reasoning.

LINKS:
- Better Futures Research Series: https://www.forethought.org/research/better-futures
- William MacAskill's Forethought Profile: https://www.forethought.org/people/william-macaskill

CHAPTERS:
(00:00) Episode Preview
(01:03) Improving The Future's Quality
(09:58) Moral Errors and AI Rights
(18:24) AI's Impact on Thinking
(27:17) Utopias and Population Ethics
(36:41) The Danger of Moral Lock-in
(44:38) Deals with Misaligned AI
(57:25) AI and Moral Trade
(01:08:21) Improving AI Ethical Reasoning
(01:16:05) The Risk of Path Dependence
(01:27:41) Avoiding Future Lock-in
(01:36:22) The Urgency of Space Governance
(01:46:19) A Future Research Agenda
(01:57:36) Is Intelligence a Good Bet?

PRODUCED BY: https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

14 Nov 2025 - 2 h 3 min

What Happens When Insiders Sound the Alarm on AI? (with Karl Koch)

Karl Koch is the founder of the AI Whistleblower Initiative. He joins the podcast to discuss transparency and protections for AI insiders who spot safety risks. We explore current company policies, legal gaps, how to evaluate disclosure decisions, and whistleblowing as a backstop when oversight fails. The conversation covers practical guidance for potential whistleblowers and the challenges of maintaining transparency as AI development accelerates.

LINKS:
* About the AI Whistleblower Initiative: https://aiwi.org/about/
* Karl Koch: https://aiwi.org/team-member-karl-koch/

PRODUCED BY: https://aipodcast.ing

CHAPTERS:
(00:00) Episode Preview
(00:55) Starting the Whistleblower Initiative
(05:43) Current State of Protections
(13:04) Path to Optimal Policies
(23:28) A Whistleblower's First Steps
(32:29) Life After Whistleblowing
(39:24) Evaluating Company Policies
(48:19) Alternatives to Whistleblowing
(55:24) High-Stakes Future Scenarios
(01:02:27) AI and National Security

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

DISCLAIMERS:
- AIWI does not request, encourage, or counsel potential whistleblowers or listeners of this podcast to act unlawfully.
- This is not legal advice. If you, the listener, find yourself needing legal counsel, please visit https://aiwi.org/contact-hub/ for detailed profiles of the world's leading whistleblower support organizations.

7 Nov 2025 - 1 h 8 min

Can Machines Be Truly Creative? (with Maya Ackerman)

Maya Ackerman is an AI researcher, co-founder and CEO of WaveAI, and author of the book "Creative Machines: AI, Art & Us." She joins the podcast to discuss creativity in humans and machines. We explore defining creativity as novel and valuable output, why evolution qualifies as creative, and how AI alignment can reduce machine creativity. The conversation covers humble creative machines versus all-knowing oracles, hallucination's role in thought, and human-AI collaboration strategies that elevate rather than replace human capabilities.

LINKS:
- Maya Ackerman: https://en.wikipedia.org/wiki/Maya_Ackerman
- Creative Machines: AI, Art & Us: https://maya-ackerman.com/creative-machines-book/

PRODUCED BY: https://aipodcast.ing

CHAPTERS:
(00:00) Episode Preview
(01:00) Defining Human Creativity
(02:58) Machine and AI Creativity
(06:25) Measuring Subjective Creativity
(10:07) Creativity in Animals
(13:43) Alignment Damages Creativity
(19:09) Creativity is Hallucination
(26:13) Humble Creative Machines
(30:50) Incentives and Replacement
(40:36) Analogies for the Future
(43:57) Collaborating with AI
(52:20) Reinforcement Learning & Slop
(55:59) AI in Education

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

24 Oct 2025 - 1 h 1 min
