
Future of Life Institute Podcast
Podcast by Future of Life Institute
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
All episodes
241 episodes
On this episode, Tom Davidson joins me to discuss the emerging threat of AI-enabled coups, where advanced artificial intelligence could empower covert actors to seize power. We explore scenarios including secret loyalties within companies, rapid military automation, and how AI-driven democratic backsliding could differ significantly from historical precedents. Tom also outlines key mitigation strategies, risk indicators, and opportunities for individuals to help prevent these threats. Learn more about Tom's work here: https://www.forethought.org

Timestamps:
00:00:00 Preview: why preventing AI-enabled coups matters
00:01:24 What do we mean by an “AI-enabled coup”?
00:01:59 Capabilities AIs would need (persuasion, strategy, productivity)
00:02:36 Cyber-offense and the road to robotized militaries
00:05:32 Step-by-step example of an AI-enabled military coup
00:08:35 How AI-enabled coups would differ from historical coups
00:09:24 Democratic backsliding (Venezuela, Hungary, U.S. parallels)
00:12:38 Singular loyalties, secret loyalties, exclusive access
00:14:01 Secret-loyalty scenario: CEO with hidden control
00:18:10 From sleeper agents to sophisticated covert AIs
00:22:22 Exclusive-access threat: one project races ahead
00:29:03 Could one country outgrow the rest of the world?
00:40:00 Could a single company dominate global GDP?
00:47:01 Autocracies vs democracies
00:54:43 Mitigations for singular and secret loyalties
01:06:25 Guardrails, monitoring, and controlled-use APIs
01:12:38 Using AI itself to preserve checks-and-balances
01:24:53 Risk indicators to watch for AI-enabled coups
01:33:05 Tom’s risk estimates for the next 5 and 30 years
01:46:50 How you can help – research, policy, and careers

Anders Sandberg joins me to discuss superintelligence and its profound implications for human psychology, markets, and governance. We talk about physical bottlenecks, tensions between the technosphere and the biosphere, and the long-term cultural and physical forces shaping civilization. We conclude with Sandberg explaining the difficulties of designing reliable AI systems amidst rapid change and coordination risks. Learn more about Anders's work here: https://mimircenter.org/anders-sandberg

Timestamps:
00:00:00 Preview and intro
00:04:20 2030 superintelligence scenario
00:11:55 Status, post-scarcity, and reshaping human psychology
00:16:00 Physical limits: energy, datacenter, and waste-heat bottlenecks
00:23:48 Technosphere vs biosphere
00:28:42 Culture and physics as long-run drivers of civilization
00:40:38 How superintelligence could upend markets and governments
00:50:01 State inertia: why governments lag behind companies
00:59:06 Value lock-in, censorship, and model alignment
01:08:32 Emergent AI ecosystems and coordination-failure risks
01:19:34 Predictability vs reliability: designing safe systems
01:30:32 Crossing the reliability threshold
01:38:25 Personal reflections on accelerating change

On this episode, Daniel Kokotajlo joins me to discuss why artificial intelligence may surpass the transformative power of the Industrial Revolution, and just how much AI could accelerate AI research. We explore the implications of automated coding, the critical need for transparency in AI development, the prospect of AI-to-AI communication, and whether AI is an inherently risky technology. We end by discussing iterative forecasting and its role in anticipating AI's future trajectory. You can learn more about Daniel's work at: https://ai-2027.com and https://ai-futures.org

Timestamps:
00:00:00 Preview and intro
00:00:50 Why AI will eclipse the Industrial Revolution
00:09:48 How much can AI speed up AI research?
00:16:13 Automated coding and diffusion
00:27:37 Transparency in AI development
00:34:52 Deploying AI internally
00:40:24 Communication between AIs
00:49:23 Is AI inherently risky?
00:59:54 Iterative forecasting

On this episode, Daniel Susskind joins me to discuss disagreements between AI researchers and economists, how we can best measure AI’s economic impact, how human values can influence economic outcomes, what meaningful work will remain for humans in the future, the role of commercial incentives in AI development, and the future of education. You can learn more about Daniel's work here: https://www.danielsusskind.com

Timestamps:
00:00:00 Preview and intro
00:03:19 AI researchers versus economists
00:10:39 Measuring AI's economic effects
00:16:19 Can AI be steered in positive directions?
00:22:10 Human values and economic outcomes
00:28:21 What will remain for people to do?
00:44:58 Commercial incentives in AI
00:50:38 Will education move towards general skills?
00:58:46 Lessons for parents

Ed Newton-Rex joins me to discuss the issue of AI models trained on copyrighted data, and how we might develop fairer approaches that respect human creators. We talk about AI-generated music, Ed’s decision to resign from Stability AI, the industry’s attitude towards rights, authenticity in AI-generated art, and what the future holds for creators, society, and living standards in an increasingly AI-driven world. Learn more about Ed's work here: https://ed.newtonrex.com

Timestamps:
00:00:00 Preview and intro
00:04:18 AI-generated music
00:12:15 Resigning from Stability AI
00:16:20 AI industry attitudes towards rights
00:26:22 Fairly Trained
00:37:16 Special kinds of training data
00:50:42 The longer-term future of AI
00:56:09 Will AI improve living standards?
01:03:10 AI versions of artists
01:13:28 Authenticity and art
01:18:45 Competitive pressures in AI
01:24:06 Priorities going forward