
London Futurists
Podcast by London Futurists
Anticipating and managing exponential impact - hosts David Wood and Calum Chace.

Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.

His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions. He also wrote Pandora's Brain and Pandora's Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.

In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.

He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.

Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.

David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.

He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.

As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones. From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture's Mobility Health business initiative.

He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.
All episodes
116 episodes
Can we use AI to improve how we handle conflict? Or even to end the worst conflicts that are happening all around us? That's the subject of the new book by our guest in this episode, Simon Horton. The book has the bold title "The End of Conflict: How AI will end war and help us get on better".

Simon has a rich background, including being a stand-up comedian and a trapeze artist – which are, perhaps, two useful skills for dealing with acute conflict. He has taught negotiation and conflict resolution for 20 years, across 25 different countries, where his clients have included the British Army, the Saudi Space Agency, and Goldman Sachs. His previous books include "Change their minds" and "The leader's guide to negotiation".

Selected follow-ups:
* Simon Horton [https://theendofconflict.ai/about-the-author]
* The End of Conflict [https://theendofconflict.ai/] - book website
* The Better Angels of our Nature [https://stevenpinker.com/publications/better-angels-our-nature] - book by Steven Pinker
* Crime in England and Wales: year ending March 2024 [https://www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/bulletins/crimeinenglandandwales/yearendingmarch2024] - UK Office for National Statistics
* How Martin McGuinness and Ian Paisley forged an unlikely friendship [https://www.belfasttelegraph.co.uk/news/northern-ireland/how-martin-mcguinness-and-ian-paisley-forged-an-unlikely-friendship/35550640.html] - Belfast Telegraph
* Review of Steven Pinker's Enlightenment Now [https://scottaaronson.blog/?p=3654] - by Scott Aaronson
* A Detailed Critique of One Section of Steven Pinker's Chapter "Existential Threats" [https://www.lesswrong.com/posts/C3wqYsAtgZDzCug8b/a-detailed-critique-of-one-section-of-steven-pinker-s] - by Phil Torres
* End Times: Elites, Counter-Elites, and the Path of Political Disintegration [https://peterturchin.com/book/end-times/] - book by Peter Turchin
* Why do chimps kill each other? [https://www.science.org/content/article/why-do-chimps-kill-each-other] - Science
* Using Artificial Intelligence in Peacemaking: The Libya Experience [https://peacepolls.etinu.net/peacepolls/documents/009260.pdf] - Colin Irwin, University of Liverpool
* Retrospective on the Oslo Accords [https://www.nytimes.com/interactive/2023/11/20/magazine/israel-gaza-oslo-accords.html] - New York Times
* Remesh [https://www.remesh.ai/]
* Polis [https://democracy-technologies.org/tool/polis/] - Democracy Technologies
* Waves: Tech-Powered Democracy [https://demos.co.uk/waves-tech-powered-democracy/] - Demos

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Our guest in this episode is Nate Soares, President of the Machine Intelligence Research Institute, or MIRI. MIRI was founded in 2000 as the Singularity Institute for Artificial Intelligence by Eliezer Yudkowsky, with support from a couple of internet entrepreneurs. Among other things, it ran a series of conferences called the Singularity Summit. In 2012, Peter Diamandis and Ray Kurzweil acquired the Singularity Summit, including the Singularity brand, and the Institute was renamed MIRI.

Nate joined MIRI in 2014 after working as a software engineer at Google, and since then he's been a key figure in the AI safety community. In a blogpost at the time he joined MIRI, he observed: "I turn my skills towards saving the universe, because apparently nobody ever got around to teaching me modesty."

MIRI has long had a fairly pessimistic stance on whether AI alignment is possible. In this episode, we'll explore what drives that view – and whether there is any room for hope.

Selected follow-ups:
* Nate Soares [https://intelligence.org/team/nate-soares/] - MIRI
* Yudkowsky and Soares Announce Major New Book: "If Anyone Builds It, Everyone Dies" [https://intelligence.org/2025/05/15/yudkowsky-and-soares-announce-major-new-book-if-anyone-builds-it-everyone-dies/] - MIRI
* The Bayesian model of probabilistic reasoning [https://arbital.com/p/bayes_rule/?l=1zq]
* During safety testing, o1 broke out of its VM [https://www.reddit.com/r/OpenAI/comments/1ffwbp5/wakeup_moment_during_safety_testing_o1_broke_out/] - Reddit
* Leo Szilard [https://physicsworld.com/a/leo-szilard-the-physicist-who-envisaged-nuclear-weapons-but-later-opposed-their-use/] - Physics World
* David Bowie - Five Years [https://www.youtube.com/watch?v=8gPSGrpIlkc] - Old Grey Whistle Test
* Amara's Law [https://www.computer.org/publications/tech-news/trends/amaras-law-and-tech-future] - IEEE
* Robert Oppenheimer's calculation of p(doom) [https://onlinelibrary.wiley.com/doi/full/10.1002/ntls.20230023]
* JD Vance commenting on AI-2027 [https://controlai.news/p/ais-are-improving-ais]
* SolidGoldMagikarp [https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation] - LessWrong
* ASML [https://www.asml.com/]
* Chicago Pile-1 [https://en.wikipedia.org/wiki/Chicago_Pile-1] - Wikipedia
* Castle Bravo [https://en.wikipedia.org/wiki/Castle_Bravo] - Wikipedia

Our guest in this episode is Henry Shevlin. Henry is the Associate Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where he also co-directs the Kinds of Intelligence program and oversees educational initiatives. He researches the potential for machines to possess consciousness, the ethical ramifications of such developments, and the broader implications for our understanding of intelligence.

In his 2024 paper, "Consciousness, Machines, and Moral Status," Henry examines the recent rapid advancements in machine learning and the questions they raise about machine consciousness and moral status. He suggests that public attitudes towards artificial consciousness may change swiftly, as human-AI interactions become increasingly complex and intimate. He also warns that our tendency to anthropomorphise may lead to misplaced trust in, and emotional attachment to, AIs.

Note: this episode is co-hosted by David and Will Millership, the CEO of a non-profit called PRISM (Partnership for Research Into Sentient Machines). PRISM is seeded by Conscium, a startup where both Calum and David are involved, and which, among other things, is researching the possibility and implications of machine consciousness. Will and Calum will be releasing a new PRISM podcast focusing entirely on conscious AI, and the first few episodes will be in collaboration with the London Futurists Podcast.

Selected follow-ups:
* PRISM podcast [https://prism.buzzsprout.com/]
* Henry Shevlin [https://henryshevlin.com/] - personal site
* Kinds of Intelligence [https://www.lcfi.ac.uk/research/programme/kinds-of-intelligence] - Leverhulme Centre for the Future of Intelligence
* Consciousness, Machines, and Moral Status [https://philarchive.org/rec/SHECMA-6] - 2024 paper by Henry Shevlin
* Apply rich psychological terms in AI with care [https://www.nature.com/articles/s42256-019-0039-y] - by Henry Shevlin and Marta Halina
* What insects can tell us about the origins of consciousness [https://www.pnas.org/doi/10.1073/pnas.1520084113] - by Andrew Barron and Colin Klein
* Consciousness in Artificial Intelligence: Insights from the Science of Consciousness [https://arxiv.org/abs/2308.08708] - by Patrick Butlin, Robert Long, et al.
* Association for the Study of Consciousness [https://theassc.org/about-us/]

Other researchers mentioned:
* Blake Lemoine
* Thomas Nagel
* Ned Block
* Peter Senge
* Galen Strawson
* David Chalmers
* David Benatar
* Thomas Metzinger
* Brian Tomasik
* Murray Shanahan

How can a binding international treaty be agreed and put into practice, when many parties are strongly tempted to break the rules of the agreement, for commercial or military advantage, and when cheating may be hard to detect? That's the dilemma we'll examine in this episode, concerning possible treaties to govern the development and deployment of advanced AI.

Our guest is Otto Barten, Director of the Existential Risk Observatory, which is based in the Netherlands but operates internationally. In November last year, Time magazine published an article by Otto, advocating what his organisation calls a Conditional AI Safety Treaty. In March this year, these ideas were expanded into a 34-page preprint which we'll be discussing today, "International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty".

Before co-founding the Existential Risk Observatory in 2021, Otto had roles as a sustainable energy engineer, data scientist, and entrepreneur. He has a BSc in Theoretical Physics from the University of Groningen and an MSc in Sustainable Energy Technology from Delft University of Technology.

Selected follow-ups:
* Existential Risk Observatory [https://www.existentialriskobservatory.org/]
* There Is a Solution to AI's Existential Risk Problem [https://time.com/7171432/conditional-ai-safety-treaty-trump/] - Time
* International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty [https://arxiv.org/abs/2503.18956] - Otto Barten and colleagues
* The Precipice: Existential Risk and the Future of Humanity [https://theprecipice.com/] - book by Toby Ord
* Grand futures and existential risk [https://www.youtube.com/watch?v=Y6kaOgjY7-E] - lecture by Anders Sandberg in London, attended by Otto
* PauseAI [https://pauseai.info/]
* StopAI [https://www.stopai.info/]
* Responsible Scaling Policies [https://metr.org/blog/2023-09-26-rsp/] - METR
* Meta warns of 'worse' experience for European users [https://www.bbc.co.uk/news/articles/czd3mey1ej2o] - BBC News
* Accidental Nuclear War: a Timeline of Close Calls [https://futureoflife.org/resource/nuclear-close-calls-a-timeline/] - FLI
* The Vulnerable World Hypothesis [https://nickbostrom.com/papers/vulnerable.pdf] - Nick Bostrom
* Semiconductor Manufacturing Optics [https://www.zeiss.com/semiconductor-manufacturing-technology/products/semiconductor-manufacturing-optics.html] - Zeiss
* California Institute for Machine Consciousness [https://cimc.ai/]
* Tipping point for large-scale social change? Just 25 percent [https://penntoday.upenn.edu/news/damon-centola-tipping-point-large-scale-social-change] - Penn Today

In this episode, we return to the subject of existential risks, but with a focus on what actions can be taken to eliminate or reduce these risks. Our guest is James Norris, who describes himself on his website as an existential safety advocate. The website lists four primary organizations which he leads: the International AI Governance Alliance, Upgradable, the Center for Existential Safety, and Survival Sanctuaries.

Previously, one of James' many successful initiatives was Effective Altruism Global, the international conference series for effective altruists. He also spent some time as the organizer of a kind of sibling organization to London Futurists, namely Bay Area Futurists. He graduated from the University of Texas at Austin with a triple major in psychology, sociology, and philosophy, as well as minors in too many subjects to mention.

Selected follow-ups:
* James Norris website [https://www.jamesnorris.org/]
* Upgrade your life & legacy [https://www.upgradable.org/] - Upgradable
* The 7 Habits of Highly Effective People [https://www.franklincovey.com/courses/the-7-habits/] - Stephen Covey
* Beneficial AI 2017 [https://futureoflife.org/event/bai-2017/] - Asilomar conference
* "...superintelligence in a few thousand days" [https://ia.samaltman.com/] - Sam Altman blogpost
* Amara's Law [https://deviq.com/laws/amaras-law] - DevIQ
* The Probability of Nuclear War [https://www.jstor.org/stable/424226] - JFK estimate
* AI Designs Chemical Weapons [https://www.deeplearning.ai/the-batch/ai-designs-chemical-weapons/] - The Batch
* The Vulnerable World Hypothesis [https://nickbostrom.com/papers/vulnerable.pdf] - Nick Bostrom
* We Need To Build Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio [https://officechai.com/ai/we-need-to-build-trustworthy-ai-systems-to-monitor-other-ai-yoshua-bengio/]
* Instrumental convergence [https://en.wikipedia.org/wiki/Instrumental_convergence] - Wikipedia
* Neanderthal extinction [https://en.wikipedia.org/wiki/Neanderthal_extinction] - Wikipedia
* Matrioshka brain [https://en.wikipedia.org/wiki/Matrioshka_brain] - Wikipedia
* Will there be a 'WW3' before 2050? [https://manifold.markets/MetaculusBot/will-there-be-a-world-war-three-bef-293b30c295af] - Manifold prediction market
* Existential Safety Action Pledge [https://actionforsafety.org/]
* An Urgent Call for Global AI Governance [https://www.iaiga.org/] - IAIGA petition
* Build your survival sanctuary [https://survivalsanctuaries.com/]

Other people mentioned include:
* Eliezer Yudkowsky
* Roman Yampolskiy
* Yann LeCun
* Andrew Ng