
London Futurists
A podcast by London Futurists
About London Futurists
Anticipating and managing exponential impact - hosts David Wood and Calum Chace.

Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.

His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions. He also wrote Pandora's Brain and Pandora's Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.

In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.

He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.

Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.

David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.

He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.

As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones. From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture's Mobility Health business initiative.

He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.
All episodes
119 episodes
Could the future see the emergence and adoption of a new field of engineering called nucleonics, in which the energy of nuclear fusion is accessed at relatively low temperatures, producing abundant, clean, safe energy? This kind of idea has been discussed since 1989, when the claims of cold fusion first received media attention. It is often assumed that the field quickly reached a dead-end, and that the only scientists who continue to study it are cranks. However, as we'll hear in this episode, there may be good reasons to keep an open mind about a number of anomalous but promising results.

Our guest is Jonah Messinger, who is a Winton Scholar and Ph.D. student at the Cavendish Laboratory of Physics at the University of Cambridge. Jonah is also a Research Affiliate at MIT, a Senior Energy Analyst at the Breakthrough Institute, and previously he was a Visiting Scientist and ThinkSwiss Scholar at ETH Zürich. His work has appeared in research journals, on the John Oliver show, and in publications of Columbia University. He earned his Master's in Energy and Bachelor's in Physics from the University of Illinois at Urbana-Champaign, where he was named to its Senior 100 Honorary.

Selected follow-ups:
* Jonah Messinger [https://thebreakthrough.org/people/jonah-messinger] (The Breakthrough Institute)
* nucleonics.org [https://nucleonics.org/]
* U.S. Department of Energy Announces $10 Million in Funding to Projects Studying Low-Energy Nuclear Reactions [https://arpa-e.energy.gov/news-and-events/news-and-insights/us-department-energy-announces-10-million-funding-projects-studying-low-energy-nuclear-reactions] (ARPA-E)
* How Anomalous Science Breaks Through [https://thebreakthrough.org/blog/how-anomalous-science-breaks-through] - by Jonah Messinger
* Wolfgang Pauli [https://en.wikiquote.org/wiki/Wolfgang_Pauli] (Wikiquote)
* Cold fusion: A case study for scientific behavior [https://undsci.berkeley.edu/cold-fusion-a-case-study-for-scientific-behavior/] (Understanding Science)
* Calculated fusion rates in isotopic hydrogen molecules [https://nucleonics.org/literature-showcase/koonin-nauenberg-1989-spontaneous-fusion-deuterium-molecule-starting-point-low] - by SE Koonin & M Nauenberg
* Known mechanisms that increase nuclear fusion rates in the solid state [https://iopscience.iop.org/article/10.1088/1367-2630/ad091c] - by Florian Metzler et al
* Introduction to superradiance [https://coldfusionblog.net/2014/05/19/introduction-to-superradiance/] (Cold Fusion Blog)
* Peter L. Hagelstein [https://www.rle.mit.edu/people/peter-l-hagelstein/] - Professor at MIT
* Models for nuclear fusion in the solid state [https://arxiv.org/abs/2501.08338] - by Peter Hagelstein et al
* Risk and Scientific Reputation: Lessons from Cold Fusion [https://arxiv.org/abs/2201.03776] - by Huw Price
* Katalin Karikó [https://en.wikipedia.org/wiki/Katalin_Karik%C3%B3] (Wikipedia)
* "Abundance" and Its Insights for Policymakers [https://www.eesi.org/articles/view/abundance-and-its-insights-for-policymakers] - by Hadley Brown
* Identifying intellectual dark matter [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4624693] - by Florian Metzler and Jonah Messinger

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
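For context on the physics at stake, here is a standard textbook note rather than a claim from the episode. When two deuterium nuclei fuse, the conventional branches and their energy releases are:

\[
\begin{aligned}
\mathrm{D} + \mathrm{D} \;&\rightarrow\; \mathrm{{}^{3}He}\,(0.82~\mathrm{MeV}) + n\,(2.45~\mathrm{MeV}) \qquad (\approx 50\%)\\
\mathrm{D} + \mathrm{D} \;&\rightarrow\; \mathrm{T}\,(1.01~\mathrm{MeV}) + p\,(3.02~\mathrm{MeV}) \qquad (\approx 50\%)
\end{aligned}
\]

Part of what makes the reported anomalies contentious is that claimed excess heat has not been accompanied by the neutron flux these branching ratios would predict, which is why the solid-state models linked in the follow-ups look for mechanisms that channel nuclear energy into the lattice rather than into energetic particles.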

This episode of the London Futurists Podcast is a special joint production with the AI and You podcast, which is hosted by Peter Scott. It features a three-way discussion, between Peter, Calum, and David, on the future of AI, with particular focus on AI agents, AI safety, and AI boycotts.

Peter Scott is a futurist, speaker, and technology expert helping people master technological disruption. After receiving a Master's degree in Computer Science from Cambridge University, he went to California to work for NASA's Jet Propulsion Laboratory. His weekly podcast, "Artificial Intelligence and You", tackles three questions: What is AI? Why will it affect you? How do you and your business survive and thrive through the AI Revolution? Peter's second book, also called "Artificial Intelligence and You", was released in 2022. Peter works with schools to help them pivot their governance frameworks, curricula, and teaching methods to adapt to and leverage AI.

Selected follow-ups:
* Artificial Intelligence and You [https://aiandyou.net/] (podcast)
* Making Sense of AI [https://www.peterscott.com/] - Peter's personal website
* Artificial Intelligence and You [https://www.peterscott.com/ai-and-you.html] (book)
* AI agent verification [https://conscium.com/] - Conscium
* Preventing Zero-Click AI Threats: Insights from EchoLeak [https://www.trendmicro.com/en_gb/research/25/g/preventing-zero-click-ai-threats-insights-from-echoleak.html] - Trend Micro
* Future Crimes [https://www.goodreads.com/book/show/22318398-future-crimes] - book by Marc Goodman
* How TikTok Serves Up Sex and Drug Videos to Minors [https://www.wsj.com/tech/tiktok-algorithm-sex-drugs-minors-11631052944] - Wall Street Journal
* COVID-19 vaccine misinformation and hesitancy [https://en.wikipedia.org/wiki/COVID-19_vaccine_misinformation_and_hesitancy] - Wikipedia
* Cambridge Analytica [https://en.wikipedia.org/wiki/Cambridge_Analytica] - Wikipedia
* Invisible Rulers [https://www.reneediresta.com/books/] - book by Renée DiResta
* 2025 Northern Ireland riots [https://en.wikipedia.org/wiki/2025_Northern_Ireland_riots] (Ballymena) - Wikipedia
* Google DeepMind Slammed by Protesters Over Broken AI Safety Promise [https://www.techtimes.com/articles/311120/20250701/google-deepmind-slammed-protesters-over-broken-ai-safety-promise.htm] - Tech Times

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

The guest in this episode is Hugo Spowers. Hugo has led an adventurous life. In the 1970s and 80s he was an active member of the Dangerous Sports Club, which invented bungee jumping, inspired by an initiation ceremony in Vanuatu. Hugo skied down a black run in St. Moritz in formal dress, seated at a grand piano, and he broke his back, neck and hips when he misjudged the length of one of his bungee ropes.

Hugo is a petrol head, and has done more than his fair share of car racing. But if he'll excuse the pun, his driving passion was always the environment, and he is one of the world's most persistent and dedicated pioneers of hydrogen cars. He is co-founder and CEO of Riversimple, a 24-year-old pre-revenue startup, which has developed five generations of research vehicles.

Hydrogen cars are powered by electric motors using electricity generated by fuel cells. Fuel cells are electrolysis in reverse: you put in hydrogen and oxygen, and what you get out is electricity and water. There is a long-standing debate among energy experts about the role of hydrogen fuel cells in the energy mix, and Hugo is a persuasive advocate.

Riversimple's cars carry modest-sized fuel cells complemented by supercapacitors, with motors for each of the four wheels. The cars are made of composites, not steel, because minimising weight is critical for fuel efficiency, pollution, and road safety. The cars are leased rather than sold, which enables a circular business model, involving higher initial investment per car, and no built-in obsolescence. The initial, market-entry cars are designed as local run-arounds for households with two cars, which means the fuelling network can be built out gradually. And Hugo also has strong opinions about company governance.

Selected follow-ups:
* Hugo Spowers [https://en.wikipedia.org/wiki/Hugo_Spowers] - Wikipedia
* Riversimple [https://www.riversimple.com/]

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
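The "electrolysis in reverse" description can be made concrete with the standard electrochemistry of a hydrogen fuel cell - a sketch for context, not material from the episode:

\[
\begin{aligned}
\text{Anode:}\quad & 2\,\mathrm{H_2} \;\rightarrow\; 4\,\mathrm{H^+} + 4\,e^- \\
\text{Cathode:}\quad & \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \;\rightarrow\; 2\,\mathrm{H_2O} \\
\text{Overall:}\quad & 2\,\mathrm{H_2} + \mathrm{O_2} \;\rightarrow\; 2\,\mathrm{H_2O}
\end{aligned}
\]

Running the numbers: with \( \Delta G^\circ \approx -237~\mathrm{kJ/mol} \) for liquid water and \( n = 2 \) electrons per molecule of hydrogen, the ideal cell voltage is \( E^\circ = -\Delta G^\circ / (nF) = 237{,}100 / (2 \times 96{,}485) \approx 1.23~\mathrm{V} \), which is why practical stacks connect many cells in series. An electrolyser is exactly this reaction driven backwards by an external voltage.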

Can we use AI to improve how we handle conflict? Or even to end the worst conflicts that are happening all around us? That's the subject of the new book by our guest in this episode, Simon Horton. The book has the bold title "The End of Conflict: How AI will end war and help us get on better".

Simon has a rich background, including being a stand-up comedian and a trapeze artist – which are, perhaps, two useful skills for dealing with acute conflict. He has taught negotiation and conflict resolution for 20 years, across 25 different countries, where his clients have included the British Army, the Saudi Space Agency, and Goldman Sachs. His previous books include "Change Their Minds" and "The Leader's Guide to Negotiation".

Selected follow-ups:
* Simon Horton [https://theendofconflict.ai/about-the-author]
* The End of Conflict [https://theendofconflict.ai/] - book website
* The Better Angels of Our Nature [https://stevenpinker.com/publications/better-angels-our-nature] - book by Steven Pinker
* Crime in England and Wales: year ending March 2024 [https://www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/bulletins/crimeinenglandandwales/yearendingmarch2024] - UK Office for National Statistics
* How Martin McGuinness and Ian Paisley forged an unlikely friendship [https://www.belfasttelegraph.co.uk/news/northern-ireland/how-martin-mcguinness-and-ian-paisley-forged-an-unlikely-friendship/35550640.html] - Belfast Telegraph
* Review of Steven Pinker's Enlightenment Now [https://scottaaronson.blog/?p=3654] - by Scott Aaronson
* A Detailed Critique of One Section of Steven Pinker's Chapter "Existential Threats" [https://www.lesswrong.com/posts/C3wqYsAtgZDzCug8b/a-detailed-critique-of-one-section-of-steven-pinker-s] - by Phil Torres
* End Times: Elites, Counter-Elites, and the Path of Political Disintegration [https://peterturchin.com/book/end-times/] - book by Peter Turchin
* Why do chimps kill each other? [https://www.science.org/content/article/why-do-chimps-kill-each-other] - Science
* Using Artificial Intelligence in Peacemaking: The Libya Experience [https://peacepolls.etinu.net/peacepolls/documents/009260.pdf] - Colin Irwin, University of Liverpool
* Retrospective on the Oslo Accord [https://www.nytimes.com/interactive/2023/11/20/magazine/israel-gaza-oslo-accords.html] - New York Times
* Remesh [https://www.remesh.ai/]
* Polis [https://democracy-technologies.org/tool/polis/] - Democracy Technologies
* Waves: Tech-Powered Democracy [https://demos.co.uk/waves-tech-powered-democracy/] - Demos

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Our guest in this episode is Nate Soares, President of the Machine Intelligence Research Institute, or MIRI. MIRI was founded in 2000 as the Singularity Institute for Artificial Intelligence by Eliezer Yudkowsky, with support from a couple of internet entrepreneurs. Among other things, it ran a series of conferences called the Singularity Summit. In 2012, Peter Diamandis and Ray Kurzweil acquired the Singularity Summit, including the Singularity brand, and the Institute was renamed MIRI.

Nate joined MIRI in 2014 after working as a software engineer at Google, and since then he's been a key figure in the AI safety community. In a blogpost at the time he joined MIRI he observed: "I turn my skills towards saving the universe, because apparently nobody ever got around to teaching me modesty."

MIRI has long had a fairly pessimistic stance on whether AI alignment is possible. In this episode, we'll explore what drives that view, and whether there is any room for hope.

Selected follow-ups:
* Nate Soares [https://intelligence.org/team/nate-soares/] - MIRI
* Yudkowsky and Soares Announce Major New Book: "If Anyone Builds It, Everyone Dies" [https://intelligence.org/2025/05/15/yudkowsky-and-soares-announce-major-new-book-if-anyone-builds-it-everyone-dies/] - MIRI
* The Bayesian model of probabilistic reasoning [https://arbital.com/p/bayes_rule/?l=1zq]
* During safety testing, o1 broke out of its VM [https://www.reddit.com/r/OpenAI/comments/1ffwbp5/wakeup_moment_during_safety_testing_o1_broke_out/] - Reddit
* Leo Szilard [https://physicsworld.com/a/leo-szilard-the-physicist-who-envisaged-nuclear-weapons-but-later-opposed-their-use/] - Physics World
* David Bowie - Five Years [https://www.youtube.com/watch?v=8gPSGrpIlkc] - Old Grey Whistle Test
* Amara's Law [https://www.computer.org/publications/tech-news/trends/amaras-law-and-tech-future] - IEEE
* Robert Oppenheimer's calculation of p(doom) [https://onlinelibrary.wiley.com/doi/full/10.1002/ntls.20230023]
* JD Vance commenting on AI-2027 [https://controlai.news/p/ais-are-improving-ais]
* SolidGoldMagikarp [https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation] - LessWrong
* ASML [https://www.asml.com/]
* Chicago Pile-1 [https://en.wikipedia.org/wiki/Chicago_Pile-1] - Wikipedia
* Castle Bravo [https://en.wikipedia.org/wiki/Castle_Bravo] - Wikipedia

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
