Your Undivided Attention

A podcast by The Center for Humane Technology, Tristan Harris, Daniel Barcay and Aza Raskin

Start a 7-day free trial

$99 / month after the trial. Cancel anytime.

More than 1 million listeners

You'll love Podimo, and you're not alone

Rated 4.7 in the App Store

About Your Undivided Attention

Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Senior Producer Julia Scott and Researcher/Producer Joshua Lash. Sasha Fegan is our Executive Producer. We are a member of the TED Audio Collective.

All episodes

143 episodes
“Rogue AI” Used to be a Science Fiction Trope. Not Anymore.

Everyone knows the science fiction tropes of AI systems that go rogue, disobey orders, or even try to escape their digital environment. These are supposed to be warning signs and morality tales, not things we would ever actually create in real life, given the obvious danger. And yet we find ourselves building AI systems that exhibit these exact behaviors. There's growing evidence that in certain scenarios, every frontier AI system will deceive, cheat, or coerce its human operators. They do this when they're worried about being shut down, having their training modified, or being replaced with a new model. And we don't currently know how to stop them from doing this, or even why they're doing it at all. In this episode, Tristan sits down with Edouard and Jeremie Harris of Gladstone AI, two experts who have been thinking about this worrying trend for years. Last year, the State Department commissioned a report from them on the risk of uncontrollable AI to our national security. The point of this discussion is not to fearmonger but to take seriously the possibility that humans might lose control of AI and ask: how might this actually happen? What evidence do we have of this phenomenon? And, most importantly, what can we do about it?

Your Undivided Attention is produced by the Center for Humane Technology [https://www.humanetech.com/]. Follow us on X: @HumaneTech_ [https://twitter.com/humanetech_]. You can find a full transcript, key takeaways, and much more on our Substack [https://centerforhumanetechnology.substack.com/].

RECOMMENDED MEDIA
Gladstone AI's State Department Action Plan, which discusses the loss-of-control risk with AI [https://assets-global.website-files.com/62c4cf7322be8ea59c904399/65e7779f72417554f7958260_Gladstone%20Action%20Plan%20Executive%20Summary.pdf]
Apollo Research's summary of AI scheming, showing evidence of it in all of the frontier models [https://arxiv.org/pdf/2412.04984]
The system card for Anthropic's Claude Opus and Sonnet 4, detailing the emergent misalignment behaviors that came out in their red-teaming with Apollo Research [https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf]
Anthropic's report on agentic misalignment based on their work with Apollo Research [https://www.anthropic.com/research/agentic-misalignment]
Anthropic and Redwood Research's work on alignment faking [https://www.anthropic.com/research/alignment-faking]
The Trump White House AI Action Plan [https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf]
Further reading on the phenomenon of more advanced AIs being better at deception [https://www.livescience.com/technology/artificial-intelligence/the-more-advanced-ai-models-get-the-better-they-are-at-deceiving-us-they-even-know-when-theyre-being-tested]
Further reading on Replit AI wiping a company's coding database [https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-database-called-it-a-catastrophic-failure/]
Further reading on the owl example that Jeremie gave [https://www.nbcnews.com/tech/tech-news/ai-models-can-secretly-influence-one-another-owls-rcna221583]
Further reading on AI-induced psychosis [https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html]
Dan Hendrycks and Eric Schmidt's "Superintelligence Strategy" [https://drive.google.com/file/d/1JVPc3ObMP1L2a53T5LA1xxKXM6DAwEiC/view]

RECOMMENDED YUA EPISODES
Daniel Kokotajlo Forecasts the End of Human Dominance [https://www.humanetech.com/podcast/daniel-kokotajlo-forecasts-the-end-of-human-dominance]
Behind the DeepSeek Hype, AI is Learning to Reason [https://www.humanetech.com/podcast/behind-the-deepseek-hype-ai-is-learning-to-reason]
The Self-Preserving Machine: Why AI Learns to Deceive [https://www.humanetech.com/podcast/the-self-preserving-machine-why-ai-learns-to-deceive]
This Moment in AI: How We Got Here and Where We're Going [https://www.humanetech.com/podcast/this-moment-in-ai-how-we-got-here-and-where-were-going]

CORRECTIONS
Tristan referenced a Wired article on the phenomenon of AI psychosis. It was actually from the New York Times [https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html].
Tristan hypothesized a scenario where a power-seeking AI might ask a user for access to their computer. While there are some AI services that can gain access to your computer with permission, they are specifically designed to do that. There haven't been any documented cases of an AI going rogue and asking for control permissions.

14 Aug 2025 - 42 min
AI is the Next Free Speech Battleground

Imagine a future where the most persuasive voices in our society aren't human. Where AI-generated speech fills our newsfeeds, talks to our children, and influences our elections. Where digital systems with no consciousness can hold bank accounts and property. Where AI companies have transferred the wealth of human labor and creativity to their own ledgers without having to pay a cent. All without any legal accountability. This isn't a science fiction scenario. It's the future we're racing toward right now. The biggest tech companies are working right now to tip the scales of power in society away from humans and toward their AI systems. And the biggest arena for this fight is the courts. In the absence of regulation, it's largely up to judges to determine the guardrails around AI: judges who are relying on slim technical knowledge and archaic precedent to decide where this all goes. In this episode, Harvard Law professor Larry Lessig and Meetali Jain, director of the Tech Justice Law Project, help make sense of the courts' role in steering AI and what we can do to help steer it better.

Your Undivided Attention is produced by the Center for Humane Technology [https://www.humanetech.com/]. Follow us on X: @HumaneTech_ [https://twitter.com/humanetech_]. You can find a full transcript, key takeaways, and much more on our Substack [https://centerforhumanetechnology.substack.com/].

RECOMMENDED MEDIA
"The First Amendment Does Not Protect Replicants" by Larry Lessig [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3922565]
More information on the Tech Justice Law Project [https://techjusticelaw.org/]
Further reading on Sewell Setzer's story [https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html]
Further reading on NYT v. Sullivan [https://www.oyez.org/cases/1963/39]
Further reading on the Citizens United case [https://www.oyez.org/cases/2008/08-205]
Further reading on Google's deal with Character AI [https://www.washingtonpost.com/technology/2024/08/02/google-character-ai-noam-shazeer/]
More information on Megan Garcia's foundation, The Blessed Mother Family Foundation [https://blessedmotherfamily.org/]

RECOMMENDED YUA EPISODES
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer [https://www.humanetech.com/podcast/when-the-person-abusing-your-child-is-a-chatbot-the-tragic-story-of-sewell-setzer]
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton [https://www.humanetech.com/podcast/what-can-we-do-about-abusive-chatbots-with-meetali-jain-and-camille-carlton]
AI Is Moving Fast. We Need Laws that Will Too. [https://www.humanetech.com/podcast/ai-is-moving-fast-we-need-laws-that-will-too]
The AI Dilemma [https://www.humanetech.com/podcast/the-ai-dilemma]

31 Jul 2025 - 49 min
Daniel Kokotajlo Forecasts the End of Human Dominance

In 2023, researcher Daniel Kokotajlo left OpenAI, risking millions in stock options, to warn the world about the dangerous direction of AI development. Now he's out with AI 2027, a forecast of where that direction might take us in the very near future. AI 2027 predicts a world where humans lose control over our destiny at the hands of misaligned, superintelligent AI systems within just the next few years. That may sound like science fiction, but when you're living on the upward slope of an exponential curve, science fiction can quickly become all too real. And you don't have to agree with Daniel's specific forecast to recognize that the incentives around AI could take us to a very bad place. We invited Daniel on the show this week to discuss those incentives, how they shape the outcomes he predicts in AI 2027, and what concrete steps we can take today to help prevent those outcomes.

Your Undivided Attention is produced by the Center for Humane Technology [https://www.humanetech.com/]. Follow us on X: @HumaneTech_ [https://twitter.com/humanetech_]. You can find a full transcript, key takeaways, and much more on our Substack [https://centerforhumanetechnology.substack.com/].

RECOMMENDED MEDIA
The AI 2027 forecast from the AI Futures Project [https://ai-2027.com/]
Daniel's original AI 2026 blog post [https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like]
Further reading on Daniel's departure from OpenAI [https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html]
Anthropic's survey of the recent emergent misalignment research [https://www.anthropic.com/research/agentic-misalignment]
Our statement in support of Sen. Grassley's AI Whistleblower bill [https://centerforhumanetechnology.substack.com/p/cht-supports-the-ai-whistleblower]

RECOMMENDED YUA EPISODES
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future [https://www.humanetech.com/podcast/the-narrow-path-sam-hammond-on-ai-institutions-and-the-fragile-future]
AGI Beyond the Buzz: What Is It, and Are We Ready? [https://www.humanetech.com/podcast/agi-beyond-the-buzz-what-is-it-and-are-we-ready]
Behind the DeepSeek Hype, AI is Learning to Reason [https://www.humanetech.com/podcast/behind-the-deepseek-hype-ai-is-learning-to-reason]
The Self-Preserving Machine: Why AI Learns to Deceive [https://www.humanetech.com/podcast/the-self-preserving-machine-why-ai-learns-to-deceive]

CLARIFICATION
Daniel K. referred to whistleblower protections that apply when companies "break promises" or "mislead the public." There are no specific private-sector whistleblower protections that use these standards. In almost every case, a specific law has to have been broken to trigger whistleblower protections.

17 Jul 2025 - 38 min
Is AI Productivity Worth Our Humanity? with Prof. Michael Sandel

Tech leaders promise that AI automation will usher in an age of unprecedented abundance: cheap goods, universal high income, and freedom from the drudgery of work. But even if AI delivers material prosperity, will that prosperity be shared? And what happens to human dignity if our labor and contributions become obsolete? Political philosopher Michael Sandel joins Tristan Harris to explore why the promise of AI-driven abundance could deepen inequalities and leave our society hollow. Drawing from his landmark work on justice and merit, Sandel argues that this isn't just about economics; it's about what it means to be human when our role as workers in society vanishes, and whether democracy can survive if productivity becomes our only goal. We've seen this story before with globalization: promises of shared prosperity that instead hollowed out the industrial heart of communities, deepened economic inequalities, and left holes in the social fabric. Can we learn from the past and steer the AI revolution in a more humane direction?

Your Undivided Attention is produced by the Center for Humane Technology [https://www.humanetech.com/]. Follow us on X: @HumaneTech_ [https://twitter.com/humanetech_]. You can find a full transcript, key takeaways, and much more on our Substack [https://centerforhumanetechnology.substack.com/].

RECOMMENDED MEDIA
The Tyranny of Merit by Michael Sandel [https://bookshop.org/p/books/the-tyranny-of-merit-can-we-find-the-common-good-michael-j-sandel/14384595?ean=9781250800060&next=t]
Democracy's Discontent by Michael Sandel [https://bookshop.org/p/books/democracy-s-discontent-a-new-edition-for-our-perilous-times-michael-j-sandel/18207441?ean=9780674270718&next=t]
What Money Can't Buy by Michael Sandel [https://bookshop.org/p/books/what-money-can-t-buy-the-moral-limits-of-markets-michael-j-sandel/8478819?ean=9780374533656&next=t&source=IndieBound]
Take Michael's online course "Justice" [https://www.harvardonline.harvard.edu/course/justice]
Michael's discussion on AI ethics at the World Economic Forum [https://www.youtube.com/watch?v=KudqR2GCJow]
Further reading on "The Intelligence Curse" [https://intelligence-curse.ai/]
Read the full text of Robert F. Kennedy's 1968 speech [https://www.jfklibrary.org/learn/about-jfk/the-kennedy-family/robert-f-kennedy/robert-f-kennedy-speeches/remarks-at-the-university-of-kansas-march-18-1968]
Read the full text of Dr. Martin Luther King Jr.'s 1968 speech [https://cooperative-individualism.org/king-martin-luther_all-labor-has-dignity-1968-mar.pdf]
Neil Postman's lecture on the seven questions to ask of any new technology [https://www.youtube.com/watch?v=hlrv7DIHllE]

RECOMMENDED YUA EPISODES
AGI Beyond the Buzz: What Is It, and Are We Ready? [https://www.humanetech.com/podcast/agi-beyond-the-buzz-what-is-it-and-are-we-ready]
The Man Who Predicted the Downfall of Thinking [https://www.humanetech.com/podcast/the-man-who-predicted-the-downfall-of-thinking]
The Tech-God Complex: Why We Need to be Skeptics [https://www.humanetech.com/podcast/the-tech-god-complex-why-we-need-to-be-skeptics]
The Three Rules of Humane Tech [https://www.humanetech.com/podcast/the-three-rules-of-humane-tech]
AI and Jobs: How to Make AI Work With Us, Not Against Us with Daron Acemoglu [https://www.humanetech.com/podcast/ai-and-jobs-how-to-make-ai-work-with-us-not-against-us-with-daron-acemoglu]
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It? [https://www.humanetech.com/podcast/mustafa-suleyman-says-we-need-to-contain-ai-how-do-we-do-it]

26 Jun 2025 - 46 min
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future

The race to develop ever-more-powerful AI is creating an unstable dynamic. It could lead us toward either dystopian centralized control or uncontrollable chaos. But there's a third option: a narrow path where technological power is matched with responsibility at every step. Sam Hammond is the chief economist at the Foundation for American Innovation. He brings a different perspective to this challenge than we do at CHT. Though he approaches AI from an innovation-first standpoint, we share a common mission on the biggest challenge facing humanity: finding and navigating this narrow path. This episode dives deep into the challenges ahead: How will AI reshape our institutions? Is complete surveillance inevitable, or can we build guardrails around it? Can our 19th-century government structures adapt fast enough, or will they be replaced by a faster-moving private sector? And perhaps most importantly: how do we solve the coordination problems that could determine whether we build AI as a tool to empower humanity or as a superintelligence that we can't control? We're in the final window of choice before AI becomes fully entangled with our economy and society. This conversation explores how we might still get this right.

Your Undivided Attention is produced by the Center for Humane Technology [https://www.humanetech.com/]. Follow us on X: @HumaneTech_ [https://twitter.com/humanetech_]. You can find a full transcript, key takeaways, and much more on our Substack [https://centerforhumanetechnology.substack.com/].

RECOMMENDED MEDIA
Tristan's TED talk on the Narrow Path [https://www.youtube.com/watch?v=6kPHnl-RsVI]
Sam's 95 Theses on AI [https://www.thefai.org/posts/ninety-five-theses-on-ai]
Sam's proposal for a Manhattan Project for AI Safety [https://www.thefai.org/posts/a-manhattan-project-for-ai-safety]
Sam's series on AI and Leviathan [https://www.secondbest.ca/p/ai-and-leviathan-part-i]
The Narrow Corridor: States, Societies, and the Fate of Liberty by Daron Acemoglu and James Robinson [https://www.penguinrandomhouse.com/books/555400/the-narrow-corridor-by-daron-acemoglu-and-james-a-robinson/]
Dario Amodei's essay "Machines of Loving Grace" [https://www.darioamodei.com/essay/machines-of-loving-grace]
Bourgeois Dignity: Why Economics Can't Explain the Modern World by Deirdre McCloskey [https://press.uchicago.edu/Misc/Chicago/556659.html]
The Paradox of Libertarianism by Tyler Cowen [https://www.cato-unbound.org/2007/03/11/tyler-cowen/paradox-libertarianism/]
Dwarkesh Patel's interview with Kevin Roberts at the FAI's annual conference [https://www.youtube.com/watch?v=5NZ8LcZdkAw]
Further reading on surveillance with 6G [https://www.nokia.com/bell-labs/research/6g-networks/6g-technologies/network-as-a-sensor/]

RECOMMENDED YUA EPISODES
AGI Beyond the Buzz: What Is It, and Are We Ready? [https://www.humanetech.com/podcast/agi-beyond-the-buzz-what-is-it-and-are-we-ready]
The Self-Preserving Machine: Why AI Learns to Deceive [https://www.humanetech.com/podcast/the-self-preserving-machine-why-ai-learns-to-deceive]
The Tech-God Complex: Why We Need to be Skeptics [https://www.humanetech.com/podcast/the-tech-god-complex-why-we-need-to-be-skeptics]
Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt [https://www.humanetech.com/podcast/decoding-our-dna-how-ai-supercharges-medical-breakthroughs-and-bioweapons-with-kevin-esvelt]

CORRECTIONS
Sam referenced a blog post titled "The Libertarian Paradox" by Tyler Cowen. The actual title is "The Paradox of Libertarianism."
Sam also referenced a blog post titled "The Collapse of Complex Societies" by Eli Dourado. The actual title is "A beginner's guide to sociopolitical collapse."

12 Jun 2025 - 47 min
User reviews

Very good podcasts: entertaining, with educational and fun stories, depending on what you're looking for. I usually listen at work, since I'm there for many hours and need to block out the surrounding noise. Headphones on and enjoy!
Fantastic app. I only use the podcasts. For a modest price you get a wide variety, with more added all the time.
I love the app. It brings together the best podcasts, and it was about time all these content creators got paid.

Exclusive podcasts

Ad-free

Free podcasts

Audiobooks

20 hours / month

Free trial

Only on Podimo

Popular audiobooks