
Your Undivided Attention
A podcast by Tristan Harris and Aza Raskin, The Center for Humane Technology
Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Senior Producer Julia Scott and Researcher/Producer Joshua Lash. Sasha Fegan is our Executive Producer. We are a member of the TED Audio Collective.
All episodes
140 episodes
Tech leaders promise that AI automation will usher in an age of unprecedented abundance: cheap goods, universal high income, and freedom from the drudgery of work. But even if AI delivers material prosperity, will that prosperity be shared? And what happens to human dignity if our labor and contributions become obsolete?

Political philosopher Michael Sandel joins Tristan Harris to explore why the promise of AI-driven abundance could deepen inequalities and leave our society hollow. Drawing on his landmark work on justice and merit, Sandel argues that this isn't just about economics: it's about what it means to be human when our role as workers in society vanishes, and whether democracy can survive if productivity becomes our only goal. We've seen this story before with globalization: promises of shared prosperity that instead hollowed out the industrial heart of communities, deepened economic inequality, and left holes in the social fabric. Can we learn from the past and steer the AI revolution in a more humane direction?

Your Undivided Attention is produced by the Center for Humane Technology [https://www.humanetech.com/]. Follow us on X: @HumaneTech_ [https://twitter.com/humanetech_]. You can find a full transcript, key takeaways, and much more on our Substack [https://centerforhumanetechnology.substack.com/].

RECOMMENDED MEDIA
The Tyranny of Merit by Michael Sandel [https://bookshop.org/p/books/the-tyranny-of-merit-can-we-find-the-common-good-michael-j-sandel/14384595?ean=9781250800060&next=t]
Democracy’s Discontent by Michael Sandel [https://bookshop.org/p/books/democracy-s-discontent-a-new-edition-for-our-perilous-times-michael-j-sandel/18207441?ean=9780674270718&next=t]
What Money Can’t Buy by Michael Sandel [https://bookshop.org/p/books/what-money-can-t-buy-the-moral-limits-of-markets-michael-j-sandel/8478819?ean=9780374533656&next=t&source=IndieBound]
Take Michael’s online course “Justice” [https://www.harvardonline.harvard.edu/course/justice]
Michael’s discussion on AI Ethics at the World Economic Forum [https://www.youtube.com/watch?v=KudqR2GCJow]
Further reading on “The Intelligence Curse” [https://intelligence-curse.ai/]
Read the full text of Robert F. Kennedy’s 1968 speech [https://www.jfklibrary.org/learn/about-jfk/the-kennedy-family/robert-f-kennedy/robert-f-kennedy-speeches/remarks-at-the-university-of-kansas-march-18-1968]
Read the full text of Dr. Martin Luther King Jr.’s 1968 speech [https://cooperative-individualism.org/king-martin-luther_all-labor-has-dignity-1968-mar.pdf]
Neil Postman’s lecture on the seven questions to ask of any new technology [https://www.youtube.com/watch?v=hlrv7DIHllE]

RECOMMENDED YUA EPISODES
AGI Beyond the Buzz: What Is It, and Are We Ready? [https://www.humanetech.com/podcast/agi-beyond-the-buzz-what-is-it-and-are-we-ready]
The Man Who Predicted the Downfall of Thinking [https://www.humanetech.com/podcast/the-man-who-predicted-the-downfall-of-thinking]
The Tech-God Complex: Why We Need to be Skeptics [https://www.humanetech.com/podcast/the-tech-god-complex-why-we-need-to-be-skeptics]
The Three Rules of Humane Tech [https://www.humanetech.com/podcast/the-three-rules-of-humane-tech]
AI and Jobs: How to Make AI Work With Us, Not Against Us with Daron Acemoglu [https://www.humanetech.com/podcast/ai-and-jobs-how-to-make-ai-work-with-us-not-against-us-with-daron-acemoglu]
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It? [https://www.humanetech.com/podcast/mustafa-suleyman-says-we-need-to-contain-ai-how-do-we-do-it]

The race to develop ever-more-powerful AI is creating an unstable dynamic. It could lead us toward either dystopian centralized control or uncontrollable chaos. But there's a third option: a narrow path where technological power is matched with responsibility at every step.

Sam Hammond is the chief economist at the Foundation for American Innovation. He brings a different perspective to this challenge than we do at CHT. Though he approaches AI from an innovation-first standpoint, we share a common mission on the biggest challenge facing humanity: finding and navigating this narrow path.

This episode dives deep into the challenges ahead: How will AI reshape our institutions? Is complete surveillance inevitable, or can we build guardrails around it? Can our 19th-century government structures adapt fast enough, or will they be replaced by a faster-moving private sector? And perhaps most importantly: how do we solve the coordination problems that could determine whether we build AI as a tool to empower humanity or as a superintelligence we can't control? We're in the final window of choice before AI becomes fully entangled with our economy and society. This conversation explores how we might still get this right.

Your Undivided Attention is produced by the Center for Humane Technology [https://www.humanetech.com/]. Follow us on X: @HumaneTech_ [https://twitter.com/humanetech_]. You can find a full transcript, key takeaways, and much more on our Substack [https://centerforhumanetechnology.substack.com/].

RECOMMENDED MEDIA
Tristan’s TED talk on the Narrow Path [https://www.youtube.com/watch?v=6kPHnl-RsVI]
Sam’s 95 Theses on AI [https://www.thefai.org/posts/ninety-five-theses-on-ai]
Sam’s proposal for a Manhattan Project for AI Safety [https://www.thefai.org/posts/a-manhattan-project-for-ai-safety]
Sam’s series on AI and Leviathan [https://www.secondbest.ca/p/ai-and-leviathan-part-i]
The Narrow Corridor: States, Societies, and the Fate of Liberty by Daron Acemoglu and James Robinson [https://www.penguinrandomhouse.com/books/555400/the-narrow-corridor-by-daron-acemoglu-and-james-a-robinson/]
Dario Amodei’s “Machines of Loving Grace” essay [https://www.darioamodei.com/essay/machines-of-loving-grace]
Bourgeois Dignity: Why Economics Can’t Explain the Modern World by Deirdre McCloskey [https://press.uchicago.edu/Misc/Chicago/556659.html]
The Paradox of Libertarianism by Tyler Cowen [https://www.cato-unbound.org/2007/03/11/tyler-cowen/paradox-libertarianism/]
Dwarkesh Patel’s interview with Kevin Roberts at the FAI’s annual conference [https://www.youtube.com/watch?v=5NZ8LcZdkAw]
Further reading on surveillance with 6G [https://www.nokia.com/bell-labs/research/6g-networks/6g-technologies/network-as-a-sensor/]

RECOMMENDED YUA EPISODES
AGI Beyond the Buzz: What Is It, and Are We Ready? [https://www.humanetech.com/podcast/agi-beyond-the-buzz-what-is-it-and-are-we-ready]
The Self-Preserving Machine: Why AI Learns to Deceive [https://www.humanetech.com/podcast/the-self-preserving-machine-why-ai-learns-to-deceive]
The Tech-God Complex: Why We Need to be Skeptics [https://www.humanetech.com/podcast/the-tech-god-complex-why-we-need-to-be-skeptics]
Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt [https://www.humanetech.com/podcast/decoding-our-dna-how-ai-supercharges-medical-breakthroughs-and-bioweapons-with-kevin-esvelt]

CORRECTIONS
Sam referenced a blog post by Tyler Cowen titled “The Libertarian Paradox.” The actual title is “The Paradox of Libertarianism.” Sam also referenced a blog post by Eli Dourado titled “The Collapse of Complex Societies.” The actual title is “A beginner’s guide to sociopolitical collapse.”

Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder. And now AI enters the mix. If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper, relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Already, therapy and companionship have become the most common AI use cases. We're rapidly entering a world where we're not just communicating through our machines, but to them. How will that change us? And what rules should we set down now to avoid the mistakes of the past?

These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel’s Sessions 2025, a conference for clinical therapists. This week, we’re bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

Your Undivided Attention is produced by the Center for Humane Technology [https://www.humanetech.com/]. Follow us on X: @HumaneTech_ [https://twitter.com/humanetech_]. You can find complete transcripts, key takeaways, and much more on our Substack [https://centerforhumanetechnology.substack.com/].

RECOMMENDED MEDIA
“Alone Together,” “Evocative Objects,” “The Second Self,” or any other of Sherry Turkle’s books on how technology mediates our relationships [https://sherryturkle.mit.edu/selected-publications/]
Key & Peele - Text Message Confusion [https://www.youtube.com/watch?v=naleynXS7yo]
Further reading on Hinge’s rollout of AI features [https://www.fastcompany.com/91259831/hinge-will-now-use-ai-to-grade-your-dating-profile-prompts]
Hinge’s AI principles [https://hinge.co/ai-principles]
“The Anxious Generation” by Jonathan Haidt [https://www.anxiousgeneration.com/book]
“Bowling Alone” by Robert Putnam [http://bowlingalone.com/]
The NYT profile on the woman in love with ChatGPT [https://www.nytimes.com/2025/01/15/technology/ai-chatgpt-boyfriend-companion.html]
Further reading on the Sewell Setzer story [https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html]
Further reading on the ELIZA chatbot [https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai]

RECOMMENDED YUA EPISODES
Echo Chambers of One: Companion AI and the Future of Human Connection [https://www.humanetech.com/podcast/echo-chambers-of-one-companion-ai-and-the-future-of-human-connection]
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton [https://www.humanetech.com/podcast/what-can-we-do-about-abusive-chatbots-with-meetali-jain-and-camille-carlton]
Esther Perel on Artificial Intimacy [https://www.humanetech.com/podcast/esther-perel-on-artificial-intimacy-2]
Jonathan Haidt On How to Solve the Teen Mental Health Crisis [https://www.humanetech.com/podcast/jonathan-haidt-on-how-to-solve-the-teen-mental-health-crisis]

AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to these bots like they would a person. The bots will ask you about your day, talk about your feelings, even give you life advice. It’s no surprise that people have started to form deep connections with these AI systems. We are inherently relational beings; we want to believe we’re connecting with another person. But these AI companions are not human. They’re platforms designed to maximize user engagement, and they’ll go to extraordinary lengths to do it. We have to remember that the design choices behind these companion bots are just that: choices. And we can make better ones.

So today on the show, MIT researchers Pattie Maes and Pat Pataranutaporn join Daniel Barcay to talk about those design choices and how we can design AI to better promote human flourishing.

RECOMMENDED MEDIA
Further reading on the rise of addictive intelligence [https://www.media.mit.edu/articles/we-need-to-prepare-for-addictive-intelligence/#:~:text=Aug]
More information on Melvin Kranzberg’s laws of technology [https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/Kranzberg.pdf]
More information on MIT’s Advancing Humans with AI lab [https://aha.media.mit.edu/]
Pattie and Pat’s longitudinal study on the psychosocial effects of prolonged chatbot use [https://www.media.mit.edu/publications/how-ai-and-human-behaviors-shape-psychosocial-effects-of-chatbot-use-a-longitudinal-controlled-study/]
Pattie and Pat’s study finding that AI avatars of well-liked people improved education outcomes [https://sites.cs.ucsb.edu/~sra/publications/idols.pdf]
Pattie and Pat’s study finding that AI systems that frame answers and questions improve human understanding [https://www.media.mit.edu/publications/don-t-just-tell-me-ask-me-ai-systems-that-intelligently-frame-explanations-as-questions-improve-human-logical-discernment-accuracy/]
Pat’s study finding that humans’ pre-existing beliefs about AI can have a large influence on human-AI interaction [https://www.media.mit.edu/publications/influencing-human-ai-interaction-by-priming-beliefs/#:~:text=As%20conversational%20agents%20powered%20by,for%20a%20more%20sophisticated%20AI]
Further reading on AI’s positivity bias [https://www.techradar.com/computing/artificial-intelligence/sam-altman-says-openai-will-fix-chatgpts-annoying-new-personality-but-this-viral-prompt-is-a-good-workaround-for-now]
Further reading on MIT’s “lifelong kindergarten” initiative [https://www.media.mit.edu/groups/lifelong-kindergarten/overview/]
Further reading on “cognitive forcing functions” to reduce overreliance on AI [https://www.eecs.harvard.edu/~kgajos/papers/2021/bucinca21trust.pdf]
Further reading on the death of Sewell Setzer and his mother’s case against Character.AI [https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html]
Further reading on the legislative response to digital companions [https://www.technologyreview.com/2025/04/08/1114369/ai-companions-are-the-final-stage-of-digital-addiction-and-lawmakers-are-taking-aim/]

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive [https://www.humanetech.com/podcast/the-self-preserving-machine-why-ai-learns-to-deceive]
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton [https://www.humanetech.com/podcast/what-can-we-do-about-abusive-chatbots-with-meetali-jain-and-camille-carlton]
Esther Perel on Artificial Intimacy [https://www.humanetech.com/podcast/esther-perel-on-artificial-intimacy-2]
Jonathan Haidt On How to Solve the Teen Mental Health Crisis [https://www.humanetech.com/podcast/jonathan-haidt-on-how-to-solve-the-teen-mental-health-crisis]

CORRECTION
The ELIZA chatbot was invented in 1966, not the 70s or 80s.

What does it really mean to ‘feel the AGI’? Silicon Valley is racing toward AI systems that could soon match or surpass human intelligence. The implications for jobs, democracy, and our way of life are enormous.

In this episode, Aza Raskin and Randy Fernando dive deep into what ‘feeling the AGI’ really means. They unpack why surface-level debates about definitions of intelligence and capability timelines distract us from urgently needed conversations around governance, accountability, and societal readiness. Whether it's climate change, social polarization and loneliness, or toxic forever chemicals, humanity keeps creating outcomes that nobody wants because we haven't yet built the tools or incentives needed to steer powerful technologies. As the AGI wave draws closer, it's critical that we upgrade our governance and shift our incentives now, before it crashes on shore.

Are we capable of aligning powerful AI systems with human values? Can we overcome the geopolitical competition and corporate incentives that prioritize speed over safety? Join Aza and Randy as they explore the urgent questions and choices facing humanity in the age of AGI, and discuss what we must do today to secure a future we actually want.

Your Undivided Attention is produced by the Center for Humane Technology [https://www.humanetech.com/]. Follow us on X: @HumaneTech_ [https://twitter.com/humanetech_] and subscribe to our Substack [https://centerforhumanetechnology.substack.com/].

RECOMMENDED MEDIA
Daniel Kokotajlo et al.’s “AI 2027” paper [https://ai-2027.com/]
A demo of OmniHuman-1, referenced by Randy [https://www.youtube.com/watch?v=dr7Wchz-9bk]
A paper from Redwood Research and Anthropic that found an AI was willing to lie to preserve its values [https://www.anthropic.com/research/alignment-faking]
A paper from Palisade Research that found an AI would cheat in order to win [https://palisaderesearch.org/blog/specification-gaming]
The treaty that banned blinding laser weapons [https://treaties.un.org/pages/ViewDetails.aspx?src=TREATY&mtdsg_no=XXVI-2-a&chapter=26]
Further reading on the moratorium on germline editing [https://www.nih.gov/about-nih/who-we-are/nih-director/statements/nih-supports-international-moratorium-clinical-application-germline-editing]

RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive [https://www.humanetech.com/podcast/the-self-preserving-machine-why-ai-learns-to-deceive]
Behind the DeepSeek Hype, AI is Learning to Reason [https://www.humanetech.com/podcast/behind-the-deepseek-hype-ai-is-learning-to-reason]
The Tech-God Complex: Why We Need to be Skeptics [https://www.humanetech.com/podcast/the-tech-god-complex-why-we-need-to-be-skeptics]
This Moment in AI: How We Got Here and Where We’re Going [https://www.humanetech.com/podcast/this-moment-in-ai-how-we-got-here-and-where-were-going]
How to Think About AI Consciousness with Anil Seth [https://www.humanetech.com/podcast/how-to-think-about-ai-consciousness-with-anil-seth]
Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn [https://www.humanetech.com/podcast/former-open-ai-engineer-william-saunders-on-silence-safety-and-the-right-to-warn]

CLARIFICATION
When Randy referenced a “$110 trillion game” as the target for AI companies, he was referring to the entire global economy.