Your Undivided Attention
A podcast by Tristan Harris and Aza Raskin, The Center for Humane Technology
All Episodes
127 Episodes

Silicon Valley's interest in AI is driven by more than just profit and innovation. There’s an unmistakable mystical quality to it as well. In this episode, Daniel and Aza sit down with humanist chaplain Greg Epstein to explore the fascinating parallels between technology and religion. From AI being treated as a godlike force to tech leaders' promises of digital salvation, religious thinking is shaping the future of technology and humanity. Epstein breaks down why he believes technology has become our era's most influential religion and what we can learn from these parallels to better understand where we're heading.

Your Undivided Attention is produced by the Center for Humane Technology [https://www.humanetech.com/]. Follow us on X [https://twitter.com/HumaneTech_]. If you like the show and want to support CHT's mission, please consider donating to the organization this giving season: https://www.humanetech.com/donate. Any amount helps support our goal to bring about a more humane future.

RECOMMENDED MEDIA
“Tech Agnostic” by Greg Epstein [https://mitpress.mit.edu/9780262049207/tech-agnostic/]
Further reading on Avi Schiffmann’s “Friend” AI necklace [https://fortune.com/2024/08/12/avi-schiffmann-ai-necklace-friend-interview/]
Further reading on Blake Lemoine and LaMDA [https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/]
Blake Lemoine’s conversation with Greg at MIT [https://www.youtube.com/watch?v=d9ipv6HhuWM]
Further reading on the Sewell Setzer case [https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html]
Further reading on Terminal of Truths [https://www.ccn.com/education/crypto/what-is-truth-terminal/]
Further reading on Ray Kurzweil’s attempt to create a digital recreation of his dad with AI [https://abcnews.go.com/Technology/futurist-ray-kurzweil-bring-dead-father-back-life/story?id=14267712]
The Drama of the Gifted Child by Alice Miller [https://www.goodreads.com/book/show/4887.The_Drama_of_the_Gifted_Child]

RECOMMENDED YUA EPISODES
‘A Turning Point in History’: Yuval Noah Harari on AI’s Cultural Takeover [https://www.humanetech.com/podcast/a-turning-point-in-history-yuval-noah-harari-on-ais-cultural-takeover]
How to Think About AI Consciousness with Anil Seth [https://www.humanetech.com/podcast/how-to-think-about-ai-consciousness-with-anil-seth]
Can Myth Teach Us Anything About the Race to Build Artificial General Intelligence? With Josh Schrei [https://www.humanetech.com/podcast/can-myth-teach-us-anything-about-the-race-to-build-artificial-general-intelligence-with-josh-schrei]
How To Free Our Minds with Cult Deprogramming Expert Dr. Steven Hassan [https://www.humanetech.com/podcast/51-how-to-free-our-minds]
CW: This episode features discussion of suicide and sexual abuse.

In the last episode, we had the journalist Laurie Segall on to talk about the tragic story of Sewell Setzer, a 14-year-old boy who took his own life after months of abuse and manipulation by an AI companion from the company Character.ai. The question now is: what's next? Sewell's mother, Megan Garcia, has filed a major new lawsuit against Character.ai in Florida, which could force the company, and potentially the entire AI industry, to change its harmful business practices. So today on the show, we have Meetali Jain, director of the Tech Justice Law Project and one of the lead lawyers in Megan's case against Character.ai. Meetali breaks down the details of the case, the complex legal questions under consideration, and how this could be the first step toward systemic change. Also joining is Camille Carlton, CHT’s Policy Director.

RECOMMENDED MEDIA
Further reading on Sewell’s story [https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html]
Laurie Segall’s interview with Megan Garcia [https://www.youtube.com/watch?v=YbuBfizSnPk]
The full complaint filed by Megan against Character.AI [https://cdn.arstechnica.net/wp-content/uploads/2024/10/Garcia-v-Character-Technologies-Complaint-10-23-24.pdf]
Further reading on suicide bots [https://futurism.com/suicide-chatbots-character-ai]
Further reading on Noam Shazeer and Daniel De Freitas’ relationship with Google [https://www.wsj.com/tech/ai/noam-shazeer-google-ai-deal-d3605697]
The CHT Framework for Incentivizing Responsible Artificial Intelligence Development and Use [https://www.humanetech.com/insights/framework-for-incentivizing-responsible-artificial-intelligence]

Organizations mentioned:
The Tech Justice Law Project [https://techjusticelaw.org/]
The Social Media Victims Law Center [https://socialmediavictims.org/]
Mothers Against Media Addiction [https://www.joinmama.org/]
Parents SOS [https://www.parentssos.org/]
Parents Together [https://parents-together.org/]
Common Sense Media [https://www.commonsensemedia.org/]

RECOMMENDED YUA EPISODES
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer [https://www.humanetech.com/podcast/when-the-person-abusing-your-child-is-a-chatbot-the-tragic-story-of-sewell-setzer]
Jonathan Haidt On How to Solve the Teen Mental Health Crisis [https://www.humanetech.com/podcast/jonathan-haidt-on-how-to-solve-the-teen-mental-health-crisis]
AI Is Moving Fast. We Need Laws that Will Too. [https://www.humanetech.com/podcast/ai-is-moving-fast-we-need-laws-that-will-too]

Corrections:
Meetali referred to certain chatbot apps as banning users under 18; however, the settings for the major app stores ban users under 17, not under 18.
Meetali referred to Section 230 as providing “full scope immunity” to internet companies; however, Congress has passed subsequent laws that create carve-outs from that immunity for criminal acts such as sex trafficking and for intellectual property theft.

Your Undivided Attention is produced by the Center for Humane Technology [https://www.humanetech.com/]. Follow us on X [https://twitter.com/HumaneTech_].
Content Warning: This episode contains references to suicide, self-harm, and sexual abuse.

Megan Garcia lost her son Sewell to suicide after he was abused and manipulated by AI chatbots for months. Now, she’s suing the company that made those chatbots. On today’s episode of Your Undivided Attention, Aza sits down with journalist Laurie Segall, who's been following this case for months. Plus, Laurie’s full interview with Megan on her new show, Dear Tomorrow.

Aza and Laurie discuss the profound implications of Sewell’s story on the rollout of AI. Social media began the race to the bottom of the brain stem and left our society addicted, distracted, and polarized. Generative AI is set to supercharge that race, taking advantage of the human need for intimacy and connection amidst a widespread loneliness epidemic. Unless we set down guardrails on this technology now, Sewell’s story may be a tragic sign of things to come, but it also presents an opportunity to prevent further harms moving forward.

If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.

Your Undivided Attention is produced by the Center for Humane Technology [https://www.humanetech.com/]. Follow us on Twitter: @HumaneTech_ [https://twitter.com/humanetech_]

RECOMMENDED MEDIA
The first episode of Dear Tomorrow, from Mostly Human Media [https://www.youtube.com/watch?v=YbuBfizSnPk]
The CHT Framework for Incentivizing Responsible AI Development [https://www.humanetech.com/insights/framework-for-incentivizing-responsible-artificial-intelligence]
Further reading on Sewell’s case [https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html]
Character.ai’s “About Us” page [https://character.ai/about]
Further reading on the addictive properties of AI [https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/]

RECOMMENDED YUA EPISODES
AI Is Moving Fast. We Need Laws that Will Too. [https://www.humanetech.com/podcast/ai-is-moving-fast-we-need-laws-that-will-too]
This Moment in AI: How We Got Here and Where We’re Going [https://www.humanetech.com/podcast/this-moment-in-ai-how-we-got-here-and-where-were-going]
Jonathan Haidt On How to Solve the Teen Mental Health Crisis [https://www.humanetech.com/podcast/jonathan-haidt-on-how-to-solve-the-teen-mental-health-crisis]
The AI Dilemma [https://www.humanetech.com/podcast/the-ai-dilemma]
Social media disinformation did enormous damage to our shared idea of reality. Now, the rise of generative AI has unleashed a flood of high-quality synthetic media into the digital ecosystem. As a result, it's more difficult than ever to tell what’s real and what’s not, a problem with profound implications for the health of our society and democracy. So how do we fix this critical issue?

As it turns out, there’s a whole ecosystem of folks working to answer that question. One is computer scientist Oren Etzioni, the CEO of TrueMedia.org [http://TrueMedia.org], a free, non-partisan, non-profit tool that can detect AI-generated content with a high degree of accuracy. Oren joins the show this week to talk about the problem of deepfakes and disinformation and what he sees as the best solutions.

Your Undivided Attention is produced by the Center for Humane Technology [https://www.humanetech.com/]. Follow us on Twitter: @HumaneTech_ [https://twitter.com/humanetech_]

RECOMMENDED MEDIA
TrueMedia.org [http://truemedia.org/]
Further reading on the deepfaked image of an explosion near the Pentagon [https://www.npr.org/2023/05/22/1177590231/fake-viral-images-of-an-explosion-at-the-pentagon-were-probably-created-by-ai]
Further reading on the deepfaked robocall pretending to be President Biden [https://www.nytimes.com/2024/02/27/us/politics/ai-robocall-biden-new-hampshire.html]
Further reading on the election deepfake in Slovakia [https://www.cnn.com/2024/02/01/politics/election-deepfake-threats-invs/index.html]
Further reading on the President Obama lip-syncing deepfake from 2017 [https://www.washington.edu/news/2017/07/11/lip-syncing-obama-new-tools-turn-audio-clips-into-realistic-video/]
One of several deepfake quizzes from the New York Times, test yourself! [https://www.nytimes.com/interactive/2024/09/09/technology/ai-video-deepfake-runway-kling-quiz.html]
The Partnership on AI [https://partnershiponai.org/]
C2PA [https://c2pa.org/]
Witness.org [https://www.witness.org/]
Truepic [https://www.truepic.com/]

RECOMMENDED YUA EPISODES
‘We Have to Get It Right’: Gary Marcus On Untamed AI [https://www.humanetech.com/podcast/we-have-to-get-it-right-gary-marcus-on-untamed-ai]
Taylor Swift is Not Alone: The Deepfake Nightmare Sweeping the Internet [https://www.humanetech.com/podcast/taylor-swift-is-not-alone-the-deepfake-nightmare-sweeping-the-internet]
Synthetic Humanity: AI & What’s At Stake [https://www.humanetech.com/podcast/synthetic-humanity-ai-whats-at-stake]

CLARIFICATION: Oren said that the largest social media platforms “don’t see a responsibility to let the public know this was manipulated by AI.” Meta has made a public commitment to flagging AI-generated or -manipulated content, whereas other platforms, like TikTok and Snapchat, rely on users to flag it.
Historian Yuval Noah Harari says that we are at a critical turning point, one in which AI’s ability to generate cultural artifacts threatens humanity’s role as the shapers of history. History will still go on, but will it be the story of people or, as he calls them, ‘alien AI agents’?

In this conversation with Aza Raskin, Harari discusses the historical struggles that emerge from new technology, humanity’s AI mistakes so far, and the immediate steps lawmakers can take right now to steer us towards a non-dystopian future. This episode was recorded live at the Commonwealth Club World Affairs of California.

Your Undivided Attention is produced by the Center for Humane Technology [https://www.humanetech.com/]. Follow us on Twitter: @HumaneTech_ [https://twitter.com/humanetech_]

RECOMMENDED MEDIA
NEXUS: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari [https://www.ynharari.com/book/nexus/]
You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills: a New York Times op-ed from 2023, written by Yuval, Aza, and Tristan [https://www.nytimes.com/2023/03/24/opinion/yuval-harari-ai-chatgpt.html]
The 2023 open letter calling for a pause in AI development of at least 6 months, signed by Yuval and Aza [https://futureoflife.org/open-letter/pause-giant-ai-experiments/]
Further reading on the Stanford Marshmallow Experiment [https://www.simplypsychology.org/marshmallow-test.html]
Further reading on AlphaGo’s “move 37” [https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/]
Further reading on Social.AI [https://www.theverge.com/24255887/social-ai-bots-social-network-chatgpt-vergecast]

RECOMMENDED YUA EPISODES
This Moment in AI: How We Got Here and Where We’re Going [https://www.humanetech.com/podcast/this-moment-in-ai-how-we-got-here-and-where-were-going]
The Tech We Need for 21st Century Democracy with Divya Siddarth [https://www.humanetech.com/podcast/the-tech-we-need-for-21st-century-democracy-with-divya-siddarth]
Synthetic Humanity: AI & What’s At Stake [https://www.humanetech.com/podcast/synthetic-humanity-ai-whats-at-stake]
The AI Dilemma [https://www.humanetech.com/podcast/the-ai-dilemma]
Two Million Years in Two Hours: A Conversation with Yuval Noah Harari [https://www.humanetech.com/podcast/28-two-million-years-in-two-hours-a-conversation-with-yuval-noah-harari]