
AITEC Podcast
A podcast by AITEC
About AITEC Podcast
Welcome to AITEC Podcast, where we explore the ethical side of AI and emerging tech. We call our little group the AI and Technology Ethics Circle (AITEC). Visit ethicscircle.org for more info.
All episodes
24 episodes
#23 Sebastian Purcell: Rootedness, Not Happiness — Aztec Wisdom for a Slippery World
In this episode, we speak with philosopher Sebastian Purcell about his new book The Outward Path: Lessons on Living from the Aztecs. Purcell shows that Aztec philosophy offers a strikingly different vision of the good life — one that rejects the modern obsession with happiness and invulnerability in favor of something deeper: rootedness. We discuss what it means to live a rooted life in a world that feels increasingly unstable — from collective agency and humility to willpower, ritual, and the art of balance. Along the way, Purcell explains how Aztec ethics can help us rethink everything from self-discipline and courage to how we live with technology, social media, and each other.
Links:
– Sebastian’s website [https://sebastianpurcell.com/]
– Sebastian’s articles on Medium [https://medium.com/@sebastian.purcell]
– Sebastian’s book [https://wwnorton.com/books/the-outward-path/overview]
For more info, visit ethicscircle.org [https://www.ethicscircle.org/].
#22 Iain Thomson: Why Heidegger Thought Technology Was More Dangerous Than We Realize
What if our deepest fears about AI aren't really about the machines at all—but about something we've forgotten about ourselves? In this episode, we speak with philosopher Iain D. Thomson (University of New Mexico), a leading scholar of Martin Heidegger, about his new book Heidegger on Technology’s Danger and Promise in the Age of AI. Together we explore Heidegger’s famous claim that “the essence of technology is nothing technological,” and why today’s crises—from environmental collapse to algorithmic control—are really symptoms of a deeper existential and ontological predicament.
Also discussed:
– Why AI may not be dangerous because it’s too smart, but because we stop thinking
– Heidegger’s concept of “world-disclosive beings” and why ChatGPT doesn’t qualify
– How the technological mindset reshapes not just our tools but our selves
– What a “free relation” to technology might look like
– The creeping danger of lowering our standards and mistaking supplements for the real thing
For more info, visit ethicscircle.org [https://www.ethicscircle.org/].
#21 Jayashri Bangali: AI in Education
In this episode, we sit down with Jayashri A. Bangali, a researcher and educator whose work explores the evolving role of artificial intelligence in education—both in India and around the world. We discuss how AI is transforming learning through personalization, interactivity, and accessibility—but also raise hard questions about bias, surveillance, dependence, and deskilling. We dig into Jayashri’s recent research on AI integration in Indian schools and universities, including key findings from surveys of students and teachers across academic levels. We also explore global trends in AI adoption, potential regulatory safeguards, and how policymakers can ensure that AI enhances—not erodes—critical thinking and creativity. This is a wide-ranging conversation on the future of learning, the risks of offloading too much to machines, and the kind of education worth fighting for in an AI-driven world. For more info, visit ethicscircle.org [https://www.ethicscircle.org/].
#20 Bernardo Bolaños and Jorge Luis Morton: On Stoicism and Technology
In this episode, we speak with Bernardo Bolaños and Jorge Luis Morton, authors of On Singularity and the Stoics [https://link.springer.com/article/10.1007/s43681-024-00548-w], about the rise of generative AI, the looming prospect of superintelligence, and how Stoic philosophy offers a framework for navigating it all. We explore Stoic principles like the dichotomy of control, cosmopolitanism, and living with wisdom in the face of deepfakes, algorithmic manipulation, and the risk of superintelligent AI. For more info, visit ethicscircle.org [https://www.ethicscircle.org/].
#19 Joshua Hatherley: When Your Doctor Uses AI—Should They Tell You?
In this episode, we speak with Dr. Joshua Hatherley, a bioethicist at the University of Copenhagen, about his recent article, “Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients?” Dr. Hatherley challenges what has become a widely accepted view in bioethics: that patients must always be informed when clinicians use medical AI systems in diagnosis or treatment planning. We explore his critiques of four central arguments for the “disclosure thesis”—including risk, rights, materiality, and autonomy—and discuss why, in some cases, mandatory disclosure might do more harm than good. For more info, visit ethicscircle.org [https://www.ethicscircle.org/].
