AITEC Philosophy Podcast

Podcast by AITEC

English

Technology & science

About AITEC Philosophy Podcast

Welcome to AITEC Podcast, where we explore the ethical side of AI and emerging tech. We call our little group the AI and Technology Ethics Circle (AITEC). Visit ethicscircle.org for more info.

All episodes

30 episodes
#29 Justin Tiehen: Why AI Can't Make a Promise—The Hidden Limits of Large Language Models

Have you ever felt like ChatGPT genuinely understands you? What if the reality is that it doesn't even have the foundational capacity to "speak" to you at all? On this episode of The AITEC Podcast, Roberto Carlos García and Sam Bennett sit down with philosopher Justin Tiehen (University of Puget Sound) to unpack his fascinating new paper, "LLMs Lack a Theory of Mind and So Can't Perform Speech Acts—A Causal Argument." Justin takes us on a deep dive into the philosophy of mind to explain why current large language models, despite their impressive output, are essentially just faking it. We explore why next-token predictors lack the causal architecture required for a "theory of mind," and why, without it, they are fundamentally incapable of making assertions, giving orders, or performing true speech acts.

Key takeaways from this episode:
* The Ladder of Causation: why AI is stuck observing statistical correlations and cannot grasp true causal interventions or counterfactuals (drawing on Judea Pearl's work).
* The speech act problem: why performing a true speech act requires the deliberate intention to influence another person's mind.
* Cheating the benchmarks: how LLMs "cheat" on psychological exams like the Sally-Anne false-belief test simply by memorizing statistical patterns in text.
* The threat of AI blackmail: what it would actually look like if an AI possessed a theory of mind and strategically tried to manipulate human behavior to achieve its goals.

Whether you are deeply invested in the philosophy of language or just trying to figure out how much you should trust your favorite AI assistant, this conversation will completely reframe how you view generative AI. Learn more about our work and join the conversation at ethicscircle.org [https://www.ethicscircle.org/].

24 Feb 2026 - 1 h 7 min
#28 Mathilda Marie Mulert: Sex Robots, Simulation, and the Question of Moral Harm

In this episode of the AITEC Podcast, we’re joined by philosopher Mathilda Marie Mulert, a doctoral researcher at the Oxford Internet Institute, to explore one of the most difficult questions in contemporary tech ethics: when, if ever, is it morally permissible to simulate sexual violence? Drawing on her recent work on simulation ethics, Mulert examines video games, virtual environments, sex robots, and consensual role-play to challenge the assumption that “it’s just pretend.” We discuss the Gamers’ Dilemma, the limits of consent, and why moral context—not just content—matters when evaluating simulated wrongdoing. This conversation is philosophical, careful, and candid. Listener discretion is advised.

Links:
* Mathilda’s Oxford Internet Institute webpage [https://www.oii.ox.ac.uk/people/profiles/mathilda-mulert/]
* Mathilda’s recent article [https://link.springer.com/article/10.1007/s10676-025-09878-7]

For more, visit ethicscircle.org [https://www.ethicscircle.org/].

27 Jan 2026 - 1 h 8 min
#27 Matheus Ferreira de Barros: Technology, Spheres, and the Human Being

In this episode of the AITEC podcast, Sam Bennett and Roberto Carlos speak with Matheus Ferreira de Barros, a philosopher of technology at PUC-Rio and the Federal University of Rio de Janeiro, about the work of Peter Sloterdijk. Ferreira de Barros introduces Sloterdijk’s philosophy of technology, focusing on the idea that human beings and technology co-evolve and that technology plays a constitutive role in human life rather than merely serving as an external tool. The conversation explores Sloterdijk’s Spheres project, including his account of insulation, distance from nature, and the creation of protective interiors that stabilize human existence at biological, psychological, and symbolic levels. The discussion also examines the loss of large-scale meaning structures in modernity, the role of religion and culture as technologies of existential security, and how contemporary technologies, including AI, may both disrupt and reshape the spheres through which human life becomes livable.

27 Jan 2026 - 1 h 7 min
#26 Iwan Williams: Do Language Models Have Intentions?

In this episode of the AITEC podcast, Sam Bennett speaks with philosopher of mind and AI researcher Iwan Williams about his paper “Intention-like representations in language models?” Williams is a postdoctoral researcher at the University of Copenhagen and received his PhD from Monash University. The conversation explores whether large language models exhibit internal representations that resemble intentions, as distinct from beliefs or credences. Focusing on features such as directive function, planning, and commitment, Williams evaluates several empirical case studies and explains why current models may appear intention-like in some respects while falling short in others. The discussion also considers why intentions matter for communication, safety, and our broader understanding of artificial intelligence. For more, visit ethicscircle.org.

16 Jan 2026 - 1 h 27 min
#25 Pilar López-Cantero: The Ethics of Breakup Chatbots

What if your ex never really left—because you trained a chatbot to be them? In this episode of the AITEC Podcast, we’re joined by philosopher Pilar López-Cantero to explore her provocative article, The Ethics of Breakup Chatbots. From the haunting potential of AI relationships to the dangers of narrative stagnation, we dive into what it means to love, let go, and maybe linger too long—with a machine. Are these bots helping us heal, or are they shaping a lonelier, more controllable kind of intimacy? For more info, visit ethicscircle.org [https://www.ethicscircle.org/].

11 Jan 2026 - 1 h 13 min