
AITEC Philosophy Podcast
Welcome to the AITEC Podcast, where we explore the ethical side of AI and emerging tech. We call our little group the AI and Technology Ethics Circle (AITEC). Visit ethicscircle.org for more info.
#29 Justin Tiehen: Why AI Can't Make a Promise—The Hidden Limits of Large Language Models
Have you ever felt like ChatGPT genuinely understands you? What if the reality is that it doesn't even have the foundational capacity to "speak" to you at all? On this episode of the AITEC Podcast, Roberto Carlos García and Sam Bennett sit down with philosopher Justin Tiehen (University of Puget Sound) to unpack his fascinating new paper, "LLMs Lack a Theory of Mind and So Can't Perform Speech Acts: A Causal Argument." Justin takes us on a deep dive into the philosophy of mind to explain why current Large Language Models, despite their impressive output, are essentially just faking it. We explore why next-token predictors lack the causal architecture required to have a "Theory of Mind," and why, without it, they are fundamentally incapable of making assertions, giving orders, or performing true speech acts. Key takeaways from this episode: * The Ladder of Causation: Why AI is stuck observing statistical correlations and cannot grasp true causal interventions or counterfactuals (drawing on Judea Pearl's work). * The Speech Act Problem: Why performing a true "speech act" requires the deliberate intention to influence another person's mind. * Cheating the Benchmarks: How LLMs "cheat" on psychological exams like the Sally-Anne false-belief test simply by memorizing statistical patterns in text. * The Threat of AI Blackmail: What it would actually look like if an AI possessed a Theory of Mind and strategically tried to manipulate human behavior to achieve its goals. Whether you are deeply invested in the philosophy of language or just trying to figure out how much you should trust your favorite AI assistant, this conversation will completely reframe how you view generative AI. Learn more about our work and join the conversation at ethicscircle.org [https://www.ethicscircle.org/].
#28 Mathilda Marie Mulert: Sex Robots, Simulation, and the Question of Moral Harm
In this episode of the AITEC Podcast, we’re joined by philosopher Mathilda Marie Mulert, a doctoral researcher at the Oxford Internet Institute, to explore one of the most difficult questions in contemporary tech ethics: when, if ever, is it morally permissible to simulate sexual violence? Drawing on her recent work on simulation ethics, Mulert examines video games, virtual environments, sex robots, and consensual role-play to challenge the assumption that “it’s just pretend.” We discuss the Gamers’ Dilemma, the limits of consent, and why moral context—not just content—matters when evaluating simulated wrongdoing. This conversation is philosophical, careful, and candid. Listener discretion is advised. Links: * Mathilda’s Oxford Internet Institute Webpage [https://www.oii.ox.ac.uk/people/profiles/mathilda-mulert/] * Mathilda’s recent article [https://link.springer.com/article/10.1007/s10676-025-09878-7] For more, visit ethicscircle.org [https://www.ethicscircle.org/].
#27 Matheus Ferreira de Barros: Technology, Spheres, and the Human Being
In this episode of the AITEC podcast, Sam Bennett and Roberto Carlos speak with Matheus Ferreira de Barros, a philosopher of technology at PUC-Rio and the Federal University of Rio de Janeiro, about the work of Peter Sloterdijk. Ferreira de Barros introduces Sloterdijk's philosophy of technology, focusing on the idea that human beings and technology co-evolve and that technology plays a constitutive role in human life rather than merely serving as an external tool. The conversation explores Sloterdijk's Spheres project, including his account of insulation, distance from nature, and the creation of protective interiors that stabilize human existence at biological, psychological, and symbolic levels. The discussion also examines the loss of large-scale meaning structures in modernity, the role of religion and culture as technologies of existential security, and how contemporary technologies, including AI, may both disrupt and reshape the spheres through which human life becomes livable.
#26 Iwan Williams: Do Language Models Have Intentions?
In this episode of the AITEC podcast, Sam Bennett speaks with philosopher of mind and AI researcher Iwan Williams about his paper “Intention-like representations in language models?” Williams is a postdoctoral researcher at the University of Copenhagen and received his PhD from Monash University. The conversation explores whether large language models exhibit internal representations that resemble intentions, as distinct from beliefs or credences. Focusing on features such as directive function, planning, and commitment, Williams evaluates several empirical case studies and explains why current models may appear intention-like in some respects while falling short in others. The discussion also considers why intentions matter for communication, safety, and our broader understanding of artificial intelligence. For more, visit ethicscircle.org.
#25 Pilar López-Cantero: The Ethics of Breakup Chatbots
What if your ex never really left—because you trained a chatbot to be them? In this episode of the AITEC Podcast, we’re joined by philosopher Pilar López-Cantero to explore her provocative article, The Ethics of Breakup Chatbots. From the haunting potential of AI relationships to the dangers of narrative stagnation, we dive into what it means to love, let go, and maybe linger too long—with a machine. Are these bots helping us heal, or are they shaping a lonelier, more controllable kind of intimacy? For more info, visit ethicscircle.org [https://www.ethicscircle.org/].