Machines that fail us

Podcast by the University of St. Gallen, Philip Di Salvo

From educational institutions to healthcare providers, from employers to governing bodies, artificial intelligence technologies and algorithms are increasingly used to assess and decide upon various aspects of our lives. But are these systems truly impartial and just when they read humans and their behaviour? Our answer is that they are not. Despite their purported aim of enhancing objectivity and efficiency, these technologies paradoxically harbour systemic biases and inaccuracies, particularly in the realm of human profiling.

“Machines That Fail Us” investigates how AI and its errors affect different areas of our society, and how different societal actors are negotiating and coexisting with the human rights implications of AI. The series hosts the voices of some of the most engaged individuals in the fight for a better future with artificial intelligence.

The first season of “Machines That Fail Us” was made possible by a grant from the Swiss National Science Foundation (SNSF)’s “Agora” scheme, while the second is supported by the University of St. Gallen’s Communications Department. The podcast is produced by the Media and Culture Research Group at the Institute for Media and Communications Management. Dr. Philip Di Salvo, the main host, works as a researcher and lecturer at the University of St. Gallen.


All episodes

7 episodes
Machines That Fail Us - Season 2, Episode 2: "Teaching the Machine: The Hidden Work Behind AI’s Intelligence"

The training and coding of AI systems, particularly generative ones, depend on the work of humans teaching machines how to think. This work includes content moderation and labeling, is often conducted under exploitative conditions in the Global South, and remains hidden from users' view. In this episode, we discuss these issues with Adio Dinika, a Research Fellow at the Distributed AI Research Institute (DAIR), where he investigates the invisible labor behind AI systems and how it reflects the various inequalities within the AI industry.

We often perceive AI tools as entirely artificial, if not almost magical. In reality, the effectiveness and reliability of these systems depend significantly on the labor of humans who ensure that generative AI tools, for example, produce responses that are moderated and free from harmful or toxic content. High-quality training data is essential for building a high-performing large language model, and this data is made up of precisely labeled datasets—a task still carried out by human workers. However, this work is predominantly performed by people in the Global South, often under exploitative and unhealthy conditions, and remains largely invisible to end-users worldwide. The roles of these invisible workers, along with the challenges they face, represent some of the most visible signs of inequality within the AI and tech supply chain, yet they remain little discussed.

In this episode of Machines That Fail Us, we dive into this issue with Adio Dinika, a Research Fellow at the Distributed AI Research Institute (DAIR), an international research center focused on the social implications of AI, founded by Timnit Gebru. Together with Dr. Dinika, we explore the hidden human labor behind AI systems and the real, human nature of artificial intelligence.

27 Feb 2025 - 32 min
Machines That Fail Us - Season 2, Episode 1: "Artificial Lies and Synthetic Media: How AI Powers Disinformation"

How is artificial intelligence being used for disinformation purposes? How effective can it be in influencing our reality and political choices? We discuss the rise of synthetic media with Craig Silverman, a reporter for ProPublica who covers voting, platforms, disinformation, and online manipulation, and one of the world’s leading experts on online disinformation.

In the first season of Machines That Fail Us, our focus was to explore a fundamental question: what do AI errors reveal about their societal impact and our future with artificial intelligence? Through engaging discussions with global experts from journalism, activism, entrepreneurship, and academia, we examined how AI and its shortcomings are already influencing various sectors of society. Alongside analyzing present challenges, we envisioned strategies for creating more equitable and effective AI systems. As artificial intelligence becomes increasingly integrated into our lives, we decided to expand these conversations in the new season, delving into additional areas where machine learning, generative AI, and their societal effects are making a significant mark.

This season begins by examining AI's role in the spread of misinformation, disinformation, and the ways generative AI has been used to orchestrate influence campaigns. Are we unknowingly falling victim to machine-generated falsehoods? With 2024 being a record year for global elections, we will explore the extent to which AI-driven disinformation has shaped democratic processes. Has it truly had an impact, and if so, how?

In this episode, we are joined by Craig Silverman, an award-winning journalist, author, and one of the foremost authorities on online disinformation, fake news, and digital investigations. Currently reporting for ProPublica, Craig specializes in topics such as voting, disinformation, online manipulation, and the role of digital platforms.

30 Jan 2025 - 30 min
Machines That Fail Us #5: "The shape of AI to come"

The AI we have built so far comes with many shortcomings and concerns. At the same time, the AI tools we have today are the product of specific technological cultures and business decisions. Could we simply do AI differently? For the final episode of “Machines That Fail Us”, we are joined by a leading expert on the intersection of emerging technology, policy, and rights: Frederike Kaltheuner, founder of the consulting firm new possible and a Senior Advisor to the AI Now Institute. Together we discuss the shape of future AI and of our lives with it.

04 Jul 2024 - 30 min
Machines That Fail Us #4: Building different AI futures

We don’t necessarily have to build artificial intelligence the way we’re building it today. To make AI truly inclusive, we must look beyond Western techno-cultures and beyond the framing of technology as either utopian or dystopian. How could our AI future look different? We asked Payal Arora, Professor of Inclusive AI Cultures at Utrecht University.

13 Jun 2024 - 34 min
Machines That Fail Us #3: Errors and biases: tales of algorithmic discrimination

The record of biases, discriminatory outcomes, and errors, as well as the societal impacts of artificial intelligence systems, is now widely documented. However, the question remains: how is the struggle for algorithmic justice evolving? We asked Angela Müller, Executive Director of AlgorithmWatch Switzerland.

16 May 2024 - 27 min