
Listen to Pondering AI
Podcast by Kimberly Nevala, Strategic Advisor - SAS
How is the use of artificial intelligence (AI) shaping our human experience? Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse. All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
Try free for 7 days
NOK 99.00 / month after the trial period. Cancel anytime.
All episodes
70 episodes
Robert Mahari [https://www.linkedin.com/in/robert-mahari/] examines the consequences of addictive intelligence, adaptive responses to regulating AI companions, and the benefits of interdisciplinary collaboration. Robert and Kimberly discuss the attributes of addictive products; the allure of AI companions; AI as a prescription for loneliness; not assuming only the lonely are susceptible; regulatory constraints and gaps; individual rights and societal harms; adaptive guardrails and regulation by design; agentic self-awareness; why uncertainty doesn’t negate accountability; AI’s negative impact on the data commons; economic disincentives; and interdisciplinary collaboration and future research.

Robert Mahari is a JD-PhD researcher at the MIT Media Lab and Harvard Law School, where he studies the intersection of technology, law and business. In addition to computational law, Robert has a keen interest in AI regulation and in embedding regulatory objectives and guardrails into AI designs.

A transcript of this episode is here [https://pondering-ai.transistor.fm/episodes/ep70/transcript].

Additional Resources:
* The Allure of Addictive Intelligence (article): https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/
* Robert Mahari (website): https://robertmahari.com/

Phaedra Boinodiris [https://www.linkedin.com/in/phaedra/] minds the gap between AI access and literacy by integrating educational silos, practicing human-centric design, and cultivating critical consumers. Phaedra and Kimberly discuss the dangerous confluence of broad AI accessibility with lagging AI literacy and accountability; coding as a bit player in AI design; data as an artifact of human experience; the need for holistic literacy; creating critical consumers; bringing everyone to the AI table; unlearning our siloed approach to education; multidisciplinary training; human-centricity in practice; why good intent isn’t enough; and the hard work required to develop good AI.

Phaedra Boinodiris is IBM’s Global Consulting Leader for Trustworthy AI and co-author of the book AI for the Rest of Us [https://aifortherestofus.us/]. As an RSA Fellow, co-founder of the Future World Alliance, and academic advisor, Phaedra is shaping a future in which AI is accessible and good for all.

A transcript of this episode is here [https://pondering-ai.transistor.fm/episodes/ep69/transcript].

Additional Resources:
* Phaedra’s website: https://phaedra.ai/
* The Future World Alliance: https://futureworldalliance.org/

Ryan Carrier [https://www.linkedin.com/in/ryan-carrier-fhca-b286924/] trues up the benefits and costs of responsible AI while debunking misleading narratives and underscoring the positive power of the consumer collective. Ryan and Kimberly discuss the growth of AI governance; predictable resistance; the (mis)belief that safety impedes innovation; the “cost of doing business”; downside and residual risk; unacceptable business practices; regulatory trends and the law; effective disclosures and deceptive design; the value of independence; auditing as a business asset; the AI lifecycle; ethical expertise and choice; ethics boards as advisors, not activists; and voting for beneficial AI with our wallets.

Ryan Carrier is the Executive Director of ForHumanity [https://forhumanity.center/], a non-profit organization improving AI outcomes through increased accountability and oversight.

A transcript of this episode is here [https://pondering-ai.transistor.fm/episodes/ep68/transcript].

Olivia Gambelin [https://www.linkedin.com/in/oliviagambelin/] values ethical innovation, revels in human creativity and curiosity, and advocates for AI systems that reflect and enable human values and objectives. Olivia and Kimberly discuss philogagging; us vs. “them” (i.e., AI systems) comparisons; enabling curiosity and human values; being accountable for the bombs we build (figuratively speaking); AI models as the tip of the iceberg; literacy, values-based judgement and trust; replacing proclamations with strong living values; The Values Canvas; inspired innovations; falling back in love with technology; foundational risk practices; and optimism and valuing what matters.

Olivia Gambelin is a renowned AI ethicist and the Founder of Ethical Intelligence [https://www.ethicalintelligence.co/], the world’s largest network of Responsible AI practitioners. An active researcher, policy advisor and entrepreneur, Olivia helps executives and product teams innovate confidently with AI.

A transcript of this episode is here [https://pondering-ai.transistor.fm/episodes/ep67/transcript].

Additional Resources:
* Responsible AI: Implement an Ethical Approach in Your Organization (book): https://www.amazon.com/Responsible-AI-Implement-Approach-Organization/dp/1398615706/
* Plato & a Platypus Walk Into a Bar: Understanding Philosophy Through Jokes (book): https://www.amazon.com/Plato-Platypus-Walk-into-Understanding/dp/0143113879/
* The Values Canvas (RAI design tool): https://www.thevaluescanvas.com/about
* Women Shaping the Future of Responsible AI (organization): https://sheshapes.ai/
* In Pursuit of Good Tech (newsletter): https://pursuitofgoodtech.substack.com/

Helen Beetham [https://www.linkedin.com/in/helen-beetham/] isn’t waiting for an AI upgrade as she considers what higher education is for, why learning is ostensibly ripe for AI, and how to diversify our course. Helen and Kimberly discuss the purpose of higher education; the current two-tribe moment; systemic effects of AI; rethinking learning; GenAI affordances; the expertise paradox; productive developmental challenges; converging on an educational norm; teachers as data laborers; the data-driven personalization myth; US edtech and instrumental pedagogy; the fantasy of AI’s teacherly behavior; students as actors in their learning; critical digital literacy; a story of future education; AI-ready graduates; pre-automation and AI adoption; diversity of expression and knowledge; two-tiered educational systems; and the rich heritage of universities.

Helen Beetham is an educator, researcher and consultant who advises universities and international bodies worldwide on their digital education strategies. Helen is also a prolific author whose publications include “Rethinking Pedagogy for a Digital Age”. Her Substack, Imperfect Offerings [https://helenbeetham.substack.com/], is recommended by the Guardian/Observer for its wise and thoughtful critique of generative AI.

A transcript of this episode is here [https://pondering-ai.transistor.fm/episodes/ep66/transcript].

Additional Resources:
* Imperfect Offerings: https://helenbeetham.substack.com/
* Audrey Watters: https://audreywatters.com/
* Kathryn (Katie) Conrad: https://www.linkedin.com/in/kathryn-katie-conrad-1b0749b/
* Anna Mills: https://www.linkedin.com/in/anna-mills-oer/
* Dr. Maya Indira Ganesh: https://www.linkedin.com/in/dr-des-maya-indira-ganesh/
* Tech(nically) Politics: https://www.technicallypolitics.org/
* LOG OFF: http://www.logoffmovement.org/
* Rest of World: http://www.restofworld.org/
* Derechos Digitales: http://www.derechosdigitales.org