TEC Talks

A podcast from the Notre Dame Technology Ethics Center

Hosted by Kirsten Martin, director of the Notre Dame Technology Ethics Center (ND TEC), TEC Talks features conversations on a broad range of topics in...

All episodes

15 episodes
Our Data Privacy and the Issue With Inferences

How much would “owning” your data actually protect your privacy? Host Kirsten Martin is joined by Ignacio Cofone, an assistant professor and Canada Research Chair in Artificial Intelligence Law & Data Governance at McGill University’s Faculty of Law. His research focuses on privacy harms and on algorithmic decision-making, with his current projects examining how to evaluate standing and compensation in privacy class actions and how to prevent algorithmic discrimination.

Ignacio came on the show to talk about his paper “Privacy Standing,” which appeared in the University of Illinois Law Review. Providing courts with guidance on how to assess privacy injuries and advocating for people’s rights to seek compensation for them (i.e., legal standing), Ignacio’s paper distinguishes between what constitutes a privacy loss, a privacy harm, and an actionable privacy injury. He also seeks to define downstream, consequential harms as something distinct from privacy harms so that the latter can be recognized as harmful on their own and not dismissed simply because they haven’t (yet) led to something more tangible like identity theft or a financial loss.

As for where privacy harms originate, Ignacio emphasizes how frequently they arise not from the moment our data is collected but rather from the inferences later made about us from that data—or even from the data of others who just happen to be similar to us. That means the prevalent approach of giving people notice and choice—which Kirsten traces back to the economics of information literature of the 1960s—and its focus on asking users for permission to collect their data is in many ways inadequate when it comes to protecting our privacy.

Episode Links

* Paper Discussed in the Episode: “Privacy Standing” [https://illinoislawreview.org/print/vol-2022-no-4/privacy-standing/]
* Ignacio’s Bio [https://www.mcgill.ca/law/profs/cofone-ignacio]
* Episode Transcript [https://techethics.nd.edu/tec-talks/tec-talks-episode-transcripts/]

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics whose work our guest is particularly excited about. Ignacio highlighted three fellow law professors who also study privacy, among other issues:

* Salomé Viljoen [https://michigan.law.umich.edu/faculty-and-scholarship/our-faculty/salome-viljoen] (University of Michigan)*
* Rebecca Wexler [https://www.law.berkeley.edu/our-faculty/faculty-profiles/rebecca-wexler/#tab_profile] (University of California, Berkeley)
* Margot Kaminski [https://lawweb.colorado.edu/profiles/profile.jsp?id=825] (University of Colorado Boulder)

*Salomé was also the guest for episode 10 of TEC Talks, “Moving Data Governance to the Forest From the Trees.”

Follow ND TEC on Twitter [https://twitter.com/techethicsnd] and LinkedIn [https://www.linkedin.com/company/notre-dame-technology-ethics-center-nd-tec/]

30 Nov 2022 - 29 min
AI, Anti-Discrimination Law, and Your (Artificial) Immutability

How could a personal characteristic like eye movement affect, say, whether you get a loan? Host Kirsten Martin is joined by Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute (OII) at the University of Oxford. She founded and leads OII’s Governance of Emerging Technologies (GET) Research Programme, which investigates legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies. Sandra came on the show to talk about her paper “The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law,” which is forthcoming in the Tulane Law Review.

Most people are familiar with the idea of anti-discrimination law and its focus on protected-class attributes—e.g., race, national origin, age, etc.—that represent something immutable about who we are as individuals and that, as Sandra explains, have historically been the criteria humans have used to hold each other back. She says that with algorithms, we’re now being placed in other groups that are also largely beyond our control but that can nevertheless affect our access to goods and services and things like whether we get hired for a job. These groups fall into two main categories: people who share non-protected attributes—say, what type of internet browser they use, how their retinas move, or whether they own a dog—and people who share characteristics that are significant to computers (e.g., clicking behavior) but for which we as humans have no social concept.

This leads to what Sandra calls “artificial immutability” in the attributes used to describe us: the idea that there are things about ourselves we can’t change, not because they were given at birth but because we’re unaware they’ve been assigned to us by an algorithm. She offers a definition of what constitutes an immutable trait and notes that there can be legitimate uses of such traits in decision-making, but that in those cases organizations need to be able to explain why they’re relevant.

Episode Links

* Paper Discussed in the Episode: “The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law” [https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4099100]
* Sandra’s Bio [https://www.oii.ox.ac.uk/people/profiles/sandra-wachter/]
* Episode Transcript [https://techethics.nd.edu/tec-talks/tec-talks-episode-transcripts/]

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics whose work our guest is particularly excited about. Sandra highlighted University of Cambridge psychologist Amy Orben [https://www.neuroscience.cam.ac.uk/directory/profile.php?aco35] and her research on online harms, particularly in the context of young people’s use of social media.

Follow ND TEC on Twitter [https://twitter.com/techethicsnd] and LinkedIn [https://www.linkedin.com/company/notre-dame-technology-ethics-center-nd-tec/]

16 Nov 2022 - 22 min
Algorithmic Fairness is More Than a Math Problem

Host Kirsten Martin is joined by Ben Green, an assistant professor at the Gerald R. Ford School of Public Policy and a postdoctoral scholar in the Michigan Society of Fellows at the University of Michigan. Specializing in the social and political impacts of government algorithms, with a focus on algorithmic fairness, smart cities, and the criminal justice system, Ben is also an affiliate of the Berkman Klein Center for Internet & Society at Harvard University and a fellow of the Center for Democracy & Technology. He came on the show to talk about his paper “Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness,” which recently appeared in Philosophy & Technology.

Ben begins by explaining the aforementioned “impossibility of fairness,” an idea that describes the incompatibility of different mathematical notions of what makes a system fair (a toy numerical sketch of this incompatibility appears after these notes). By focusing on meeting just one of these formal definitions, an algorithm that is mathematically “fair” can nevertheless yield decisions that re-entrench real-world injustices, including the very ones it may have been designed to counter.

Asking whether the ultimate purpose of an algorithm is to satisfy a mathematical formalism or rather to improve society, Ben puts forward an alternative notion he calls substantive algorithmic fairness—his detailed diagram of which, labelled Figure 2 in the paper, made a lasting impression on Kirsten. His approach still envisions a role for mathematical conceptions of fairness, but it repositions them as one consideration in a broader process whose primary concern is accounting for and mitigating both the upstream inequalities that exist before an algorithm is deployed and the downstream harms present afterwards.

Episode Links

* Paper Discussed in the Episode: “Escaping the Impossibility of Fairness: From Formal to Substantive Algorithmic Fairness” [https://dx.doi.org/10.2139/ssrn.3883649] (Note: Figure 2 referenced in the episode appears on p. 17.)
* Ben’s Bio [https://fordschool.umich.edu/faculty/ben-green]
* Episode Transcript [https://techethics.nd.edu/tec-talks/tec-talks-episode-transcripts/]

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Ben highlighted four he says are working at the intersections of AI, ethics, race, and real-world social impact:

* Rashida Richardson [https://law.northeastern.edu/faculty/richardson/] (Northeastern University)
* Anna Lauren Hoffmann [https://www.annaeveryday.com/] (University of Washington)
* Lily Hu [https://philosophy.yale.edu/people/lily-hu] (Yale University)
* Rodrigo Ochigame [https://ochigame.org/] (Leiden University)

Follow ND TEC on Twitter [https://twitter.com/techethicsnd] and LinkedIn [https://www.linkedin.com/company/notre-dame-technology-ethics-center-nd-tec/]
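To make the impossibility result concrete, here is a minimal, hypothetical sketch (not drawn from Ben’s paper): when two groups have different base rates, even a perfect classifier satisfies equalized odds while necessarily violating demographic parity, so no single system can meet both criteria at once. The group names and rates below are invented for illustration.

```python
# Toy illustration of the "impossibility of fairness": with unequal base
# rates, two common formal fairness criteria cannot both be satisfied.
# Groups and rates are hypothetical.

base_rate = {"group_a": 0.5, "group_b": 0.2}  # P(Y = 1) within each group

# Suppose the classifier is perfect (it predicts the true label exactly).
# Equalized odds then holds trivially: TPR = 1.0 and FPR = 0.0 in both groups.
tpr = {g: 1.0 for g in base_rate}
fpr = {g: 0.0 for g in base_rate}

# Demographic parity asks for equal selection rates across groups, but a
# perfect classifier's selection rate equals each group's base rate.
selection_rate = dict(base_rate)

equalized_odds = len(set(tpr.values())) == 1 and len(set(fpr.values())) == 1
demographic_parity = len(set(selection_rate.values())) == 1

print("Equalized odds satisfied:    ", equalized_odds)      # True
print("Demographic parity satisfied:", demographic_parity)  # False: 0.5 != 0.2
```

The sketch mirrors Ben’s argument: whichever formal criterion a designer optimizes, the other is violated, so the choice between them is a substantive question rather than a purely mathematical one.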

19 Oct 2022 - 27 min
Provoking Alternative Visions of Technology

Host Kirsten Martin is joined by Daniel Susser, an assistant professor in the College of Information Sciences and Technology and a research associate in the Rock Ethics Institute at Penn State University. A philosopher by training, he works at the intersection of technology, ethics, and policy, with his research currently focused on questions about privacy, online influence, and automated decision-making.

Daniel came on the show to talk about his short essay “Data and the Good?,” which recently appeared in Surveillance & Society. Considering the intersection of scholarship in privacy law and surveillance studies, he notes how research in these fields tends to focus on critiques of existing technologies and their potential harms. While he and Kirsten are quick to emphasize how necessary this kind of work is, Daniel describes his paper as a provocation meant to push researchers, himself included, to also put forward substantive alternatives for how technology could or should be used. He says there are understandable reasons why this doesn’t happen more often, but that absent competing visions for our technological future, we are beholden to those crafted by the technology industry.

Episode Links

* Paper Discussed in the Episode: “Data and the Good?” [https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/15764]
* Daniel’s Bio [https://ist.psu.edu/directory/dus1043]
* Episode Transcript [https://techethics.nd.edu/tec-talks/tec-talks-episode-transcripts/]

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. In addition to citing classic texts in science and technology studies by Langdon Winner and Phil Agre, as well as The Convivial Society blog [https://theconvivialsociety.substack.com/], which applies classic writing in the philosophy of technology to contemporary problems, Daniel highlighted three people working to advance alternative visions of technology:

* Ruha Benjamin [https://www.ruhabenjamin.com/] (Princeton University)
* Salomé Viljoen [https://michigan.law.umich.edu/faculty-and-scholarship/our-faculty/salome-viljoen] (University of Michigan)*
* James Muldoon [https://jamesmuldoon.org/] (University of Exeter)

*Salomé was also the guest for episode 10 of TEC Talks, “Moving Data Governance to the Forest From the Trees.”

Follow ND TEC on Twitter [https://twitter.com/techethicsnd] and LinkedIn [https://www.linkedin.com/company/notre-dame-technology-ethics-center-nd-tec/]

5 Oct 2022 - 16 min
Moving Data Governance to the Forest From the Trees

Host Kirsten Martin is joined by Salomé Viljoen, an assistant professor of law at the University of Michigan Law School and an affiliate of the Berkman Klein Center for Internet & Society at Harvard University. She studies the information economy, particularly data about people and the automated systems it trains, and is interested in how information law structures inequality and how alternative legal arrangements might address that inequality.

Salomé came on the show to talk about her paper “A Relational Theory of Data Governance,” which appeared in The Yale Law Journal. The paper proposes a new framework for thinking about how we govern the use of people’s data, so she and Kirsten begin by discussing the current, traditional approach, which focuses on the privacy of individual transactions and the degree to which we consent to share our own information. Salomé explains what this approach misses: in the digital economy, data isn’t collected to make decisions about any one person. Instead, it’s used to understand populations of people with similar interests, backgrounds, etc., and then predict things about them, such that opting out of sharing your own data doesn’t change the inferences being made about you (a toy sketch of this dynamic appears after these notes). Based on Salomé’s argument, Kirsten compares putting all our attention on the handoff of our data, rather than on what happens with it afterwards, to the old adage about missing the forest for the trees.

Salomé then details what she means by moving toward a relational theory of data governance: one that accounts for the population-level impacts of big data, recognizes both its potential benefits and harms, and prioritizes scrutiny of the data flows most likely to affect vulnerable communities in disproportionately negative ways (e.g., facial recognition data).

Episode Links

* Paper Discussed in the Episode: “A Relational Theory of Data Governance” [https://www.yalelawjournal.org/feature/a-relational-theory-of-data-governance]
* Salomé’s Bio [https://michigan.law.umich.edu/faculty-and-scholarship/our-faculty/salome-viljoen]
* Episode Transcript [https://techethics.nd.edu/tec-talks/tec-talks-episode-transcripts/]

At the end of each episode, Kirsten asks for a recommendation about another scholar in tech ethics (or several) whose work our guest is particularly excited about. Salomé highlighted four:

* Beatriz Botero Arcila [https://www.sciencespo.fr/ecole-de-droit/en/profile/beatriz-botero-arcila-0.html] (Sciences Po)
* Ignacio Cofone [https://www.mcgill.ca/law/about/profs/cofone-ignacio] (McGill University)
* Elettra Bietti [https://www.lawschool.cornell.edu/faculty-research/faculty-directory/elettra-bietti/] (New York University and Cornell Tech)
* Amanda Parsons [https://lawweb.colorado.edu/profiles/profile.jsp?id=1076] (University of Colorado Boulder)

Follow ND TEC on Twitter [https://twitter.com/techethicsnd] and LinkedIn [https://www.linkedin.com/company/notre-dame-technology-ethics-center-nd-tec/]
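As a minimal, hypothetical sketch of that population-level dynamic (invented data, not from Salomé’s paper): a model trained only on people who agreed to share their data can still generate inferences about someone who opted out, based on traits they share with the training population.

```python
# Hypothetical illustration: an inference about a person who never shared
# their data, drawn from a model trained on similar people who did.
from sklearn.linear_model import LogisticRegression

# People who consented to share data:
# features = [age, hours online per day], label = 1 if they bought the product
X_shared = [[25, 6], [30, 5], [45, 2], [50, 1], [28, 7], [52, 2]]
y_shared = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X_shared, y_shared)

# Someone who opted out of all data collection, but whose observable traits
# resemble the younger, heavily online cohort, still gets an inference made
# about them: opting out did not stop the prediction.
opted_out_person = [[27, 6]]
print(model.predict(opted_out_person))        # likely [1]
print(model.predict_proba(opted_out_person))  # model's confidence in the inference
```

The sketch is deliberately crude, but it captures why consent-centric, transaction-by-transaction governance misses the population-level data relations the paper describes.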

21 Sep 2022 - 31 min