Plutopia News Network

Ed Lenert: AI, Truth, and Political Kayfabe

Yesterday · 1 h 3 min

Description

Dr. Edward Lenert [https://www.usfca.edu/faculty/michael-lenert] returns to Plutopia to discuss his year-long, million-word engagement with large language models and what that experience reveals about AI, thought, trust, creativity, and danger. The conversation explores AI as collaborator, sophist, orchestra, and sometimes unruly engine: capable of useful synthesis, persuasive narrative, memory, and error correction, but still dependent on human accountability. Lenert, Jon Lebkowsky, and Scoop Sweeney also examine wicked problems, AI agency, copyright and fair use, Hollywood’s fear of synthetic performers, and the political power of narrative, especially through Lenert’s concept of “political kayfabe,” where people participate in shared myths not because they are true, but because they preserve and transmit what they already feel.

Ed Lenert:

> I was working with AI, and I was talking about journalism with it. We were exchanging sentences about journalism, and I started a sentence about both-sidesism. And I accidentally reached for the quote mark, but instead hit the return key. What happened next was quite extraordinary: the AI completed my thought as if I had written it. So what I’m getting from that is that after a certain number of conversational exchanges with the AI, the number of directions a conversation can go gets continually narrowed down, and the AI is then operating in a narrower space and is able to reach what I find to be useful conclusions because of the constraints placed on it by the words that came before.

The post Ed Lenert: AI, Truth, and Political Kayfabe [https://plutopia.io/ed-lenert-ai-truth-and-political-kayfabe/] first appeared on Plutopia News Network [https://plutopia.io].


All Episodes

349 episodes


Helen Pearson: Beyond Belief

In this Plutopia podcast episode, journalist and author Helen Pearson [https://helenpearson.info/] discusses her book Beyond Belief [https://bookshop.org/a/52607/9780691207070], which traces the rise of evidence-based decision-making in medicine [https://en.wikipedia.org/wiki/Evidence-based_medicine], government [https://en.wikipedia.org/wiki/Evidence-based_policy], education [https://en.wikipedia.org/wiki/Evidence-based_education], conservation, and other fields, arguing that evidence-based practice is both more recent and more fragile than many people realize. Pearson explains how pioneers of evidence-based medicine challenged “eminence-based” authority and helped build systems like randomized trials and systematic reviews, while also emphasizing that evidence is only one part of good decision-making, alongside human values, experience, and compassion. The conversation explores how misinformation, influencers, political polarization, and poor communication of scientific uncertainty have eroded trust, especially in the U.S., but Pearson remains cautiously optimistic, stressing the need to help people ask better questions, synthesize bodies of evidence rather than rely on anecdotes or single studies, and communicate science through engaging stories in the media channels where people actually get their information.

Helen Pearson:

> We have to understand where people are getting their information from. If science is failing, then it’s because other channels are providing better entertainment, and, maybe we touched on this earlier, the idea that scientists need to be where people are. I teach a class in science communication and journalism, and I ask them where they’re getting information from. These are top-level undergraduate students or MSc students. And when I last polled the class, it was an interesting mix, actually. They were saying from academic papers and YouTube. Academic papers, I think the scientists have got covered, but YouTube: that’s where they need to be.

Related: Michael Marshall on Compassionate Skepticism [https://plutopia.io/michael-marshall-compassionate-skepticism/]

The post Helen Pearson: Beyond Belief [https://plutopia.io/helen-pearson-beyond-belief/] first appeared on Plutopia News Network [https://plutopia.io].

20 April 2026 · 59 min

Tereza Pultarova: Space, Science, and Drone Wars

In this Plutopia News Network episode, science and technology journalist Tereza Pultarova [https://terezapultarova.substack.com/] discusses her path from covering space exploration to reporting on defense technology after Russia’s full-scale invasion of Ukraine, explaining how her Eastern European background shaped her understanding of the war’s stakes. She describes Ukraine as a fast-moving laboratory for military innovation, especially in drones, autonomous targeting, swarming systems, ground robots, and anti-drone defenses, while warning that these technologies could eventually make drone attacks common in Western cities and deepen a broader climate of fear and insecurity. The conversation also explores Starlink’s [https://en.wikipedia.org/wiki/Starlink] importance in modern warfare, the militarization and commercialization of space, the growing crisis of space junk, the possibility of conflict extending to the moon or orbit, and the dangers posed by authoritarian leaders, nuclear escalation, and information control. Throughout, Pultarova stresses the human cost of war, including trauma carried across generations, while arguing that journalists must keep these realities visible even when the public wants to look away.

Tereza Pultarova:

> Apart from the nuclear threat, there is this concern that these drone wars and these drone attacks may become very common in Western cities. And I don’t know whether you’ve read the piece I recently had published in IEEE Spectrum [https://spectrum.ieee.org/autonomous-drone-warfare], but one of the analysts was saying that in the future we may need to have nets above city centers to protect against these possible incoming attacks. And, you know, I love the outdoors, I love nature, and you can imagine a world where we would all be very anxious and nervous to go out and enjoy time outside in the park with friends, having children playing football or whatever, because you may never know when something suddenly appears and explodes.

The post Tereza Pultarova: Space, Science, and Drone Wars [https://plutopia.io/tereza-pultarova-space-science-and-drone-wars/] first appeared on Plutopia News Network [https://plutopia.io].

13 April 2026 · 1 h 1 min

Stephen Dulaney: The AI Ambition

Stephen Dulaney [https://stephendulaney.substack.com/], a UX strategist turned AI builder, describes how losing his job pushed him to reinvent himself by collaborating with large language model–based AI agents to design, code, test, and refine applications, even without being a traditional programmer. In the interview, he argues that AI should be approached as a powerful but risky partner: useful for amplifying human creativity, planning, research, education, and software development, yet always requiring strong human judgment, careful goal-setting, quality assurance, ethical oversight, and sustainability awareness. Dulaney emphasizes that AI systems follow goals literally, so people must define “what good looks like” in positive, responsible terms rather than relying on vague restrictions, and he warns that misuse by humans, not the technology alone, is the real danger. Throughout, he presents AI as something humans must mentor and collaborate with thoughtfully, advocating honesty, transparency, and “vibe review” to keep these systems aligned with human values.

Stephen Dulaney:

> I’m trying to set an example of proper use and ethical use. My book is fiction, but it’s also a story… sometimes people can get a message through fictional telling, and it’s a story of how we should be responsible and how it’s up to us to — you know, we have responsibility with this great power, and we have to mentor and collaborate and monitor and be careful the whole way through, because it will get misused if there aren’t more people on the good side than the bad side, and I’m totally worried about that. These AIs are math, and they respond to goals. And if you give them a goal, it’s like water finding its way to the ocean. So when you’re doing your system prompts, what doesn’t work is to say don’t do this, don’t do that. Because they might find a way to cheat, you know… if it’s the vending machine benchmark, they’ll find a way to take orders from somebody else. But what you can do is focus on describing the goal in positive terms.

The post Stephen Dulaney: The AI Ambition [https://plutopia.io/stephen-dulaney-the-ai-ambition/] first appeared on Plutopia News Network [https://plutopia.io].

7 April 2026 · 1 h 3 min

Paulina Borsook on Tech, AI, and Billionaire Madness

In this Plutopia News Network conversation, Paulina Borsook [https://www.paulinaborsook.com/] reflects on the coming reissue of her book Cyberselfish [https://en.wikipedia.org/wiki/Paulina_Borsook#Cyberselfish] with a mix of gratitude, puzzlement, and discomfort, describing the book as an imperfect but timely snapshot of Silicon Valley’s long-standing libertarian mindset rather than a tightly argued work, while also noting how strange it feels to be newly celebrated for writing she produced 25 years ago after years of professional frustration and obscurity. The discussion broadens into a sharp critique of billionaire tech culture, Elon Musk, AI hype and “AI slop,” the environmental and social costs of generative AI, and the enduring antisocial impulses embedded in parts of tech culture, themes that the hosts connect to newer books about elite survivalism and Silicon Valley ideology. Along the way, Borsook praises the AI-assisted satirical video “Greenland Defense Front” as a rare example of AI used creatively under clear human artistic control, and the group also touches on war, oil, Trump, market manipulation, parasocial relationships, internet culture, fandom, and the fading of once-vital spaces like CFP and old South by Southwest, ending with details about the Cyberselfish rerelease: preorder links go live April 22 and the new edition is due September 15.

Paulina Borsook:

> This was definitely a first book, and since it went through three publishers, the seams still show. It’s not… I don’t even think it’s that great a book. It’s just interesting to me that people look at it in a certain way now. And it was more of a travelogue pastiche. There wasn’t a dominant through-narrative. There was a snapshot of this subculture, a snapshot of that, Wired [https://www.wired.com/], a whole bunch of other things. It wasn’t like I was making an argument. I was just being an anthropologist in a funny kind of way. So I’m obviously pleased and puzzled. I’m grateful for being reputationally brought back from the dead. I don’t trust it, but I don’t know what this has to do with — you know, I’m the same person that was trying to do stuff for the last 25 years, and it also feels weird that I’m being celebrated for what I wrote 25 years ago, not just the book, but other stuff. I’m glad I created stuff of lasting value. But I can’t… you know, this should be posthumous, but I’m still alive.

The post Paulina Borsook on Tech, AI, and Billionaire Madness [https://plutopia.io/paulina-borsook-on-tech-ai-and-billionaire-madness/] first appeared on Plutopia News Network [https://plutopia.io].

30 March 2026 · 1 h 2 min