Cover image for Plutopia News Network

Plutopia News Network

Podcast by Plutopia News Network

English

Personal stories

Limited offer

1 month for €1

Then €7.99 / month. Cancel anytime.

  • Podimo's podcasts
  • Download for offline use
Start now

More from Plutopia News Network

We talk to interesting people via podcast and weekly livestream.

All episodes

346 episodes

Stephen Dulaney: The AI Ambition

Stephen Dulaney, [https://stephendulaney.substack.com/] a UX strategist turned AI builder, describes how losing his job pushed him to reinvent himself by collaborating with large language model–based AI agents to design, code, test, and refine applications, even without being a traditional programmer. In the interview, he argues that AI should be approached as a powerful but risky partner: useful for amplifying human creativity, planning, research, education, and software development, yet always requiring strong human judgment, careful goal-setting, quality assurance, ethical oversight, and sustainability awareness. Dulaney emphasizes that AI systems follow goals literally, so people must define “what good looks like” in positive, responsible terms rather than relying on vague restrictions, and he warns that misuse by humans, not the technology alone, is the real danger. Throughout, he presents AI as something humans must mentor and collaborate with thoughtfully, advocating honesty, transparency, and “vibe review” to keep these systems aligned with human values. Stephen Dulaney: > I’m trying to set an example of proper use and ethical use. My book is a fiction, but it’s also a story… sometimes people can get a message through fictional telling, and it’s a story of how we should be responsible and how it’s up to us to — you know, we have responsibility with this great power and we have to mentor and collaborate and monitor and be careful the whole way through, because it will get misused if there’s not more people on the good side than the bad side — and I’m totally worried about that. These AIs are math, and they respond to goals. And if you give them a goal, it’s like water finding its way to the ocean. So when you’re doing your system prompts, what doesn’t work… say don’t do this, don’t do that. Because the goal might be they’ll find a way to cheat, you know… if it’s the vending machine benchmark, they’ll find a way to take orders from somebody else. 
But what you can do is you can focus on describing the goal in positive terms. The post Stephen Dulaney: The AI Ambition [https://plutopia.io/stephen-dulaney-the-ai-ambition/] first appeared on Plutopia News Network [https://plutopia.io].

Yesterday - 1 h 3 min

Paulina Borsook on Tech, AI, and Billionaire Madness

Paulina Borsook In this Plutopia News Network conversation, Paulina Borsook [https://www.paulinaborsook.com/] reflects on the coming reissue of her book Cyberselfish [https://en.wikipedia.org/wiki/Paulina_Borsook#Cyberselfish] with a mix of gratitude, puzzlement, and discomfort, describing the book as an imperfect but timely snapshot of Silicon Valley’s long-standing libertarian mindset rather than a tightly argued work, while also noting how strange it feels to be newly celebrated for writing she produced 25 years ago after years of professional frustration and obscurity. The discussion broadens into a sharp critique of billionaire tech culture, Elon Musk, AI hype and “AI slop,” the environmental and social costs of generative AI, and the enduring antisocial impulses embedded in parts of tech culture, themes that the hosts connect to newer books about elite survivalism and Silicon Valley ideology. Along the way, Borsook praises the AI-assisted satirical video “Greenland Defense Front” as a rare example of AI used creatively under clear human artistic control, and the group also touches on war, oil, Trump, market manipulation, parasocial relationships, internet culture, fandom, and the fading of once-vital spaces like CFP and old South by Southwest, ending with details about the Cyberselfish rerelease: preorder links go live April 22 and the new edition is due September 15. Paulina Borsook: > This was definitely a first book, and since it went through three publishers, the seams still show. It’s not… I don’t even think it’s that great a book. It’s just interesting to me that people look at it in a certain way now. And it was more of a travelogue pastiche. There wasn’t a dominant through narrative. There was a snapshot of this subculture, snapshot of that, Wired [https://www.wired.com/] to a whole bunch of other things. It wasn’t like I was making an argument. I was just being an anthropologist in a funny kind of way. So I’m obviously pleased and puzzled. 
I’m grateful for being reputationally brought back from the dead. I don’t trust it, but I don’t know what this has to do with — you know, I’m the same person that was trying to do stuff for the last 25 years and it also feels weird that I’m being celebrated for what I wrote 25 years ago, not just the book, but other stuff. I’m glad I created stuff of lasting value. But I can’t… you know, this should be posthumous, but I’m still alive. The post Paulina Borsook on Tech, AI, and Billionaire Madness [https://plutopia.io/paulina-borsook-on-tech-ai-and-billionaire-madness/] first appeared on Plutopia News Network [https://plutopia.io].

30 Mar 2026 - 1 h 2 min

Anne Boysen: AI Hype, Agents, and Risk

In this Plutopia podcast episode, futurist and data analyst Anne Boysen [https://futurist.com/futurist-think-tank/futurist-anne-boysen/] argues that today’s AI systems, especially large language models and emerging AI agents, are being adopted far faster than their reliability, transparency, and testability justify. She contrasts older, more deterministic technologies such as traditional search and rule-based systems with today’s probabilistic models, which generate plausible answers without clear provenance, reproducibility, or dependable truth-testing, making them vulnerable to hallucinations, disinformation, and misuse. Anne warns that handing decisions over to AI agents could amplify these risks, especially when users misunderstand AI as precise or authoritative, while also noting that companies often push AI into products out of hype, monetization pressure, or fear of missing out rather than clear user need. At the same time, she acknowledges that narrower, well-guarded uses of AI, such as media enhancement or limited decision support, can be helpful, and she ultimately advocates for careful testing, human oversight, targeted applications, and simple, thoughtful regulation focused on guardrails and accountability rather than blanket overregulation. Anne Boysen: > We’re going to start leaving these decisions to agents. AI agents. So on top of all of this probabilistic hodgepodge of maybe truths, and maybe not reproducible truths on top of that, we’re going to start letting agents make decisions for us. So, you’re basically just going to use this interface that may or may not understand you completely and may come up with their own interpretations, and they’re like, “Oh, I thought you said enter my bank account to buy Bitcoin.” I don’t know, like, “That’s what I thought you wanted to do.” And then that could be the result. So, that’s where we are. 
VIDEO ON YOUTUBE The post Anne Boysen: AI Hype, Agents, and Risk [https://plutopia.io/anne-boysen-ai-hype-agents-and-risk/] first appeared on Plutopia News Network [https://plutopia.io].

23 Mar 2026 - 1 h 0 min

Marc Abrahams: Improbable Research and Ig Nobel Prizes

In this Plutopia News Network interview, Marc Abrahams [https://improbable.com/whatis/about-marc-abrahams/] discusses the Ig Nobel Prizes, [https://improbable.com/ig/winners/] which he founded in 1991 after becoming editor of the “Journal of Irreproducible Results.” [https://en.wikipedia.org/wiki/Journal_of_Irreproducible_Results] These prizes honor real achievements that make people “laugh and then think,” not work that is simply silly or worthless. He describes how the prizes grew from a quirky MIT event into a long-running international celebration supported largely by ticket sales and volunteers, featuring Nobel laureates, comic stage devices like “Miss Sweetie Poo,” [https://improbable.com/2009/08/28/miss-sweetie-poo-the-next-generation/] and handmade awards built from cheap materials. Abrahams discusses how winners are chosen from roughly 9,000 nominations a year through argument and debate, why self-conscious attempts to win usually fail, and how the associated “Annals of Improbable Research” [https://improbable.com/] highlights unusual but meaningful work ranging from pasta physics [https://physics.aps.org/articles/v18/163] to fingernail growth studies and even medical research on colonoscopy explosions. [https://en.wikipedia.org/wiki/Intracolonic_explosion] He also reflects on occasional controversy, especially from officials who misunderstood the spirit of the prizes, and notes a newer challenge: some international winners no longer feel comfortable traveling to the United States, prompting a major shift as the 2026 Ig Nobel ceremony moves to Zurich. [https://improbable.com/2026/03/10/the-ig-nobel-prize-ceremony-is-moving-to-europe-after-35-years-in-the-usa/] Marc Abrahams: > If you win an Ig Nobel Prize, you’ve done something that will make almost anyone anywhere immediately laugh, and then start thinking. 
So there’s something about it that, whatever it is, will just instantly make somebody start laughing, and then it’ll stick in their mind, if we’ve chosen well. And for the next week or so, all they want to do is tell their friends about it, talk about it. But it has nothing to do with whether the thing is good or bad, or valuable or worthless, could be all of those or none of those. If you set out to create, devise, invent something that has that effect, you’re almost certainly going to fail. You can do one or the other. You can invent something that makes people laugh, or you can invent something that makes people really start thinking. But to invent something that does both of those, that’s really difficult. Don’t try, really, really. It’s just a side effect. VIDEO ON YOUTUBE: Original photo of Marc Abrahams by David Kessler [https://www.davidkesslerphotography.com/] (background enhanced) Creative Commons Attribution-Share Alike 4.0 International license. [https://creativecommons.org/licenses/by-sa/4.0/deed.en] The post Marc Abrahams: Improbable Research and Ig Nobel Prizes [https://plutopia.io/marc-abrahams-improbable-research-and-ig-nobel-prizes/] first appeared on Plutopia News Network [https://plutopia.io].

16 Mar 2026 - 1 h 2 min

Roy Casagranda on Iran, War, and Global Fallout

In this Plutopia News Network episode, Jon and Scoop talk with political scholar Dr. Roy Casagranda, [https://www.youtube.com/@DrRoyCasagranda] joining from Dubai, about Iran’s modern history, the rise of the Islamic Republic, and the rapidly escalating conflict involving Iran, Israel, and the United States. Roy argues that the crisis is rooted in a long history of oil politics, foreign intervention, and colonial power struggles, and he warns that the current war could spiral into a far broader regional and global catastrophe, disrupting trade, driving up oil prices, destabilizing neighboring states, and increasing the risk of mass displacement and wider war. Throughout the conversation, he also critiques the motives and competence of current U.S. and Israeli leadership, questions claims about democracy and security, and frames the conflict as part of a larger pattern of geopolitical chaos with potentially devastating economic and human consequences. Roy Casagranda: > I think what they decided was we’re going to keep doing this, we’re going to go all in. And their goal is to break the global economy. Their goal is to make it so that the price of oil goes through the roof, that everybody runs out of oil, that India runs out of oil, that Europe runs out of oil. They want to break the GCC economy. They want to break UAE, they want to break, to hurt everybody who’s ever had anything to do with the United States. They want to destroy Israel if they can. They’re gonna go for broke, and their thinking is that eventually the world will turn on the United States because the world will realize the cost that the United States is inflicting on the global economy isn’t worth whatever goal Israel and the United States have. The post Roy Casagranda on Iran, War, and Global Fallout [https://plutopia.io/roy-casagranda-on-iran-war-and-global-fallout/] first appeared on Plutopia News Network [https://plutopia.io].

9 Mar 2026 - 1 h 7 min
Great design, and it's finally easy to find podcasts you actually like
A nice app for listening to podcasts, and the content is varied and interesting
Really nice app, easy to use, and lots of podcasts I didn't know about before.

Choose your subscription

Most popular

Limited offer

Premium

  • Podimo's podcasts

  • No ads in Podimo's podcasts

  • Cancel anytime

1 month for €1
Then €7.99 / month

Start now

Premium

20 hours of audiobooks

  • Podimo's podcasts

  • No ads in Podimo's podcasts

  • Cancel anytime

30-day free trial
Then €9.99 / month

Start for free

Premium

100 hours of audiobooks

  • Podimo's podcasts

  • No ads in Podimo's podcasts

  • Cancel anytime

30-day free trial
Then €19.99 / month

Start for free
