
Plutopia News Network

Podcast by Plutopia News Network

English

Personal stories & conversations



We talk to interesting people via podcast and weekly livestream.

All episodes

344 episodes

Anne Boysen: AI Hype, Agents, and Risk

In this Plutopia podcast episode, futurist and data analyst Anne Boysen [https://futurist.com/futurist-think-tank/futurist-anne-boysen/] argues that today’s AI systems, especially large language models and emerging AI agents, are being adopted far faster than their reliability, transparency, and testability justify. She contrasts older, more deterministic technologies such as traditional search and rule-based systems with today’s probabilistic models, which generate plausible answers without clear provenance, reproducibility, or dependable truth-testing, making them vulnerable to hallucinations, disinformation, and misuse. Anne warns that handing decisions over to AI agents could amplify these risks, especially when users misunderstand AI as precise or authoritative, while also noting that companies often push AI into products out of hype, monetization pressure, or fear of missing out rather than clear user need. At the same time, she acknowledges that narrower, well-guarded uses of AI, such as media enhancement or limited decision support, can be helpful, and she ultimately advocates for careful testing, human oversight, targeted applications, and simple, thoughtful regulation focused on guardrails and accountability rather than blanket overregulation.

Anne Boysen:

> We’re going to start leaving these decisions to agents. AI agents. So on top of all of this probabilistic hodgepodge of maybe truths, and maybe not reproducible truths on top of that, we’re going to start letting agents make decisions for us. So, you’re basically just going to use this interface that may or may not understand you completely and may come up with their own interpretations, and they’re like, “Oh, I thought you said enter my bank account to buy Bitcoin.” I don’t know, like, “That’s what I thought you wanted to do.” And then that could be the result. So, that’s where we are.

The post Anne Boysen: AI Hype, Agents, and Risk [https://plutopia.io/anne-boysen-ai-hype-agents-and-risk/] first appeared on Plutopia News Network [https://plutopia.io].

Mar 23, 2026 - 1 h 0 min

Marc Abrahams: Improbable Research and Ig Nobel Prizes

In this Plutopia News Network interview, Marc Abrahams [https://improbable.com/whatis/about-marc-abrahams/] discusses the Ig Nobel Prizes, [https://improbable.com/ig/winners/] which he founded in 1991 after becoming editor of the “Journal of Irreproducible Results.” [https://en.wikipedia.org/wiki/Journal_of_Irreproducible_Results] These prizes honor real achievements that make people “laugh and then think,” not work that is simply silly or worthless. He describes how the prizes grew from a quirky MIT event into a long-running international celebration supported largely by ticket sales and volunteers, featuring Nobel laureates, comic stage devices like “Miss Sweetie Poo,” [https://improbable.com/2009/08/28/miss-sweetie-poo-the-next-generation/] and handmade awards built from cheap materials. Abrahams discusses how winners are chosen from roughly 9,000 nominations a year through argument and debate, why self-conscious attempts to win usually fail, and how the associated “Annals of Improbable Research” [https://improbable.com/] highlights unusual but meaningful work ranging from pasta physics [https://physics.aps.org/articles/v18/163] to fingernail growth studies and even medical research on colonoscopy explosions. [https://en.wikipedia.org/wiki/Intracolonic_explosion] He also reflects on occasional controversy, especially from officials who misunderstood the spirit of the prizes, and notes a newer challenge: some international winners no longer feel comfortable traveling to the United States, prompting a major shift as the 2026 Ig Nobel ceremony moves to Zurich. [https://improbable.com/2026/03/10/the-ig-nobel-prize-ceremony-is-moving-to-europe-after-35-years-in-the-usa/]

Marc Abrahams:

> If you win an Ig Nobel Prize, you’ve done something that will make almost anyone anywhere immediately laugh, and then start thinking. So there’s something about it that, whatever it is, will just instantly make somebody start laughing, and then it’ll stick in their mind, if we’ve chosen well. And for the next week or so, all they want to do is tell their friends about it, talk about it. But it has nothing to do with whether the thing is good or bad, or valuable or worthless; it could be all of those or none of those. If you set out to create, devise, invent something that has that effect, you’re almost certainly going to fail. You can do one or the other. You can invent something that makes people laugh, or you can invent something that makes people really start thinking. But to invent something that does both of those, that’s really difficult. Don’t try, really, really. It’s just a side effect.

Original photo of Marc Abrahams by David Kessler [https://www.davidkesslerphotography.com/] (background enhanced), Creative Commons Attribution-Share Alike 4.0 International license. [https://creativecommons.org/licenses/by-sa/4.0/deed.en]

The post Marc Abrahams: Improbable Research and Ig Nobel Prizes [https://plutopia.io/marc-abrahams-improbable-research-and-ig-nobel-prizes/] first appeared on Plutopia News Network [https://plutopia.io].

Mar 16, 2026 - 1 h 2 min

Roy Casagranda on Iran, War, and Global Fallout

In this Plutopia News Network episode, Jon and Scoop talk with political scholar Dr. Roy Casagranda, [https://www.youtube.com/@DrRoyCasagranda] joining from Dubai, about Iran’s modern history, the rise of the Islamic Republic, and the rapidly escalating conflict involving Iran, Israel, and the United States. Roy argues that the crisis is rooted in a long history of oil politics, foreign intervention, and colonial power struggles, and he warns that the current war could spiral into a far broader regional and global catastrophe, disrupting trade, driving up oil prices, destabilizing neighboring states, and increasing the risk of mass displacement and wider war. Throughout the conversation, he also critiques the motives and competence of current U.S. and Israeli leadership, questions claims about democracy and security, and frames the conflict as part of a larger pattern of geopolitical chaos with potentially devastating economic and human consequences.

Roy Casagranda:

> I think what they decided was we’re going to keep doing this, we’re going to go all in. And their goal is to break the global economy. Their goal is to make it so that the price of oil goes through the roof, that everybody runs out of oil, that India runs out of oil, that Europe runs out of oil. They want to break the GCC economy. They want to break the UAE, they want to hurt everybody who’s ever had anything to do with the United States. They want to destroy Israel if they can. They’re gonna go for broke, and their thinking is that eventually the world will turn on the United States, because the world will realize the cost that the United States is inflicting on the global economy isn’t worth whatever goal Israel and the United States have.

The post Roy Casagranda on Iran, War, and Global Fallout [https://plutopia.io/roy-casagranda-on-iran-war-and-global-fallout/] first appeared on Plutopia News Network [https://plutopia.io].

Mar 9, 2026 - 1 h 7 min

Kate Devlin: Robot Love

In this episode of the Plutopia News Network podcast, we interview AI and society expert Kate Devlin [https://en.wikipedia.org/wiki/Kate_Devlin] about the rise of AI companions, sex robots, and the evolving relationship between humans and artificial intelligence. Devlin explores why people fall in love with chatbots despite knowing they lack consciousness, tracing the phenomenon back to ancient myths like Pygmalion and forward through science fiction and shows like “Black Mirror.” [https://en.wikipedia.org/wiki/Black_Mirror] She discusses the ethics of AI design, the limits of machine “morality,” concerns about exploitation and “ghost work” [https://en.wikipedia.org/wiki/Ghost_work] behind supposedly autonomous systems, and the need for thoughtful regulation that holds tech companies accountable. The conversation also touches on generational shifts in intimacy, online misogyny, AI’s role in education and law, and the persistent moral panics that accompany new technologies, highlighting Devlin’s view that while AI cannot love us back, the feelings people experience are real, complex, and part of a long human history of forming emotional bonds with our creations.

Kate Devlin:

> So a lot of the science fiction stories feature — usually, if it’s a female robot, they tend to either be incredibly subservient or they tend to break their programming and go rogue, which is sort of a cautionary tale about what happens if feminism gets out of control, and these women break the shackles and rise up against their male owners. There was a “Black Mirror” episode, the “Be Right Back” [https://en.wikipedia.org/wiki/Be_Right_Back] episode, where the husband dies in a car wreck and she creates or she gets a robot version that she can imprint his leftover messages and videos and everything onto, so she can create herself a new version of the husband. But, of course, it’s uncanny — it’s not really him, and it all goes terribly wrong because she doesn’t feel it’s really him. So, lots of good questions there about what we expect, I think, from these artificial alternatives.

The post Kate Devlin: Robot Love [https://plutopia.io/kate-devlin-robot-love/] first appeared on Plutopia News Network [https://plutopia.io].

Mar 2, 2026 - 1 h 1 min

Gareth Branwyn in Slumberland

The Plutopia News Network podcast welcomes writer, editor, and media critic Gareth Branwyn [https://garerthbranwyn.com/] to discuss his workshop “Dreaming for Creatives,” which focuses less on dream symbolism or interpretation and more on mining the “dream-time mind” for usable creative material. Gareth and the Plutopians reminisce about early-1990s zine and cyberculture scenes (The WELL, [https://well.com] FactSheet 5, [https://f5archive.org/] bOING bOING, [https://boingboing.net/] Mondo 2000, [https://www.mondo2000.com/] “Jargon Watch,” [https://www.amazon.com/Jargon-Watch-Pocket-Dictionary-Jitterati/dp/1888869062] and “Street Tech”), then shift into Branwyn’s lifelong dream practice, including lucid dreaming [https://en.wikipedia.org/wiki/Lucid_dream] as a teen and techniques to improve dream recall, especially using a “dream recall tally sheet” and the habit of staying still upon waking to retrieve dream fragments. He describes three liminal sources of creativity: “night thoughts” (hypnagogic scribbles), “night bulbs” (clear middle-of-the-night insights), and dreams themselves. He gives examples of how these have shaped his work and even his name. The conversation also touches on “second sleep,” sleep tracking, recurring flying dreams, sleep paralysis and its eerie “presence” hallucinations, and the idea that paying attention to dreaming, like meditation, can deepen one’s relationship with consciousness — while still warning against turning dream work into an unhealthy obsession.

Gareth Branwyn:

> I’ve only done the workshop once so far, and one thing I wanted to make clear, because when I started talking it up before I did it — people immediately think you’re going to talk about dream interpretation, dream symbolism, which I have basically no interest in, besides the obvious things of that was clearly an anxiety dream, like I lost my wallet, or I lost my phone (I have those a lot), or I got lost at a conference. But I’m not interested in that at all, and so I really needed to make it clear that’s not what this is about. This is really mining your dream-time mind for creative material. That’s really what my interest is.

The post Gareth Branwyn in Slumberland [https://plutopia.io/gareth-branwyn-in-slumberland/] first appeared on Plutopia News Network [https://plutopia.io].

Feb 23, 2026 - 1 h 3 min
