Machines Like Us

Michael Pollan Says AI Isn’t Conscious – But Plants Might Be

7 April 2026 · 40 min

Description

Four years ago, a Google engineer named Blake Lemoine went public with a strange claim: he thought the large language model he’d been working on had become sentient. At the time, virtually no one took him seriously (including, it would seem, Google, which promptly fired him). But lately, it’s started to seem like Lemoine might have been on to something. When I interviewed Geoffrey Hinton last year, he was pretty confident that artificial intelligence was already exhibiting signs of sentience. And Dario Amodei, the CEO of Anthropic, has said that he can’t be sure his chatbot, Claude, isn’t conscious.

But what exactly does that mean? A chatbot may be intelligent, but does it have a sense of self? And what would happen if it did? These are the kinds of strange, mind-bending questions Michael Pollan wrestles with in his new book, A World Appears: A Journey Into Consciousness. It’s the kind of book that raises more questions than it answers. But as Silicon Valley continues to flirt with the idea of building artificial consciousness – of designing machines that don’t just think, but feel – these are questions we should probably start asking.

MENTIONED:
  • A World Appears: A Journey Into Consciousness [https://www.penguinrandomhouse.ca/books/646644/a-world-appears-by-michael-pollan/9781984881991], by Michael Pollan

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com [https://pcm.adswizz.com] for information about our collection and use of personal data for advertising.


All episodes

113 episodes


Does 21st Century Politics Still Need Politicians?

When Prime Minister Mark Carney took the floor at the recent Liberal convention, he described a future where AI benefits all Canadians – not just a lucky few. It’s an optimistic vision. But according to political theorist Hélène Landemore and democratic innovator Peter MacLeod, our current political system just isn’t capable of delivering on it. Instead, Landemore, a Yale professor and the author of Politics Without Politicians, argues that ordinary citizens – not politicians – should be the ones calling the shots. MacLeod has spent more than twenty years putting that idea into practice in Canada. His new book is Democracy’s Second Act: Why Politics Needs The Public.

Our conversation isn’t really about artificial intelligence. But it is about whether our current form of politics is capable of governing it – or whether a radical new technology demands an equally radical form of governance.

MENTIONED:
  • Politics Without Politicians: The Case for Citizen Rule [https://www.penguinrandomhouse.com/books/730879/politics-without-politicians-by-helene-landemore/], by Hélène Landemore
  • Democratic Reason: Politics, Collective Intelligence, and the Rule of the Many [https://press.princeton.edu/books/paperback/9780691176390/democratic-reason?srsltid=AfmBOornnl-2r8tAI_MsSumhuv2dWHZaBG-bPP35PavPStGDfYZMA3e6], by Hélène Landemore
  • Democracy’s Second Act: Why Politics Needs the Public [https://utppublishing.com/doi/book/10.3138/9781487517137], by Peter MacLeod and Richard Johnson

21 April 2026 · 44 min

Michael Pollan Says AI Isn’t Conscious – But Plants Might Be


7 April 2026 · 40 min

Why Did We Stop Talking About The AI Apocalypse?

Just a few years ago, it seemed like all anyone in AI wanted to talk about was existential risk – the idea that an artificial superintelligence could eventually break containment and destroy humanity. More than 30,000 experts signed an open letter demanding a pause on AI development; bills were drafted that would constrain the most powerful new models; and the “godfathers” of AI were travelling around the world, warning anyone who would listen that we were hurtling toward our extinction.

And then: we moved on. We started using AI for work, and school, and to plan our kids’ birthday parties. Collectively, we just stopped talking about the end of the world.

But Nate Soares didn’t move on. Last year, the artificial intelligence researcher wrote a book with Eliezer Yudkowsky called If Anyone Builds It, Everyone Dies. As you can probably tell from the title, the book is unequivocal: if we keep going down the path we’re on, it will almost certainly lead to the end of our species. Not everyone is convinced by the arguments Soares makes. But if there’s even a chance he’s right, I think we need to hear him out.

MENTIONED:
  • If Anyone Builds It, Everyone Dies [https://www.penguin.co.uk/books/474267/if-anyone-builds-it-everyone-dies-by-soares-eliezer-yudkowsky-and-nate/9781847928924], by Eliezer Yudkowsky and Nate Soares

24 March 2026 · 46 min

In the Wake of Tumbler Ridge, Can We Trade Privacy for Safety?

On Feb. 10, 2026, an 18-year-old opened fire at a high school in Tumbler Ridge, B.C., killing eight people before turning a gun on herself. In the weeks that followed, OpenAI admitted that the perpetrator had been discussing the attack with ChatGPT – and that the company had chosen not to alert authorities. In the aftermath of one of the deadliest shootings in our country’s history, many Canadians are asking: why not?

It’s a reasonable question. But the idea that AI companies should automatically report violent conversations to police is more complicated than it sounds. To try and unpack it, I spoke with Meredith Whittaker, the president of Signal – an encrypted messaging platform that doesn’t collect your data, serve you ads, or track who you’re talking to. Whittaker runs the most private messaging app on the planet, which also means there is almost certainly illegal activity happening on Signal that no one, including her, knows about.

But this conversation isn’t just about Tumbler Ridge. The instinct to trade privacy for “safety” is reshaping the entire tech landscape: Amazon now lets you scan a whole neighbourhood’s worth of Ring camera footage; Australia requires teenagers to verify their ages before accessing social media. These technologies offer real value – but they all ask you to give something up in return. So I wanted to ask Whittaker why that trade might not be worth making.

Editor’s note: A previous version of this article reported an incorrect final tally of the injured during the shooting at Tumbler Ridge. Two were critically injured. The podcast audio also includes an incorrect final tally of the injured.

10 March 2026 · 46 min

When Did Common Sense AI Policy Become Radical?

A couple of months ago, I joined the Canadian government’s AI strategy task force. Out of thirty members, I was one of only four focused on safety. Everyone else was there to talk growth. It reflects a pattern playing out all over the world: we’re going all in on AI, and regulation will only slow us down.

It’s hard to overstate how quickly this shift happened. Just a few years ago, even Elon Musk was calling for an industry-wide pause on AI development, and the Biden administration was developing an “AI Bill of Rights” – one of the most thoughtful and comprehensive frameworks for AI regulation I’ve ever seen. The architect of that initiative was Dr. Alondra Nelson. Today, she leads the Science, Technology, and Social Values Lab at the Institute for Advanced Study and is fresh off a stint on Zohran Mamdani’s mayoral transition team in New York.

I wanted to have her on to wrestle with an urgent question: how do you make a technology safe when nobody seems particularly interested in regulating it – and what might happen if we don’t?

MENTIONED:
  • Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People [https://marketingstorageragrs.blob.core.windows.net/webfiles/Blueprint-for-an-AI-Bill-of-Rights.pdf], by the White House Office of Science and Technology Policy
  • The mirage of AI deregulation [https://www.science.org/doi/10.1126/science.aee4900], by Alondra Nelson (Science)
  • International AI Safety Report 2026 [https://internationalaisafetyreport.org/sites/default/files/2026-02/international-ai-safety-report-2026_0.pdf], by Yoshua Bengio et al.

24 February 2026 · 37 min