
Machine Learning Street Talk (MLST)

Podcast by Machine Learning Street Talk (MLST)

English

Technology & Science


About Machine Learning Street Talk (MLST)

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour: we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Dr. Keith Duggar, MIT Ph.D. (https://www.linkedin.com/in/dr-keith-duggar/).

All episodes

237 episodes

The Mathematical Foundations of Intelligence [Professor Yi Ma]

What if everything we think we know about AI understanding is wrong? Is compression the key to intelligence? Or is there something more: a leap from memorization to true abstraction? In this fascinating conversation, we sit down with **Professor Yi Ma**, world-renowned expert in deep learning, IEEE/ACM Fellow, and author of the groundbreaking new book *Learning Deep Representations of Data Distributions*. Professor Ma challenges our assumptions about what large language models actually do, reveals why 3D reconstruction isn't the same as understanding, and presents a unified mathematical theory of intelligence built on just two principles: **parsimony** and **self-consistency**.

**SPONSOR MESSAGES START**
- Prolific - Quality data. From real people. For faster breakthroughs. https://www.prolific.com/?utm_source=mlst
- cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy. Hiring an SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst Submit investment deck: https://cyber.fund/contact?utm_source=mlst
**END**

Key insights:
- **LLMs Don't Understand, They Memorize**: language models process text (*already* compressed human knowledge) using the same mechanism we use to learn from raw data.
- **The Illusion of 3D Vision**: Sora, NeRFs and similar models that can reconstruct 3D scenes still fail badly at basic spatial reasoning.
- **"All Roads Lead to Rome"**: why adding noise is *necessary* for discovering structure.
- **Why Gradient Descent Actually Works**: natural optimization landscapes are surprisingly smooth, a "blessing of dimensionality".
- **Transformers from First Principles**: transformer architectures can be mathematically derived from compression principles.

INTERACTIVE AI TRANSCRIPT PLAYER w/REFS (ReScript): https://app.rescript.info/public/share/Z-dMPiUhXaeMEcdeU6Bz84GOVsvdcfxU_8Ptu6CTKMQ

About Professor Yi Ma: Yi Ma is the inaugural director of the School of Computing and Data Science at the University of Hong Kong and a visiting professor at UC Berkeley.
https://people.eecs.berkeley.edu/~yima/
https://scholar.google.com/citations?user=XqLiBQMAAAAJ&hl=en
https://x.com/YiMaTweets

**Slides from this conversation:** https://www.dropbox.com/scl/fi/sbhbyievw7idup8j06mlr/slides.pdf?rlkey=7ptovemezo8bj8tkhfi393fh9&dl=0

**Related talks by Professor Ma:**
- Pursuing the Nature of Intelligence (ICLR): https://www.youtube.com/watch?v=LT-F0xSNSjo
- Earlier talk at Berkeley: https://www.youtube.com/watch?v=TihaCUjyRLM

TIMESTAMPS:
00:00:00 Introduction
00:02:08 The First Principles Book & Research Vision
00:05:21 Two Pillars: Parsimony & Consistency
00:09:50 Evolution vs. Learning: The Compression Mechanism
00:14:36 LLMs: Memorization Masquerading as Understanding
00:19:55 The Leap to Abstraction: Empirical vs. Scientific
00:27:30 Platonism, Deduction & The ARC Challenge
00:35:57 Specialization & The Cybernetic Legacy
00:41:23 Deriving Maximum Rate Reduction
00:48:21 The Illusion of 3D Understanding: Sora & NeRF
00:54:26 All Roads Lead to Rome: The Role of Noise
01:00:14 Benign Non-Convexity: Why Optimization Works
01:06:35 Double Descent & The Myth of Overfitting
01:14:26 Self-Consistency: Closed-Loop Learning
01:21:03 Deriving Transformers from First Principles
01:30:11 Verification & The Kevin Murphy Question
01:34:11 CRATE vs. ViT: White-Box AI & Conclusion

REFERENCES:
- [00:03:04] Learning Deep Representations of Data Distributions (book): https://ma-lab-berkeley.github.io/deep-representation-learning-book/
- [00:18:38] A Brief History of Intelligence: https://www.amazon.co.uk/BRIEF-HISTORY-INTELLIGEN-HB-Evolution/dp/0008560099
- [00:38:14] Cybernetics: https://mitpress.mit.edu/9780262730099/cybernetics/
- [00:03:14] 3-D Vision book (Yi Ma): https://link.springer.com/book/10.1007/978-0-387-21779-6
- Further refs on the ReScript link and YouTube.
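The "Deriving Maximum Rate Reduction" segment refers to the maximal coding rate reduction (MCR²) objective from Ma and collaborators' earlier work, which the book builds on: expand the coding rate of the representation as a whole while compressing each class into its own subspace. A minimal NumPy sketch of that objective (the eps value and the toy data are illustrative assumptions, not numbers from the episode):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Rate-distortion estimate of the volume spanned by representations Z (d x n)."""
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    """MCR^2: Delta R = R(Z) - sum_j (n_j / n) * R(Z_j).

    Maximizing this expands all representations jointly while compressing
    each class into its own low-dimensional subspace (the parsimony pillar).
    """
    n = Z.shape[1]
    compressed = sum((np.sum(labels == c) / n) * coding_rate(Z[:, labels == c], eps)
                     for c in np.unique(labels))
    return coding_rate(Z, eps) - compressed

# Toy check (illustrative data, not from the episode): two classes on noisy
# orthogonal directions give a large rate reduction; collapsing both classes
# onto the same subspace drives it to zero.
rng = np.random.default_rng(0)
A = np.outer([1, 0, 0, 0], np.ones(50)) + 0.05 * rng.normal(size=(4, 50))
B = np.outer([0, 1, 0, 0], np.ones(50)) + 0.05 * rng.normal(size=(4, 50))
labels = np.repeat([0, 1], 50)
print(rate_reduction(np.concatenate([A, B], axis=1), labels))  # large: distinct subspaces
print(rate_reduction(np.concatenate([A, A], axis=1), labels))  # ~0: collapsed
```

The "white-box" thread of the conversation (CRATE vs. ViT) is the claim that transformer-style layers can be derived as unrolled optimization steps on this kind of compression objective.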

Yesterday - 1 h 39 min

Pedro Domingos: Tensor Logic Unifies AI Paradigms

Pedro Domingos, author of the bestselling book "The Master Algorithm," introduces his latest work: Tensor Logic, a new programming language he believes could become the fundamental language for artificial intelligence. Think of it like this: physics found its language in calculus. Circuit design found its language in Boolean logic. Pedro argues that AI has been missing its language, until now.

**SPONSOR MESSAGES START**
- Build your ideas with AI Studio from Google: http://ai.studio/build
- Prolific - Quality data. From real people. For faster breakthroughs. https://www.prolific.com/?utm_source=mlst
- cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy. Hiring an SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst Submit investment deck: https://cyber.fund/contact?utm_source=mlst
**END**

Current AI is split between two worlds that don't play well together:
- Deep learning (neural networks, transformers, ChatGPT): great at learning from data, terrible at logical reasoning.
- Symbolic AI (logic programming, expert systems): great at logical reasoning, terrible at learning from messy real-world data.

Tensor Logic unifies both. It's a single language where you can:
- Write logical rules that the system can actually learn and modify.
- Do transparent, verifiable reasoning (no hallucinations).
- Mix "fuzzy" analogical thinking with rock-solid deduction.

INTERACTIVE TRANSCRIPT: https://app.rescript.info/public/share/NP4vZQ-GTETeN_roB2vg64vbEcN7isjJtz4C86WSOhw

TOC:
00:00:00 - Introduction
00:04:41 - What is Tensor Logic?
00:09:59 - Tensor Logic vs PyTorch & Einsum
00:17:50 - The Master Algorithm Connection
00:20:41 - Predicate Invention & Learning New Concepts
00:31:22 - Symmetries in AI & Physics
00:35:30 - Computational Reducibility & The Universe
00:43:34 - Technical Details: RNN Implementation
00:45:35 - Turing Completeness Debate
00:56:45 - Transformers vs Turing Machines
01:02:32 - Reasoning in Embedding Space
01:11:46 - Solving Hallucination with Deductive Modes
01:16:17 - Adoption Strategy & Migration Path
01:21:50 - AI Education & Abstraction
01:24:50 - The Trillion-Dollar Waste

REFS:
- Tensor Logic: The Language of AI [Pedro Domingos]: https://arxiv.org/abs/2510.12269
- The Master Algorithm [Pedro Domingos]: https://www.amazon.co.uk/Master-Algorithm-Ultimate-Learning-Machine/dp/0241004543
- Einsum is All You Need [Tim Rocktäschel]: https://rockt.ai/2018/04/30/einsum https://www.youtube.com/watch?v=6DrCq8Ry2cw
- Autoregressive Large Language Models are Computationally Universal [Dale Schuurmans et al., GDM]: https://arxiv.org/abs/2410.03170
- Memory Augmented Large Language Models are Computationally Universal [Dale Schuurmans]: https://arxiv.org/pdf/2301.04589
- On the Computational Power of Neural Nets [Siegelmann, 1995]: https://binds.cs.umass.edu/papers/1995_Siegelmann_JComSysSci.pdf
- Sébastien Bubeck: https://www.reddit.com/r/OpenAI/comments/1oacp38/openai_researcher_sebastian_bubeck_falsely_claims/
- I Am a Strange Loop [Douglas Hofstadter]: https://www.amazon.co.uk/Am-Strange-Loop-Douglas-Hofstadter/dp/0465030793
- Stephen Wolfram: https://www.youtube.com/watch?v=dkpDjd2nHgo
- The Complex World: An Introduction to the Foundations of Complexity Science [David C. Krakauer]: https://www.amazon.co.uk/Complex-World-Introduction-Foundations-Complexity/dp/1947864629
- Geometric Deep Learning: https://www.youtube.com/watch?v=bIZB1hIJ4u8
- Andrew Wilson (NYU): https://www.youtube.com/watch?v=M-jTeBCEGHc
- Yi Ma: https://www.patreon.com/posts/yi-ma-scientific-141953348
- The Road to Reality [Roger Penrose]: https://www.amazon.co.uk/Road-Reality-Complete-Guide-Universe/dp/0099440687
- Artificial Intelligence: A Modern Approach [Russell and Norvig]: https://www.amazon.co.uk/Artificial-Intelligence-Modern-Approach-Global/dp/1292153962
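The unification rests on one observation, flagged in the "Tensor Logic vs PyTorch & Einsum" segment: a logical rule is an Einstein summation over Boolean tensors: join relations on their shared variable, then project it out. A minimal NumPy sketch of that reading (an illustrative toy, not Tensor Logic's actual syntax):

```python
import numpy as np

# A binary relation as a Boolean tensor: parent[x, y] = 1 iff x is y's parent.
# People (toy example): 0 = Ada, 1 = Bob, 2 = Cy.
parent = np.zeros((3, 3))
parent[0, 1] = 1.0  # Ada is Bob's parent
parent[1, 2] = 1.0  # Bob is Cy's parent

# Datalog rule:  grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
# As a tensor equation: multiply on the shared index Y and sum it out.
grandparent = np.einsum("xy,yz->xz", parent, parent)

# Deductive mode: threshold back to crisp truth values (no hallucinations).
# Keeping real-valued entries instead gives the "fuzzy", learnable reading.
print(grandparent[0, 2] > 0)  # True: Ada is Cy's grandparent
```

Because the same einsum is the workhorse of deep learning, a rule like this can sit in the same program as a neural layer and be trained by gradient descent, which is the unification Pedro is arguing for.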

08.12.2025 - 1 h 27 min

He Co-Invented the Transformer. Now: Continuous Thought Machines - Llion Jones and Luke Darlow [Sakana AI]

The Transformer architecture (which powers ChatGPT and nearly all modern AI) might be trapping the industry in a local rut, preventing us from finding true intelligent reasoning, according to the person who co-invented it. Llion Jones and Luke Darlow, key figures at the research lab Sakana AI, join the show to make this provocative argument, and also to introduce new research which might lead the way forward.

**SPONSOR MESSAGES START**
- Build your ideas with AI Studio from Google: http://ai.studio/build
- Tufa AI Labs is hiring ML Research Engineers: https://tufalabs.ai/
- cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy. Hiring an SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst Submit investment deck: https://cyber.fund/contact?utm_source=mlst
**END**

**The "Spiral" Problem**: Llion uses a striking visual analogy to explain what current AI is missing. If you ask a standard neural network to understand a spiral shape, it solves it by drawing tiny straight lines that just happen to look like a spiral. It "fakes" the shape without understanding the concept of spiraling.

**Introducing the Continuous Thought Machine (CTM)**: Luke Darlow dives deep into their solution, a biology-inspired model that fundamentally changes how AI processes information.
- The maze analogy: Luke explains that standard AI tries to solve a maze by staring at the whole image and guessing the entire path instantly. Their new machine "walks" through the maze step by step.
- Thinking time: this allows the AI to "ponder." If a problem is hard, the model can naturally spend more time thinking about it before answering, effectively allowing it to correct its own mistakes and backtrack, something current language models struggle to do genuinely.

https://sakana.ai/
https://x.com/YesThisIsLion
https://x.com/LearningLukeD

TRANSCRIPT: https://app.rescript.info/public/share/crjzQ-Jo2FQsJc97xsBdfzfOIeMONpg0TFBuCgV2Fu8

TOC:
00:00:00 - Stepping Back from Transformers
00:00:43 - Introduction to Continuous Thought Machines (CTM)
00:01:09 - The Changing Atmosphere of AI Research
00:04:13 - Sakana’s Philosophy: Research Freedom
00:07:45 - The Local Minimum of Large Language Models
00:18:30 - Representation Problems: The Spiral Example
00:29:12 - Technical Deep Dive: CTM Architecture
00:36:00 - Adaptive Computation & Maze Solving
00:47:15 - Model Calibration & Uncertainty
01:00:43 - Sudoku Bench: Measuring True Reasoning

REFS:
- Why Greatness Cannot Be Planned [Kenneth Stanley]: https://www.amazon.co.uk/Why-Greatness-Cannot-Planned-Objective/dp/3319155237 https://www.youtube.com/watch?v=lhYGXYeMq_E
- The Hardware Lottery [Sara Hooker]: https://arxiv.org/abs/2009.06489 https://www.youtube.com/watch?v=sQFxbQ7ade0
- Continuous Thought Machines [Luke Darlow et al. / Sakana]: https://arxiv.org/abs/2505.05522 https://sakana.ai/ctm/
- LSTM: The Comeback Story? [Prof. Sepp Hochreiter]: https://www.youtube.com/watch?v=8u2pW2zZLCs
- Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis [Kumar/Stanley]: https://arxiv.org/pdf/2505.11581
- A Spline Theory of Deep Networks [Randall Balestriero]: https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf https://www.youtube.com/watch?v=86ib0sfdFtw https://www.youtube.com/watch?v=l3O2J3LMxqI
- On the Biology of a Large Language Model [Anthropic, Jack Lindsey et al.]: https://transformer-circuits.pub/2025/attribution-graphs/biology.html
- The ARC Prize 2024 Winning Algorithm [Daniel Franzen and Jan Disselhoff, "The ARChitects"]: https://www.youtube.com/watch?v=mTX_sAq--zY
- Neural Turing Machines [Graves]: https://arxiv.org/pdf/1410.5401
- Adaptive Computation Time for Recurrent Neural Networks [Graves]: https://arxiv.org/abs/1603.08983
- Sudoku Bench [Sakana]: https://pub.sakana.ai/sudoku/
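The CTM builds its "thinking time" out of neuron-level timing and synchronization, which doesn't compress into a few lines; the ancestor idea of input-dependent compute, Graves' Adaptive Computation Time (referenced above), is easier to sketch. A toy version, with all weights and thresholds as illustrative assumptions rather than anything from the paper:

```python
import numpy as np

def ponder(x, W, U, w_out, threshold=0.95, max_ticks=50):
    """Tick a recurrent cell until its answer is confident, then stop.

    Hard inputs keep the confidence near 0.5 for longer and so get more
    ticks: compute adapts to the problem instead of being fixed per pass.
    """
    h = np.zeros(W.shape[0])
    for tick in range(1, max_ticks + 1):
        h = np.tanh(W @ h + U @ x)               # internal state update
        p = 1.0 / (1.0 + np.exp(-(w_out @ h)))   # confidence the answer is "yes"
        if max(p, 1.0 - p) > threshold:
            break                                # confident enough: stop thinking
    return p > 0.5, tick

# Untrained toy weights, purely for illustration.
rng = np.random.default_rng(0)
d_h, d_x = 16, 8
W = 0.3 * rng.normal(size=(d_h, d_h))
U = rng.normal(size=(d_h, d_x))
w_out = rng.normal(size=d_h)
answer, ticks = ponder(rng.normal(size=d_x), W, U, w_out)
print(f"answer={answer}, ticks used={ticks}")
```

As described in the episode, the CTM goes further than a single halting scalar: it reads its representation out of how neurons synchronize across ticks, which is what lets it revise and backtrack mid-thought in the maze example.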

23.11.2025 - 1 h 12 min

Why Humans Are Still Powering AI [Sponsored]

Ever wonder where AI models actually get their "intelligence"? We reveal the dirty secret of Silicon Valley: behind every impressive AI system are thousands of real humans providing crucial data, feedback, and expertise.

Guest: Phelim Bradley, CEO and co-founder of Prolific. Phelim Bradley runs Prolific, a platform that connects AI companies with verified human experts who help train and evaluate their models. Think of it as a sophisticated marketplace matching the right human expertise to the right AI task, whether that's doctors evaluating medical chatbots or coders reviewing AI-generated software.

Prolific: https://prolific.com/?utm_source=mlst
https://uk.linkedin.com/in/phelim-bradley-84300826

The discussion dives into:
- **The human data pipeline**: how AI companies rely on human intelligence to train, refine, and validate their models, something rarely discussed openly.
- **Quality over quantity**: why paying humans well and treating them as partners (not commodities) produces better AI training data.
- **The matching challenge**: how Prolific solves the complex problem of finding the right expert for each specific task, similar to matching Uber drivers to riders but with deep expertise requirements.
- **Future of work**: what it means when human expertise becomes an on-demand service, and why this might actually create more opportunities rather than fewer.
- **Geopolitical implications**: why the centralization of AI development in US tech companies should concern Europe and the UK.

03.11.2025 - 24 min

The Universal Hierarchy of Life - Prof. Chris Kempes [SFI]

"What is life?" - asks Chris Kempes, a professor at the Santa Fe Institute. Chris explains that scientists are moving beyond a purely Earth-based, biological view and are searching for a universal theory of life that could apply to anything, anywhere in the universe. He proposes that things we don't normally consider "alive"—like human culture, language, or even artificial intelligence; could be seen as life forms existing on different "substrates". To understand this, Chris presents a fascinating three-level framework: - Materials: The physical stuff life is made of. He argues this could be incredibly diverse across the universe, and we shouldn't expect alien life to share our biochemistry. - Constraints: The universal laws of physics (like gravity or diffusion) that all life must obey, regardless of what it's made of. This is where different life forms start to look more similar. - Principles: At the highest level are abstract principles like evolution and learning. Chris suggests these computational or "optimization" rules are what truly define a living system. A key idea is "convergence" – using the example of the eye. It's such a complex organ that you'd think it evolved only once. However, eyes evolved many separate times across different species. This is because the physics of light provides a clear "target", and evolution found similar solutions to the problem of seeing, even with different starting materials. **SPONSOR MESSAGES** — Prolific - Quality data. From real people. For faster breakthroughs. https://www.prolific.com/?utm_source=mlst — Check out NotebookLM from Google here - https://notebooklm.google.com/ - it’s really good for doing research directly from authoritative source material, minimising hallucinations. — cyber•Fund https://cyber.fund/?utm_source=mlst is a founder-led investment firm accelerating the cybernetic economy Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst Submit investment deck: https://cyber.fund/contact?utm_source=mlst — Prof. Chris Kempes: https://www.santafe.edu/people/profile/chris-kempes TRANSCRIPT: https://app.rescript.info/public/share/Y2cI1i0nX_-iuZitvlguHvaVLQTwPX1Y_E1EHxV0i9I TOC: 00:00:00 - Introduction to Chris Kempes and the Santa Fe Institute 00:02:28 - The Three Cultures of Science 00:05:08 - What Makes a Good Scientific Theory? 00:06:50 - The Universal Theory of Life 00:09:40 - The Role of Material in Life 00:12:50 - A Hierarchy for Understanding Life 00:13:55 - How Life Diversifies and Converges 00:17:53 - Adaptive Processes and Defining Life 00:19:28 - Functionalism, Memes, and Phylogenies 00:22:58 - Convergence at Multiple Levels 00:25:45 - The Possibility of Simulating Life 00:28:16 - Intelligence, Parasitism, and Spectrums of Life 00:32:39 - Phase Changes in Evolution 00:36:16 - The Separation of Matter and Logic 00:37:21 - Assembly Theory and Quantifying Complexity REFS: Developing a predictive science of the biosphere requires the integration of scientific cultures [Kempes et al] https://www.pnas.org/doi/10.1073/pnas.2209196121 Seeing with an extra sense (“Dangerous prediction”) [Rob Phillips] https://www.sciencedirect.com/science/article/pii/S0960982224009035 The Multiple Paths to Multiple Life [Christopher P. Kempes & David C. 
Krakauer] https://link.springer.com/article/10.1007/s00239-021-10016-2 The Information Theory of Individuality [David Krakauer et al] https://arxiv.org/abs/1412.2447 Minds, Brains and Programs [Searle] https://home.csulb.edu/~cwallis/382/readings/482/searle.minds.brains.programs.bbs.1980.pdf The error threshold https://www.sciencedirect.com/science/article/abs/pii/S0168170204003843 Assembly theory and its relationship with computational complexity [Kempes et al] https://arxiv.org/abs/2406.12176

25.10.2025 - 40 min
