
The AI Fundamentalists
Podcast by Dr. Andrew Clark & Sid Mangalik

About The AI Fundamentalists
A podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses.
All episodes
34 episodes
We continue our series about building agentic AI systems from the ground up and for desired accuracy. In this episode, we explore linear programming and optimization methods that enable reliable decision-making within constraints.

Show notes:
* Linear programming allows us to solve problems with multiple constraints, like finding optimal flights that meet budget requirements
* The Lagrange multiplier method helps find optimal solutions within constraints by reformulating utility functions
* Combinatorial optimization handles discrete choices like selecting specific flights rather than continuous variables
* Dynamic programming techniques break complex problems into manageable subproblems to find solutions efficiently
* Mixed integer programming combines continuous variables (like budget) with discrete choices (like flights)
* Neurosymbolic approaches potentially offer conversational interfaces with the reliability of mathematical solvers
* Unlike pattern-matching LLMs, mathematical optimization guarantees solutions that respect user constraints

Make sure you check out Part 1: Mechanism design [https://www.buzzsprout.com/2186686/episodes/17193538-mechanism-design-building-smarter-ai-agents-from-the-fundamentals-part-1] and Part 2: Utility functions [https://www.buzzsprout.com/2186686/episodes/17323103-utility-functions-building-smarter-ai-agents-from-the-fundamentals-part-2]. In the next episode, we'll pull together the components from these three episodes to demonstrate a complete travel agent AI implementation with code examples and governance considerations.

What we're reading:
* Burn Book [https://www.amazon.com/Burn-Book-Tech-Love-Story/dp/1982163909] - Kara Swisher, March 2025
* The Signal and the Noise [https://www.amazon.com/dp/159420411X/] - Nate Silver, 2012
* Leadership in Turbulent Times [https://www.amazon.com/Leadership-Turbulent-Doris-Kearns-Goodwin/dp/1476795924] - Doris Kearns Goodwin

What did you think? Let us know. [https://www.buzzsprout.com/twilio/text_messages/2186686/open_sms]

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
* LinkedIn [https://www.linkedin.com/company/the-ai-fundamentalists/] - Episode summaries, shares of cited articles, and more.
* YouTube [https://www.youtube.com/@TheAIFundamentalists-bm1zn] - Was it something that we said? Good. Share your favorite quotes.
* Visit our page [https://www.monitaur.ai/podcast] - see past episodes and submit your feedback! It continues to inspire future episodes.
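The episode itself stays at the conceptual level. As an illustration of the budget-constrained, discrete flight selection the show notes describe, here is a minimal sketch in plain Python; it enumerates the combinations rather than calling a solver, and the flight names, prices, comfort scores, and budget are all hypothetical.

```python
# A sketch of budget-constrained flight selection as combinatorial
# optimization, using only the standard library. All flight options,
# prices, comfort scores, and the budget are hypothetical.
from itertools import product

# Each leg offers several (name, price, comfort_score) options.
legs = [
    [("F1", 120, 0.6), ("F2", 200, 0.9)],  # outbound options
    [("F3", 100, 0.5), ("F4", 180, 0.8)],  # return options
]
BUDGET = 320

def utility(choice):
    """Additive utility: total comfort of the chosen flights."""
    return sum(comfort for _, _, comfort in choice)

best = None
for choice in product(*legs):  # enumerate every combination of legs
    cost = sum(price for _, price, _ in choice)
    if cost > BUDGET:          # hard constraint: never exceed the budget
        continue
    if best is None or utility(choice) > utility(best):
        best = choice

print(best)  # (('F1', 120, 0.6), ('F4', 180, 0.8)): cost 300, comfort 1.4
```

A real system would hand the same formulation to a mixed integer programming solver once the number of combinations grows too large to enumerate, which is the guarantee-preserving path the episode contrasts with pattern-matching LLMs.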

The hosts examine utility functions as the mathematical basis for decision-making in AI systems. They use the example of a travel agent that doesn't get tired and can be scaled indefinitely to meet growing customer demand. They also contrast this structured, economics-based approach with the problems of using large language models for multi-step tasks. This episode is part 2 of our series about building smarter AI agents from the fundamentals. Listen to Part 1 about mechanism design HERE [https://www.buzzsprout.com/2186686/episodes/17193538-mechanism-design-building-smarter-ai-agents-from-the-fundamentals-part-1].

Show notes:
• Discussing the current AI landscape where companies are discovering implementation is harder than anticipated
• Introducing the travel agent use case requiring ingestion, reasoning, execution, and feedback capabilities
• Explaining why LLMs aren't designed for optimization tasks despite their conversational abilities
• Breaking down utility functions from economic theory as a way to quantify user preferences
• Exploring concepts like indifference curves and marginal rates of substitution for preference modeling
• Examining four cases of utility relationships: independent goods, substitutes, complements, and diminishing returns
• Highlighting how mathematical optimization provides explainability and guarantees that LLMs cannot
• Setting up for future episodes that will detail the technical implementation of utility-based agents

Subscribe so that you don't miss the next episode. In part 3, Andrew and Sid will explain linear programming and other optimization techniques to build upon these utility functions and create truly personalized travel experiences.
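To make the preference-modeling ideas concrete, here is a minimal sketch of a Cobb-Douglas utility function, a textbook form exhibiting diminishing returns, with the marginal rate of substitution estimated numerically. The weight alpha and the input values are hypothetical illustrations, not values from the episode.

```python
# A sketch of a Cobb-Douglas utility function, a textbook preference
# model with diminishing returns in each good. alpha is a hypothetical
# user preference weight.
def utility(x, y, alpha=0.6):
    """U(x, y) = x^alpha * y^(1 - alpha); higher is better."""
    return (x ** alpha) * (y ** (1 - alpha))

def mrs(x, y, alpha=0.6, eps=1e-6):
    """Marginal rate of substitution: units of y the user would trade
    for one more unit of x, estimated by finite differences."""
    du_dx = (utility(x + eps, y, alpha) - utility(x, y, alpha)) / eps
    du_dy = (utility(x, y + eps, alpha) - utility(x, y, alpha)) / eps
    return du_dx / du_dy

# Along an indifference curve, MRS falls as x grows: diminishing returns.
print(round(mrs(2.0, 4.0), 3))  # ~3.0; closed form is alpha*y / ((1-alpha)*x)
print(round(mrs(4.0, 4.0), 3))  # ~1.5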

What if we've been approaching AI agents all wrong? While the tech world obsesses over large language models (LLMs) and prompt engineering, there's a foundational approach that could revolutionize how we build trustworthy AI systems: mechanism design.

This episode kicks off an exciting series where we're building AI agents "the hard way"—using principles from game theory and microeconomics to create systems with predictable, governable behavior. Rather than hoping an LLM can magically handle complex multi-step processes like booking travel, Sid and Andrew explore how to design the rules of the game so that even self-interested agents produce optimal outcomes.

Drawing from our conversation with Dr. Michael Zargham (Episode 32 [https://www.buzzsprout.com/2186686/episodes/17119026-principles-agents-and-the-chain-of-accountability-in-ai-systems]), we break down why LLM-based agents struggle with transparency and governance. The "surface area" for errors expands dramatically when you can't explain how decisions are made across multiple steps. Instead, mechanism design creates clear states with defined optimization parameters at each stage—making the entire system more reliable and accountable.

We explore the famous Prisoner's Dilemma to illustrate how individual incentives can work against collective benefits without proper system design. Then we introduce the Vickrey-Clarke-Groves mechanism, which ensures AI agents truthfully reveal preferences and actively participate in multi-step processes—critical properties for enterprise applications.

Beyond technical advantages, this approach offers something profound: a way to preserve humanity in increasingly automated systems. By explicitly designing for values, fairness, and social welfare, we're not just building better agents—we're ensuring AI serves human needs rather than replacing human thought.

Subscribe now to follow our journey as we build an agentic travel system from first principles, applying these concepts to real business challenges. Have questions about mechanism design for AI? Send them our way for future episodes!
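As a concrete taste of the VCG family mentioned above, here is a minimal sketch of a Vickrey (second-price) auction, its simplest instance: because the winner pays the runner-up's bid rather than their own, bidding one's true value is a dominant strategy. The agent names and bids are hypothetical.

```python
# A sketch of a Vickrey (second-price) auction, the simplest member of
# the VCG family: the winner pays the runner-up's bid, which makes
# truthful bidding a dominant strategy. The bids are hypothetical.
def vickrey_auction(bids):
    """bids: dict mapping agent name -> bid. Returns (winner, price)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

print(vickrey_auction({"A": 120, "B": 90, "C": 75}))  # ('A', 90)
```

Overbidding risks winning at a loss and underbidding risks losing an item worth more than the price, so no agent gains by misreporting preferences, which is the truthfulness property the episode highlights.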

Dr. Michael Zargham [https://scholar.google.com/citations?user=bbdc3vkAAAAJ&hl=en] provides a systems engineering perspective on AI agents, emphasizing accountability structures and the relationship between principals who deploy agents and the agents themselves. In this episode, he brings clarity to the often misunderstood concept of agents in AI by grounding them in established engineering principles rather than treating them as mysterious or elusive entities.

Show highlights:
• Agents should be understood through the lens of the principal-agent relationship, with clear lines of accountability
• True validation of AI systems means ensuring outcomes match intentions, not just optimizing loss functions
• LLMs by themselves are "high-dimensional word calculators," not agents - agents are more complex systems with LLMs as components
• Guardrails provide deterministic constraints ("musts" or "shalls") versus constitutional AI's softer guidance ("shoulds")
• Systems engineering approaches from civil engineering and materials science offer valuable frameworks for AI development
• Authority and accountability must align - people shouldn't be held responsible for systems they don't have authority to control
• The transition from static input-output to closed-loop dynamical systems represents the shift toward truly agentic behavior
• Robust agent systems require both exploration (lab work) and exploitation (hardened deployment) phases with different standards

Explore Dr. Zargham's work:
* Protocols and Institutions [https://zenodo.org/records/15116453] (Feb 27, 2025)
* Comments Submitted by BlockScience, University of Washington APL Information Risk and Synthetic Intelligence Research Initiative (IRSIRI), Cognitive Security and Education Forum (COGSEC), and the Active Inference Institute (AII) to the Networking and Information Technology Research and Development National Coordination Office's Request for Comment on The Creation of a National Digital Twins R&D Strategic Plan NITRD-2024-13379 [https://zenodo.org/records/13273682] (Aug 8, 2024)
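To illustrate the "musts" versus "shoulds" distinction in the highlights, here is a minimal sketch of a deterministic guardrail: a hard check applied to an agent's proposed action before it executes. The action format and the spending rule are hypothetical, not drawn from Dr. Zargham's work.

```python
# A sketch of a deterministic guardrail in the "musts" sense: a hard
# check applied to an agent's proposed action before it executes. The
# action format and the spending rule are hypothetical.
MAX_SPEND = 500  # a limit the agent must never exceed

def guardrail(action: dict) -> dict:
    """Reject, rather than merely discourage, any rule-violating action."""
    if action.get("type") == "purchase" and action.get("amount", 0) > MAX_SPEND:
        raise ValueError(f"blocked: spend {action['amount']} exceeds {MAX_SPEND}")
    return action  # the action satisfies every hard constraint

print(guardrail({"type": "purchase", "amount": 120}))    # passes
# guardrail({"type": "purchase", "amount": 900})         # raises ValueError
```

Unlike constitutional-style guidance, the constraint here is enforced outside the model: no phrasing of the request can talk the system past the check.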

Part 2 of this series could have easily been renamed "AI for science: The expert's guide to practical machine learning." We continue our discussion with Christoph Molnar and Timo Freiesleben to look at how scientists can apply supervised machine learning techniques from the previous episode [https://www.buzzsprout.com/2186686/episodes/16853528-supervised-machine-learning-for-science-with-christoph-molnar-and-timo-freiesleben-part-1] in their research.

Introduction to supervised ML for science (0:00)
* Welcome back to Christoph Molnar and Timo Freiesleben, co-authors of "Supervised Machine Learning for Science: How to Stop Worrying and Love Your Black Box [https://leanpub.com/ml-for-science]"

The model as the expert? (1:00)
* Evaluation metrics have profound downstream effects on all modeling decisions
* Data augmentation offers a simple yet powerful way to incorporate domain knowledge
* Domain expertise is often undervalued in data science despite being crucial

Measuring causality: Metrics and blind spots (10:10)
* Causality approaches in ML range from exploring associations to inferring treatment effects

Connecting models to scientific understanding (18:00)
* Interpretation methods must stay within realistic data distributions to yield meaningful insights

Robustness across distribution shifts (26:40)
* Robustness requires understanding what distribution shifts affect your model
* Pre-trained models and transfer learning provide promising paths to more robust scientific ML

Reproducibility challenges in ML and science (35:00)
* Reproducibility challenges differ between traditional science and machine learning

Go back to listen to part one [https://www.buzzsprout.com/2186686/episodes/16853528-supervised-machine-learning-for-science-with-christoph-molnar-and-timo-freiesleben-part-1] of this series for the conceptual foundations that support these practical applications. Check out Christoph and Timo's book "Supervised Machine Learning for Science: How to Stop Worrying and Love Your Black Box [https://leanpub.com/ml-for-science]," available online now.
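As one concrete example of the data-augmentation point above, here is a minimal sketch: if the label is known to be invariant to small measurement noise, training on jittered copies bakes that domain knowledge into the model. The data, shapes, and noise scale are hypothetical, not from the book.

```python
# A sketch of data augmentation as encoded domain knowledge: if labels
# are known to be invariant to small measurement noise, train on jittered
# copies. The data, shapes, and noise scale are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))      # original measurements
y = (X[:, 0] > 0).astype(int)      # labels

def augment(X, y, copies=3, noise=0.05):
    """Append noisy copies of X; labels repeat unchanged because the
    target is assumed invariant to noise of this scale."""
    noisy = [X + rng.normal(scale=noise, size=X.shape) for _ in range(copies)]
    return np.vstack([X] + noisy), np.tile(y, copies + 1)

X_aug, y_aug = augment(X, y)
print(X_aug.shape, y_aug.shape)    # (400, 5) (400,)
```

The augmentation is only as good as the invariance assumption behind it, which is exactly where the domain expertise the episode emphasizes comes in.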
