Grounded Truth
Podcast by Watchful
This podcast is free to listen to on all podcast players and in the Podimo app, no subscription required.
All episodes
17 episodes

Welcome to the "Grounded Truth" podcast, where we bring together some of the brightest minds in AI to explore the most pressing topics shaping our future. Our latest episode, "The Future of AI: Doom or Boom?", promises to be a riveting discussion. Joining host John Singleton, Co-founder and Head of Success at Watchful, are Shayan Mohanty, CEO and Co-founder of Watchful, and the hosts of the "AI FYI" podcast: Andy Butkovic, Joe Cloughley, and Kiran Vajapey. Together, they delve into the fascinating world of AI, covering a wide range of topics:
• LLM adoption
• AI ethics and cultural impact
• AI's transformative effect on traditional industries
• The rapid pace of AI's technological advancement
Whether you're a seasoned AI expert or simply curious about its impact, this episode promises something for everyone. Learn more about The AI FYI Podcast by visiting: http://www.aifyipod.com.
In this episode of "Grounded Truth," we dive into the world of generative AI and the complexities of putting it into production. Our special guests for this episode are Manasi Vartak, Founder and CEO of Verta, and Shayan Mohanty, Co-founder and CEO of Watchful.
🌐 Verta: Empowering Gen AI Application Development
* Learn about Verta's end-to-end automation platform for Gen AI application development.
* Explore how Verta's expertise in model management and serving has evolved to address the challenges of scaling and managing Gen AI models.
* Visit http://www.verta.ai for more insights.
🚀 Evolution in the AI Landscape
* Discover the tectonic shift in the AI landscape over the past year, marked by the release of ChatGPT and the rise of Gen AI.
* Manasi shares insights into how Gen AI has democratized AI, making it a focal point in boardrooms and team discussions.
🤔 Challenges in Gen AI Application Production
* Uncover the challenges and changes in workflow when transitioning from classical ML model development to Gen AI application production.
* Manasi offers valuable insights into the business appetite for AI and the increasing demand for data science resources.
🌟 What's Changed Since ChatGPT's Release?
* Reflect on the transformative impact of ChatGPT and how it has influenced the priorities of data science leaders and organizations.
🔮 Predictions for the AI Industry in 2024
* Listen as Manasi and Shayan share their predictions for the AI industry in 2024, with insights into the trends and advancements that will shape the landscape.
🎙️ RAG vs. Fine-Tuning
Dive into the latest episode of "Grounded Truth," hosted by John Singleton, as he discusses "Retrieval Augmented Generation (RAG) versus Fine-Tuning in LLM Workflows" with Emmanuel Turlay, Founder & CEO of Sematic and Airtrain.ai, and Shayan Mohanty, Co-founder & CEO of Watchful.
🤖 RAG: Retrieval Augmented Generation
RAG involves placing content inside the prompt/context window to make models aware of recent events, private information, or company documents. The process includes retrieving the most relevant information from sources like Bing, Google, or internal databases, feeding it into the model's context window, and generating user-specific responses. It is ideal for ensuring factual answers by extracting data from a specified context.
⚙️ Fine-Tuning
Fine-tuning entails training a model for additional epochs on more data, allowing customization of the model's behavior, tone, or output format. It is used to make models act in specific ways, such as speaking like a lawyer or adopting the tone of Harry Potter. Unlike RAG, it shapes the form and tone of the output rather than augmenting knowledge.
🤔 Decision Dilemma: RAG or Fine-Tuning?
Emmanuel highlights the misconception that fine-tuning injects new knowledge, emphasizing instead its role in shaping the output according to user needs. RAG is preferred for factual answers because it extracts information directly from a specified context, ensuring higher accuracy. Fine-tuning, on the other hand, is about customizing the form and tone of the output.
🔄 The Verdict: A Balanced Approach?
It's not a one-size-fits-all decision. The choice between RAG and fine-tuning depends on the specific use case: knowledge augmentation calls for RAG, while customization of form and tone calls for fine-tuning. A balanced approach may leverage both techniques, depending on the desired outcomes.
AirTrain YouTube Channel: https://www.youtube.com/@AirtrainAI
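The RAG flow described above (retrieve the most relevant content, feed it into the context window, generate a grounded answer) can be sketched minimally. The toy word-overlap retriever and the `retrieve`/`build_prompt` helpers below are illustrative assumptions, not the implementation discussed in the episode; a real system would use an embedding index and an actual LLM call:

```python
def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query
    (stand-in for a real embedding or search-engine retriever)."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query: str, context: str) -> str:
    """Place the retrieved content inside the prompt/context window."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Toy "internal database" of company documents
documents = [
    "The quarterly revenue report shows 12% growth.",
    "Employee handbook: vacation requests need two weeks notice.",
]

query = "How much did revenue grow?"
prompt = build_prompt(query, retrieve(query, documents))
# The prompt now carries the relevant private document, so even a model
# that never saw this data can answer factually from the given context.
```

The key property this illustrates: the knowledge lives in the retrieved context, not in the model weights, which is why RAG rather than fine-tuning is the fit for factual, up-to-date answers.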
Welcome to another riveting episode of "Grounded Truth"! In this episode, your host John Singleton, Co-founder and Head of Success at Watchful, is joined by Shayan Mohanty, CEO of Watchful. Together, they embark on a deep dive into the intricacies of Large Language Models (LLMs).
In Watchful's journey through language model exploration, we've uncovered fascinating insights into putting the "engineering" back into prompt engineering. Our latest research focuses on introducing meaningful observability metrics to enhance our understanding of language models.
If you'd like to explore on your own, feel free to play with a demo here: https://uncertainty.demos.watchful.io/
The repo can be found here: https://github.com/Watchfulio/uncertainty-demo
💡 What to expect in this episode:
- A recap of our last exploration, where we unveiled the role of perceived ambiguity in LLM prompts and its alignment with the "ground truth."
- An introduction to two critical measures: Structural Uncertainty (using normalized entropy) and Conceptual Uncertainty (revealing internal cohesion through cosine distances).
- Why these measures matter: assessing predictability in prompts, guiding decisions on fine-tuning versus prompt engineering, and setting the stage for objective model comparisons.
🚀 Join John and Shayan on this quest to make language model interactions more transparent and predictable. The episode aims to unravel complexities, provide actionable insights, and pave the way for a clearer understanding of LLM uncertainties.
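The two measures named above have standard mathematical cores, sketched here under stated assumptions: normalized entropy over a model's token probabilities (one reading of "structural uncertainty") and cosine distance between embedding vectors (one reading of "conceptual uncertainty" as internal cohesion). These are generic formulations, not Watchful's published metrics:

```python
import math

def normalized_entropy(probs: list[float]) -> float:
    """Shannon entropy of a probability distribution, scaled to [0, 1]
    by dividing by log(n): 1.0 means maximally uncertain (uniform),
    0.0 means fully predictable (all mass on one outcome)."""
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(probs)) if len(probs) > 1 else 0.0

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity between two embedding vectors: 0.0 for
    identical directions, approaching 1.0 as vectors become orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)
```

Intuitively, a prompt whose next-token distribution has high normalized entropy is structurally unpredictable, and a set of responses whose embeddings sit far apart in cosine distance lacks conceptual cohesion.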
Welcome to another captivating episode of "Grounded Truth." Today, host John Singleton takes a deep dive into the world of prompt engineering, interpretability in closed-source LLMs, and innovative techniques for enhancing transparency in AI models. Joining us as a special guest is Shayan Mohanty, CEO and co-founder of Watchful, who brings his latest research: a free tool designed to make prompts used with large language models more transparent.
In this episode, we'll explore Shayan's research, including:
🔍 Estimating token importances in prompts for powerhouse language models like ChatGPT.
🧠 Transitioning from the art to the science of prompt crafting.
📊 Uncovering the crucial link between model embeddings and interpretations.
💡 Discovering intriguing insights through comparisons of various model embeddings.
🚀 Harnessing the potential of embedding quality to influence model output.
🌟 Taking the initial strides toward the automation of prompt engineering.
To see the real impact of Shayan's research, try the live demo at https://heatmap.demos.watchful.io/.
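One common way to estimate token importances for a black-box model, which may differ from the method behind the demo above, is leave-one-out perturbation: drop each token in turn and measure how much the model's output score shifts. The `score_fn` below is a hypothetical stand-in for querying the model:

```python
def token_importances(tokens: list[str], score_fn) -> list[float]:
    """Leave-one-out importance: for each token, the absolute change in
    score_fn's output when that token is removed from the prompt.
    score_fn is any callable mapping a token list to a float (in
    practice, a model query such as the log-probability of a target
    completion)."""
    base = score_fn(tokens)
    return [
        abs(base - score_fn(tokens[:i] + tokens[i + 1:]))
        for i in range(len(tokens))
    ]

# Toy scorer for demonstration only: total character count, so each
# token's "importance" is simply its length.
toy_score = lambda toks: float(sum(len(t) for t in toks))
importances = token_importances(["a", "bbb"], toy_score)
```

Heatmap-style visualizations then color each prompt token by its importance score, turning prompt crafting from guesswork into something measurable.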