The Daily AI Show

Podcast by The Daily AI Show Crew - Brian, Beth, Jyunmi, Andy, Karl, and Eran

The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy professional. No fluff. Just 45+ minutes to cover the AI news, stories, and knowledge you need to know as a business professional.

About the crew: We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices.

Your hosts are: Brian Maucere, Beth Lyons, Andy Halliday, Eran Malloch, Jyunmi Hatcher, and Karl Yeh.

All episodes

532 episodes
The AI Sincerity Conundrum

People have long accepted mass-produced connection. A birthday card signed by a celebrity, a form letter from a company CEO, or a Christmas message from a president—these still carry meaning, even though everyone knows thousands received the same words. The message mattered because it felt chosen, even if not personal. Now, AI makes personalized mass connection possible. Companies and individuals can send unique, "handwritten" messages in your tone, remembering details only a model can track. To the receiver, it may feel like a thoughtful, one-of-a-kind note. But at scale, sincerity itself starts to blur. Did the words come from the sender's heart—or from their software?

The conundrum: If AI lets us send thousands of unique, heartfelt messages that feel personal, does that deepen connection—or hollow it out? Is sincerity about the words received, or the presence of the human who chose to send them?

This podcast is created by AI. We used ChatGPT, Perplexity, and Google NotebookLM's audio overview to create the conversation you are hearing. We do not make any claims to the validity of the information provided and see this as an experiment around deep discussions fully generated by AI.

Yesterday - 12 min
Real AI Demos That Show Real Results (Ep. 510)

Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

Intro
In this July 18th episode of The Daily AI Show, the team showcases real-world AI use cases in what they call their "Be About It" show. Hosts demonstrate live projects and workflows using tools like GenSpark, Perplexity Spaces, ChatGPT Projects, MidJourney, and OpenAI's Sora, focusing on actual tasks they've automated or solved using AI. This episode emphasizes practical wins—how AI is saving them hours on complex work, from document audits to image generation and business operations.

Key Points Discussed
- Andy demoed a 50-lesson course built using Lovable, ChatGPT Projects, and infographics generated through iterative feedback inside ChatGPT 4.
- GenSpark agents were used to analyze complex tax payments and vehicle purchase discrepancies, leading to actionable insights and letters for the DMV.
- Beth showcased image generation pipelines using Sora, ChatGPT image generation, and MidJourney's editing tools to produce YouTube thumbnails and animated video intros.
- Brian demonstrated using Perplexity Spaces to generate dynamic travel planning prompts, showing how to create reusable agentic workflows inside Spaces without heavy prompting skills.
- Karl walked through OpenAI's Agent Mode analyzing folder-based invoice matching against Google Sheets, automating tasks that typically take hours for finance teams (a rough sketch of this kind of reconciliation follows these notes).
- The group criticized OpenAI's consumer-focused demos (like shoe shopping), urging labs to highlight complex business use cases that show real time savings.
- Agent Mode's strength lies in handling document-heavy, tedious tasks where traditional no-code platforms falter.
- MidJourney's seamless image background expansion and animation were highlighted as powerful tools for visual content creators.
- Perplexity Spaces can act like lightweight document research agents when properly configured, making knowledge extraction easier for non-coders.
- Real-world stories included AI helping with dermatology guidance, audio hardware troubleshooting, and reducing content production bottlenecks with Opus Clip's multi-speaker cropping tool.
- The show concluded with reflections on the importance of UI and workflow design in AI tool adoption—features alone aren't enough without good user experience.

Timestamps & Topics
00:00:00 🎬 Show kickoff and intro to "Be About It"
00:01:37 📚 Andy's 50-lesson AI prompting course build
00:06:14 📊 Infographic generation via ChatGPT projects
00:13:30 🎨 Beth's YouTube thumbnail image pipeline
00:20:45 🐃 MidJourney image extension and animation demo
00:27:23 ⚙️ GenSpark for complex tax error investigation
00:31:45 ✉️ GenSpark drafts demand letters for refunds
00:32:05 🛫 Brian builds a travel assistant in Perplexity Spaces
00:40:49 🛠️ Agent Mode vs. Perplexity for structured forms
00:43:52 📂 Karl's invoice matching with Agent Mode and Google Drive
00:51:08 ⚒️ Agent Mode better for complex, document-heavy work
00:56:26 🎙️ Beth uses AI to fix audio gear and routing
01:01:19 🩺 ChatGPT solves Brian's daughter's skincare routine
01:02:32 🎥 Brian demos Opus Clip's multi-speaker video cropping
01:07:09 🖥️ Why UI beats small feature wins
01:10:55 🐘 Beth's animated elephant video thumbnails
01:12:08 🎥 Animated thumbnails as future YouTube preview
01:13:44 📅 Show wrap-up and sci-fi show preview

Hashtags
#AIUseCases #AgentMode #GenSpark #Perplexity #ChatGPTProjects #MidJourney #SoraAI #Automation #AIAgents #ImageGeneration #WorkflowAutomation #DailyAIShow

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Karl Yeh
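The episode describes Karl's invoice reconciliation only at a high level, so the following is a minimal, hypothetical Python sketch of that kind of folder-vs-spreadsheet check, not the actual Agent Mode workflow. It assumes invoice PDFs carry their number in the file name (e.g. INV-1042_vendor.pdf) and that the Google Sheet has been exported to ledger.csv with an invoice_number column; both names are invented for illustration.

```python
# Hypothetical sketch of the folder-vs-ledger reconciliation described above.
# Assumes invoices are PDFs named like "INV-1042_vendor.pdf" and the Google Sheet
# was exported to ledger.csv with an "invoice_number" column (assumed names).
import csv
import re
from pathlib import Path

INVOICE_DIR = Path("invoices")   # folder of invoice PDFs (assumed layout)
LEDGER_CSV = Path("ledger.csv")  # exported Google Sheet (assumed column names)

def invoice_numbers_from_folder(folder: Path) -> set[str]:
    """Pull invoice numbers like 'INV-1042' out of the PDF file names."""
    pattern = re.compile(r"INV-\d+")
    numbers = set()
    for pdf in folder.glob("*.pdf"):
        match = pattern.search(pdf.stem)
        if match:
            numbers.add(match.group())
    return numbers

def invoice_numbers_from_ledger(csv_path: Path) -> set[str]:
    """Read the invoice_number column from the exported sheet."""
    with csv_path.open(newline="") as handle:
        return {row["invoice_number"].strip() for row in csv.DictReader(handle)}

if __name__ == "__main__":
    on_disk = invoice_numbers_from_folder(INVOICE_DIR)
    in_ledger = invoice_numbers_from_ledger(LEDGER_CSV)
    print("In folder but missing from ledger:", sorted(on_disk - in_ledger))
    print("In ledger but missing from folder:", sorted(in_ledger - on_disk))
```

The time savings described on the show come from an agent doing this kind of tedious cross-checking (plus reading the documents themselves) rather than a person doing it by hand.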

18 Jul 2025 - 1 h 13 min
Is Agent Mode Really What We Need? (Ep 509)

Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

Intro
In this July 17th episode of The Daily AI Show, the team breaks down OpenAI's upcoming Agent Mode, speculating on its design, impact, and strategic importance ahead of a live announcement. They debate whether Agent Mode represents a true agentic leap for ChatGPT or simply OpenAI catching up to Claude, GenSpark, and other multi-step tools. The episode highlights possible browser automation, DOM-level actions, and workflow orchestration directly inside ChatGPT.

Key Points Discussed
- OpenAI teased "Agent Mode" as an upcoming feature combining Deep Research, Operator, and Connectors for ChatGPT.
- Screenshots suggest Agent Mode will allow document analysis across Google Drive, Slack, HubSpot, and other connectors.
- Andy proposed that OpenAI's Agent Mode may shift from pixel-level mouse emulation to DOM (Document Object Model) browser control, offering precise web navigation and interaction (see the sketch after these notes).
- DOM-based browsing would let agents interact with page elements like buttons and forms, avoiding prior layout shift problems that broke Operator.
- Unlike Operator, which mimicked a human user, Agent Mode could act more like a browser API, enabling efficient deep research workflows.
- The team debated whether this represents OpenAI catching up to competitors like Claude, GenSpark, and Perplexity Labs, or establishing a new standard.
- Claude's MCP+ connectors already allow file control, SaaS integrations, and desktop operations—Agent Mode may be OpenAI's response.
- The group stressed that Agent Mode will likely not be fast; latency will be acceptable if accuracy and hands-off execution improve.
- For businesses, Agent Mode may automate document processing, report generation, and data gathering across dispersed resources.
- Karl highlighted the browser-building trend across AI companies: OpenAI's rumored browser, Perplexity's Comet, Arc Browser, DS Browser, and GenSpark's efforts.
- Future potential includes agents learning repeatable workflows via observation and offering automation proactively.
- The group emphasized that organizations with poor data management will struggle, as agents cannot extract accurate insights from chaotic document stores.
- Agent Mode could eventually replace no-code workflow platforms like Make and Zapier if triggers, memory, and scheduling are integrated.
- While excitement is high, skepticism remains about how much Agent Mode can deliver immediately, especially without robust data foundations.

Timestamps & Topics
00:00:00 🚨 Agent Mode speculation intro
00:01:11 🛠️ Deep Research + Operator + Connectors = Agent Mode?
00:04:16 🕸️ DOM-level browsing explained
00:06:48 🔎 Browser-based agents vs. API-only agents
00:10:24 🧭 Claude and GenSpark comparison
00:14:00 ⏳ Why Agent Mode won't prioritize speed
00:17:30 📁 Document analysis and report generation use cases
00:21:25 🌐 Browser-building trend across AI labs
00:24:40 🛡️ Data governance as Agent Mode bottleneck
00:28:30 🧹 Data cleansing before document automation
00:32:00 🏗️ Trigger, memory, and workflow gaps
00:38:00 🤖 Future of proactive workflow suggestions
00:44:00 ⚙️ Agent Mode as OpenAI's AI operating system
00:47:30 📊 Claude's connectors and desktop control edge
00:50:20 📈 Scheduling, triggers, and prompt history needed
00:54:00 🗣️ Live reaction show planned after OpenAI event
00:57:00 📅 Upcoming demos, sci-fi show, and conundrum drop

Hashtags
#AgentMode #ChatGPT #OpenAI #AgenticAI #WorkflowAutomation #BrowserAgents #Connectors #Claude #AIOperatingSystem #DeepResearch #AIWorkflow #DailyAIShow

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Jyunmi Hatcher, and Karl Yeh
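To make the pixel-emulation vs. DOM-control distinction above concrete, here is a small Python sketch using Playwright as a stand-in browser driver. OpenAI has not published how Agent Mode actually drives the browser, so the inlined page and selectors below are invented purely for illustration.

```python
# Contrast between pixel-level emulation (Operator-style) and DOM-level control.
# Playwright is only an illustrative stand-in; the HTML is inlined so the example
# is self-contained. (Requires: pip install playwright && playwright install chromium)
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.set_content("""
        <form>
          <input name="q" placeholder="Search">
          <button type="button">Search</button>
        </form>
    """)

    # Pixel-style emulation: click wherever the button happens to be rendered.
    # This breaks whenever the layout shifts, the problem noted for Operator.
    page.mouse.click(120, 40)

    # DOM-level control: address the elements themselves, independent of layout.
    page.fill("input[name='q']", "quarterly revenue report")
    page.get_by_role("button", name="Search").click()

    browser.close()
```

The second approach targets elements by role, name, or selector, which is why the hosts expect it to be slower per step but far more reliable than coordinate-based clicking.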

18 Jul 2025 - 58 min
Claude, Mistral, Moonshot and More AI News (Ep. 508)

The team dives into the latest AI news, covering model releases, open-source momentum, government contracts, science wins, Claude's new connectors, and major upgrades in AI video generation. From Meta's internal struggles to self-running labs and cyborg-controlled race cars, the episode showcases both industry shifts and human impact stories.

Key Points Discussed
- Mistral released Voxel, an open-source voice model for transcription and speech tasks, expanding open alternatives in the audio space.
- Moonshot AI's new Kimi 2 model is a 1 trillion parameter mixture-of-experts designed for agentic tasks with native tool interaction, showing open-source models rivaling closed frontier models (a toy sketch of the mixture-of-experts idea follows these notes).
- Perplexity is integrating Kimi 2, following its previous work with DeepSeek, highlighting the shift of open models into production platforms.
- Meta's Superintelligence Labs may shut down open-source releases as leadership debates internal strategy, marking a potential shift from their previous open commitment.
- Sam Altman signaled delays in OpenAI's open-source model plans, officially for safety reasons but likely reflecting market dynamics.
- Meta's acquisition of Play AI and new $200M+ DoD contracts underscore how military funding is shaping foundational model development.
- Meta's Hyperion and Prometheus projects will deliver multi-gigawatt data centers, aiming for the world's largest compute infrastructure.
- Claude's connectors now integrate with local file systems, macOS controls, Asana, Canva, Slack, and Zapier, enabling agentic control over personal and enterprise workflows.
- Runway's Act 2 video model offers next-gen motion capture without mocap suits, enabling hand and facial gesture capture from raw video for character animation.
- Nvidia is cleared to resume low-end chip sales to China, unlocking $5B to $15B in revenue and pushing its market cap over $4 trillion.
- Amazon launched Hero, a free AI-assisted IDE designed to guide novice coders through development tasks.
- NotebookLM now offers "Featured Notebooks" from institutions like Harvard and The Atlantic, expanding knowledge bases for structured research.
- AI-powered labs are accelerating materials science research by 10x, using dynamic scheduling to optimize chemical testing workflows.
- AI-enhanced breast cancer detection models improve MRI accuracy, aiding early tumor identification.
- AI-designed prosthetics and brain-machine interfaces are enabling mind-controlled race cars and advanced robotic hands, marking real-world AI for good breakthroughs.
- OpenAI's internal Slack-based structure and decentralized decision-making were revealed in an engineer's blog post, offering insights into how frontier AI labs operate.
Timestamps & Topics
00:00:00 📜 AI news day poetic intro
00:02:45 🎙️ Mistral's Voxel open-source voice model
00:04:02 🧠 Kimi 2: Moonshot's trillion-parameter agent model
00:06:51 🛠️ Perplexity to integrate Kimi 2
00:08:22 🏛️ Chain-of-thought monitorability for AI safety
00:13:25 🔒 Meta considering closing future LLaMA models
00:15:02 📉 Sam Altman delays OpenAI's open-source model
00:18:01 📞 Meta acquires Play AI, builds $200M+ DoD deals
00:19:36 ⚡ Hyperion and Prometheus mega data centers
00:21:00 🛡️ Meta joins military-industrial complex
00:25:10 🤖 Claude's new connectors and desktop control
00:29:28 📊 Claude as true agent via MCP+
00:30:46 🎥 Runway Act 2: next-gen mocap without suits
00:34:45 💻 Nvidia reopens H20 chip sales, stock soars
00:42:32 💡 Amazon Hero AI coding IDE released
00:45:07 📚 NotebookLM featured notebooks launch
00:48:30 🧪 AI-powered labs accelerate materials research
00:50:37 🩺 AI models improve breast cancer detection
00:52:45 🤖 AI-enhanced prosthetics and mind-controlled cars
00:57:43 👓 Holiday glasses startup delays: AI wearables are hard
01:00:13 🏢 OpenAI's Slack-based ops and decentralized org chart
01:02:34 📅 Wrap-up and upcoming shows: Google's ADK, AI for good
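The notes call Kimi 2 a trillion-parameter mixture-of-experts but do not explain the architecture. As a rough, toy illustration of the idea (a router activates only a few experts per token, so total parameters far exceed the parameters actually used per token), here is a small NumPy sketch; the shapes and numbers are invented and unrelated to Kimi 2's real implementation.

```python
# Toy top-k mixture-of-experts routing for a single token (illustration only).
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

token = rng.standard_normal(d_model)                           # one token's hidden state
router_w = rng.standard_normal((n_experts, d_model))           # router weights
expert_w = rng.standard_normal((n_experts, d_model, d_model))  # one weight matrix per expert

scores = router_w @ token                 # router logits, one per expert
chosen = np.argsort(scores)[-top_k:]      # indices of the top-k experts
gates = np.exp(scores[chosen]) / np.exp(scores[chosen]).sum()  # softmax over the chosen experts

# Only the chosen experts do any work; their outputs are mixed by the gate weights.
output = sum(g * (expert_w[i] @ token) for g, i in zip(gates, chosen))
print(output.shape)  # (16,), same shape as the input hidden state
```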

16 Jul 2025 - 1 h 1 min
AI Companions or Digital Delusions? (EP. 507)

Want to keep the conversation going? Join our Slack community at thedailyaishowcommunity.com

Intro
In this July 15th episode of The Daily AI Show, the team explores the booming AI companion market, now drawing over 200 million users globally. They break down the spectrum from romantic and platonic digital companions to mental health support bots, debating whether these AI systems are filling a human connection gap or deepening social isolation. The discussion blends psychology, culture, tech, and personal stories to examine where AI companionship is taking society next.

Key Points Discussed
- Replica AI and Character.AI report combined user counts over 200 million, with China's Xiao Bing chatbot surpassing 30 billion conversations.
- Digital companions range from friendship and romantic partners to productivity aides and therapy-lite interactions.
- AI companion demand rises alongside what some call a loneliness epidemic, though not everyone agrees on that framing.
- COVID-era isolation accelerated declines in traditional social evenings, fueling digital connection trends.
- Digital intimacy offers ease, predictability, and safety compared to unpredictable human interactions.
- Some users prefer AI's non-judgmental interaction, especially those with social anxiety or physical isolation.
- Risks include over-dependence, emotional addiction, and avoidance of imperfect but necessary human relationships.
- Future embodied AI companions (robots) could amplify these trends, moving digital companionship from screen to physical presence.
- AI companions may evolve from "yes-man" validation models to systems capable of constructive pushback and human-like unpredictability.
- The group debated whether AI companionship could someday outperform humans in emotional support and presence.
- Safety concerns, especially for women, introduce distinct use cases for AI companionship as protection or reassurance tools.
- Social stigma toward AI companionship remains, though the panel hopes society evolves toward acceptance without shame.
- AI companionship's impact may parallel social media: connecting people in new ways while also amplifying isolation for some.

Timestamps & Topics
00:00:00 🤖 Rise of AI companions and digital intimacy
00:01:30 📊 Market growth: Replica, Character.AI, Xiao Bing
00:04:00 🧠 Loneliness debate and digital substitutes
00:07:00 🏠 COVID acceleration of digital companionship
00:10:50 📱 Safety, ease, and rejection avoidance
00:14:30 🧍‍♂️ Embodied AI companions and future robots
00:18:00 🏡 Companion norms: meeting friends with their bots?
00:23:40 🚪 AI replacing the hard parts of human interaction
00:27:00 🧩 Therapy bots, safety tools, and ethics gaps
00:31:10 💬 Pushback, sycophants, and human-like AI personalities
00:35:40 🚻 Gender differences in AI companionship adoption
00:42:00 🚨 AI companions as safety for women
00:47:00 🏷️ Social stigma and the hope for acceptance
00:51:00 📦 Future business of emotional support robots
00:54:00 📅 Wrap-up and upcoming show previews

Hashtags
#AICompanions #DigitalIntimacy #AIrelationships #ReplicaAI #CharacterAI #XiaoBing #Loneliness #AIEthics #AIrobots #MentalHealthAI #SocialAI #DailyAIShow

The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Jyunmi Hatcher, and Karl Yeh

16 Jul 2025 - 54 min
