
English
Technology & Science
About The Infra Pod
The Infra Pod brings you insightful and thought-provoking discussions on the world of infrastructure software. The podcast was started by two engineers, Ian Livingstone (tech advisor for Snyk) and Tim Chen (General Partner at Essence VC), who team up with a rotating cast of guests to dive deep into the latest trends and hot topics in the software infrastructure space.
Building a successful infra product between all the AI apps and model providers (chat with Louis from OpenRouter)
Tim (Essence VC) and Ian (Keycard) interviewed Louis Vichy, co-founder of OpenRouter, about why he built OpenRouter to de-risk AI app development (end-user pays LLM costs), how it scaled to processing ~5–6T tokens/week, and what OpenRouter is today: a reliable inference routing/control layer across ~60 providers with consolidated billing and reduced vendor lock-in. Louis explains why teams adopt OpenRouter (constant new model integrations, pricing/billing, differing API shapes), how routing focuses on practical heuristics (fallbacks, cost, throughput, latency), and how reliability is achieved via provider failover (e.g., alternate endpoints like Vertex/Bedrock). They discuss agent trends (longer-running agents, small models for routing/classification with specialized downstream models), possible memory support, developer conveniences (e.g., PDF parsing), and enterprise features (security/compliance guardrails, presets). The episode ends with links to OpenRouter chat/rankings pages and hiring for high-agency TypeScript-focused engineers.
Timestamps:
00:00 Welcome & Meet Louis (OpenRouter Co-Founder)
00:27 Origin Story: De-Risking AI App Costs (Hackathon Lessons)
01:35 First Big Feature: End-User Pays for Tokens (Sign in with OpenRouter)
02:34 From Routing to Rankings: Scaling to Trillions of Tokens
03:42 What OpenRouter Is Today: Reliable Inference Across 60+ Providers
05:55 Why Teams Adopt It: Avoiding Model API Churn, Billing, and Vendor Lock-In
08:37 Winning Strategy: Don't Build a "Magic Router", Optimize Cost/Latency/Throughput
18:58 From Chat to RAG + Memory: Building Persistent Agent Context
20:37 Developer Bells & Whistles: Auto PDF Parsing and More
21:11 Enterprise Readiness: Compliance, Security Guardrails & Model Presets
22:22 Customer Growth at Warp Speed in the AI Era
23:03 Spicy Future!
From 30 Seconds to 20ms: Solving Browser Speed for AI Agents (Chat with Catherine from Kernel)
In this episode of The Infra Pod, hosts Tim Chen (Essence VC) and Ian Livingstone (Keycard) sat down with Catherine Jue, co-founder and CEO of Kernel, to explore the cutting-edge world of browser infrastructure for AI agents. Catherine shares her journey from Cash App to founding Kernel, explaining how she discovered the critical need for scalable browser automation when AI agents need to interact with the web. The conversation dives deep into the technical innovations behind Kernel's use of unikernels and micro VMs, which enable blazingly fast browser startup times (20ms vs 30+ seconds) and unique snapshot/restore capabilities. Catherine discusses the evolution from deterministic browser automation to truly agentic behavior, the challenges of optimizing for variable web workloads, and her optimistic vision for an AI-powered future where the pie expands rather than consolidates. This episode is packed with technical insights about infrastructure, agent tooling, and the future of how software interfaces will evolve in an agent-native world.
Timestamps:
0:24 Catherine's startup journey and founding Kernel
1:30 Cash App's OpenAI experiment sparks the idea
3:56 Why browser infrastructure for AI agents?
6:36 Unikernels: 20ms startup vs 30+ seconds
15:02 Optimizing for variable web workloads
23:25 Future of agent-native software
32:05 Hot takes!
Coding agents need infra to apply code changes! (Chat with Tejas from Morph)
Tim (Essence VC) and Ian (Keycard) sat down with Tejas Bhakta (CEO of Morph) to chat about building infrastructure for the fastest file edit APIs for coding agents. He shares how Morph delivers 10,000 tokens/second through speculative decoding, why Cursor removed fast apply, and his vision for autonomous software that updates without prompts. The conversation covers subagent architecture, code search optimization, and the path to reliable AI coding at scale.
Timestamps:
0:00 - Introduction
0:29 - Why start Morph and pivoting through YC
1:23 - The fast apply insight from Cursor
3:42 - How fast apply works and speculative decoding
6:09 - Use cases: when and where fast apply matters
8:19 - Why Cursor removed fast apply
9:22 - Morph's value prop beyond speed
11:58 - Subagent architecture and SDK approach
14:45 - Semantic search and code-specific tooling
19:52 - Building custom coding agents vs platforms
22:42 - Adoption inhibitors and the future of codegen
23:26 - Spicy take: Autonomous software and reliability
Let's chat about vibe coding & Ralph! (Chat with Dexter at Humanlayer)
In this episode of The Infra Pod, hosts Tim and Ian sit down with Dexter Horthy, CEO of HumanLayer, to explore the evolution of AI coding agents and the future of software development. Dexter shares his journey from building data tools to discovering the real problem: making AI coding agents actually productive for senior engineers, not just juniors. The conversation dives deep into the research-plan-implement workflow that enables engineers to ship 99% of their code with AI assistance, the challenges of getting staff engineers to adopt AI tools, and why most AI coding ecosystems don't actually help you sell to enterprises. Dexter also shares his spicy take on how Ralph-style agents can be even further enhanced. Whether you're a skeptical senior engineer or an AI-curious developer, this episode offers practical insights into what actually works in production AI coding today.
[0:00] Introduction & Dexter's Journey: why Dexter finally started a company, the failed data catalog pivot, and building an AI janitor for data warehouses
[8:00] The Hard Lessons of AI Ecosystem Hype: why there's no "SAML for AI agents" and what enterprises actually need versus what the hype machine promises
[13:00] The Research-Plan-Implement Breakthrough: how to make senior engineers productive with AI, staying objective during research, and making decisions at the top of the context window
[26:00] The Vibe Shift & Where We Are Today: when respected engineers started believing, the role of Ralph and spec-driven development, and what's working in production
[37:00] Spicy Take: Ralph Goes to the Supreme
Building a bug-free vibe coding world (Chat with Akshay from Antithesis)
In this episode of the Infra Pod, hosts Ian Livingstone (Keycard) and Tim Chen (Essence VC) interviewed Akshay Shah, Field CTO of Antithesis, diving deep into the world of distributed systems, reliability, and the future of software testing. The conversation covers the challenges of building bug-free distributed systems, the story behind Antithesis, lessons from major outages, and the evolving landscape of infrastructure and AI-driven operations.
Timestamps:
* 00:00 – Introduction & guest background
* 02:00 – What Antithesis does and why it matters
* 06:00 – Real-world impact: testing distributed systems (etcd, Kubernetes)
* 09:00 – Major outages & lessons learned (AWS, Knight Capital)
* 12:00 – The origins and philosophy behind Antithesis
* 16:00 – The future of reliability, testing, and AI in infrastructure
* 28:00 – Closing thoughts & where to learn more
Links:
* Learn more about Antithesis: https://antithesis.com
* Antithesis on YouTube: @AntithesisHQ