
AI Security Podcast

Podcast by Kaizenteq Team

English

Technology & science


About AI Security Podcast

The #1 source for AI Security insights for CISOs and cybersecurity leaders. Hosted by two former CISOs, the AI Security Podcast provides expert, no-fluff discussions on the security of AI systems and the use of AI in cybersecurity. Whether you're a CISO, security architect, engineer, or cyber leader, you'll find practical strategies, emerging-risk analysis, and real-world implementations without the marketing noise. These conversations help cybersecurity leaders make informed decisions and lead with confidence in the age of AI.

All episodes

42 episodes

Why AI Agents Fail in Production: Governance, Trust & The "Undo" Button

Is your organization stuck in "read-only" mode with AI agents? You're not alone. In this episode, Dev Rishi [https://www.linkedin.com/in/devvret-rishi-b0857684/] (GM of AI at Rubrik [https://www.rubrik.com/], formerly CEO of Predibase) joins Ashish and Caleb to dissect why enterprise AI adoption is stalling at the experimentation phase and how to move safely to production. Dev reveals the three biggest fears holding IT leaders back: shadow agents, lack of real-time governance, and the inability to "undo" catastrophic mistakes. We dive deep into the concept of "Agent Rewind", a capability to roll back changes made by rogue AI agents (like deleting a production database), and why this remediation layer is critical for trust. The conversation also explores the technical architecture needed for safe autonomous agents, including the debate between the MCP (Model Context Protocol) and A2A (Agent-to-Agent) standards. Dev explains why traditional anomaly detection fails for AI and proposes a new model of AI-driven policy enforcement using small language models (SLMs) as judges.

Questions asked:
(00:00) Introduction
(02:50) Who is Dev Rishi? From Predibase to Rubrik
(04:00) The Shift from Fine-Tuning to Foundation Models
(07:20) Enterprise AI Use Cases: Background Checks & Call Centers
(11:30) The 4 Phases of AI Adoption: Where are most companies?
(13:50) The 3 Biggest Fears of IT Leaders: Shadow Agents, Governance, & Undo
(18:20) "Agent Rewind": How to Undo a Rogue Agent's Actions
(23:00) Why Agents are Stuck in "Read-Only" Mode
(27:40) Why Anomaly Detection Fails for AI Security
(30:20) Using AI Judges (SLMs) for Real-Time Policy Enforcement
(34:30) LLM Firewalls vs. Bespoke Policy Enforcement
(44:00) Identity for Agents: Scoping Permissions & Tools
(46:20) MCP vs. A2A: Which Protocol Wins?
(48:40) Why A2A is Technically Superior but MCP Might Win
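The "SLM as judge" idea discussed in this episode is easiest to see in miniature. Below is a minimal, illustrative sketch of the pattern: a small model screens each proposed agent action against a written policy before the action runs. The policy text, the fake_slm stand-in, and the APPROVE/BLOCK convention are all assumptions made for the demo, not Rubrik's actual design.

```python
# Minimal sketch of an "SLM as judge" policy gate for agent actions.
# Everything here is illustrative: a real deployment would call an actual
# small language model instead of fake_slm.

POLICY = "Agents may read production data but must never delete or drop it."

def fake_slm(prompt: str) -> str:
    """Stand-in for a real small-language-model call (demo assumption)."""
    action = next(line for line in prompt.splitlines()
                  if line.startswith("Proposed action:"))
    risky = ("delete", "drop", "truncate")
    return "BLOCK" if any(word in action.lower() for word in risky) else "APPROVE"

def judge_action(judge, action_description: str) -> bool:
    """Return True only if the judge model approves the proposed action."""
    prompt = (
        f"Policy: {POLICY}\n"
        f"Proposed action: {action_description}\n"
        "Answer APPROVE or BLOCK only."
    )
    return judge(prompt).strip().upper().startswith("APPROVE")

if __name__ == "__main__":
    for action in ("SELECT * FROM customers", "DROP TABLE customers"):
        verdict = "allowed" if judge_action(fake_slm, action) else "blocked"
        print(f"{action} -> {verdict}")
```

The design point the episode makes is that this gate sits in the request path, so every tool call is judged in real time rather than flagged after the fact by anomaly detection.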

23 Jan 2026 - 51 min

AI Security 2025 Wrap: 9 Predictions Hit & The AI Bubble Burst of 2026

It's the season finale of the AI Security Podcast! Ashish Rajan [https://www.linkedin.com/in/ashishrajan/] and Caleb Sima [https://www.linkedin.com/in/calebsima/] look back at their 2025 predictions and reveal that they went 9 for 9. We wrap up the year by dissecting exactly what the industry got right (and wrong) about the trajectory of AI, providing a definitive "state of the union" for AI security. We analyze why SOC Automation became the undisputed king of real-world AI impact in 2025, while mature AI production systems failed to materialize beyond narrow use cases due to skyrocketing costs and reliability issues. We also review the accuracy of our forecasts on the rise of AI Red Teaming, the continued overhyping of Agentic AI, and why Data Security emerged as a critical winner in a geo-locked world. Looking ahead to 2026, the conversation shifts to bold new predictions: the inevitable bursting of the "AI Bubble" as valuations detach from reality, and the rise of self-fine-tuning models. We also explore the controversial idea that the "AI Engineer" is merely a rebrand for data scientists, and a lot more…

Questions asked:
(00:00) Introduction: 2025 Season Wrap Up
(02:50) State of AI Utility in late 2025: From coding to daily tasks
(09:30) 2025 Report Card: Mature AI Production Systems? (Verdict: Correct)
(10:45) The Cost Barrier: Why Production AI is Expensive
(13:50) 2025 Report Card: SOC Automation is #1 (Verdict: Correct)
(16:00) 2025 Report Card: The Rise of AI Red Teaming (Verdict: Correct)
(17:20) 2025 Report Card: AI in the Browser & OS
(21:00) Security Reality: Prompt Injection is still the #1 Risk
(22:30) 2025 Report Card: Data Security is the Winner
(24:45) 2025 Report Card: Geo-locking & Data Sovereignty
(28:00) 2026 Outlook: Age Verification & Adult Content Models
(33:00) 2025 Report Card: "Agentic AI" is Overhyped (Verdict: Correct)
(39:50) 2025 Report Card: CISOs Should NOT Hire "AI Engineers" Yet
(44:00) The "AI Engineer" is just a rebranded Data Scientist
(46:40) 2026 Prediction: Self-Training & Self-Fine-Tuning Models
(47:50) 2026 Prediction: The AI Bubble Will Burst
(49:50) Bold Prediction: Will OpenAI Disappear?
(01:01:20) Final Thoughts: Looking ahead to Season 4

19 Dec 2025 - 1 h 3 min

AI Paywall for Browsers & The End of the Open Web?

Cloudflare announced this year that AI bots must pay to crawl content. In this episode, Ashish Rajan [https://www.linkedin.com/in/ashishrajan/] and Caleb Sima [https://www.linkedin.com/in/calebsima/] dive deep into what this means for the future of the "open web" and why search engines as we know them might be dying. We explore Cloudflare's new model, where websites can whitelist AI crawlers in exchange for payment, effectively putting a price tag on the world's information. Caleb talks through the potential security implications, predicting a shift toward a web that requires strict identity and authentication for both humans and AI agents. The conversation also covers Ladybird, the Cloudflare-backed open-source browser positioning itself as a competitor to the dominant Chromium engine. Is this the beginning of Web 3.0, where "information becomes currency"? Tune in to understand the massive shifts coming to browser security, AI agent identity, and the economics of the internet.

Questions asked:
(00:00) Introduction
(01:55) Cloudflare's Announcement: Blocking AI Bots Unless They Pay
(03:50) Why Search Engines Are Dying & The "Oracle" of AI
(05:40) How the Payment Model Works: Bidding for Content Access
(09:30) Will This Adoption Come from Enterprise or Bloggers?
(11:45) Security Implications: The Web Requires Identity & Auth
(13:50) Phase 2: Cloudflare's New Browser "Ladybird" vs. Chromium
(19:00) Moving from B2B to Consumer: Paying Per Article via Browser
(21:50) Managing AI Agent Identity: Who is Buying This Dinner?
(23:20) Why Did We Switch to Chrome? (Performance vs. Memory)
(27:00) Jony Ive & Sam Altman's AI Device: The Future Interface?
(30:20) Google's Response: New Tools like "Opal" to Compete with n8n
(33:15) The Controversy: Is This the End of the Free Open Web?
(36:20) The New Economics of the Internet: Information as Currency

Resources discussed during the interview:
Cloudflare Just Changed How AI Crawlers Scrape the Internet-at-Large; Permission-Based Approach Makes Way for A New Business Model [https://www.cloudflare.com/en-gb/press/press-releases/2025/cloudflare-just-changed-how-ai-crawlers-scrape-the-internet-at-large/]
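For readers who want the mechanics, here is a rough sketch of what a pay-to-crawl handshake like the one discussed could look like from the crawler's side, assuming an HTTP 402 (Payment Required) flow. The header name, token scheme, and bot User-Agent below are illustrative guesses, not Cloudflare's documented API.

```python
# Hypothetical pay-to-crawl handshake, sketched from the crawler's side.
# Assumption: the origin answers bot requests with HTTP 402 and quotes a
# price in a response header until proof of payment is presented.

import urllib.error
import urllib.request

def fetch_as_crawler(url: str, payment_token: str | None = None) -> bytes:
    """Fetch a page, surfacing the quoted price if the origin demands payment."""
    request = urllib.request.Request(url, headers={"User-Agent": "example-ai-bot/1.0"})
    if payment_token:
        # Hypothetical proof-of-payment presented on the retry.
        request.add_header("Authorization", f"Bearer {payment_token}")
    try:
        with urllib.request.urlopen(request) as response:
            return response.read()
    except urllib.error.HTTPError as error:
        if error.code == 402:
            # Assumed header name for the quoted crawl price.
            price = error.headers.get("crawler-price", "unspecified")
            raise PermissionError(
                f"{url} is paywalled for bots; quoted price: {price}"
            ) from error
        raise
```

The interesting security consequence the hosts draw out is that any such scheme forces crawlers to carry verifiable identity, which is exactly the shift toward authenticated agents discussed at (11:45).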

10 Dec 2025 - 39 min

Build vs. Buy in AI Security: Why Internal Prototypes Fail & The Future of CodeMender

Should you build your own AI security tools or buy from a vendor? In this episode, Ashish Rajan and Caleb Sima dive deep into the "build vs. buy" debate, sparked by Google DeepMind's release of CodeMender, an AI agent that autonomously finds, root-causes, and patches software vulnerabilities. While building an impressive AI prototype is easy, maintaining and scaling it into a production-grade security product is "very, very difficult" and often leads to failure after 18 months of hidden costs and consistency issues. We get into the incentives driving internal "AI sprawl", where security teams build tools just to secure budget and promotions, potentially fueling an AI bubble waiting to pop. We also discuss the overhyped state of AI security marketing, why nobody can articulate the specific risks of "agentic AI", and a future where third-party security products use AI to automatically personalize themselves to your environment, eliminating the need for manual tuning.

Questions asked:
(00:00) Introduction: The "Most Innovative" Episode Ever
(01:40) DeepMind's CodeMender: Autonomously Finding & Patching Vulnerabilities
(05:00) The "Build vs. Buy" Debate: Can You Just Slap an LLM on It?
(06:50) The Prototype Trap: Why Internal AI Tools Fail at Scale
(11:15) The "Data Lake" Argument: Can You Replace a SIEM with DIY AI?
(14:30) Bank of America vs. Capital One: Are Banks Building AI Products?
(18:30) The Failure of Traditional Threat Intel & Building Your Own
(23:00) Perverse Incentives: Why Teams Build AI Tools for Promotions & Budget
(26:30) The Coming AI Bubble Pop & The Fate of "AI Wrapper" Startups
(31:30) AI Sprawl: Repeating the Mistakes of Cloud Adoption
(33:15) The Frustration with "Agentic AI" Hype & Buzzwords
(38:30) The Future: AI Platforms & Auto-Personalized Security Products
(46:20) Secure Coding as a Black Box: The End of DevSecOps?

03 Dec 2025 - 50 min

Inside the $29.5 Million DARPA AI Cyber Challenge: How Autonomous Agents Find & Patch Vulns

What does it take to build a fully autonomous AI system that can find, verify, and patch vulnerabilities in open-source software? Michael Brown [https://www.linkedin.com/in/michael-brown-47949515/], Principal Security Engineer at Trail of Bits, joins us to go behind the scenes of the 3-year DARPA AI Cyber Challenge (AICC), where his team's agent, "Buttercup", took second place. Michael, a self-proclaimed "AI skeptic", shares his surprise at how capable LLMs were at generating high-quality patches. He also shares the most critical lesson from the competition: "AI was actually the commodity." The real differentiator wasn't the AI model itself but the "best of both worlds" approach: robust engineering, intelligent scaffolding, and using "AI where it's useful and conventional stuff where it's useful". This is a great listen for any engineering or security team building AI solutions. We cover Buttercup's multi-agent architecture, the real-world costs, and the open-source future of this technology.

Questions asked:
(00:00) Introduction: The DARPA AI Hacking Challenge
(03:00) Who is Michael Brown? (Trail of Bits AI/ML Research)
(04:00) What is the DARPA AI Cyber Challenge (AICC)?
(04:45) Why did the AICC take 3 years to run?
(07:00) The AICC Finals: Trail of Bits takes 2nd place
(07:45) The AICC Goal: Autonomously find AND patch open source
(10:45) Competition Rules: No "virtual patching"
(11:40) AICC Scoring: Finding vs. Patching
(14:00) The competition was fully autonomous
(14:40) The 3-month sprint to build Buttercup v1
(15:45) The origin of the name "Buttercup" (The Princess Bride)
(17:40) The original (and scrapped) concept for Buttercup
(20:15) The critical difference: Finding vs. Verifying a vulnerability
(26:30) LLMs were allowed, but were they the key?
(28:10) Choosing LLMs: Using OpenAI for patching, Anthropic for fuzzing
(30:30) What was the biggest surprise? (An AI skeptic is blown away)
(32:45) Why the latest models weren't always better
(35:30) The #1 lesson: The importance of high-quality engineering
(39:10) Scaffolding vs. AI: What really won the competition?
(40:30) Key Insight: AI was the commodity, engineering was the differentiator
(41:40) The "Best of Both Worlds" approach (AI + conventional tools)
(43:20) Pro Tip: Don't ask AI to "boil the ocean"
(45:00) Buttercup's multi-agent architecture (Engineer, Security, QA)
(47:30) Can you use Buttercup for your enterprise? (The $100k+ cost)
(48:50) Buttercup is open source and runs on a laptop
(51:30) The future of Buttercup: Connecting to OSS-Fuzz
(52:45) How Buttercup compares to commercial tools (RunSybil, XBOW)
(53:50) How the 1st place team (Team Atlanta) won
(56:20) Where to find Michael Brown & Buttercup

Resources discussed during the interview:
* Trail of Bits [trailofbits.com]
* Buttercup (Open Source Project) [https://www.trailofbits.com/buttercup/]
* DARPA AI Cyber Challenge (AICC) [aicyberchallenge.com]
* Movie: The Princess Bride [https://www.imdb.com/title/tt0093779/]
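Michael's "AI where it's useful, conventional stuff where it's useful" lesson can be sketched as scaffolding code. The toy pipeline below is purely illustrative (every function body is a placeholder, and this is not Buttercup's implementation): deterministic tools find and verify a crash, and the LLM is only invoked for the narrow patching step.

```python
# Toy sketch of hybrid scaffolding: conventional tooling brackets one AI call.
# All bodies are placeholders standing in for a real fuzzer, crash harness,
# and patching model.

def run_fuzzer(target: str) -> list[bytes]:
    """Conventional step: a fuzzer, not an LLM, hunts for crashing inputs."""
    return [b"\x41" * 64]  # placeholder crash corpus

def reproduces_crash(target: str, test_input: bytes) -> bool:
    """Conventional step: re-run the input so only verified crashes move on."""
    return True  # placeholder verification harness

def llm_propose_patch(target: str, test_input: bytes) -> str:
    """AI step: the model is asked one narrow question -- patch this crash."""
    return f"// placeholder diff for {target}"

def pipeline(target: str) -> list[str]:
    """Scaffolding: deterministic checks bracket the single AI call."""
    patches = []
    for test_input in run_fuzzer(target):
        if not reproduces_crash(target, test_input):
            continue  # guardrail: never patch an unverified finding
        patches.append(llm_propose_patch(target, test_input))
    return patches

print(pipeline("libexample"))
```

This mirrors the episode's key insight at (40:30): the model is a swappable commodity inside the loop, while the verification scaffolding around it is where the engineering value lives.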

06 Nov 2025 - 58 min
A fantastic app with an enormous selection of exciting podcasts. Podimo really manages to make good content that tackles the somewhat harder topics. The fact that audiobooks are included on top, at a low price, has made it my favorite app.
Really good service with great exclusive podcasts, plus a huge selection of podcasts and audiobooks. Warmly recommended, if for nothing else then purely because of Dårligdommerne, Klovn podcast, Hakkedrengene and Han duo 😁 👍
Podimo has become indispensable! For long car trips, everyday life, cleaning, and in general whenever you need a little diversion.
