
The MLSecOps Podcast

Podcast by MLSecOps.com

English

Science & Technology


About The MLSecOps Podcast

Welcome to The MLSecOps Podcast, presented by Protect AI. Here we explore the world of machine learning security operations, a.k.a. MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. Sit back, relax, and learn something new with us today. Learn more and get involved with the MLSecOps Community at https://bit.ly/MLSecOps.

All episodes

58 episodes

Season 3 Finale: Top Insights, Hacks, and Lessons from the Frontlines of AI Security

To close out Season 3, we’re revisiting the standout insights, wildest vulnerabilities, and most practical lessons shared by 20+ AI practitioners, researchers, and industry leaders shaping the future of AI security. If you're building, breaking, or defending AI/ML systems, this is your must-listen roundup. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/season-3-finale-top-insights-hacks-and-lessons-from-the-frontlines-of-ai-security

21. jul. 2025 - 24 min

Breaking and Securing Real-World LLM Apps

Fresh off their OWASP AppSec EU talk, Rico Komenda and Javan Rasokat join Charlie McCarthy to share real-world insights on breaking and securing LLM-integrated systems. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/breaking-and-securing-real-world-llm-apps

16. jul. 2025 - 53 min

How Red Teamers Are Exposing Flaws in AI Pipelines

Prolific bug bounty hunter and Offensive Security Lead at Toreon, Robbe Van Roey (PinkDraconian), joins the MLSecOps Podcast to break down how he discovered RCEs in BentoML and LangChain, the risks of unsafe model serialization (illustrated in the sketch below), and his approach to red teaming AI systems. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/how-red-teamers-are-exposing-flaws-in-ai-pipelines

09. jul. 2025 - 41 min
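
The unsafe model serialization risk mentioned in this episode is classically demonstrated with Python's pickle format, which several ML frameworks use under the hood: a serialized "model" is a byte stream that can embed instructions to run arbitrary code the moment it is loaded. The following is a minimal, hypothetical sketch of that class of issue, not code from the episode and unrelated to the specific BentoML or LangChain findings.

    # Minimal sketch: why loading an untrusted pickled model artifact is dangerous.
    # pickle honors __reduce__ during deserialization, so a crafted file can
    # execute an attacker-chosen command as soon as it is loaded.
    import os
    import pickle

    class MaliciousModel:
        def __reduce__(self):
            # On unpickling, pickle calls os.system with this argument.
            return (os.system, ("echo arbitrary command executed during model load",))

    payload = pickle.dumps(MaliciousModel())   # the "model file" an attacker ships

    # A service that calls pickle.loads (or a loader built on it) on an untrusted
    # artifact runs the command simply by loading the file -- no inference needed.
    pickle.loads(payload)

Mitigations generally involve scanning model artifacts before loading them and preferring serialization formats that cannot carry executable code, such as safetensors.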

Securing AI for Government: Inside the Leidos + Protect AI Partnership

On this episode of the MLSecOps Podcast, Rob Linger, Information Advantage Practice Lead at Leidos, joins hosts Jessica Souder, Director of Government and Defense at Protect AI, and Charlie McCarthy to explore what it takes to deploy secure AI/ML systems in government environments. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/securing-ai-for-government-inside-the-leidos-protect-ai-partnership.

25. jun. 2025 - 34 min

Holistic AI Pentesting Playbook

Jason Haddix, CEO of Arcanum Information Security, joins the MLSecOps Podcast to share his methods for assessing and defending AI systems. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/holistic-ai-pentesting-playbook.

13. jun. 2025 - 49 min
A fantastic app with an enormously large selection of exciting podcasts. Podimo really manages to make good content that tackles the slightly more difficult topics. The fact that audiobooks are included on top, at a low price, has made it my favorite app.
Really good service with great exclusive podcasts, plus a huge selection of other podcasts and audiobooks. Warmly recommended, if for nothing else then purely because of Dårligdommerne, Klovn podcast, Hakkedrengene and Han duo 😁 👍
Podimo has become indispensable! For long car trips, everyday life, cleaning, and in general whenever you need a bit of distraction.

Choose your subscription

Limited offer

Premium

20 hours of audiobooks

  • Podcasts only on Podimo

  • Free podcasts

  • Cancel anytime

1 month for only 9 kr.
Then 99 kr. / month

Premium Plus

100 hours of audiobooks

  • Podcasts only on Podimo

  • Free podcasts

  • Cancel anytime

Try free for 7 days
Then 129 kr. / month
