
The MLSecOps Podcast
Podcast by MLSecOps.com
About The MLSecOps Podcast
Welcome to The MLSecOps Podcast, presented by Protect AI. Here we explore the world of machine learning security operations, a.k.a. MLSecOps. From preventing attacks to navigating new AI regulations, we'll dive into the latest developments, strategies, and best practices with industry leaders and AI experts. Sit back, relax, and learn something new with us today. Learn more and get involved with the MLSecOps Community at https://bit.ly/MLSecOps.
All episodes
58 episodes
To close out Season 3, we’re revisiting the standout insights, wildest vulnerabilities, and most practical lessons shared by 20+ AI practitioners, researchers, and industry leaders shaping the future of AI security. If you're building, breaking, or defending AI/ML systems, this is your must-listen roundup. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/season-3-finale-top-insights-hacks-and-lessons-from-the-frontlines-of-ai-security. Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com. Additional tools and resources to check out: Protect AI Guardian: Zero Trust for ML Models [https://protectai.com/guardian], Recon: Automated Red Teaming for GenAI [https://protectai.com/recon], Protect AI’s ML Security-Focused Open Source Tools [https://bit.ly/ProtectAIGitHub], LLM Guard Open Source Security Toolkit for LLM Interactions [https://github.com/protectai/llm-guard], Huntr - The World's First AI/Machine Learning Bug Bounty Platform [https://bit.ly/aimlhuntr]

Fresh off their OWASP AppSec EU talk, Rico Komenda and Javan Rasokat join Charlie McCarthy to share real-world insights on breaking and securing LLM-integrated systems. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/breaking-and-securing-real-world-llm-apps

Prolific bug bounty hunter and Offensive Security Lead at Toreon, Robbe Van Roey (PinkDraconian), joins the MLSecOps Podcast to break down how he discovered RCEs in BentoML and LangChain, the risks of unsafe model serialization, and his approach to red teaming AI systems. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/how-red-teamers-are-exposing-flaws-in-ai-pipelines

On this episode of the MLSecOps Podcast, Rob Linger, Information Advantage Practice Lead at Leidos, joins hosts Jessica Souder, Director of Government and Defense at Protect AI, and Charlie McCarthy to explore what it takes to deploy secure AI/ML systems in government environments. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/securing-ai-for-government-inside-the-leidos-protect-ai-partnership

Jason Haddix, CEO of Arcanum Information Security, joins the MLSecOps Podcast to share his methods for assessing and defending AI systems. Full transcript, video, and links to episode resources available at https://mlsecops.com/podcast/holistic-ai-pentesting-playbook
