MLSecOps Community
MLSecOps Podcast
# AI Red Teaming
# Cybersecurity
# API Security

Rethinking AI Red Teaming: Lessons in Zero Trust and Model Protection

This episode is a follow-up to Part 1 of our conversation with returning guest Brian Pendleton, as he challenges the way we think about red teaming and security for AI.
Popular topics
# MLSecOps
# AI Security
# Supply Chain Vulnerability
# Adversarial ML
# Governance, Risk, & Compliance
# AI Risk
# AdvML
# LLM
# Generative AI
# Trusted AI
# Model Provenance
# AI/ML Red Teaming
# Large Language Model
# Threat Research
# Prompt Injection
# Explainability
# Fairness
# AI Bias
# ChatGPT
# Model Scanning
In part one, Brian Pendleton reveals his hacker roots and AI security journey, stressing that cataloging all AI touchpoints and uniting ML & security teams is key to protecting your enterprise.
# AI Security
# AI Risk
# AI-BOM
# AI/ML Security Vulnerabilities
# Governance, Risk, & Compliance
# Security Vulnerabilities
In this episode, Dr. Gina Guillaume-Joseph shares her journey from predicting software failures to pioneering secure agentic AI at Camio, emphasizing data integrity, zero trust, bias audits, and continuous monitoring.
# Agentic AI
# Ethical AI
# AI Governance
# ML Security
Dan McInerney & Sierra Haex join the MLSecOps Podcast to explore AI security, from supply chain risks to LLM code analysis and AI agent challenges. Tune in now!
# AI/ML Security Vulnerabilities
# Supply Chain Vulnerability
# LLM
In this episode of the MLSecOps podcast, host Charlie McCarthy sits down with Chris McClean, Global Lead for Digital Ethics at Avanade, to explore the world of responsible AI governance.
# AI Audit
# AI Security
# Ethical AI
# EU AI Act
# Governance, Risk, & Compliance
# NIST
In this episode, we explore LLM red teaming. You’ll learn why vulnerabilities live in context—how LLMs interact with users, tools, and documents—and discover best practices for mitigating attacks.
# AI Red Teaming
# AI/ML Red Teaming
# LLM
Explore model file vulnerabilities, the evolution of AI security, and how MLSecOps and tools like huntr drive proactive protection in AI pipelines.
# AI Risk
# AI Security
# AI/ML Red Teaming
# AI/ML Security Vulnerabilities
# Cybersecurity
# Model Scanning
# Model Security
# Supply Chain Vulnerability
# Vulnerability Reporting
Dr. Cari Miller shares insights from her work with the AI Procurement Lab regarding frameworks and strategies needed to mitigate risks in AI acquisitions.
# AI Audit
# AI Bias
# AI Risk
# Cari Miller
# Ethical AI
# Procurement
# Governance, Risk, & Compliance
# Generative AI
# Trusted AI
Join Nicole Nichols from PANW on the MLSecOps Podcast as she discusses the present and future of AI security & the growth mindset essential for cybersecurity professionals.
# AI Agents
# AI Security
# Cybersecurity
# Backdoor Attack
# LLM
# Generative AI
Caleb Sima joins us to discuss security considerations for building and using AI, drawing on his 25+ years of cybersecurity experience.
# AI Agents
# AI Risk
# AI Security
# AI-BOM
# AI/ML Security Vulnerabilities
# ChatGPT
# Generative AI
# LLM
# Model Provenance
# Retrieval-Augmented Generation
# Supply Chain Vulnerability
Explore how prompt engineering and prompt hacking are reshaping AI security, with insights on safeguarding generative AI in this MLSecOps Podcast episode.
# AI Security
# Generative AI
# LLM
# Prompt Injection
# Threat Research