MLSecOps Community
Featured
# Secure by Design
# OWASP Top 10 for GenAI & LLM
# MITRE ATLAS
# NIST AI Risk Management Framework
# CISA

Key Insights for CISOs: Securing AI in Your Organization
Diana Kelley · Protect AI Webinar, April 10, 2025 · 47:03

Collections

All Collections
MLSecOps Connect: Ask the Experts (7 items)
MLSecOps Podcast
Learning Courses
AI Threat Research

All Content

Popular topics
# MLSecOps
# AI Security
# Supply Chain Vulnerability
# AI Risk
# Governance, Risk, & Compliance
# Adversarial ML
# Generative AI
# LLM
# Model Provenance
# Trusted AI
# AdvML
# AI/ML Red Teaming
# AI/ML Security Vulnerabilities
# Threat Research
# Model Scanning
# AI Impact
# Large Language Model
# Vulnerability Reporting
# Prompt Injection
# AI Bias
This episode is a follow-up to Part 1 of our conversation with returning guest Brian Pendleton, who challenges the way we think about red teaming and security for AI.
# AI Red Teaming
# Cybersecurity
# API Security
In part one, Brian Pendleton reveals his hacker roots and AI security journey, stressing that cataloging all AI touchpoints and uniting ML & security teams is key to protecting your enterprise.
# AI Security
# AI Risk
# AI-BOM
# AI/ML Security Vulnerabilities
# Governance, Risk, & Compliance
# Security Vulnerabilities
In this episode, Dr. Gina Guillaume-Joseph shares her journey from predicting software failures to pioneering secure agentic AI at Camio, emphasizing data integrity, zero trust, bias audits, and continuous…
# Agentic AI
# Ethical AI
# AI Governance
# ML Security
On the MLSecOps Podcast, Dan McInerney and Sierra Haex explore AI security, from supply chain risks to LLM code analysis and the challenges of AI agents. Tune in now!
# AI/ML Security Vulnerabilities
# Supply Chain Vulnerability
# LLM
In this episode of the MLSecOps podcast, host Charlie McCarthy sits down with Chris McClean, Global Lead for Digital Ethics at Avanade, to explore the world of responsible AI governance.
# AI Audit
# AI Security
# Ethical AI
# EU AI Act
# Governance, Risk, & Compliance
# NIST
In this episode, we explore LLM red teaming. You'll learn why vulnerabilities live in context (how LLMs interact with users, tools, and documents) and discover best practices for mitigating attacks.
# AI Red Teaming
# AI/ML Red Teaming
# LLM
Understand the OWASP Top 10 for LLMs, with a breakdown of the critical security risks specific to LLM applications.
# Generative AI
# LLM
# AI Security
# OWASP
Ruchir Patwa · Jan 14th, 2025
Learn about the bleeding edge of Generative AI security in this live stream featuring Ruchir Patwa, former co-founder and CEO of SydeLabs and now VP of Engineering at Protect AI. He discusses the evolving practice of red teaming for AI, exploring innovative strategies, practical insights, and the intersection of tools, processes, and people in securing AI/ML systems.
Ram Shankar Siva Kumar · Jan 9th, 2025 · 37:55
Ram Shankar Siva Kumar answers some of the MLSecOps Community's burning questions about AI Red Teaming.
# AI Red Teaming
Explore model file vulnerabilities, the evolution of AI security, and how MLSecOps and tools like huntr drive proactive protection in AI pipelines.
# AI Risk
# AI Security
# AI/ML Red Teaming
# AI/ML Security Vulnerabilities
# Cybersecurity
# Model Scanning
# Model Security
# Supply Chain Vulnerability
# Vulnerability Reporting