MLSecOps Community
+00:00 GMT
Sign in or Join the community to continue

ML Model Security - Is Your AI Protected?

Posted Aug 29, 2024 | Views 269
# AI Security
# AI-BOM
# AI-SPM
# AI/ML Red Teaming
# Model Scanning
# Model Security
# Supply Chain Vulnerability
Share
speaker
avatar
Sean Morgan
Chief Architect @ Protect AI

Sean Morgan is the Chief Architect at Protect AI. In prior roles he's led production AIML deployments in the semiconductor industry, evaluated adversarial machine learning defenses for DARPA research programs, and most recently scaled customers on interactive machine learning solutions at AWS. In his free time, Sean is an active open-source contributor and maintainer, and is the special interest group lead for TensorFlow Addons. Learn more about the platform for end-to-end AI Security from Protect AI.

+ Read More
SUMMARY

Welcome to "MLSecOps Connect: Ask the Experts," an educational live stream series from the MLSecOps Community where attendees have the opportunity to hear their own questions answered by a variety of insightful guest speakers.

This is a recording of the session we held on August 28, 2024 with the Chief Architect at Protect AI, Sean Morgan. In prior roles, Sean led production AI/ML deployments in the semiconductor industry, evaluated adversarial machine learning defenses for DARPA research programs, and most recently scaled customers on interactive machine learning solutions at Amazon Web Services (AWS). In his free time, Sean is an active open-source contributor and maintainer, and is the special interest group lead for TensorFlow Addons.

During this MLSecOps Connect session, Sean fielded questions from the community related to security for AI & machine learning (ML), including the importance of ML model scanning and how to get started with scanning models.

Explore with us:

  1. What are some solutions for model scanning?
  2. How can enterprises protect their AI models from insider threat?
  3. Which three design principles are the most important for secure AI infrastructures?
  4. What are some of the security threats we can't use automated scans for?
  5. Which resources does Sean use and recommend to advance personal knowledge in the context of AI security?
  6. How can someone design a zero trust approach for the current large language model (LLM) architectures?
  7. What are some recommendations for aligning best [AI security] practices and standards in a palatable, won't-slow-down-the-business way?
  8. Does Protect AI provide both Red Teaming and Guardrails for LLMs?
  9. What types of attacks does Protect AI's model scanning tool cover? Are the attacks domain specific (e.g., attacks on text vs image) or generic? Once the model vulnerabilities are detected, what defenses are available?
  10. How can we follow a shift left approach for model security? And more!
+ Read More
TRANSCRIPT

Session references & resources (including time stamp from video mention):

(3:36) ModelScan: Open Source Tool

(3:54) Guardian by Protect AI: Zero Trust for ML Models. Enable enterprise level scanning, enforcement, and management of model security to block unsafe models from being used in your environment, and keep your ML supply chain secure.

(4:35) Open Source Security Foundation - OSSF: AI/ML Security Working Group

(15:40) TL;DR: Every AI Talk from BSidesLV, Black Hat, and DEF CON 2024

(18:25) Automated AI Red Teaming

(24:36) LLM Guard Open Source Tool: A suite of tools to protect LLM applications by helping you detect, redact, and sanitize LLM prompts and responses, for real time safety, security and compliance.

Thanks for watching! Find more MLSecOps events & resources, and get involved with the community at https://community.mlsecops.com.

+ Read More
Sign in or Join the community

Create an account

Change email
e.g. https://www.linkedin.com/in/xxx or https://xx.linkedin.com/in/xxx
I agree to MLSecOps Community’s Code of Conduct and Privacy Policy.

Watch More

53:25
Essential Practices for Generative AI Security and Beyond
Posted Sep 11, 2024 | Views 353
# AI Agents
# AI Security
# Cybersecurity
# Generative AI
# LLM
# Retrieval-Augmented Generation
Securing AI/ML with Ian Swanson
Posted Jun 27, 2024 | Views 515
# AI Security
# AI Risk
# MLSecOps
# Model Scanning
# Model Provenance
# AI-SPM
# AI Agents
# AI/ML Red Teaming
# LLM