MLSecOps Connect: Ask the Experts

ML Model Security - Is Your AI Protected?

Welcome to "MLSecOps Connect: Ask the Experts," an educational live stream series from the MLSecOps Community where attendees have the opportunity to hear their own questions answered by a variety of insightful guest speakers. This is a recording of the session we held on August 28, 2024 with the Chief Architect at Protect AI, Sean Morgan. In prior roles, Sean led production AI/ML deployments in the semiconductor industry, evaluated adversarial machine learning defenses for DARPA research programs, and most recently scaled customers on interactive machine learning solutions at Amazon Web Services (AWS). In his free time, Sean is an active open-source contributor and maintainer, and is the special interest group lead for TensorFlow Addons. During this MLSecOps Connect session, Sean fielded questions from the community related to security for AI & machine learning (ML), including the importance of ML model scanning and how to get started with scanning models. Explore with us: 1. What are some solutions for model scanning? 2. How can enterprises protect their AI models from insider threat? 3. Which three design principles are the most important for secure AI infrastructures? 4. What are some of the security threats we can't use automated scans for? 5. Which resources does Sean use and recommend to advance personal knowledge in the context of AI security? 6. How can someone design a zero trust approach for the current large language model (LLM) architectures? 7. What are some recommendations for aligning best [AI security] practices and standards in a palatable, won't-slow-down-the-business way? 8. Does Protect AI provide both Red Teaming and Guardrails for LLMs? 9. What types of attacks does Protect AI's model scanning tool cover? Are the attacks domain specific (e.g., attacks on text vs image) or generic? Once the model vulnerabilities are detected, what defenses are available? 10. How can we follow a shift left approach for model security? And more!
Sean Morgan · Aug 29th, 2024
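For community members asking how to get started with model scanning (questions 1 and 10 above), here is a minimal sketch of the core idea behind scanners for pickle-based model formats. It is a toy illustration under stated assumptions, not Protect AI's actual implementation; for real work, a maintained tool such as Protect AI's open-source modelscan covers far more formats and attack patterns. The sketch uses Python's standard-library pickletools to walk a serialized file's opcodes and flag the ones that can trigger code execution at load time; the SUSPICIOUS_OPCODES deny list is an illustrative assumption, not an exhaustive rule set.

```python
import pickletools
import sys

# Pickle opcodes that can import callables or invoke them during
# deserialization -- the mechanism behind most malicious model files.
# Illustrative deny list only; real scanners use far richer rule sets.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path: str) -> list[str]:
    """Return findings for suspicious opcodes in a pickle file."""
    with open(path, "rb") as f:
        data = f.read()
    findings = []
    # genops yields (opcode, argument, byte offset) for each pickle opcode.
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPCODES:
            findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings

if __name__ == "__main__":
    results = scan_pickle(sys.argv[1])
    if results:
        print("Potentially unsafe pickle; review before loading:")
        print("\n".join(results))
        sys.exit(1)
    print("No flagged opcodes (absence of findings is not proof of safety).")
```

Because a pickle payload executes the moment it is deserialized, a check like this has to run before pickle.load (or a framework loader built on it) ever touches an untrusted artifact; running scans at that point in the pipeline is exactly the "shift left" idea raised in question 10.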
Scott M. Giordano & Esq. · Jul 25th, 2024
Welcome to the new online series, "MLSecOps Connect: Ask the Experts," where community members can hear their own questions answered by a variety of insightful guest speakers. We're honored to welcome our next guest, Scott M. Giordano, Esq., to the show!

Scott is a US-based attorney with more than 25 years of legal, technology, and risk management consulting experience. An IAPP Fellow of Information Privacy, a Certified Information Systems Security Professional (CISSP), a Certified Cloud Security Professional (CCSP), and an AI Governance Professional (AIGP), Scott most recently served as General Counsel of Spirion LLC, a privacy technology firm, where he was also the company's subject matter expert on multinational data protection and its intersection with technology, export compliance, internal investigations, information governance, and risk management. He is a member of the bar in Washington State, California, and the District of Columbia.

Scott joins us to field questions from the MLSecOps Community on topics like AI regulations, the impact of Executive Orders on cybersecurity posture, court endorsements of cybersecurity standards, AI cybersecurity resources, and more. Explore with us:

- Are there cybersecurity laws or regulations that apply to AI?
- How does Scott foresee the regulatory landscape evolving around AI and cybersecurity, both in the US and globally?
- What changes in cybersecurity law are most important for InfoSec/AppSec professionals to be aware of?
- Are there already precedents in the context of AI security and/or privacy, i.e., early attempts at regulation that have set the stage for what we're seeing now?
- From Scott's legal perspective, how does he envision an act like California SB 1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) being enforced if enacted, and what are the potential consequences of violating it? How might it impact the pace of innovation in the open-source community? How likely is it that other US states and/or Congress will move to enact something similar to SB 1047?
- What do InfoSec and AppSec professionals need to know about the EU AI Act?
- Once personal data is present in a machine learning model, it can by definition no longer be completely removed. How can this be handled? Should it simply be discouraged entirely, with Retrieval-Augmented Generation (RAG) architectures used instead?
- What's the best way to stay updated on all of the new AI regulations that seem to be sprouting from the ground?
- What are some recommended AI governance frameworks?

Thanks for watching! Find more MLSecOps events and resources, and get involved with the community at https://community.mlsecops.com.
# AI Risk
# AI Security
# Cybersecurity
# Governance, Risk, & Compliance
# EU AI Act
# CA SB 1047
31:27
Ian Swanson · Jun 27th, 2024
Join us for the first in a new online series, "MLSecOps Connect: Ask the Experts," where community members can hear their own questions answered by a variety of insightful guest speakers. Kicking things off, our first esteemed speaker is Ian Swanson, Co-founder and CEO of Protect AI. Ian joined us to field community member questions about all things MLSecOps and security for AI & machine learning. Find future virtual and in-person MLSecOps events to attend in real-time at https://community.mlsecops.com/home/events.
# AI Security
# AI Risk
# MLSecOps
# Model Scanning
# Model Provenance
# AI-SPM
# AI Agents
# AI/ML Red Teaming
# LLM
40:27