MLSecOps Community
Featured
7:36
# MLSecOps
# AI Risk
# AI Security

What is MLSecOps?

Diana Kelley

Collections

All Collections
MLSecOps Connect: Ask the Experts
3 Items
MLSecOps Podcast
36 Items
Learning Courses
1 Item

All Content

Popular topics
# MLSecOps
# AI Security
# Supply Chain Vulnerability
# Adversarial ML
# AI Risk
# Governance, Risk, & Compliance
# Model Provenance
# AdvML
# Generative AI
# Trusted AI
# LLM
# AI Impact
# Large Language Model
# Data Science
# AI/ML Red Teaming
# Explainability
# AI Bias
# AI/ML Security Vulnerabilities
# Application Security
# ChatGPT
Sean Morgan · Aug 29th, 2024
Welcome to "MLSecOps Connect: Ask the Experts," an educational live stream series from the MLSecOps Community where attendees have the opportunity to hear their own questions answered by a variety of insightful guest speakers.
39:34
Learn about AI regulation topics like the EU Artificial Intelligence Act, generative AI risk assessment, and challenges related to organizational compliance with upcoming AI regulations.
# AI Risk
# AI Bias
# Generative AI
# Governance, Risk, & Compliance
# Explainability
# EU AI Act
Scott M. Giordano, Esq. · Jul 25th, 2024
Welcome to the new online series, "MLSecOps Connect: Ask the Experts," where community members can hear their own questions answered by a variety of insightful guest speakers. We're honored to welcome our next guest, Scott M. Giordano, Esq., to the show!

Scott is an attorney based in the USA with more than 25 years of legal, technology, and risk management consulting experience. An IAPP Fellow of Information Privacy, a Certified Information Systems Security Professional (CISSP), a Certified Cloud Security Professional (CCSP), and an AI Governance Professional (AIGP), Scott most recently served as General Counsel of Spirion LLC, a privacy technology firm, where he was also the company’s subject matter expert on multinational data protection and its intersection with technology, export compliance, internal investigations, information governance, and risk management. He is a member of the bar in Washington State, California, and the District of Columbia.

Scott joins us to field questions from the MLSecOps Community regarding topics like AI regulations, the Executive Order's impact on cybersecurity posture, court endorsements of cybersecurity standards, AI cybersecurity resources, and more.

Explore with us:
- Are there cybersecurity laws or regulations that apply to AI?
- How does Scott foresee the regulatory landscape evolving with respect to AI and cybersecurity, both in the US and globally?
- What changes in cybersecurity law are most important for InfoSec/AppSec professionals to be aware of?
- Are there already precedents in the context of AI security and/or privacy, i.e., any early attempts at regulation that have set the stage for what we're seeing now?
- From Scott's legal perspective, how does he envision an act like California SB 1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) being enforced if enacted, and what are the potential consequences for violating it? How might it impact the pace of innovation in the open source community? How likely is it that other US states and/or Congress will move to enact something similar to SB 1047?
- What do InfoSec and AppSec professionals need to know about the EU AI Act?
- Once personal data is present in a machine learning model, it can by definition no longer be completely removed. How can this be handled? Should it simply be discouraged entirely, with Retrieval-Augmented Generation (RAG) architectures used instead?
- What's the best way to stay updated on all of the new AI regulations that seem to be sprouting from the ground?
- What are some recommended AI governance frameworks?

Thanks for watching! Find more MLSecOps events and resources, and get involved with the community at https://community.mlsecops.com.
# AI Risk
# AI Security
# Cybersecurity
# Governance, Risk, & Compliance
# EU AI Act
# CA SB 1047
31:27
Dan McInerney & Marcello Salvati · Jul 3rd, 2024
In the fourth chapter of navigating AI/ML security concerns, let’s explore the lightning talk given by Protect AI Threat Researchers Dan McInerney and Marcello Salvati at the 2024 RSA Conference on the critical roles and responsibilities of an AI Red Team, and why it is indispensable for modern cybersecurity. As Artificial Intelligence (AI) and Machine Learning (ML) continue to revolutionize industries, a new type of cybersecurity specialist is emerging. Enter the AI Red Team: the experts bridging the gap between traditional pen testing and the unique vulnerabilities present in AI systems.
# AI/ML Red Teaming
# Supply Chain Vulnerability
# Model Scanning
# Pen Testing
6:16
Co-Founder and CISO of Weights & Biases, Chris Van Pelt, joins the MLSecOps Podcast to discuss a range of topics, including the history of how W&B was formed, real-world ML and GenAI security concerns...
# MLSecOps
# AI Security
# MLOps
# Generative AI
# Data Science
Ian Swanson · Jun 27th, 2024
Join us for the first in a new online series, "MLSecOps Connect: Ask the Experts," where community members can hear their own questions answered by a variety of insightful guest speakers. Kicking things off, our first esteemed speaker is Ian Swanson, Co-founder and CEO of Protect AI. Ian joined us to field community member questions about all things MLSecOps and security for AI & machine learning. Find future virtual and in-person MLSecOps events to attend in real-time at https://community.mlsecops.com/home/events.
# AI Security
# AI Risk
# MLSecOps
# Model Scanning
# Model Provenance
# AI-SPM
# AI Agents
# AI/ML Red Teaming
# LLM
40:27
In the third chapter of navigating AI/ML security concerns, let’s explore the lightning talk given by Protect AI’s Co-Founder and President, Daryan Dehghanpisheh, at last month’s annual RSA Conference. During the talk, he introduced the AI Bill of Materials (AIBoM), a concept that helps facilitate the adoption of AI security and transforms how businesses manage and secure their AI assets. Visit the Protect AI blog to learn more: https://protectai.com/blog/revolutionizing-ai-security-with-aibom
# AI-BOM
# Model Provenance
9:44
In the second chapter of navigating AI/ML security concerns, let’s explore Adam Nygate’s lightning talk at last month’s annual RSA Conference on Vulnerabilities in the AI supply chain. In this video, Adam sheds light on the unique vulnerabilities in the AI supply chain and highlights how they differ from traditional software security risks. With AI revolutionizing industries, understanding and fortifying this supply chain is more important than ever. Visit the Protect AI blog to learn more: https://protectai.com/blog/vulnerabilities-in-ai-supply-chain
# Supply Chain Vulnerability
16:21
Next on the MLSecOps Podcast, we have the honor of highlighting one of our MLSecOps Community members and a Dropbox™ Red Teamer, Adrian Wood.
# Adversarial ML
# AI/ML Red Teaming
# OffSec
In this episode, host Neal Swaelens (EMEA Director of Business Development, Protect AI) catches up with Ken Huang, CISSP, at RSAC 2024 to talk about security for generative AI.
# MLSecOps
# Generative AI
# LLM
# Large Language Model
Popular
Securing AI/ML with Ian Swanson
Ian Swanson