MLSecOps Community
Featured
22:59
# MLSecOps
# Adversarial ML
# AI Security
# Data Poisoning
# Model Security
# Supply Chain Vulnerability
# Threat Research

Trojan Model Hubs: Hacking the ML Supply Chain and Defending Yourself from Threats

Sam Washko & William Armiros

Looking to get into AI/ML bug bounty hunting? To help, the team at huntr.com put together a comprehensive guide to get you started.
# AI Security
# AI/ML Red Teaming
# AI/ML Security Vulnerabilities
# Bug Bounty
# huntr
# Supply Chain Vulnerability
Explore how prompt engineering and prompt hacking are reshaping AI security, with insights on safeguarding generative AI in this MLSecOps Podcast episode.
# AI Security
# Generative AI
# LLM
# Prompt Injection
# Threat Research
Ken Huang · Sep 11th, 2024
Welcome to "MLSecOps Connect: Ask the Experts," an educational live stream series from the MLSecOps Community where attendees have the opportunity to hear their own questions answered by a variety of insightful guest speakers. This is a recording of the session we held on September 11, 2024 with Ken Huang, CISSP.
# AI Agents
# AI Security
# Cybersecurity
# Generative AI
# LLM
# Retrieval-Augmented Generation
53:25
This compilation contains highlights from every episode of Season 2 of the MLSecOps Podcast. Thanks to everyone who has supported this show, including our listeners, hosts, and stellar expert guests!
Sean Morgan · Aug 29th, 2024
Welcome to "MLSecOps Connect: Ask the Experts," an educational live stream series from the MLSecOps Community where attendees have the opportunity to hear their own questions answered by a variety of insightful guest speakers. This is a recording of the session we held on August 28, 2024 with the Chief Architect at Protect AI, Sean Morgan.

In prior roles, Sean led production AI/ML deployments in the semiconductor industry, evaluated adversarial machine learning defenses for DARPA research programs, and most recently scaled customers on interactive machine learning solutions at Amazon Web Services (AWS). In his free time, Sean is an active open-source contributor and maintainer, and is the special interest group lead for TensorFlow Addons.

During this MLSecOps Connect session, Sean fielded questions from the community related to security for AI and machine learning (ML), including the importance of ML model scanning and how to get started with scanning models. Explore with us:

1. What are some solutions for model scanning?
2. How can enterprises protect their AI models from insider threats?
3. Which three design principles are the most important for secure AI infrastructure?
4. What are some of the security threats we can't use automated scans for?
5. Which resources does Sean use and recommend to advance personal knowledge in the context of AI security?
6. How can someone design a zero trust approach for current large language model (LLM) architectures?
7. What are some recommendations for aligning [AI security] best practices and standards in a palatable way that won't slow down the business?
8. Does Protect AI provide both Red Teaming and Guardrails for LLMs?
9. What types of attacks does Protect AI's model scanning tool cover? Are the attacks domain specific (e.g., attacks on text vs. images) or generic? Once model vulnerabilities are detected, what defenses are available?
10. How can we follow a shift-left approach for model security?

And more!
# AI Security
# AI-BOM
# AI-SPM
# AI/ML Red Teaming
# Model Scanning
# Model Security
# Supply Chain Vulnerability
39:34
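As a companion to the discussion above on getting started with model scanning: tools like Protect AI's open-source modelscan flag pickle-serialized models because loading a pickle can execute arbitrary code through its GLOBAL/STACK_GLOBAL and REDUCE opcodes. The sketch below illustrates the core idea only — walking the opcode stream without ever unpickling the file; the blocklist and function names are illustrative assumptions, not modelscan's actual API.

```python
# Minimal sketch of static pickle scanning: inspect the opcode stream for
# dangerous imports WITHOUT loading (unpickling) the file.
# SUSPICIOUS and scan_pickle are illustrative, not any real tool's API.
import pickle
import pickletools

SUSPICIOUS = {
    ("builtins", "eval"), ("builtins", "exec"),
    ("os", "system"), ("posix", "system"), ("nt", "system"),
    ("subprocess", "Popen"),
}

def scan_pickle(data: bytes) -> list:
    """Return suspicious (module, name) imports the pickle would resolve."""
    imports, strings = [], []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":          # protocol <= 3: arg is "module name"
            module, _, name = arg.partition(" ")
            imports.append((module, name))
        elif opcode.name == "STACK_GLOBAL":  # protocol >= 4: operands are the
            if len(strings) >= 2:            # two most recently pushed strings
                imports.append((strings[-2], strings[-1]))
        if isinstance(arg, str):
            strings.append(arg)
    return [i for i in imports if i in SUSPICIOUS]

class Evil:
    # A payload like this runs `eval` the moment the pickle is loaded.
    def __reduce__(self):
        return (eval, ("1 + 1",))

safe = pickle.dumps({"weights": [0.1, 0.2]})
evil = pickle.dumps(Evil())
print(scan_pickle(safe))  # no dangerous imports found
print(scan_pickle(evil))  # flags ('builtins', 'eval')
```

Real scanners cover far more than this (additional serialization formats, larger blocklists, obfuscation handling), but the design choice is the same: treat the model file as untrusted data and analyze it statically rather than loading it.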
Learn about AI regulation topics like the EU Artificial Intelligence Act, generative AI risk assessment, and challenges related to organizational compliance with upcoming AI regulations.
# AI Risk
# AI Bias
# Generative AI
# Governance, Risk, & Compliance
# Explainability
# EU AI Act
Scott M. Giordano, Esq. · Jul 25th, 2024
Welcome to the fresh online series, "MLSecOps Connect: Ask the Experts," where community members can hear their own questions answered by a variety of insightful guest speakers. We're honored to welcome our next guest, Scott M. Giordano, Esq., to the show!

Scott is an attorney based in the USA with more than 25 years of legal, technology, and risk management consulting experience. An IAPP Fellow of Information Privacy, a Certified Information Systems Security Professional (CISSP), a Certified Cloud Security Professional (CCSP), and an AI Governance Professional (AIGP), Scott most recently served as General Counsel of Spirion LLC, a privacy technology firm, where he also served as the company's subject matter expert on multinational data protection and its intersection with technology, export compliance, internal investigations, information governance, and risk management. He is a member of the bar in Washington State, California, and the District of Columbia.

Scott joins us to field questions from the MLSecOps Community regarding topics like AI regulations, Executive Order impact on cybersecurity posture, court endorsements of cybersecurity standards, AI cybersecurity resources, and more. Explore with us:

- Are there cybersecurity laws or regulations that apply to AI?
- How does Scott foresee the regulatory landscape evolving regarding AI and cybersecurity, both in the US and globally?
- What changes in cybersecurity law are most important for InfoSec/AppSec professionals to be aware of?
- Are there already precedents in the context of AI security and/or privacy, i.e., early attempts at regulation that have set the stage for what we're seeing now?
- From Scott's legal perspective, how does he envision an act like California SB 1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) being enforced if enacted, and what are the potential consequences for violating it? How might it impact the pace of innovation in the open source community? How likely is it that other US states and/or Congress will move to enact something similar to SB 1047?
- What do InfoSec and AppSec professionals need to know about the EU AI Act?
- Once personal data is present in a machine learning model, it can by definition no longer be completely removed. How can this be handled? Should it simply be discouraged entirely, with Retrieval-Augmented Generation (RAG) architectures used instead?
- What's the best way to stay updated on all of the new AI regulations that seem to be sprouting from the ground?
- What are some recommended AI governance frameworks?

Thanks for watching! Find more MLSecOps events and resources, and get involved with the community at https://community.mlsecops.com.
# AI Risk
# AI Security
# Cybersecurity
# Governance, Risk, & Compliance
# EU AI Act
# CA SB 1047
31:27
Dan McInerney & Marcello Salvati · Jul 3rd, 2024
In the fourth chapter of navigating AI/ML security concerns, let's explore Protect AI Threat Researchers Dan McInerney and Marcello Salvati's lightning talk from the 2024 RSA Conference on the critical roles and responsibilities of an AI Red Team, and why they are indispensable for modern cybersecurity. As Artificial Intelligence (AI) and Machine Learning (ML) continue to revolutionize industries, a new type of cybersecurity specialist is emerging. Enter the AI Red Team: the experts bridging the gap between traditional pen testing and the unique vulnerabilities present in AI systems.
# AI/ML Red Teaming
# Supply Chain Vulnerability
# Model Scanning
# Pen Testing
6:16
Chris Van Pelt, Co-Founder and CISO of Weights & Biases, joins the MLSecOps Podcast to discuss a range of topics, including the history of how W&B was formed, real-world ML and GenAI security concerns...
# MLSecOps
# AI Security
# MLOps
# Generative AI
# Data Science
Ian Swanson · Jun 27th, 2024
Join us for the first in a new online series, "MLSecOps Connect: Ask the Experts," where community members can hear their own questions answered by a variety of insightful guest speakers. Kicking things off, our first esteemed speaker is Ian Swanson, Co-founder and CEO of Protect AI. Ian joined us to field community member questions about all things MLSecOps and security for AI & machine learning. Find future virtual and in-person MLSecOps events to attend in real-time at https://community.mlsecops.com/home/events.
# AI Security
# AI Risk
# MLSecOps
# Model Scanning
# Model Provenance
# AI-SPM
# AI Agents
# AI/ML Red Teaming
# LLM
40:27