MLSecOps Community
+00:00 GMT
Featured
# Contest

2024 MLSecOps Community Appreciation Book Giveaway



Securing AI: Red Teaming & Attack Strategies for Machine Learning Systems

Johann Rehberger


Collections

All Collections
MLSecOps Connect: Ask the Experts
MLSecOps Connect: Ask the Experts
5 Items
MLSecOps Podcast
Learning Courses
AI Threat Research

All Content

Popular topics
# MLSecOps
# AI Security
# Supply Chain Vulnerability
# AI Risk
# Adversarial ML
# Governance, Risk, & Compliance
# Generative AI
# LLM
# Model Provenance
# Trusted AI
# AdvML
# AI/ML Red Teaming
# Threat Research
# AI/ML Security Vulnerabilities
# AI Impact
# Large Language Model
# Prompt Injection
# Model Scanning
# AI Bias
# Data Science
All
Dr. Cari Miller shares insights from her work with the AI Procurement Lab regarding frameworks and strategies needed to mitigate risks in AI acquisitions.
# AI Audit
# AI Bias
# AI Risk
# Cari Miller
# Ethical AI
# Procurement
# Governance, Risk, & Compliance
# Generative AI
# Trusted AI
Johann Rehberger
Johann Rehberger · Nov 1st, 2024
Welcome to "MLSecOps Connect: Ask the Experts," an educational live stream series from the MLSecOps Community where attendees have the opportunity to hear their own questions answered by a variety of insightful guest speakers.
# AI Security
# AI/ML Red Teaming
# Ethical Hacking
# Pen Testing
# Prompt Injection
# Threat Research
44:13
Join Nicole Nichols from PANW on the MLSecOps Podcast as she discusses the present and future of AI security & the growth mindset essential for cybersecurity professionals.
# AI Agents
# AI Security
# Cybersecurity
# Backdoor Attack
# LLM
# Generative AI
This report contains 34 vulnerabilities, including 3 critical and 18 high severity, found by the community at huntr.com in OSS AI/ML.
# huntr
# Protect AI
# Vulnerability Reporting
# Supply Chain Vulnerability
Protect AI and Hugging Face Partner to Secure the Machine Learning Supply Chain
# Model Scanning
# Model Security
# Supply Chain Vulnerability
Looking to get into AI/ML bug bounty hunting? To help, the team at huntr.com put together a comprehensive guide to get you started.
# AI Security
# AI/ML Red Teaming
# AI/ML Security Vulnerabilities
# Bug Bounty
# huntr
# Supply Chain Vulnerability
In the fast-moving world of Artificial Intelligence (AI) and Machine Learning (ML), ensuring model and data integrity is a must. Sam Washko and Will Armiros (Sr. Software Engineers, Protect AI) joined our MLSecOps Community Meetup on September 10, 2024 to talk about ML supply chain vulnerabilities and defenses. Some of their key insights on model serialization attacks, data poisoning, and the bleeding-edge tools developed to keep your AI safe are included below.
# MLSecOps
# Adversarial ML
# AI Security
# Data Poisoning
# Model Security
# Supply Chain Vulnerability
# Threat Research
22:59
Caleb Sima joins us to discuss security considerations for building and using AI, drawing on his 25+ years of cybersecurity experience.
# AI Agents
# AI Risk
# AI Security
# AI-BOM
# AI/ML Security Vulnerabilities
# ChatGPT
# Generative AI
# LLM
# Model Provenance
# Retrieval-Augmented Generation
# Supply Chain Vulnerability
This report contains 20 vulnerabilities found by the community at huntr.com in OSS AI/ML.
# huntr
# Protect AI
# Vulnerability Reporting
# Supply Chain Vulnerability
Explore how prompt engineering and prompt hacking are reshaping AI security, with insights on safeguarding generative AI in this MLSecOps Podcast episode.
# AI Security
# Generative AI
# LLM
# Prompt Injection
# Threat Research
Popular
Securing AI/ML with Ian Swanson
Ian Swanson