MLSecOps Community
The Trojan Horses Haunting Your AI Models

Posted Jun 12, 2024 | Views 217
# Supply Chain Vulnerability
# Model Scanning
# AI-BOM
# Data Poisoning Attack
SPEAKERS
William Armiros
Senior Software Engineer @ Protect AI

William is a Senior Software Engineer at Protect AI, where he is building systems to help ML engineers and data scientists introduce security into their MLOps workflows effortlessly. Previously, he led a team at Amazon Web Services (AWS) working on application observability and distributed tracing. During that time he contributed to the industry-wide OpenTelemetry standard and helped lead the effort to release an AWS-supported distribution of it. He is passionate about making the observability and security of AI-enabled applications as seamless as possible.

Sam Washko
Senior Software Engineer @ Protect AI

Sam Washko is a senior software engineer passionate about the intersection of security and software development. She works for Protect AI developing tools for making machine learning systems more secure and is the lead engineer on ModelScan, an open source tool for scanning ML model files for attacks. She holds a BS in Computer Science from Duke University, and prior to joining Protect AI, she was part of the Azure SQL Security Feature Team and Blue Team at Microsoft, designing cryptography and telemetry systems. She has a passion for connecting theory and problem solving with engineering to produce solutions that make computing more secure for everyone.

SUMMARY

In the fast-moving world of Artificial Intelligence (AI) and Machine Learning (ML), ensuring model and data integrity is essential. At the annual RSA Conference, Protect AI's Will Armiros and Sam Washko gave a lightning talk on ML supply chain vulnerabilities and defenses.

Visit the Protect AI blog to learn more: https://protectai.com/blog/the-trojan-horses-haunting-your-ai-models

