MLSecOps Community

MLSecOps Connect: Ask the Experts - Securing AI/ML with Ian Swanson

Posted Jun 27, 2024 | Views 192
# AI Security
# AI Risk
# MLSecOps
# Model Scanning
# Model Provenance
# AI-SPM
# AI Agents
# AI/ML Red Teaming
# LLM
Ian Swanson
Co-Founder and CEO @ Protect AI

CEO and Co-Founder of Protect AI - Security for ML Systems and AI Applications.

Prior to Protect AI, Swanson was the Amazon Web Services Worldwide Leader for Artificial Intelligence and Machine Learning.

Prior to Amazon, Swanson was Vice President of Machine Learning at Oracle. In this role, he oversaw the strategy for Oracle’s Artificial Intelligence and Machine Learning products.

Prior to Oracle, Swanson was CEO and Founder of DataScience.com, which was acquired by Oracle in May 2018. DataScience.com provided an industry-leading enterprise data science platform that combined the tools, libraries, and languages data scientists loved with the infrastructure and workflows their organizations needed.

Earlier in his career, Swanson was an executive at American Express, Sprint, and CEO of Sometrics. Sometrics launched the industry's first global virtual currency platform in 2008 and was acquired by American Express in 2011. That platform -- for which he earned a patent -- managed more than 3.3 trillion units of virtual currency and served an online audience of 250 million in more than 180 countries.

A sought-after speaker and expert on digital transformation, data science, big data and performance-based analytics, Swanson actively advises Fortune 500 companies and invests in leading start-ups.

SUMMARY

Join us for the first in a new online series, "MLSecOps Connect: Ask the Experts," where community members can hear their own questions answered by a variety of insightful guest speakers.

Kicking things off, our first esteemed speaker is Ian Swanson, Co-founder and CEO of Protect AI. Ian joined us to field community member questions about all things MLSecOps and security for AI & machine learning.

Find future virtual and in-person MLSecOps events to attend in real-time at https://community.mlsecops.com/home/events.

TRANSCRIPT

Learn:

  • Why Ian started a company (Protect AI) focused on the security of artificial intelligence.
  • What exactly is security for AI & ML? What are we securing in particular?
  • What risks do ML models face despite being deployed in an encrypted format using AES-256 encryption, where the model file is decrypted during loading by the ML framework (e.g., TensorFlow C++ API)?
  • Which roles will play a major role in the future? AI Officer, AI Security roles?
  • How can SecOps shops get ahead of ML/AI issues while not being in the loop on emerging projects in their organizations? How can we bring the concept of MLSecOps into the conversation without an invite to the table, especially when security is often seen as an inhibitor to innovation or development?
  • Are MLSecOps roles entry-level, and is there a career roadmap for this position? What projects can I engage in to showcase skills to recruiters? What practical industry tools or business-domain knowledge should I have to be successful at MLSecOps?
  • Is adversarial testing part of cyber response?
  • What is red teaming (for AI)?
  • What is the best source of information to collect vulnerability data on AI/ML, LLM models?
  • Are there MLSecOps/AI security tools available to get started with?
  • What are some techniques employed by Protect AI's LLM Guard for efficient and cost-effective CPU and GPU inference?
  • How will the AI/ML threat landscape change?
  • Why is AI security important?
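One question above asks what risks a model faces even when deployed encrypted (e.g., AES-256), given that the framework decrypts the file at load time. A minimal Python sketch (not from the session, and using the hypothetical class name `MaliciousModel`) illustrates the core issue for pickle-based model formats: deserialization itself can execute attacker-supplied code, so encryption at rest offers no protection once the file is loaded.

```python
import pickle

# Serialized model formats built on pickle can execute code at load
# time. Encrypting the model file at rest does not help: the ML
# framework decrypts and deserializes it, and any embedded code runs.

class MaliciousModel:
    def __reduce__(self):
        # On unpickling, Python calls eval with this string --
        # a stand-in for arbitrary attacker-controlled code.
        return (eval, ("40 + 2",))

payload = pickle.dumps(MaliciousModel())
result = pickle.loads(payload)  # the embedded eval runs here
print(result)  # → 42
```

This is the class of risk that model scanning tools aim to catch: inspecting serialized model files for embedded executable payloads before they are ever loaded.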
