MLSecOps Community

Key Insights for CISOs: Securing AI in Your Organization

Posted Mar 17, 2025 | Views 105
# AI Risk
# AI Security
# Generative AI
# Threat Model
# MLSecOps
# Incident Response
# Governance, Risk, & Compliance

SPEAKER

Diana Kelley
CISO @ Protect AI

Diana Kelley is the Chief Information Security Officer (CISO) for Protect AI. She also serves on the boards of WiCyS, The Executive Women’s Forum (EWF), InfoSec World, CyberFuture Foundation, TechTarget Security Editorial, and DevNet AI/ML. Diana was Cybersecurity Field CTO for Microsoft, Global Executive Security Advisor at IBM Security, GM at Symantec, VP at Burton Group (now Gartner), a Manager at KPMG, CTO and co-founder of SecurityCurve, and Chief vCISO at SaltCybersecurity.

Her extensive volunteer work has included serving on the ACM Ethics & Plagiarism Committee, as a Cybersecurity Committee Advisor at CompTIA, as CTO and Board Member at Sightline Security, as Advisory Board Chair at WOPLLI Technologies, as an Advisory Council member for the Bartlett College of Science and Mathematics at Bridgewater State University, and on the RSAC US Program Committee.

She is a sought-after keynote speaker, the host of BrightTALK’s The (Security) Balancing Act, co-author of the books Practical Cybersecurity Architecture and Cryptographic Libraries for Developers, and an instructor for the LinkedIn Learning classes Security in AI and ML and Introduction to MLSecOps. She has been a lecturer in Boston College's Master's program in cybersecurity, one of AuditBoard's Top 25 Resilient CISOs in 2024, a 2023 Global Cyber Security Hall of Fame inductee, the EWF 2020 Executive of the Year and EWF Conference Chair from 2021 to the present, an SC Media Power Player, and one of Cybersecurity Ventures' 100 Fascinating Females Fighting Cybercrime.


SUMMARY

As AI technologies rapidly evolve, understanding the security risks and best practices for safeguarding AI systems is crucial for CISOs.

Diana Kelley, cybersecurity expert and CISO of Protect AI, shares practical advice on how to approach AI security, manage risk, and strengthen your security strategy in the age of AI.


TRANSCRIPT

This is a recording of the "MLSecOps Connect: Ask the Experts" session we held on March 12, 2025 with Diana Kelley.

Explore with us:

- AI is popping up everywhere, from hospitals to factories. Are there any industries that seem behind on AI security and need to catch up? (02:17)

- For organizations that are just starting to adopt AI, what’s a rookie mistake to avoid when it comes to security? (04:11)

- Protecting data without encryption, especially live data. Some data analytics people talk about stochastic masking and similar techniques. Is encryption the go-to for sensitive info, or are there other options security teams should be exploring? (06:38)

- In industries like finance and healthcare, compliance talk around AI is starting to feel pretty intense. How can CISOs try to keep up and preserve their sanity at the same time? (09:53)

- Noticing any red flags in AI compliance that most companies don't seem to be looking at yet? (13:40)

- AI regulations and how rapidly they change: do they hinder innovation? (17:19)

- Security risks of large language models (LLMs). Are there any threats here that security teams are sleeping on? (19:20)

- Data poisoning attacks on AI: are those real problems right now, or is that just hype? (24:18)

- Any real-world stories about worrisome AI security breaches? What went wrong, and could it have been avoided? (26:56)

- How much of AI security is really about getting people to stop doing dumb stuff versus just fixing the tech itself? (29:05)

- "Worried about insider threats to my AI ecosystem, but I don’t even know WHAT to be worried about." What are the biggest insider threats to AI? (31:13)

- What AI security aspects does Protect AI focus on? (34:19)

- What are the top 3 measures to build an MLSecOps program? (39:10)

- Are ML/AI-BOMs now in practice in the industry? How could they be brought more broadly into companies? (41:42)

- Security considerations for agentic AI systems (43:40)
