Security Risks in AI and Machine Learning: Categorizing Attacks and Failure Modes
With Diana Kelley
Duration: 1h 46m
Skill level: Intermediate
Released: 11/7/2025
Course details
Like any software or process, machine learning (ML) is vulnerable to attack. To protect a system, you must first understand where and how it is vulnerable. In this course, Diana Kelley shows experienced threat modelers how ML shifts the threat modeling focus, both because of its potential impact and because of the vast amount of data that ML systems need to fuel their operation. Diana shows how ML can fail in a number of ways when under attack from adversaries, and how design flaws can also lead to operational failure, data leakage, and other security and privacy risks.
Learn the importance of building resilient ML, the impact of failing to build security into ML, and where and how ML is vulnerable both to intentional adversaries and to design and implementation issues. Plus, discover some of the most effective approaches and techniques for building robust and resilient ML.
Earn a shareable certificate
Share what you’ve learned, and stand out as a professional in your desired industry with a certificate showcasing the knowledge you gained from the course.
LinkedIn Learning
Certificate of Completion
- Showcase it on your LinkedIn profile under the “Licenses & Certifications” section
- Download or print it as a PDF to share with others
- Share it as an image online to demonstrate your skill
What’s included
- Learn on the go: access on tablet and phone