From the course: Security Risks in AI and Machine Learning: Categorizing Attacks and Failure Modes
Physical domain: 3D adversarial objects
- [Instructor] Think about autonomous cars, manufacturing robots on a shop floor, agricultural robots like automated tractors, and laboratory automation systems. What all of these systems have in common is that they operate in and interact with the physical world. Physical domain attacks include attempts to mislead an AI system through physical modes such as sensory input from vision systems, tactile sensors, audio, or other physical or environmental signals. Vision-based attacks, sometimes referred to as machine learning optical illusions, are the most well-studied attacks of this type. 3D image classification differs from 2D because the item being classified can be viewed from so many different angles in three-dimensional space, unlike 2D, where we've only got the two dimensions. While perturbations crafted in 2D space may not fool a 3D classifier, a perturbation created specifically for 3D just might. So to test the…
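The sketch below is not from the course; it is a minimal illustration of the idea the instructor describes: instead of crafting a perturbation for a single 2D view, you optimize it over many random viewpoints and lighting changes so it keeps fooling the classifier from different angles (the "Expectation over Transformation" approach). The model choice, transform set, and hyperparameters are assumptions for illustration only.

```python
# Minimal sketch (assumed PyTorch/torchvision setup): a viewpoint-robust,
# targeted adversarial perturbation. Random 2D transforms stand in for the
# changing viewpoints a 3D adversarial object would be seen from.
import torch
import torchvision.models as models
import torchvision.transforms as T

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def eot_perturbation(image, target_label, steps=50, lr=0.01, epsilon=0.05):
    """image: 3xHxW float tensor in [0, 1]; returns a small perturbation that
    pushes the classifier toward target_label across many random views."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    # Stand-in for different physical viewpoints: rotation, crop/scale, lighting.
    random_view = T.Compose([
        T.RandomRotation(30),
        T.RandomResizedCrop(image.shape[-1], scale=(0.7, 1.0)),
        T.ColorJitter(brightness=0.3),
    ])
    for _ in range(steps):
        # Sample a batch of random views of the perturbed image.
        views = torch.stack([random_view(torch.clamp(image + delta, 0, 1))
                             for _ in range(8)])
        # Targeted attack: minimize loss toward the attacker's chosen label.
        loss = torch.nn.functional.cross_entropy(
            model(views), torch.full((8,), target_label))
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation small so the object still looks normal to humans.
        delta.data.clamp_(-epsilon, epsilon)
    return delta.detach()
```

A perturbation optimized this way is what makes physically printed or 3D-printed adversarial objects possible: because it was trained to survive rotations, rescaling, and lighting shifts, it does not fall apart the moment the camera angle changes.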
Contents
- Perturbation attacks and malicious input (7m 26s)
- Poisoning attacks (6m 54s)
- Reprogramming (3m 30s)
- Physical domain: 3D adversarial objects (3m 55s)
- Supply chain attacks (4m 4s)
- Model inversion (5m 19s)
- System manipulation (4m 49s)
- Membership inference and model stealing (4m 26s)
- Backdoors and existing exploits (3m 45s)