Invited talk: Safety Verification for Deep Neural Networks: https://popl20.sigplan.org/details/VMCAI-2020-papers/22/Safety-and-Robustness-for-Deep-Learning-with-Provable-Guarantees How do we verify that a DNN is robust to adversarial attacks, and how do we quantify safety? The approach extracts image features (SIFT) and verifies robustness against all perturbations within a region around those features.
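
A rough sketch of what "verify all perturbations within a region" can look like in practice: interval bound propagation over a tiny ReLU network, checking that every input in an L-infinity ball around an input keeps the predicted class. This is a generic verification technique, not the feature-guided method from the talk, and the network sizes, weights, and epsilon below are made up for illustration.

```python
# Minimal sketch (assumptions: generic IBP, not the talk's SIFT/feature-based method).
# Checks that all inputs within an L-infinity ball of radius eps around x
# are classified the same as x, by propagating interval bounds through the net.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map x -> W @ x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def forward(x, layers):
    """Concrete forward pass: affine layers with ReLU on all but the last."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0)
    return x

def verify_region(x, eps, layers):
    """Sound-but-incomplete check: True means no perturbation within eps
    (L-infinity) of x can change the predicted class."""
    pred = int(np.argmax(forward(x, layers)))
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    worst_other = max(hi[j] for j in range(len(hi)) if j != pred)
    return lo[pred] > worst_other

# Toy usage with random weights (purely illustrative).
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 4)), rng.standard_normal(8)),
          (rng.standard_normal((3, 8)), rng.standard_normal(3))]
x = rng.standard_normal(4)
print(verify_region(x, eps=0.01, layers=layers))
```

Because interval bounds over-approximate the reachable outputs, a True answer is a proof of robustness for the region, while a False answer is inconclusive.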