Invited talk: Safety Verification for Deep Neural Networks: https://popl20.sigplan.org/details/VMCAI-2020-papers/22/Safety-and-Robustness-for-Deep-Learning-with-Provable-Guarantees
How do we verify that a DNN is robust to adversarial attacks? How do we quantify safety? This approach extracts image features (SIFT) and verifies robustness against all perturbations within a region around them.
Go has an elegant approach to defining example functions, which are shown in the docs as a runnable `main()` program together with their expected output: https://go.dev/blog/examples
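A minimal sketch of how I understand the mechanism from that post (the `Reverse` function and package name here are placeholders, and in a real package `Reverse` would live in a non-test file so it shows up in the docs): a function named `ExampleXxx` in a `_test.go` file is compiled and run by `go test`, and its stdout is checked against the trailing `// Output:` comment.

```go
// reverse_test.go -- a testable example. `go test` runs ExampleReverse
// and fails if its output differs from the "Output:" comment; godoc
// renders the example body and expected output alongside Reverse.
package reverse

import "fmt"

// Reverse returns s with its runes in reverse order.
// (Placed here only to keep this sketch a single compilable file.)
func Reverse(s string) string {
	r := []rune(s)
	for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
		r[i], r[j] = r[j], r[i]
	}
	return string(r)
}

func ExampleReverse() {
	fmt.Println(Reverse("hello"))
	// Output: olleh
}
```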
I'm playing with DOT output for debugging syntax trees from difftastic. Here's an F# snippet, its `Debug` representation, and the DOT rendered as an image.
I'm pleased with the information density of the graphic, but we'll see how often I end up using it.
In praise of Tcl, and reflecting on syntax features for a good command shell: https://yosefk.com/blog/i-cant-believe-im-praising-tcl.html