It's a bittersweet feeling when you trace a crash back to a "TODO: make this robust against X" comment you wrote.
miniblog.
Related Posts
Invited talk: Safety Verification for Deep Neural Networks: https://popl20.sigplan.org/details/VMCAI-2020-papers/22/Safety-and-Robustness-for-Deep-Learning-with-Provable-Guarantees
How do we verify that a DNN is robust to adversarial attacks, and how do we quantify safety? This approach extracts image features (SIFT) and verifies robustness against all perturbations within a region around them.
commonmark.js has a lovely feature that's rare among markdown renderers: it exposes an AST! https://github.com/commonmark/commonmark.js/#usage
This makes it much easier to extend or modify the syntax in a robust way.
Really interesting paper exploring adversarial inputs to ML models: https://arxiv.org/abs/1905.02175
They conclude:
* Adversarial vulnerability is a property of the input data (highly predictive but non-robust features), not an artifact of training
* You can even train a model on non-robust features alone and obtain a model that generalizes well on the original input data!
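A toy illustration of the underlying mechanics (my own sketch, not the paper's experiment): for a linear classifier sign(w·x), the worst-case L∞ perturbation nudges every coordinate against sign(wᵢ), so many individually tiny, "non-robust" coordinates add up to flip the prediction. All the numbers below are made up for illustration:

```javascript
const dot = (a, b) => a.reduce((sum, ai, i) => sum + ai * b[i], 0);

const w = [0.9, -0.7, 0.5, -0.3, 0.8]; // hypothetical model weights
const x = [0.2, -0.1, 0.1, 0.0, 0.1];  // an input with a small positive margin
const eps = 0.2;                        // L∞ perturbation budget

// Push each coordinate by eps against the sign of its weight:
// this subtracts eps * sum(|w_i|) from the margin, the L∞ worst case.
const xAdv = x.map((xi, i) => xi - eps * Math.sign(w[i]));

console.log(dot(x, w) > 0);    // original prediction: positive
console.log(dot(xAdv, w) > 0); // perturbed prediction: flipped negative
```

No single coordinate moved much, yet the prediction flips — which is why features that are individually weak but consistently predictive ("non-robust features" in the paper's terms) are exactly the ones an adversary can exploit.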
