Really interesting paper exploring adversarial inputs to ML models ("Adversarial Examples Are Not Bugs, They Are Features", Ilyas et al. 2019): https://arxiv.org/abs/1905.02175
They conclude:
* Adversarial vulnerability is a property of the input data, not of the training process
* You can even train a model on non-robust features alone and obtain a model that works well on the original input data! (A toy sketch of this follows below.)
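The second point is the wild one. Here's a deliberately simplified 2-D sketch of the flavor of that experiment, using synthetic data and a linear model with scikit-learn. None of this is the paper's actual setup (they use image datasets and deep networks); it just illustrates the mechanism: nudge each point toward a *random* target class, relabel it with that class, train a fresh model on the mislabeled data, and it still predicts the original labels on clean data well above chance.

```python
# Toy sketch of the "train on mislabeled, perturbed data" experiment.
# Synthetic 2-D data and a linear model -- illustrative assumptions
# only, not the paper's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
y = rng.integers(0, 2, n)                  # true labels in {0, 1}
signal = 2 * y - 1                         # +/-1 class signal
X = np.column_stack([
    1.0 * signal + rng.normal(0, 1, n),    # strongly predictive feature
    0.2 * signal + rng.normal(0, 1, n),    # weakly predictive feature
])

# A standard classifier trained on clean data gives us a direction to
# perturb along (for a linear model, simply the weight vector).
clf = LogisticRegression().fit(X, y)
w = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Nudge every point a small step toward a *random* target class t and
# relabel it t. The new labels look like noise relative to y.
t = rng.integers(0, 2, n)
X_adv = X + 0.5 * np.outer(2 * t - 1, w)

# A fresh model trained on the mislabeled data still predicts the
# original labels on clean inputs well above chance (~0.85 here),
# because the perturbations carry genuinely predictive directions.
clf2 = LogisticRegression().fit(X_adv, t)
print("accuracy on clean data:", clf2.score(X, y))
```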
The games console market is fascinating: there's an incentive to *not* provide upgraded models.
You want the guarantee that a game for console X just works on every X ever purchased.
E.g. the Switch OLED has a bigger screen and a better CPU than the original, but the CPU is downclocked to match the original Switch.
"Example Driven Development" using Glamorous and Pharo Smalltalk: https://medium.com/feenk/an-example-of-example-driven-development-4dea0d995920
Tests that return values and compose are a really interesting model. Composition establishes structure among the tests and shows which test failure is the most 'fundamental'.
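A minimal sketch of the idea in Python (not the Glamorous Toolkit API; the account example is made up): each example asserts something, returns the value it built, and downstream examples consume that value, so a failure in `empty_account` is strictly more fundamental than a failure in anything built on top of it.

```python
# Examples that return values and compose: the call graph is the
# dependency structure, so the deepest failing example is the root cause.
from dataclasses import dataclass

@dataclass
class Account:
    balance: int = 0
    def deposit(self, amount: int) -> "Account":
        return Account(self.balance + amount)

def empty_account() -> Account:
    acct = Account()
    assert acct.balance == 0
    return acct                               # feeds downstream examples

def deposit_increases_balance() -> Account:
    acct = empty_account().deposit(100)       # composes with the example above
    assert acct.balance == 100
    return acct

def two_deposits_accumulate() -> Account:
    acct = deposit_increases_balance().deposit(50)
    assert acct.balance == 150
    return acct

if __name__ == "__main__":
    two_deposits_accumulate()
    print("all examples passed")
```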
I really like the MELPA model of packaging directly from git. It solves the problem of forgetting to release something -- just merge a PR and you're done.
It also makes version number bumps much less important.
In a statically typed language you could go even further and detect breaking changes automatically, by diffing the exported API between versions.
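A sketch of what that could look like (hypothetical module names, and runtime introspection standing in for what a static type checker would do better): diff the public functions of two versions and flag removals or signature changes as breaking, so the major version bump can be derived rather than remembered.

```python
# Hypothetical sketch: derive "is this a breaking change?" by diffing
# a module's public API between two checkouts. Not a real MELPA or
# packaging tool; mylib_v1 / mylib_v2 below are placeholder names.
import importlib
import inspect

def public_api(module):
    """Map each public function name to its signature string."""
    return {
        name: str(inspect.signature(obj))
        for name, obj in vars(module).items()
        if callable(obj) and not name.startswith("_")
    }

def breaking_changes(old, new):
    """Removed functions or changed signatures are breaking;
    brand-new functions are not."""
    old_api, new_api = public_api(old), public_api(new)
    removed = set(old_api) - set(new_api)
    changed = {n for n in old_api.keys() & new_api.keys()
               if old_api[n] != new_api[n]}
    return removed | changed

# Usage: import the same package from two checkouts and compare.
# if breaking_changes(importlib.import_module("mylib_v1"),
#                     importlib.import_module("mylib_v2")):
#     print("major version bump required")
```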