Tried fuzz-testing difftastic this morning (using cargo fuzz), and didn't find any crashes. I guess that's a good thing? I was slightly disappointed.
Judging by the output, I think the tree-sitter parsers were exercised much more heavily than the tree diffing logic.
One fun way of testing new AI models: take an existing codebase you have and just ask them to "review it and fix bugs".
In principle this should surface more issues over time as models get smarter. I've already found a few real bugs this way.
Do any tech streamers try new software live? It'd be a really fun way of doing UX testing.
I've been using a 'golden tests' library for testing my parser, and it's just delightful. Rather than writing a verbose assertion about the resulting AST, I can just re-run my tests until the output looks good!