It seems that a major security bug is a great way to get code review on your project.
One fun way of testing new AI models: take an existing codebase you have and just ask them to "review it and fix bugs".
In principle this should surface more issues over time as models get smarter. I've already found a few real bugs this way.
One interesting technique to reduce the review burden in Home Assistant, a project with a large community: ask PR authors to review other PRs!
Difftastic is effectively computing the "tree edit distance" between two ASTs, and there's a sizeable literature on this topic. Literature review is hard though: sometimes it takes a while to digest a paper, only to realise it's solving a slightly different problem.
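For a flavour of the problem, here's a minimal sketch of the textbook tree edit distance recursion (memoised over forests, so exponential state in the worst case). This is an illustration of the classic formulation, not Difftastic's actual implementation; the tuple-based tree encoding is my own choice for the example.

```python
from functools import lru_cache

# A tree is (label, (child, child, ...)) — tuples, so forests are hashable.

def tree_dist(t1, t2):
    """Minimum number of node insertions, deletions, and relabelings
    needed to turn t1 into t2."""

    def size(t):
        return 1 + sum(size(c) for c in t[1])

    @lru_cache(maxsize=None)
    def forest_dist(f1, f2):
        # f1, f2 are tuples of trees (forests).
        if not f1 and not f2:
            return 0
        if not f1:
            return sum(size(t) for t in f2)  # insert everything remaining
        if not f2:
            return sum(size(t) for t in f1)  # delete everything remaining
        (l1, c1), rest1 = f1[0], f1[1:]
        (l2, c2), rest2 = f2[0], f2[1:]
        return min(
            # delete the leftmost root of f1; its children join the forest
            1 + forest_dist(c1 + rest1, f2),
            # insert the leftmost root of f2
            1 + forest_dist(f1, c2 + rest2),
            # match the two roots: relabel costs 1 if labels differ,
            # then the subtree forests and the remaining forests must match
            (l1 != l2) + forest_dist(c1, c2) + forest_dist(rest1, rest2),
        )

    return forest_dist((t1,), (t2,))

t1 = ("a", (("b", ()), ("c", ())))
t2 = ("a", (("b", ()), ("d", ())))
print(tree_dist(t1, t2))  # 1 — relabel c to d
```

The "slightly different problem" trap is real: papers variously restrict to ordered vs unordered trees, allow only leaf edits, or compute alignment rather than edit distance, and the algorithms aren't interchangeable.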
