Sometimes I feel coding is simply a matter of 'how do I implement these features whilst minimising the added evil to the codebase?'
One fun way of testing new AI models: take an existing codebase you have and just ask them to "review it and fix bugs".
In principle this should find more issues over time as models get smarter. I've found a few bugs this way at least.
I'm intrigued to see that Google has quantified that new code is generally buggier and less secure than code that has existed in your codebase for longer.
I'm pretty impressed with Cursor: I've successfully asked it to perform codebase transforms in English, and it's worked!
E.g. "Replace all calls foo(..., true) with foo_immediate(...) define a foo_immediate function".
I'm still reading the diff and checking tests -- it's still AI after all.
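
For the curious, here's a minimal sketch of the kind of transform I mean -- foo and its trailing bool are hypothetical stand-ins, not from any real codebase:

```cpp
#include <iostream>

// Hypothetical API for illustration: foo() originally took a trailing
// bool that selected immediate execution.
void foo(int task_id, bool immediate) {
    if (immediate) {
        std::cout << "running task " << task_id << " now\n";
    } else {
        std::cout << "queueing task " << task_id << "\n";
    }
}

// The transform: give the flag its own named entry point, so call sites
// read foo_immediate(42) instead of the opaque foo(42, true).
void foo_immediate(int task_id) {
    foo(task_id, true);
}

int main() {
    foo(1, false);    // unchanged call site, still queued
    foo_immediate(2); // was foo(2, true) before the transform
}
```

The point of the transform is readability at the call site: a named function says what the boolean meant.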