A new speculative execution vulnerability in CPUs: https://www.theregister.co.uk/2019/03/05/spoiler_intel_processor_flaw/
Attacks only get more sophisticated over time, and this is a great example of other researchers finding similar issues. This vulnerability is independent of Spectre.
Designing wire protocols: https://esr.ibiblio.org/?p=8254
Discusses extensibility, bug-prone features, and network and CPU overheads in low-traffic systems.
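As a toy illustration of the extensibility point (my own sketch, not from the article): a length-prefixed frame carrying a self-describing JSON body lets old readers skip fields they don't recognise.

    import json
    import struct

    def encode_message(payload, version=1):
        # Frame: 4-byte big-endian length, then a JSON body.
        # Fields can be added later; readers just ignore unknown keys.
        body = json.dumps({"v": version, **payload}).encode("utf-8")
        return struct.pack(">I", len(body)) + body

    def decode_message(data):
        (length,) = struct.unpack(">I", data[:4])
        return json.loads(data[4:4 + length].decode("utf-8"))

    print(decode_message(encode_message({"type": "ping"})))
    # {'v': 1, 'type': 'ping'}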
Ramping up with a new technical team, and asking the right questions:
https://boz.com/articles/career-cold-start.html
Blogged: How High Are Your Tests?
One sign your blog posts are successful: they give you something to refer to in conversation with others!
Several of the best jobs I've had, I heard about them through Twitter. When Twitter works well, it can be a fabulous source of 'watercooler conversation' across the industry.
Amazon moving to a serial number verification program to deter fakes:
https://www.theguardian.com/technology/2019/mar/04/amazon-to-give-power-to-brands-to-delete-fakes-from-website
Backdoored versions of popular software being distributed on GitHub: https://www.zdnet.com/article/researchers-uncover-ring-of-github-accounts-promoting-300-backdoored-apps/
Perhaps it's only a matter of time before we see blue checkmarks on repos? I suppose the star count does help distinguish real repos.
I've worked on many projects where tests have discrete levels, usually something like unit tests, integration tests and end-to-end tests.
I've also seen elaborate arguments over what counts as a unit, especially in heavily OO codebases.
Given coverage data, how would you build a tool to decide which untested parts of a codebase most need tests?
The best I can think of is using profile data to ensure hotspots are tested. I'm not sure it's ideal though: top-level code and already well-exercised logic would be highlighted.
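A rough Python sketch of that idea (the names and data formats here are made up for illustration): score each function by profile hits weighted by how uncovered it is, so hot-but-untested code floats to the top.

    def rank_test_candidates(profile_hits, coverage):
        """profile_hits: {function name: call count from profiling}
        coverage: {function name: fraction of lines covered, 0.0-1.0}

        Returns function names sorted so the hottest, least-tested
        code comes first."""
        def score(fn):
            hits = profile_hits.get(fn, 0)
            covered = coverage.get(fn, 0.0)
            # Heavily-exercised but untested code scores highest.
            return hits * (1.0 - covered)

        return sorted(coverage, key=score, reverse=True)

    hits = {"parse_request": 10000, "render_error_page": 40, "main": 1}
    cov = {"parse_request": 0.2, "render_error_page": 0.0, "main": 0.0}
    print(rank_test_candidates(hits, cov))
    # ['parse_request', 'render_error_page', 'main']

It shows the caveat above too: the heavily-exercised parse function still tops the list despite having some coverage.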
Flux is an app for digitising receipts and business loyalty cards: https://www.tryflux.com/
These are both pieces of paper you often need to keep around. Receipts in particular are something you might need if you expect to claim on a warranty. Could be convenient.
Adding a tail-call optimisation macro (for self-calls) is a really fun lispy project: https://github.com/Wilfred/tco.el/blob/179b82cacbd59692e3c187b98f87a1f453923878/tco.el#L51-L63
Ironically, I've implemented TCO using a recursive function that can blow the stack.
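The linked project is an Emacs Lisp macro; as a rough analogue of the underlying idea (loop instead of self-call), here's a hypothetical trampoline-style decorator in Python:

    class TailCall:
        """Marker returned instead of making the recursive call directly."""
        def __init__(self, *args, **kwargs):
            self.args = args
            self.kwargs = kwargs

    def tco(fn):
        """Re-invoke fn in a loop whenever it returns a TailCall."""
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            while isinstance(result, TailCall):
                result = fn(*result.args, **result.kwargs)
            return result
        return wrapper

    @tco
    def countdown(n):
        if n == 0:
            return "done"
        # Would normally be `return countdown(n - 1)`, which blows the stack.
        return TailCall(n - 1)

    print(countdown(1000000))  # "done", well past the default recursion limit.

The macro approach does the rewriting at expansion time rather than at runtime, but the effect is the same: deep self-calls no longer grow the stack.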
For some websites, I'm logged in so often I almost never see the "why you should sign up" blurb.
GitHub is one of these: the UI is very familiar, but the logged-out homepage feels completely new whenever I see it. It's darker than the logged-in UI.
It's important to update your system regularly, to pick up security updates. This should ideally be automated.
But if you could only update one thing, I think it should probably be your browser. It's exposed to data from a huge range of sources and regularly has nasty bugs.
