A new speculative execution vulnerability in CPUs: https://www.theregister.co.uk/2019/03/05/spoiler_intel_processor_flaw/
Attacks only get more sophisticated over time, and this is a great example of other researchers finding similar issues. This vulnerability is independent of Spectre.
Designing wire protocols: https://esr.ibiblio.org/?p=8254
Discusses extensibility, bug-prone features, and network and CPU overheads in low-traffic systems.
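The article contrasts textual, self-describing formats with packed binary ones. As a rough illustration of that trade-off (not taken from the article, and with made-up field names), here's a length-prefixed JSON framing in Python: extensible, since receivers can ignore fields they don't recognise, at the cost of some bytes and parsing time.

    import json
    import struct

    def encode(msg: dict) -> bytes:
        payload = json.dumps(msg).encode("utf-8")
        # 4-byte big-endian length prefix, then a self-describing JSON payload.
        return struct.pack(">I", len(payload)) + payload

    def decode(frame: bytes) -> dict:
        (length,) = struct.unpack(">I", frame[:4])
        return json.loads(frame[4:4 + length].decode("utf-8"))

    wire = encode({"version": 1, "type": "ping", "sent_at": 1551744000})
    print(decode(wire))  # older peers just skip keys they don't understand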
Ramping up with a new technical team, and asking the right questions:
https://boz.com/articles/career-cold-start.html
One sign your blog posts are successful: they give you something to refer to in conversation with others! https://guzey.com/personal/why-have-a-blog/
Blogged: How High Are Your Tests? https://www.wilfred.me.uk/blog/2019/03/04/how-high-are-your-tests/
Several of the best jobs I've had, I heard about through Twitter. When Twitter works well, it can be a fabulous source of 'watercooler conversation' across the industry.
Amazon moving to a serial number verification program to deter fakes:
https://www.theguardian.com/technology/2019/mar/04/amazon-to-give-power-to-brands-to-delete-fakes-from-website
Backdoored versions of popular software being distributed on GitHub: https://www.zdnet.com/article/researchers-uncover-ring-of-github-accounts-promoting-300-backdoored-apps/
Perhaps it's only a matter of time before we see blue checkmarks on repos? I suppose the star count does help distinguish real repos.
I've worked on many projects where tests have discrete levels, usually something like unit tests, integration tests, and end-to-end tests.
I've also seen elaborate arguments over what counts as a unit, especially in heavily OO codebases.
Given coverage data, how would you build a tool to decide which untested parts of a codebase most need tests?
The best I can think of is using profile data to ensure hotspots are tested. I'm not sure it's ideal though: it would mostly highlight top-level code and already well-exercised logic.
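Here's a rough sketch of what I mean, assuming you already have cProfile output plus a set of (file, line) pairs your test suite covers (e.g. extracted from coverage.py's data file); the file names below are hypothetical.

    import pstats

    def rank_untested_hotspots(profile_path, covered_lines):
        """Order functions by call count, most-called untested functions first."""
        stats = pstats.Stats(profile_path)
        ranked = []
        # stats.stats maps (filename, lineno, funcname) to
        # (primitive calls, total calls, total time, cumulative time, callers).
        for (filename, lineno, funcname), (_, ncalls, _, cumtime, _) in stats.stats.items():
            if (filename, lineno) in covered_lines:
                continue  # already exercised by the test suite
            ranked.append((ncalls, cumtime, f"{filename}:{lineno} {funcname}"))
        return sorted(ranked, reverse=True)

    covered = {("app/billing.py", 42)}  # stand-in for real coverage data
    for ncalls, cumtime, location in rank_untested_hotspots("profile.out", covered):
        print(f"{ncalls:>8} calls  {cumtime:8.3f}s  {location}")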
Flux is an app for digitising receipts and business loyalty cards: https://www.tryflux.com/
These are both pieces of paper you often need to keep around. Receipts in particular are something you might need if you end up claiming on a warranty. Could be convenient.
Adding a tail-call optimisation macro (for self-calls) is a really fun lispy project: https://github.com/Wilfred/tco.el/blob/179b82cacbd59692e3c187b98f87a1f453923878/tco.el#L51-L63
Ironically, I've implemented TCO using a recursive function that can blow the stack.
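Here's the same idea sketched in Python, using a trampoline-style decorator rather than macro expansion; `tail_recursive` and `Call` are names I've made up for the sketch, not part of tco.el.

    class Call:
        """A deferred self-call: the arguments to run the function again with."""
        def __init__(self, *args, **kwargs):
            self.args, self.kwargs = args, kwargs

    def tail_recursive(fn):
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            # Loop as long as the function asks to call itself again,
            # so the stack never grows.
            while isinstance(result, Call):
                result = fn(*result.args, **result.kwargs)
            return result
        return wrapper

    @tail_recursive
    def factorial(n, acc=1):
        if n <= 1:
            return acc
        return Call(n - 1, acc * n)  # self-call in tail position

    factorial(20_000)  # completes without RecursionError; plain recursion would not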
For some websites, I'm logged in so often I almost never see the "why you should sign up" blurb.
GitHub is one of these: the UI is very familiar, but the logged-out homepage feels completely new whenever I see it. It's darker than the logged-in UI.
It's important to update your system regularly, to pick up security updates. This should ideally be automated.
But if you could only update one thing, I think it should probably be your browser. It's exposed to data from a huge range of sources and regularly has nasty bugs.
Challenges in Pharo 7: https://www.slideshare.net/pharoproject/pharo-7-the-key-challenges
Includes screenshots of the new class browser and message browser!
Hopefully it's only a matter of time before we start integrating corpus data and NLP analysis into dictionaries. I'd love to know: how often is each meaning used?
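As a toy example of the kind of analysis I mean (assuming NLTK with the punkt and wordnet data installed, and using the rather simplistic Lesk algorithm to guess senses), you could tally which WordNet meaning of a word each occurrence in a corpus uses:

    from collections import Counter
    from nltk.tokenize import word_tokenize
    from nltk.wsd import lesk

    corpus = [
        "She sat on the bank of the river and watched the water.",
        "The bank refused to extend the loan.",
        "He deposited the cheque at the bank this morning.",
    ]

    sense_counts = Counter()
    for sentence in corpus:
        tokens = word_tokenize(sentence.lower())
        sense = lesk(tokens, "bank", pos="n")  # guess the WordNet sense in context
        if sense is not None:
            sense_counts[sense.name()] += 1

    for sense_name, count in sense_counts.most_common():
        print(count, sense_name)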
I am really excited to see more Pharo projects move to GitHub. Smalltalkers have a great culture of dogfooding, and pull requests will help the positive cycle of improvement!
For example, even the installer is a Pharo project: https://github.com/pharo-project/pharo-launcher
Whilst the concept of a web portal seems very dated now, some aspects still exist. Google has doodles and Bing has different photos, so you get new content when you visit regularly.
How old is the content you're consuming? How does that impact your worldview and ideas? https://www.perell.com/blog/never-ending-now