A new speculative execution vulnerability in CPUs: https://www.theregister.co.uk/2019/03/05/spoiler_intel_processor_flaw/
Attacks only get more sophisticated over time, and this is a great example of other researchers finding related issues. Note that this vulnerability is independent of Spectre.
A new class of typosquatting attacks for malicious packages: register package names that are hallucinated by ChatGPT: https://vulcan.io/blog/ai-hallucinations-package-risk
(h/t @rauschma)
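A minimal defensive sketch (my own illustration, not from the linked post): before installing an AI-suggested package, flag names that closely resemble, but don't match, a well-known package, since that is the pattern both hallucinated and typosquatted names tend to follow. The allowlist here is a hypothetical stand-in for a real curated list.

```python
import difflib

# Hypothetical curated allowlist; a real one would be much larger.
KNOWN_PACKAGES = {"requests", "numpy", "pandas", "flask"}

def vet_suggestion(name: str) -> str:
    """Classify a suggested package name: known, suspiciously close to a
    known name, or unknown (needs manual verification on the index)."""
    if name in KNOWN_PACKAGES:
        return "known"
    near = difflib.get_close_matches(name, KNOWN_PACKAGES, n=1, cutoff=0.8)
    if near:
        return f"suspicious: resembles {near[0]!r}"
    return "unknown: verify on the index before installing"

print(vet_suggestion("reqeusts"))  # near-miss of "requests"
```

This only catches look-alike names; a fully hallucinated name lands in the "unknown" bucket, which is exactly where a manual check is warranted.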
Unicode attacks creating invisible variables in JS: https://certitude.consulting/blog/en/invisible-backdoor/
Allowing Unicode in string literals or comments seems worthwhile, but permitting non-ASCII characters in variable names seems fraught.
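A minimal detection sketch (my own illustration, not from the linked post): scan source text for non-ASCII code points, which would surface invisible characters like the Hangul Filler (U+3164) that many editors render as blank, making two different identifiers look identical.

```python
import unicodedata

def flag_non_ascii(source: str):
    """Return (line, column, codepoint, name) for every non-ASCII character."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ord(ch) > 0x7F:
                name = unicodedata.name(ch, "<unnamed>")
                findings.append((lineno, col, f"U+{ord(ch):04X}", name))
    return findings

# `const a\u3164 = 1;` and `const a = 2;` can look identical on screen,
# yet declare two distinct variables.
snippet = "const a\u3164 = 1;\nconst a = 2;\n"
for finding in flag_non_ascii(snippet):
    print(finding)
```

A blunt reject-all-non-ASCII rule would also flag legitimate uses (e.g. in string literals), so a real linter would restrict the check to identifier positions.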
Invited talk: Safety Verification for Deep Neural Networks: https://popl20.sigplan.org/details/VMCAI-2020-papers/22/Safety-and-Robustness-for-Deep-Learning-with-Provable-Guarantees
How do we verify that a DNN is robust to adversarial attacks, and how do we quantify safety? This approach extracts image features (SIFT) and verifies robustness against all perturbations within a region around them.
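To make "verifies all perturbations within a region" concrete, here is a toy sketch of region-based certification (my own illustration, not the talk's method, which handles deep networks): for a linear classifier, the worst-case score margin over an entire L-infinity ball can be computed in closed form, so a positive lower bound proves that no perturbation in the region flips the label.

```python
import numpy as np

def certify_linear(w, b, x, eps):
    """w: (classes, features), b: (classes,), x: (features,).
    Return True if the predicted class provably cannot change for any
    perturbation delta with ||delta||_inf <= eps."""
    scores = w @ x + b
    pred = int(np.argmax(scores))
    for c in range(w.shape[0]):
        if c == pred:
            continue
        diff = w[pred] - w[c]
        # Worst case of diff @ (x + delta) over the ball subtracts
        # eps times the L1 norm of diff from the clean margin.
        worst_margin = diff @ x + (b[pred] - b[c]) - eps * np.abs(diff).sum()
        if worst_margin <= 0:
            return False  # some perturbation in the region may flip the label
    return True
```

For deep networks the margin is no longer linear in the input, which is why verification tools propagate bounds layer by layer instead of using a closed form.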