miniblog.

PowerShell 7, and how its Linux user base has overtaken Windows! https://devblogs.microsoft.com/powershell/the-next-release-of-powershell-powershell-7/
Reflections on Maxis, SimCity, and world views in computer games: https://www.theatlantic.com/technology/archive/2015/03/video-games-are-better-without-characters/387556/
Computer Utopias: notes from a course with some fabulous discussion of tech today, how we think about it, and what it could be: https://chrisnovello.com/teaching/risd/computer-utopias/
Rust 1.34.0 had a security vulnerability that allowed reading/writing memory out of bounds!
Markets are bigger, and companies are more valuable, because the internet has allowed them to reach many more people: https://blog.eladgil.com/2019/05/markets-are-10x-bigger-than-ever.html
@cwebber@octodon.social How so? What conclusion did you draw?
Exploring creation-oriented UI design for tablets with a touch screen:
Many services are built on open source software, but today's model is monetising *state*:
Online videos are often a great historical reference for obsolete computer games, because they show the game in the context of an active community: https://www.rockpapershotgun.com/2019/05/06/how-youtube-lets-plays-are-preserving-video-game-history/ (I suppose this also applies to MUDs and online communities that aren't gaming related.)
Yet another side channel attack in CPUs: https://www.zdnet.com/article/intel-cpus-impacted-by-new-zombieload-side-channel-attack/ (Big incentive to upgrade when fixed chips are available!)
Deciding how many nodes and layers to use in a neural network: many functions can be expressed with 1 or 2 hidden layers, deeper networks are often better and faster, and you usually have to experiment. https://machinelearningmastery.com/how-to-configure-the-number-of-layers-and-nodes-in-a-neural-network/
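As a toy illustration of what "choosing layers and nodes" means structurally (a minimal NumPy sketch of my own, not from the linked article; the layer sizes are arbitrary assumptions), the architecture is just a list of layer widths, so experimenting means changing one list:

```python
import numpy as np

def init_mlp(layer_sizes, seed=0):
    """Initialise an MLP; layer_sizes like [4, 8, 8, 1] means
    4 inputs, two hidden layers of 8 nodes, 1 output."""
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(params, x):
    """Forward pass: ReLU on hidden layers, linear output layer."""
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU
    return x

# Two hidden layers of 8 nodes each; swap the list to try other shapes.
params = init_mlp([4, 8, 8, 1])
y = forward(params, np.ones((3, 4)))
print(y.shape)  # (3, 1)
```

Training is omitted; the point is only that depth and width are a single hyperparameter list you can sweep over.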
Generating photos of fictional people using generative adversarial networks: https://thispersondoesnotexist.com/
Really interesting paper exploring adversarial inputs to ML models: https://arxiv.org/abs/1905.02175 They conclude:
* Adversarial vulnerability is a property of the input data, not of the training process.
* You can even train a model on non-robust features alone and obtain a model that works well on the original input data!
Exploring garbage collection accelerators in CPUs: https://spectrum.ieee.org/tech-talk/computing/hardware/this-little-device-relieves-a-cpu-from-its-garbage-collection-duties
A story of exploiting Android by releasing packages with the same name and ID on public repositories: https://blog.autsoft.hu/a-confusing-dependency/