I'm intrigued to learn that some Stockfish developers think that it would beat AlphaZero under conventional rules for chess engine competitions.
https://en.wikipedia.org/wiki/AlphaZero#Reactions_and_criticisms
Even if that's fair, AlphaZero is still a very impressive demonstration of their ML approach.
Interesting, nuanced discussion of Leela Chess Zero (a neural net) beating Stockfish in a recent competition: https://news.ycombinator.com/item?id=20027838
Whilst the original DeepMind result was impressive, it's great to see reproducible results and a project that's available to the public.
The efficiency of wasm is really impressive. An optimised build of Stockfish with POPCNT evaluates positions at ~1500 kn/s on a single core of my machine. By contrast, the wasm build on https://lichess.org/analysis/r4rk1/p5b1/2p2n2/1p2p3/4P2q/P1N1B3/1PP1B1Q1/2KR3n_w#0 computes ~42 kn/s in the browser for the same position!
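The arithmetic behind that comparison, as a quick sketch (the two node rates are the figures above; everything else is just derived from them):

```python
# Node rates from the post: native optimised Stockfish vs. the wasm build.
native_knps = 1500  # kilonodes/second, native build with POPCNT, one core
wasm_knps = 42      # kilonodes/second, wasm build in the browser

relative_speed = wasm_knps / native_knps  # fraction of native throughput
slowdown = native_knps / wasm_knps        # how many times slower wasm is

print(f"wasm runs at {relative_speed:.1%} of native speed (~{slowdown:.0f}x slower)")
```

So the browser build reaches only a few percent of native throughput, but for an engine running inside a web page with no installation at all, that is still a remarkably usable node rate.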
The Stockfish project requires each patch to pass a test: the patched engine must beat the previous version in a sufficient proportion of games.
This raises an interesting problem: what if a set of patches together makes the engine stronger, but each patch applied individually makes it weaker?
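A minimal sketch of such an acceptance rule: play a fixed number of games between the patched and the old engine, and accept only if the score is significantly above 50%. (Stockfish's actual testing framework uses a sequential test rather than a fixed sample size; this fixed-sample z-test, along with the function name and the game counts, is purely an illustrative stand-in.)

```python
import math
from statistics import NormalDist

def accept_patch(wins: int, losses: int, draws: int, alpha: float = 0.05) -> bool:
    """Hypothetical acceptance rule: accept the patch if the patched
    engine's score is significantly above 50% in a head-to-head match
    (one-sided z-test on the match score)."""
    n = wins + losses + draws
    score = (wins + 0.5 * draws) / n        # draws count as half a point
    se = math.sqrt(0.25 / n)                # standard error under the 50% null
    z = (score - 0.5) / se
    return z > NormalDist().inv_cdf(1 - alpha)

# A 52% score over 2000 games clears the bar; an exact 50% score does not.
print(accept_patch(540, 460, 1000))  # score 52%
print(accept_patch(500, 500, 1000))  # score 50%
```

Under a per-patch rule like this, two patches that each test slightly below 50% on their own would both be rejected, even if their combination would have passed comfortably; testing only individual patches cannot see the interaction.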
