It still seems odd to me that there are many different NaN values in floating point arithmetic. Those bits could have improved precision!
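For concreteness, a quick sketch (standard-library Python; the hex payloads are arbitrary picks of mine): in a float64, any bit pattern with an all-ones exponent and a nonzero mantissa decodes as NaN, which works out to roughly 2^53 distinct encodings.

```python
import math
import struct

# Two different 64-bit patterns; both have an all-ones exponent (0x7ff)
# and a nonzero mantissa, so both decode to NaN.
quiet_nan   = struct.unpack(">d", bytes.fromhex("7ff8000000000000"))[0]
payload_nan = struct.unpack(">d", bytes.fromhex("7ff80000deadbeef"))[0]

print(math.isnan(quiet_nan), math.isnan(payload_nan))  # True True
print(struct.pack(">d", quiet_nan).hex())    # 7ff8000000000000
print(struct.pack(">d", payload_nan).hex())  # 7ff80000deadbeef -- the payload survives this
                                             # round-trip, though IEEE 754 doesn't require
                                             # operations to preserve it
```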
A new 16-bit floating point format (bfloat16) for machine learning! https://hub.packtpub.com/why-intel-is-betting-on-bfloat16-to-be-a-game-changer-for-deep-learning-training-hint-range-trumps-precision/
It keeps float32's full exponent range but gives up mantissa bits (compared to IEEE half precision), and often allows 16-bit computation (smaller, faster hardware) to replace 32-bit ML arithmetic.
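A minimal sketch of the format, assuming only that bfloat16 is the top 16 bits of an IEEE float32 (sign, the same 8-bit exponent, a 7-bit mantissa); real converters typically round to nearest rather than truncate, and the helper names are mine.

```python
import struct

def to_bfloat16_bits(x: float) -> int:
    """Keep the top 16 bits of the float32 encoding (sign + 8-bit exponent + 7-bit mantissa)."""
    f32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return f32_bits >> 16  # truncation; hardware usually rounds to nearest even

def from_bfloat16_bits(b: int) -> float:
    """Zero-fill the low 16 mantissa bits to get back a float32 value."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

for x in (3.141592653589793, 1e38):
    print(f"{x!r} -> {from_bfloat16_bits(to_bfloat16_bits(x))!r}")
# pi comes back as 3.140625: only ~2-3 significant decimal digits survive.
# 1e38 comes back close to 1e38: no overflow, since the exponent range matches
# float32, whereas IEEE float16 tops out at 65504.
```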
One fascinating property of chess engine design is that a deeper tree search can be more valuable than a smarter board value metric.
If a metric is more accurate but more computationally expensive, it might not be worthwhile! It's a precision/brute force tradeoff.
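A back-of-envelope sketch of that tradeoff (made-up numbers, a full-width tree, ignoring alpha-beta pruning and everything else a real engine does): reachable depth grows only with the logarithm of the node budget, so you can put a rough number on how many plies a pricier evaluation function costs.

```python
import math

BRANCHING_FACTOR = 35  # rough average number of legal moves in chess (assumption)
NODE_BUDGET = 10**8    # positions we can evaluate per move with the cheap metric (assumption)

def reachable_depth(nodes: float, b: float = BRANCHING_FACTOR) -> float:
    """Depth of a full-width tree containing roughly `nodes` positions."""
    return math.log(nodes, b)

for slowdown in (1, 10, 100):
    print(f"evaluator {slowdown:>3}x slower -> ~{reachable_depth(NODE_BUDGET / slowdown):.1f} plies")
# 1x -> ~5.2 plies, 10x -> ~4.5 plies, 100x -> ~3.9 plies: a 10x pricier metric
# costs only log_35(10) = ~0.6 plies of depth, and whether the smarter metric buys
# back that much playing strength is exactly the tradeoff above.
```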
Fun post showing how to emulate double precision with single precision: https://web.archive.org/web/20110831040222/https://www.thasler.org/blog/?p=93 (illustrated with Mandelbrot fractals)
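The trick stores each number as an unevaluated sum of two single-precision floats (a "double-single"). Here's a rough Python/NumPy translation of that idea using Knuth's error-free two-sum; the original post does this in GLSL, and the helper names below are mine.

```python
import numpy as np

F = np.float32  # force single precision for every intermediate

def two_sum(a, b):
    """Knuth's error-free transformation: a + b == s + e exactly (in float32)."""
    s = F(a + b)
    v = F(s - a)
    e = F((a - (s - v)) + (b - v))
    return s, e

def split(x: float):
    """Represent a Python double as a hi/lo pair of float32s."""
    hi = F(x)
    lo = F(x - float(hi))
    return hi, lo

def ds_add(x_hi, x_lo, y_hi, y_lo):
    """Add two double-single numbers, renormalizing the result into a hi/lo pair."""
    s, e = two_sum(x_hi, y_hi)
    e = F(e + F(x_lo + y_lo))
    return two_sum(s, e)

a, b = 1.0, 1e-9                 # 1e-9 vanishes entirely next to 1.0 in plain float32
hi, lo = ds_add(*split(a), *split(b))
print(F(a) + F(b))               # 1.0 -- the small term is lost
print(float(hi) + float(lo))     # ~1.000000001 -- recovered by the hi/lo pair
```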
