A new 16-bit floating-point format for machine learning! https://hub.packtpub.com/why-intel-is-betting-on-bfloat16-to-be-a-game-changer-for-deep-learning-training-hint-range-trumps-precision/ bfloat16 keeps float32's 8-bit exponent but shortens the significand from 24 bits to 8, so compared with IEEE half precision it increases range at the expense of precision, and it often allows 16-bit computation (smaller, faster hardware) to replace 32-bit logic in deep-learning training.
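
Because bfloat16 is essentially float32 with the low 16 bits of the significand dropped, conversion is a cheap bit-level operation. Here's a minimal Python sketch (the function names are mine, not any library's API, and NaN handling is omitted) that rounds a float32 to bfloat16 and shows what survives:

```python
import struct

def float_to_bits(x: float) -> int:
    """Reinterpret a number as its 32-bit IEEE-754 single-precision pattern."""
    return struct.unpack(">I", struct.pack(">f", x))[0]

def bits_to_float(b: int) -> float:
    """Reinterpret a 32-bit pattern as an IEEE-754 single-precision float."""
    return struct.unpack(">f", struct.pack(">I", b))[0]

def to_bfloat16(x: float) -> float:
    """Round a float32 value to bfloat16 (round-to-nearest-even on the
    discarded low half), returned as a float32 whose low 16 bits are zero."""
    b = float_to_bits(x)
    # Round-to-nearest-even: add 0x7FFF plus the LSB of the surviving half,
    # then truncate. (NaN inputs are not handled in this sketch.)
    b += 0x7FFF + ((b >> 16) & 1)
    return bits_to_float(b & 0xFFFF0000)

# Precision shrinks to roughly 2-3 decimal digits...
print(to_bfloat16(3.14159265))   # 3.140625
# ...but float32's full exponent range is preserved: 1e38 rounds to
# about 9.97e37, while IEEE half precision overflows past ~6.5e4.
print(to_bfloat16(1e38))
```

The same truncation trick is why converting between float32 and bfloat16 is so hardware-friendly: no re-biasing of the exponent is needed, just dropping or zero-extending 16 mantissa bits.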