Are you comparing floating point numbers? Choosing a correct value for epsilon is impossible in the general case.
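A minimal Python sketch of the failure mode (the EPS value and the sample numbers are illustrative, not from the original post):

```python
import math

EPS = 1e-9  # a hypothetical "one size fits all" epsilon

# Too lax near zero: these values differ by a factor of two, yet pass.
print(abs(1e-10 - 2e-10) < EPS)          # True

# Too strict at large magnitudes: these are adjacent float64 values, yet fail.
a = 1e16
b = math.nextafter(a, math.inf)          # requires Python 3.9+
print(abs(a - b) < EPS)                  # False

# A relative tolerance (math.isclose) handles scale better, but it in turn
# breaks down when comparing against exactly 0.0 -- there is no single
# tolerance that is right for every computation.
print(math.isclose(a, b, rel_tol=1e-9))  # True
print(math.isclose(1e-30, 0.0))          # False without an abs_tol
```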
A new 16-bit floating point format for machine learning: Intel's bfloat16! https://hub.packtpub.com/why-intel-is-betting-on-bfloat16-to-be-a-game-changer-for-deep-learning-training-hint-range-trumps-precision/
Compared with the standard IEEE float16, it increases range at the expense of precision, and it often allows 16-bit computation (smaller, faster hardware) to replace 32-bit logic in ML workloads.
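A rough sketch of why the trade-off works this way: bfloat16 is essentially a float32 with the low 16 bits dropped, so it keeps float32's 8-bit exponent (hence the range) but only 7 mantissa bits (hence the reduced precision). A minimal Python illustration, truncating rather than rounding as real hardware typically would:

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    # bfloat16 is the top 16 bits of an IEEE 754 float32:
    # 1 sign bit, 8 exponent bits (same as float32, so the same range),
    # 7 mantissa bits (versus 23, so much less precision).
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16  # truncation; hardware usually rounds to nearest

def bfloat16_bits_to_float32(bits16: int) -> float:
    return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]

x = 3.14159265
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(x)))  # 3.140625
```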
Interestingly, Emacs Lisp reads the literal "1." as an integer, whereas most languages treat a trailing decimal point as always producing a floating point number.
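For contrast, a quick check in Python, where the trailing dot does make a float:

```python
print(type(1.))   # <class 'float'>
print(1. == 1.0)  # True; in Emacs Lisp you would have to write 1.0 for a float
```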
It still seems odd to me that floating point arithmetic reserves so many distinct NaN bit patterns. Those payload bits could have gone toward extra precision!
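To make the scale concrete: in IEEE 754 binary64, any bit pattern with an all-ones exponent field and a nonzero mantissa is a NaN, so nearly 2^53 encodings are spent on it. A small Python sketch (the specific payload value is arbitrary):

```python
import math
import struct

def f64_from_bits(bits: int) -> float:
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def bits_from_f64(x: float) -> int:
    return struct.unpack("<Q", struct.pack("<d", x))[0]

# Exponent field all ones + nonzero mantissa = NaN; the mantissa bits
# (the "payload") are free, which is why so many distinct NaNs exist.
quiet_nan   = f64_from_bits(0x7FF8000000000000)
payload_nan = f64_from_bits(0x7FF8000000000123)  # arbitrary payload

print(math.isnan(quiet_nan), math.isnan(payload_nan))  # True True
print(hex(bits_from_f64(quiet_nan)))                   # 0x7ff8000000000000
print(hex(bits_from_f64(payload_nan)))                 # 0x7ff8000000000123
```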
