Epsilon is the difference between 1 and the smallest value greater than 1 that a number encoding scheme can represent. Roughly speaking, it's the smallest amount you can add to 1 and get a distinctly different representation.
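To check my understanding of that definition (Number.EPSILON should be 2 ** -52 for IEEE 754 doubles):

console.log(Number.EPSILON)               // 2.220446049250313e-16, i.e. 2 ** -52
console.log(1 + Number.EPSILON === 1)     // false: the sum lands on the next representable double
console.log(1 + Number.EPSILON / 4 === 1) // true: far below the gap, so it rounds back to 1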
Can anyone help me intuit why the loss of precision is greater in the latter example here? 👇
console.log(Number.EPSILON > (0.1 + 0.2 - 0.3)) // true
console.log(Number.EPSILON > (10000.1 + 10000.2 - 20000.3)) // false
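Printing the raw differences, the error magnitudes look roughly like this (values approximate):

console.log(0.1 + 0.2 - 0.3)              // ~5.6e-17, well under EPSILON
console.log(10000.1 + 10000.2 - 20000.3)  // ~3.6e-12, tens of thousands of times larger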
Is it that the significand needed to exactly represent many easily-written decimal numbers is wider than the 52 bits available, so an inaccuracy is introduced by rounding/truncation? And since that inaccuracy sits in the significand, it gets scaled by 2 raised to the exponent, so when the exponent is large the absolute inaccuracy is magnified?
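That's how I'm picturing it, anyway: the gap (ulp) between adjacent doubles grows with the exponent, so the same relative error is a far bigger absolute error near 20000 than near 0.3. A rough sketch (ulpOf is my own helper, not a built-in):

// Approximate gap between x and the next representable double of the same sign
const ulpOf = (x) => Math.pow(2, Math.floor(Math.log2(Math.abs(x))) - 52)

console.log(ulpOf(0.3))       // ~5.6e-17: doubles near 0.3 are packed very tightly
console.log(ulpOf(20000.3))   // ~3.6e-12: gaps near 20000 are about 65000x wider
console.log(20000.3 + Number.EPSILON === 20000.3) // true: EPSILON is invisible at this magnitude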