A factor which impacts this behavior is that the numeric value of zero can be represented EXACTLY in the standard IEEE floating-point format. Whole numbers are also exact, but only up to a limit (2 raised to the power of 53 in the 64-bit format), and simple fractions and small decimal values - such as 1/3 or 0.1 or 0.01 or 0.001 - cannot be represented EXACTLY at all.
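As a quick illustration - a minimal Python sketch, assuming CPython's floats are IEEE 754 binary64 doubles, as they are on essentially every modern platform - converting a float to Decimal exposes the value that is actually stored:

```python
from decimal import Decimal
from fractions import Fraction

# Zero is stored exactly: converting to Decimal shows no rounding at all.
print(Decimal(0.0))              # 0

# 0.1 is not exact: what is stored is the nearest representable binary64 value.
print(Decimal(0.1))              # 0.1000000000000000055511151231257827021181583404541015625

# 1/3 has no finite binary expansion either, so the float 1/3 is only an approximation.
print(Fraction(1, 3) == 1 / 3)   # False
```

The long decimal expansion shows that 0.1 is silently replaced by the nearest representable binary fraction, while zero needs no such approximation.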
The magic - and the drawback - of the floating-point format is that the values which can be represented EXACTLY are discretely and non-uniformly distributed on the number line: packed fairly densely near the zero origin and spread quite far apart at huge offsets. This is how floating-point, with a fairly small binary representation such as 64 bits, can simultaneously represent huge values (with correspondingly sparse resolution) - such as 10 raised to the power of 300 - and also represent small values (with very high resolution) in the ordinary human range closer to zero.
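This non-uniform spacing can be observed directly. A small Python sketch (assuming Python 3.9+ for math.ulp) prints the gap between a value and its nearest representable neighbor:

```python
import math

# math.ulp(x) gives the spacing between x and the next representable double.
# The gaps are tiny near zero and enormous at huge magnitudes.
for x in (1.0, 1e6, 1e12, 1e300):
    print(f"ulp({x:g}) = {math.ulp(x):g}")

# On a typical binary64 platform, the gap near 1.0 is roughly 2e-16,
# while the gap near 1e300 is roughly 1e284.
```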
The term “floating point” comes from the fact that the radix point - the binary counterpart of the decimal point - “floats”, effectively moving back and forth within the binary representation, yielding varying precision in order to accommodate such a wide range of number space.
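A hedged Python sketch of that “floating” point: math.frexp splits a double into a significand and a power-of-two exponent, showing the same fixed-width significand being reused at whatever scale the exponent selects:

```python
import math

# math.frexp(x) returns (m, e) with x == m * 2**e and 0.5 <= abs(m) < 1.
# The significand m always carries the same number of bits of precision;
# only the exponent e changes as the magnitude of x changes.
for x in (0.001, 1.0, 1e300):
    m, e = math.frexp(x)
    print(f"{x:g} = {m} * 2**{e}")   # e.g. 1 = 0.5 * 2**1
```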