# Weird zeros

Hi.
I get a lot of weird zeros resulting from computations, such as 1.29009e-14. Is it advisable for computational efficiency to change these to 0 or 0.0, or leave them as they are?

Not surprising with finite precision floating point arithmetic. There are many calculations for which the result can't be represented exactly (e.g. 1/3). They contaminate subsequent calculations, so one should always compare values to within a tolerance. I haven't tested, but unless the hardware explicitly traps zero vs near zero, I doubt that resetting to exact zero would have a noticeable effect on performance.
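To make the tolerance comparison concrete, here's a minimal Python sketch showing how a "weird zero" arises and why exact comparison fails while a tolerance check works (the `1e-9` tolerance is just an illustrative choice, not anything SketchUp uses):

```python
import math

# 0.1, 0.2, and 0.3 have no exact binary representation, so the
# result of this subtraction is a tiny nonzero value, not 0.0.
residual = (0.1 + 0.2) - 0.3
print(residual)  # a value on the order of 1e-17

# Exact comparison fails:
print(residual == 0.0)  # False

# Comparing within a tolerance works:
print(math.isclose(residual, 0.0, abs_tol=1e-9))  # True
```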


I see. Thank you, that makes sense. Yes, it doesn't seem to make a noticeable difference, so far for me at least.

Well, 1.29009e-14 is 0.0000000000000129.
It's pretty small.
Dan posted a wiki link explaining the why behind what's going on with the floating point arithmetic.

Phew. Another rabbit hole…


This whole thing is the reason why SketchUp has an internal tolerance of about 0.001 inch and merges vertices that are closer together than that. In many circumstances, such as vertices on circles or arcs and intersections, SketchUp must calculate the location of a vertex, and the calculation is subject to this precision limitation. The tolerance takes care of that. However, it was chosen to accommodate typical architectural precision and is too large when working with small objects such as one might 3D print. In such cases the recommendation is to work at a large scale (e.g. treat meters as if they are mm). Provided you tell the exporter that the values are meters, it will generate an STL that does not have precision issues, and the slicer can import the STL thinking it is mm (STL does not convey units, just numbers).
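The arithmetic behind the scale-up trick can be sketched in a few lines of Python. The 0.02 mm feature size is a made-up example, and the 0.001 inch tolerance is the approximate value mentioned above:

```python
TOLERANCE_INCH = 0.001            # SketchUp's approximate merge tolerance

feature_mm = 0.02                 # a hypothetical tiny gap on the real part
feature_inch = feature_mm / 25.4
print(feature_inch < TOLERANCE_INCH)   # True: these vertices would merge

# Model it 1000x larger, treating mm as m (0.02 mm becomes 20 mm):
scaled_inch = (feature_mm * 1000) / 25.4
print(scaled_inch < TOLERANCE_INCH)    # False: safely above the tolerance
```

Since STL files carry only numbers, no units, the slicer happily reads the scaled-up model back at the intended size.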

I can remember back in the '80s my uncle, an architect, starting to use CAD and struggling to get his edges to converge…

Another wrinkle in this mess: I've wondered why the advice to avoid too-close vertices in SketchUp is to scale up by 100 or even 1000, do the modeling, and then scale back down by 0.01 or 0.001. Neither of those values has an exact finite binary representation! Why not use 128 and 1024, which are powers of two and have exact binary representations for their inverses?
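The difference between the two scale factors is easy to see in plain Python (a sketch of the underlying arithmetic, not SketchUp API code):

```python
from decimal import Decimal

# Converting a float to Decimal reveals the exact stored value.
# 0.001 has no finite binary representation, so the stored double
# is only close to 0.001:
print(Decimal(0.001))        # a long decimal slightly off from 0.001

# 1/1024 is a power of two and is stored exactly:
print(Decimal(1 / 1024))     # exactly 0.0009765625

# Multiplying by a power of two only shifts the exponent, so scaling
# up by 1024 and back down is an exact round trip (barring overflow):
x = 123.456
print(x * 1024 * (1 / 1024) == x)   # True
```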


If I need a 10 mm cube I can make it as a 10 m cube. 1000x factor.

Nobody in their right mind will make a 10.24 m cube.
