Small Radius: Distorted Geometry

I’ve never heard a definitive explanation for this feature, but if it were me, I would keep everything as integers in millionths of an inch, with a dynamic range of about 15 digits, and convert to floating point only when necessary. However, if your smallest native value is 0.000001", the accuracy of a square root calculation, for example, is only 0.001" (which would be a proper limit to impose in this case).

If I create a 1" x 1" square and make it a component, I can scale it down by 0.0000001 (x0.001 / x0.001 / x0.1) … but the entity info reports it as 0.000000" x 0.000000" with an area of 0 square inches. The original component edges are still 1" x 1" but the scale factor allows for values much less than 0.001". When you explode it, however, it disappears.

Similarly, if I create a 1" x 1" square and don’t group it and scale it 0.001, then any attempt to shrink it further results in the geometry disappearing.

Personally, I think it would be nice if someone in the know would fully explain why this limitation occurs … facts are much better than speculation :wink:

[added for extra thought] I can store 0.1" exactly as 100,000 millionths of an inch, but I can’t store it exactly as a decimal inch. See this for more info.
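The exact-storage point can be illustrated with a quick Python sketch (integer millionths of an inch here are a hypothetical internal unit, not SketchUp’s actual representation):

```python
# Hypothetical internal unit: integer millionths of an inch.
# 0.1" stores exactly, and integer arithmetic stays exact.
tenth_inch = 100_000            # 0.1" as millionths
one_inch = 10 * tenth_inch      # exactly 1,000,000 -- no rounding
assert one_inch == 1_000_000

# The same value as a binary double is only approximate:
# 0.1 has no finite base-2 representation.
assert 0.1 + 0.1 + 0.1 != 0.3
```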

maybe @jbacus can come along and explain it all again…

He had answered on Google Groups, and much of Steve’s explanation reflects the content of that reply… IMFM

edit: the forum with the post was removed…

john

1 Like

Decimal inch is workable.

-Gully

1 Like

Provided you are good at remembering things like 5/32 = 0.15625 or keep a calculator by your side :wink: (or like working in decimal inches in the first place)

When designing in decimal fractions, there is no reason to base dimensions on common fractions, and one does not do so. That would defeat the whole idea of using decimals. It sometimes becomes necessary to perform these conversions when working with common-fraction-based materials or, say, fractional drills. Increasingly, though, common fractions have disappeared from mainstream American engineering drawings, as well as from material specs and common tools.

(In my earlier days as a designer, it was necessary to come up with the decimal equivalent to common fractions rather more often. If you’re working with the things regularly, though, it takes about a week or two before you can rattle them all off without even thinking about it, although it’s doubtful one would ever specify a dimension to the hundred-thousandth of an inch.)

-Gully

Edit: Incidentally, your example of 5/32 reminds me of the old US military standard for engineering drawing letter height: 5/32 caps. In later years this was commonly referred to as .16-high lettering, not .15625-high.

@Cotty, scaling down the geometry doesn’t prevent this from happening again unless the unwanted outside corner edges and the inner large arc have been deleted; only then will there be no endpoints close to any edges anymore. (Also see @slbaumgartner’s quote.)

The only way to avoid this is to draw your small geometry inside a scaled-down component’s environment while another instance exists that is 10x as large (or whatever factor is needed), where the largest instance holds the component’s definition (i.e. 1:1).

I understand the joining of vertices when they are “about” 0.001", but I’ve never heard it explained why 0.001" is the magic number. Why not 0.0001" (for example)? This limit (yes, it is a hard limit) is the source of much discussion about the workarounds adapted to deal with this issue (the tangent circle effect the OP noted is but one of them).

Do you recall if @jbacus explained why SketchUp settled on 0.001" as the limiting size of an edge? It’s something I’ve wondered about for many years now and would perhaps find some closure in finally knowing why this is so. My gut instinct tells me that the limitations of floating-point arithmetic are the root cause, whatever the reason SketchUp chose to implement this behavior.

Like John, I can’t find Bacus’s discussion, so I’m going on the basis of recollection from then and also many years of writing software myself.

There are really two parts to your post:

  1. Why is a threshold needed?
  2. Why does SketchUp use 0.001"?

The first is a fundamental issue with computer software that I’ll get to in a moment.

The second was an arbitrary choice by the SketchUp designers. They were targeting architects who design buildings or complexes of buildings. These users almost never draw anything tiny (architects: before you get in a twist, I don’t mean to say that architects don’t need accurate drawings, I only mean that the things modeled are usually much larger than what people are now modeling for 3D printing). So could it have been 0.0001" instead? Possibly. I would imagine the designers did some tradeoff experiments to decide what threshold provided an acceptable balance between maximum model size and smallest feature size. But the choice was ultimately arbitrary, not fundamental. That’s why there is a sound basis for requesting some user control over the threshold - with the caveat “on your head be it if your model gets messed up!”.

Returning to the first issue, having a threshold and cleanup is necessary because of the computer’s arithmetic. There are calculations and values whose results cannot even theoretically be expressed exactly on a computer because they have no finite binary representation. One consequence is that computed values cannot be compared for exact equality. The classic example is that 0.1 has no exact binary representation, so on a computer a calculation such as 0.1 + 0.2 does not come out exactly equal to 0.3. So you have to set a threshold on how close two values must be to be considered the same, or else you will very rarely find a match.
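The comparison-threshold idea can be sketched in a couple of lines of Python (the 1e-9 tolerance is arbitrary, chosen just for illustration):

```python
# Exact equality fails for computed floating-point values:
total = 0.1 + 0.2
assert total != 0.3   # total is 0.30000000000000004

# So comparisons must use a tolerance instead:
def nearly_equal(a, b, tol=1e-9):
    """Treat two values as 'the same' if they differ by less than tol."""
    return abs(a - b) < tol

assert nearly_equal(total, 0.3)
```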

This matters to SketchUp because there are many situations in which it must calculate (as opposed to just take from input) the location of a vertex, or in which the specified location can’t be represented exactly. One example relevant to the original post in this topic is to find the locations of the vertices for a circle represented as a loop of edges. The locations depend on the center, plane, radius, and number of segments. They have to be calculated and will only rarely land on nice finite values. Intersections between edges also have to be calculated using the formula for the locus of points between the endpoints, leading to imprecision in the intersection vertex location.
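As an illustration of the circle example (hypothetical code, not SketchUp’s implementation), here is how the vertices of a segmented circle have to be computed, and why they rarely land on exact values:

```python
import math

def circle_vertices(cx, cy, radius, segments):
    """Approximate a circle as a loop of edges; vertex coordinates
    must be calculated from center, radius, and segment count."""
    step = 2 * math.pi / segments
    return [(cx + radius * math.cos(i * step),
             cy + radius * math.sin(i * step))
            for i in range(segments)]

pts = circle_vertices(0.0, 0.0, 1.0, 24)
# The vertex at 90 degrees should be exactly (0, 1), but the computed
# x coordinate is a tiny nonzero number, not 0.0.
assert pts[6][0] != 0.0 and abs(pts[6][0]) < 1e-12
```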

When a later drawing operation calculates a vertex that comes close to a pre-existing one, SketchUp must decide whether they were really meant to be the same and differ only because of computer arithmetic limits. If it didn’t do this, there would be serious consequences for your model: loops of edges would often not close to generate a face, edges would not intersect each other, the model would become bloated with vertices that are almost but not quite at the same location. These effects would happen all the time, as opposed to the occasional problems that result from the threshold and merging operations losing small edges or warping edges. On this forum we occasionally see the consequences in models that are imported from other apps that don’t take such precautions.
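A minimal sketch of that merging behavior (again hypothetical, not SketchUp’s actual algorithm), using the 0.001" threshold from this discussion:

```python
def merge_close_vertices(points, tol=0.001):
    """Snap each vertex onto an earlier one within tol (inches), so
    nearly-coincident computed points become a single vertex."""
    kept = []
    for p in points:
        match = next((q for q in kept
                      if all(abs(a - b) <= tol for a, b in zip(p, q))), None)
        if match is None:
            kept.append(p)
    return kept

pts = [(0.0, 0.0, 0.0),
       (0.0004, 0.0, 0.0),   # within 0.001" of the first -> merged
       (0.01, 0.0, 0.0)]     # well outside the threshold -> kept
assert len(merge_close_vertices(pts)) == 2
```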

Number one has been a problem ever since binary floating-point was applied to storing decimal numbers … the workaround has historically been to pick a threshold value. I first ran into this problem in 1972 and, most recently, with SketchUp. Like you, I have posited a number of reasons why 0.001" was chosen instead of some other number, but I don’t know the answer for sure. Somewhere back along week one or so of developing SketchUp 0, this decision was made. Maybe it’s passed through so many hands that no one really knows the answer to this question anymore. :frowning:

I have a vague recollection of reading that the intrinsic Alchemy OpenGL settings have fixed upper and lower limits…

if one goes down so does the other and conversely…

when combined with the need to cater for the fractional notation desired by American Architects, the base unit was set to 1" and the lower limit was set within ‘construction’ norms for house building…

I also recall reading that Metric was ‘later’ shoe horned in primarily for the Japanese market…

john

I think you guys have hit on all the right points already in this discussion— SketchUp is like all CAD systems in that it handles floating point numbers with fixed precision. Practically speaking, SketchUp carries 16 significant digits— 12 in front of the decimal place, 4 after.

Internally, SketchUp uses ‘inches’ as its dimensional unit. Without taking a position on the metric vs. imperial system, technically this is really only a semantic difference. But the net result is that SketchUp can reliably represent dimensions in a range from thousandths of an inch to thousands of miles. SketchUp was designed to support folks making things as small as furniture and as large as cities.
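The fixed-precision behavior of doubles is easy to demonstrate (illustrative Python, not SketchUp internals):

```python
# Near 1e12 (about a trillion inches), a double can still
# resolve a difference of 0.001:
assert 1e12 + 0.001 != 1e12

# Near 1e16, a double cannot even resolve a whole unit --
# the 16-significant-digit budget is exhausted:
assert 1e16 + 1.0 == 1e16
```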

MCAD tools like Solidworks handle smaller objects better, but struggle with larger ones. GIS tools from ESRI handle even larger objects than cities (like ‘the world’), but struggle even more at small scales. Microchip CAD systems (like those from MentorGraphics) are capable of modeling at angstrom scales, but would struggle to model a toothbrush. All these systems suffer from the same basic problem, though all have set their precision thresholds in different places to accommodate the particular needs of their users. Pure DCC tools (like Maya or Blender) just dispense with named units entirely, operating essentially in a unitless cartesian space.

The floating point number representation is only part of the story, but it is enough to explain why you see merging/face closing problems with points smaller than .001". Essentially, SketchUp can’t tell the difference between points closer together than the lower precision threshold, and so you get unpredictable results due to rounding errors.

Since more people are making small parts in SketchUp today, primarily due to the growth in interest in 3D printing, they tend to hit the lower threshold more often than the upper threshold. Manually scaling the object while you’re modeling serves to move you away from the lower precision threshold and into the happier middle ground between extremes.

john

8 Likes

cheers john

john

Inside a scaled down component SketchUp seems to cope with dimensions smaller than 0.001" (< 0.00254mm) as long as there is a “normal” instance much larger than the one scaled down.
One can work inside one or the other. I’m not sure how this works out when exporting to *.stl for printing.

There doesn’t seem to be any problem with STLs. I created a scaled down 0.00001" x 0.00002" x 0.00003" component copy of a nominal 1" x 2" x 3" box. While it no longer displayed properly, I was able to select it and export it as an STL file:

solid test21
facet normal -1.0 0.0 0.0
  outer loop
    vertex 0.0 1.9999999999999995e-05 9.999999999999997e-06
    vertex 0.0 0.0 0.0
    vertex 0.0 0.0 9.999999999999997e-06
  endloop
endfacet
facet normal -1.0 0.0 0.0
  outer loop
    vertex 0.0 0.0 0.0
    vertex 0.0 1.9999999999999995e-05 9.999999999999997e-06
    vertex 0.0 1.9999999999999995e-05 0.0
  endloop
endfacet
facet normal 0.0 -1.0 0.0
  outer loop
    vertex 2.9999999999999987e-05 0.0 9.999999999999997e-06
    vertex 0.0 0.0 0.0
    vertex 2.9999999999999987e-05 0.0 0.0
  endloop
endfacet
facet normal 0.0 -1.0 0.0
  outer loop
    vertex 0.0 0.0 0.0
    vertex 2.9999999999999987e-05 0.0 9.999999999999997e-06
    vertex 0.0 0.0 9.999999999999997e-06
  endloop
endfacet
facet normal 1.0 0.0 0.0
  outer loop
    vertex 2.9999999999999987e-05 1.9999999999999995e-05 0.0
    vertex 2.9999999999999987e-05 0.0 9.999999999999997e-06
    vertex 2.9999999999999987e-05 0.0 0.0
  endloop
endfacet
facet normal 1.0 0.0 0.0
  outer loop
    vertex 2.9999999999999987e-05 0.0 9.999999999999997e-06
    vertex 2.9999999999999987e-05 1.9999999999999995e-05 0.0
    vertex 2.9999999999999987e-05 1.9999999999999995e-05 9.999999999999997e-06
  endloop
endfacet
facet normal 0.0 0.0 -1.0
  outer loop
    vertex 2.9999999999999987e-05 1.9999999999999995e-05 0.0
    vertex 0.0 0.0 0.0
    vertex 0.0 1.9999999999999995e-05 0.0
  endloop
endfacet
facet normal 0.0 0.0 -1.0
  outer loop
    vertex 0.0 0.0 0.0
    vertex 2.9999999999999987e-05 1.9999999999999995e-05 0.0
    vertex 2.9999999999999987e-05 0.0 0.0
  endloop
endfacet
facet normal 0.0 1.0 0.0
  outer loop
    vertex 0.0 1.9999999999999995e-05 9.999999999999997e-06
    vertex 2.9999999999999987e-05 1.9999999999999995e-05 0.0
    vertex 0.0 1.9999999999999995e-05 0.0
  endloop
endfacet
facet normal 0.0 1.0 0.0
  outer loop
    vertex 2.9999999999999987e-05 1.9999999999999995e-05 0.0
    vertex 0.0 1.9999999999999995e-05 9.999999999999997e-06
    vertex 2.9999999999999987e-05 1.9999999999999995e-05 9.999999999999997e-06
  endloop
endfacet
facet normal 0.0 0.0 1.0
  outer loop
    vertex 2.9999999999999987e-05 0.0 9.999999999999997e-06
    vertex 0.0 1.9999999999999995e-05 9.999999999999997e-06
    vertex 0.0 0.0 9.999999999999997e-06
  endloop
endfacet
facet normal 0.0 0.0 1.0
  outer loop
    vertex 0.0 1.9999999999999995e-05 9.999999999999997e-06
    vertex 2.9999999999999987e-05 0.0 9.999999999999997e-06
    vertex 2.9999999999999987e-05 1.9999999999999995e-05 9.999999999999997e-06
  endloop
endfacet
endsolid test21

The numbers get exported like they should.

1 Like

I consulted with David, our resident Computational Geometry expert and long-time member of SketchUp’s development team. He gave me the following answer:


Double precision numbers can accurately store ~16 decimal digits. To be safe, let’s assume 15 digits, i.e. 999,999,999,999,999.

Since SketchUp requires accuracy to 1/1000”, SketchUp’s maximum dynamic range is +/-999,999,999,999.999 – say 1 trillion inches or 83,333,333,333 ft or 15,782,827.7 miles or ~½ the distance from the Earth to Mars, etc…

So in SketchUp you could…
Model the Sun.
Model the Earth.
Model the Moon.
Model the Earth and the Moon together.

but you could not…
Model the Sun and (Mercury or Venus or Earth…) together.
Model the solar system.
Model the Milky Way.
or model things smaller than 1/1000 inch.

Anything much larger than 15 million miles or smaller than 1/1000” WON’T WORK!
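David’s arithmetic above can be reproduced directly (a quick check in Python):

```python
# 15 reliable decimal digits at 0.001" resolution gives this range:
max_inches = 999_999_999_999.999   # ~1 trillion inches
feet = max_inches / 12             # ~83.3 billion feet
miles = max_inches / (12 * 5280)   # ~15.8 million miles

assert 83e9 < feet < 84e9
assert 15.7e6 < miles < 15.9e6
```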


After reading this I have to play with the upper limit I think :wink:

1 Like

The working size range of SU is badly skewed toward the gargantuan–worse than I had thought. Models of star-system proportions just don’t come along very often, whereas models of, say, fist-sized objects fail all the time. Even if these smaller models do not approach the lower size limit imposed by two points merging at a distance of .001", they still fail all the time because they have faces smaller than about 1mm in any dimension (that’s the number I’ve most often heard used for missing faces, some forty times larger than the point-failure size).

In practical terms, it’s hard to model typical mechanical engineering-sized objects because they have small faces below the size limit. This means that a huge number of manufactured objects simply can’t be modeled successfully in SU without using impromptu scaling or some other kludged workaround.

Since SU has such a wide dynamic range, why not just transpose the limits down (or the dimensional values up) by an order of magnitude or two, enabling it to model all reasonable subject matter at full size?

-Gully

3 Likes

There is also the practical upper limit imposed by OpenGL - clipping creeps in when your model reaches the size of a city block or two, and beyond that things look increasingly weird. It would be interesting to read an expert comment about what the actual limits are in this respect. Archicad and Revit documentation sets their practical upper limit at about 35 kilometers from the origin, if I remember right (the first uses OpenGL, the second DirectX).

Anssi

This explains a lot— Thank you.

Sometimes a Push/Pulled object has faces a few millionths of a unit out of plane, and when the original face is altered, SU doesn’t remake that face. (perhaps I have that backwards: after altering, does SU distort that plane?) Anyhow, if SU doesn’t work down below .001 of a unit, why does it judge planar to the millionth?

I heartily agree with Gully. A few orders of magnitude would still let us model the Earth, but also allow for common objects.

The tough thing to appreciate about precision in CAD systems is that computers deal only in discrete mathematical operations. Floating point numbers are only precise to a fixed number of significant digits, which is somewhat counterintuitive.

We could have built SketchUp to hide the internal units, or to treat them as purely abstract except at display time in the UI. That way you wouldn’t be troubled by the idea of “scaling up, scaling down” that you have to do today to accommodate precision finer than a thousandth of an inch. We didn’t do that for reasons that felt pretty reasonable at the time, especially given our vision for a product targeted at objects from furniture scale to city scale. That worked pretty well until folks started using SketchUp to scale large models (like cities) down to 3D printable scales.

In software, it is always possible to change things in the future, but this is one that is so ingrained into the basic object/interaction model for SketchUp that we wouldn’t do it quickly or casually.

john

1 Like