Small Radius: Distorted Geometry

Is it just me, or does SketchUp have a hard time handling sub-millimeter radii?
See the way the straight lines bend in towards the curve? See how the curve distorts when I delete the overhang segments? Is this normal when working at this scale?

I’ll assume you mean sub-millimeter. SU has trouble with features smaller than around 1mm. The answer is not to work that small. Try scaling up your model x100 while you are creating small geometry. If necessary, scale it back down to full size when construction is complete.

-Gully

Good call. Drawing the geometry at 10x and then scaling down gave me the intended result.
Is this a known issue with SU? Something the devs are addressing or is it by design?

[quote=“Helmanfrow, post:3, topic:23207, full:true”]Is this a known issue with SU? Something the devs are addressing or is it by design?
[/quote]
Yes, no, kind of.

Due to the limitations of 64-bit floating point calculations, some sort of minimum needs to be determined to allow for the widest possible dynamic range of values. Since SketchUp was primarily developed for large objects (such as a downtown building block or sub-division), the internal minimum was chosen to be relatively large compared to a fraction of a millimeter. The quirk you’ve discovered is the result of 64-bit rounding at the lower end of the range.
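To put rough numbers on that, here's a generic Python illustration of IEEE-754 double behavior (this is just standard floating-point math, not SketchUp's actual internals):

```python
import math
import sys

# A 64-bit double carries roughly 15-17 significant decimal digits.
print(sys.float_info.dig)       # 15 digits guaranteed
print(sys.float_info.epsilon)   # ~2.22e-16 relative spacing near 1.0

# The absolute spacing between representable values grows with magnitude,
# so precision in inches depends on how large the coordinates are:
print(math.ulp(1.0))            # ~2.2e-16 near a 1" coordinate
print(math.ulp(1_000_000.0))    # ~1.2e-10 near a coordinate a million inches out
```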

64 bits is a 20-digit number.
Seems like a lot of dynamic range to me but what do I know about floating points or anything else. :slight_smile:

Almost. Actually, it is due to SketchUp applying a tolerance test to allow for floating point calculations of the locations of vertices for edge ends and intersections. When it finds vertices that are closer together than about 0.001" it merges them and breaks edges as necessary. In the OP’s specific example, this caused the neighboring vertex of the circle to “capture” the edge when it passed by too close to it.
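Here's a toy sketch of that merge behavior, assuming a simple Euclidean distance test (the function and data layout are my own illustration, not SketchUp's actual code):

```python
import math

TOLERANCE = 0.001  # inches, the figure discussed above

def merge_vertices(vertices, tol=TOLERANCE):
    """Drop any vertex that lands within tol of one already kept."""
    kept = []
    for v in vertices:
        if not any(math.dist(v, k) < tol for k in kept):
            kept.append(v)
    return kept

# Two circle vertices only 0.0004" apart collapse into one point,
# which is how a nearby vertex can "capture" a passing edge end.
print(merge_vertices([(0.0, 0.0, 0.0), (0.0004, 0.0, 0.0), (0.5, 0.0, 0.0)]))
```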

This is probably a complicated answer but if SU can scale the feature down without artifact then why can’t it draw it in the first place?

SU does this only while creating those vertices, not for scaling.

This is true, and is why the recommended workaround of modeling at an enlarged size and then scaling down when finished works. It’s also why Jim’s explanation purely in terms of floating point isn’t quite correct: SketchUp can capture the values, it just “cleans them up” when they are originally created.

Cool, that’s what I’ll do from now on. Any pitfalls to avoid when working that way?

Not really. The only downside I know of is that if you work in fractional inches, they can be a pain to enter at the enlarged scale (quick: what is 3 7/16" when scaled up by 10?). Metric is much cleaner!
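For the record, the arithmetic behind that quip, checked with Python's fractions module:

```python
from fractions import Fraction

length = 3 + Fraction(7, 16)   # 3 7/16"
print(float(length * 10))      # 34.375 -- not a value you type by reflex
```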


I’ve never heard a definitive explanation for this feature, but if it were me, I would keep everything as integers in millionths of an inch with a dynamic range of about 15 digits and convert to floating point when necessary. However, if your smallest native value is 0.000001", the accuracy of a square root calculation (for example) is only 0.001" (which would be a proper limit to impose in this case).
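A quick sketch of why the square-root accuracy bottoms out at 0.001" under that hypothetical integer scheme (my illustration of the reasoning, not anything SketchUp actually does):

```python
import math

UNITS_PER_INCH = 1_000_000   # hypothetical: lengths stored in micro-inches

# The smallest nonzero squared quantity you can represent is 1 micro-unit,
# and its square root is already a full thousandth of an inch:
print(math.sqrt(1 / UNITS_PER_INCH))   # 0.001
```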

If I create a 1" x 1" square and make it a component, I can scale it down by 0.0000001 (x0.001 / x0.001 / x0.1) … but the entity info reports it as 0.000000" x 0.000000" with an area of 0 square inches. The original component edges are still 1" x 1" but the scale factor allows for values much less than 0.001". When you explode it, however, it disappears.

Similarly, if I create a 1" x 1" square without grouping it and scale it by 0.001, any attempt to shrink it further results in the geometry disappearing.

Personally, I think it would be nice if someone in the know would fully explain why this limitation occurs … facts are much better than speculation :wink:

[added for extra thought] I can store 0.1" exactly as 100,000 millionths of an inch, but I can’t store it exactly as a decimal inch. See this for more info.
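A small Python check of that point (illustrative only):

```python
from decimal import Decimal

# As an integer count of millionths of an inch, 0.1" is exact:
tenth_inch = 100_000   # micro-inches, no rounding anywhere

# As a binary floating-point decimal inch it is not -- the nearest double is:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```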

maybe @jbacus can come along and explain it all again…

He had answered on Google Groups, and much of Steve’s explanation reflects the content of that reply… IMFM

edit: the forum with the post was removed…

john


Decimal inch is workable.

-Gully


Provided you are good at remembering things like 5/32 = 0.15625 or keep a calculator by your side :wink: (or like working in decimal inches in the first place)

When designing in decimal fractions, there is no reason to base dimensions on common fractions, and one does not do so. That would defeat the whole idea of using decimals. It sometimes becomes necessary to perform these conversions when working with common-fraction-based materials or, say, fractional drills. Increasingly, though, common fractions have disappeared from mainstream American engineering drawings, as well as from material specs and common tools.

(In my earlier days as a designer, it was necessary to come up with the decimal equivalent to common fractions rather more often. If you’re working with the things regularly, though, it takes about a week or two before you can rattle them all off without even thinking about it, although it’s doubtful one would ever specify a dimension to the hundred-thousandth of an inch.)

-Gully

Edit: Incidentally, your example of 5/32 reminds me of the old US military standard for engineering drawing letter height: 5/32 caps. In later years this was commonly referred to as .16-high lettering, not .15625-high.

@Cotty, scaling down the geometry doesn’t prevent this from happening again unless the unwanted outside corner edges and the inner large arc have been deleted first. Only then will there be no endpoints close to any edges anymore. (Also see @slbaumgartner’s quote).

The only way to prevent this from happening is to draw your small geometry inside a scaled-down component’s environment while another instance exists at 10x the size (or whatever factor is needed), where that largest instance holds the component’s definition (i.e. 1:1).

I understand the joining of vertices when they are within “about” 0.001" of each other, but I’ve never heard it explained why 0.001" is the magic number. Why not 0.0001" (for example)? This limit (yes, it is a hard limit) is the source of much discussion about the workarounds adopted to deal with this issue (the tangent circle effect the OP noted is but one of them).

Do you recall if @jbacus explained why SketchUp settled on 0.001" as the limiting size of an edge? It’s something I’ve wondered about for many years now and would perhaps find some closure in finally knowing why this is so. My gut instinct tells me that the limitations of floating-point arithmetic are at the root of whatever reason SketchUp had for implementing this behavior.

Like John, I can’t find Bacus’s discussion, so I’m going on the basis of recollection from then and also many years of writing software myself.

There are really two parts to your post:

  1. Why is a threshold needed?
  2. Why does SketchUp use 0.001"?

The first is a fundamental issue with computer software that I’ll get to in a moment.

The second was an arbitrary choice by the SketchUp designers. They were targeting architects who design buildings or complexes of buildings. These users almost never draw anything tiny (architects: before you get in a twist, I don’t mean to say that architects don’t need accurate drawings, I only mean that the things modeled are usually much larger than what people are now modeling for 3D printing). So could it have been 0.0001" instead? Possibly. I would imagine the designers did some tradeoff experiments to decide what threshold provided an acceptable balance between maximum model size and smallest feature size. But the choice was ultimately arbitrary, not fundamental. That’s why there is sound basis for requesting some user control over the threshold - with the caveat “on your head be it if your model gets messed up!”.

Returning to the first issue, having a threshold and cleanup is necessary because of the computer’s arithmetic. There are calculations and values whose result cannot even theoretically be expressed exactly on a computer because they have no finite representation. One consequence is that computed values cannot be compared absolutely. The classic example is that on a computer, adding 0.1 to itself ten times does not come out exactly equal to 1.0 (it comes out as 0.999999999…). So you have to set a threshold on how close two values must be to be considered the same, or you will very rarely find a match.
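Here is that example in Python, plus the kind of tolerance test the thread is talking about (a generic illustration, not SketchUp code):

```python
total = sum([0.1] * 10)         # ten tenths of an inch
print(total)                    # 0.9999999999999999
print(total == 1.0)             # False: exact comparison fails
print(abs(total - 1.0) < 1e-9)  # True: "close enough" with a tolerance
```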

This matters to SketchUp because there are many situations in which it must calculate (as opposed to just take from input) the location of a vertex, or in which the specified location can’t be represented exactly. One example relevant to the original post in this topic is finding the locations of the vertices for a circle represented as a loop of edges. The locations depend on the center, plane, radius, and number of segments. They have to be calculated and will only rarely land on nice finite values. Intersections between edges also have to be calculated from the equations of the lines through their endpoints, leading to imprecision in the intersection vertex location.
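For instance, computing the vertices of a segmented circle (a simplified 2D sketch; SketchUp of course also handles the plane and axes):

```python
import math

def circle_vertices(center, radius, segments):
    """Vertex locations for a circle drawn as a loop of straight edges."""
    cx, cy = center
    step = 2 * math.pi / segments
    return [(cx + radius * math.cos(i * step),
             cy + radius * math.sin(i * step)) for i in range(segments)]

# Even with a "nice" center and radius, most vertices land on coordinates
# that have no exact binary representation:
for v in circle_vertices((0.0, 0.0), 0.5, 12)[:3]:
    print(v)
# e.g. (0.5, 0.0), then (0.43301270189221935, 0.24999999999999997), ...
```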

When a later drawing operation calculates a vertex that comes close to a pre-existing one, SketchUp must decide whether they were really meant to be the same and differ only because of computer arithmetic limits. If it didn’t do this, there would be serious consequences for your model: loops of edges would often not close to generate a face, edges would not intersect each other, and the model would become bloated with vertices that are almost but not quite at the same location. These effects would happen all the time, as opposed to the occasional problems that result from the threshold and merging operations losing small edges or warping edges. On this forum we occasionally see the consequences in models that are imported from other apps that don’t take such precautions.
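A minimal illustration of the loop-closing problem described above (again a generic sketch, not SketchUp code):

```python
import math

existing_vertex = 0.3          # an endpoint already in the model, in inches
computed_vertex = 0.1 * 3      # the "same" point arrived at by calculation

print(computed_vertex)                             # 0.30000000000000004
print(computed_vertex == existing_vertex)          # False: the loop won't close
print(math.isclose(computed_vertex, existing_vertex,
                   abs_tol=0.001))                 # True: merge them and close it
```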