No, but I have a close friend who is an accomplished software developer. While I was enduring my brief stint as an IBM partner, I watched him correct mistakes in some of IBM’s C++ code. He’s a lot more than just a software developer. So, over the years, I’ve been thoroughly introduced to software logic.
It is NOT about comparing x, y & z values to one thousandth of SketchUp’s point tolerance. It is about comparing vector angles within SketchUp’s tolerance, which appears to be 0.001 radians, plus a proportional length ratio (as noted above).
It is not likely that this would change, as it is by design. The most you might hope for is a tolerance slider that would let you see such edges turn black as you slide the handle toward more precision. I suspect it was designed this way for rendering speed.
The way I see it is that the errors in the coordinate display are “muddying the waters”.
Can’t really argue the point, as, from a practical standpoint, it’s irrelevant. People working to a degree of precision where it might matter should be using other software.
True. SketchUp is designed to model buildings. But, it has found use in 3D printing hand sized things that do not need high precision.
CAD (with NURBS and true arcs) is for designing small precision machine parts.
Some years ago, I routinely used imported lines from DWG to model with, i.e. keep the edges, create faces with them and incorporate them into the SU model. It would work at first, but seemed to break after modeling for a while and making manipulations, as if some creeping rounding errors were sneaking in. Now I mostly use DWG for inferencing while drawing with native tools, and I guess that makes a “cleaner” model somewhere in the numbers.
I didn’t realize I was opening up a can of worms. Oh well, there they go.
I looked at the code, and interestingly, LayOut is ten times more accurate than SketchUp with regard to the tolerance and whether something is on axis. I will ask the developers if there is a reason for a tolerance, and if there is, whether SketchUp could use the one that LayOut uses. You could still construct an example where a line that isn’t quite on axis gets the axis’ color, but it would be a lot closer to being on axis.
If you are looking at tolerance, you may want to consider all of SU’s tolerances and UI-displayed values, and whether the rounding at play in DCs may also be affecting the native tools.
As an example, I use the Component Attributes panel to display and check the Position, Size & Rotation of my primary components; coupled with the Tape Measure tool, this lets me check my model for dimensional errors. I am using Metric (mm) and I have my Display Precision set to the maximum number of decimal places.

This morning I checked a simple beam. The CA panel failed to show that the Position was displaced off the project grid and that the beam Length was not an integer millimetre value. The Tape Measure tool was reporting corner-point errors despite all rotation values appearing correct. Entering new 0.0 rotation values for Z & X fixed what should have been an orthogonal placement, and fixed the Tape Measure values. I see this problem with many of my placed components, despite checking at maximum precision.
I get the binary math side of things, but unreliable dimensional feedback to the user doesn’t inspire confidence in SU’s abilities. If precision only works to 6 decimal places of millimetres, at least give us equivalent visual feedback and ensure the calculations exceed that degree of precision.
It might also be helpful if the Component Attribute panel displayed Position, Size & Rotation with the user set Display Precision setting by default.
Sorry, bit of a rant, but I’m getting frustrated chasing down accumulating small errors without a quick fix.
I have repeatedly seen SU users maintain that SU is not a CAD program. I have to strongly disagree and say that, by definition, SU is a CAD program … and a ■■■■ good one at that. So, to have SU reporting errors is simply not defensible. As I said earlier in this thread, SU should be using an algorithm that compares the axis values down to 16 decimal places of accuracy. So when you talk to the developers, ask them why this is not done.
Maybe because it was tied to SketchUp’s threshold, the tolerance, the shortest distance possible between endpoints in the same drawing environment.
It (‘Color by Axis’) could indeed be made narrower. As it is now, it represents a cone (out to r = 0.0254 mm) extended by a cylinder of radius 0.0254 mm to infinity. All edges within this shape are colored by axis.
This shape could be made narrower towards a line by, like you say, comparing both endpoints (begin and end) in 16 decimals.
I did get a chance to show the issue to the developers. In the process of doing that I found that the new graphics engine has a problem, where lines that are outside of the tolerance value can still be drawn as being on axis. I will log a bug report about that.
I am guessing that the reason it is angular rather than coordinate based is calculation speed.
And as we tried to explain to you multiple times, the reason is that double-precision floating point numbers cannot express every numeric value, especially not down at 16 decimal places. Therefore a tolerance (i.e., a range) must be used.
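A minimal Ruby sketch of what such a tolerance comparison looks like; the `nearly_equal?` helper and the 1.0e-9 tolerance are illustrative choices, not SketchUp’s actual internals:

```ruby
# Instead of demanding exact equality (which fails for values a double
# cannot represent), test whether two values fall within a chosen range.
TOLERANCE = 1.0e-9  # illustrative tolerance, not SketchUp's value

def nearly_equal?(a, b, tol = TOLERANCE)
  (a - b).abs <= tol
end

sum = 0.1 + 0.2
sum == 0.3                # => false, the classic floating point surprise
nearly_equal?(sum, 0.3)   # => true
```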
Step Size Behavior in Double Precision
In IEEE 754 double precision, the spacing between adjacent representable numbers—often called the Unit in the Last Place (ULP)—depends on the exponent. The key rule is:
The spacing between adjacent values is proportional to the magnitude of the number.
So as you move away from 1.0, the step size increases exponentially.
Concrete Examples
Let’s look at exact step sizes for powers of two:
| Value | Next Representable Value | Step Size (ULP) |
|---|---|---|
| 1.0 | 1.0000000000000002 | 2^-52 ≈ 2.22e-16 |
| 2.0 | 2.0000000000000004 | 2^-51 ≈ 4.44e-16 |
| 4.0 | 4.000000000000001 | 2^-50 ≈ 8.88e-16 |
| 8.0 | 8.000000000000002 | 2^-49 ≈ 1.78e-15 |
| 16.0 | 16.000000000000004 | 2^-48 ≈ 3.55e-15 |
Each time the exponent increases by 1 (i.e., the value doubles), the step size also doubles. That’s because the mantissa remains fixed at 52 bits, so the resolution scales with the exponent.
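The doubling of the step size can be observed directly in Ruby (assuming Ruby 2.2+ for `Float#next_float`, which returns the adjacent representable double):

```ruby
# Print the gap to the next representable double at each power of two.
[1.0, 2.0, 4.0, 8.0, 16.0].each do |x|
  step = x.next_float - x
  printf("%-5s step = %e\n", x, step)
end
# 1.0   step = 2.220446e-16
# 2.0   step = 4.440892e-16
# 4.0   step = 8.881784e-16
# 8.0   step = 1.776357e-15
# 16.0  step = 3.552714e-15
```

Each doubling of the value doubles the step, exactly as the table shows.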
General Formula
For any normalized number x = 2^e, the spacing between adjacent representable values is:
ULP(x) = 2^(e - 52)
So:
- Near 1.0 (where e = 0), spacing is 2^-52.
- Near 1024.0 (where e = 10), spacing is 2^-42 ≈ 2.27 * 10^-13.
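This formula is easy to check in Ruby; the `ulp` helper below is illustrative, built on `Math.frexp` (which splits a double into fraction and exponent) and `Float#next_float`:

```ruby
# ULP(x) = 2^(exp - 53), where exp is the frexp exponent; for x = 2^e
# (frexp exponent e + 1) this is the same as 2^(e - 52).
def ulp(x)
  _fraction, exp = Math.frexp(x)
  2.0 ** (exp - 53)
end

ulp(1.0) == 1.0.next_float - 1.0           # => true, 2^-52
ulp(1024.0) == 1024.0.next_float - 1024.0  # => true, 2^-42 ≈ 2.27e-13
```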
I think you may be taking a very complicated approach to a very simple problem. If I can use the Text tool to display the coordinates of a vertex, then, using a C++ subroutine to compare axis values is quite possible. Each point in SketchUp has an exact x,y,z value. They just have to be compared to determine if a line is on axis or not.
Angular based is half the story.
Edges rotated away from an axis by less than or equal to 0.001 radian can be shown as being on axis. The second check is coordinate based: the difference between endpoints perpendicular to the axis needs to be less than or equal to 0.001” or 0.0254 mm. With that second check, longer edges within 0.001 radian may still be shown black.
The longer the edge, the narrower the angle required to be shown as ‘Colored by Axis’.
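A hypothetical sketch of this two-part test in plain Ruby. Arrays stand in for SketchUp points, and the tolerances are the 0.001 radian and 0.0254 mm figures quoted above, not values read from SketchUp’s source:

```ruby
ANGLE_TOL  = 0.001   # radians, per the discussion above
OFFSET_TOL = 0.0254  # mm (0.001"), per the discussion above

# An edge passes only if BOTH its angle to the axis is within ANGLE_TOL
# AND the perpendicular offset between its endpoints is within OFFSET_TOL.
def colored_by_axis?(p1, p2, axis)
  vec   = p2.zip(p1).map { |a, b| a - b }       # edge vector p1 -> p2
  len   = Math.sqrt(vec.sum { |c| c * c })
  along = vec.zip(axis).sum { |a, b| a * b }    # component along the axis
  perp  = Math.sqrt((len**2 - along**2).abs)    # perpendicular offset
  angle = Math.acos((along.abs / len).clamp(-1.0, 1.0))
  angle <= ANGLE_TOL && perp <= OFFSET_TOL
end

x_axis = [1.0, 0.0, 0.0]
colored_by_axis?([0, 0, 0], [10.0, 0.005, 0.0], x_axis)   # small offset: true
colored_by_axis?([0, 0, 0], [1000.0, 0.5, 0.0], x_axis)   # same angle, offset too big: false
```

Both example edges are about 0.0005 radian off the red axis, but only the short one stays within the 0.0254 mm offset, which is exactly the length dependence described above.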
No James, you are wrong. I have shown above clearly that the computer cannot represent all floating point numbers at 16 decimal places. Again, the further you get from 1.0 the larger the step between representable decimal places.
You can do an easy experiment in SketchUp at the Ruby console:
```
a = 1.0000000000000002
=> 1.0000000000000002
b = 1.0000000000000001
=> 1.0
```
Why didn’t the value for b get stored as assigned?
Because a 64-bit double cannot store any value between 1.0 and 1.0000000000000002.
Precision Is Relative, Not Absolute
- Near 1.0, the smallest representable increment is 2⁻⁵² ≈ 2.22e-16.
- Near 1024.0 (which is 2¹⁰), the smallest step becomes 2⁻⁴² ≈ 2.27e-13.
- So while the number of steps between 1.0 and 2.0 is the same as between 1024.0 and 2048.0, the step size is 2¹⁰ times larger in the latter.
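That 2¹⁰ ratio can be verified in one line of Ruby (assuming Ruby 2.2+ for `Float#next_float`):

```ruby
step_small = 1.0.next_float - 1.0         # 2^-52, the step near 1.0
step_large = 1024.0.next_float - 1024.0   # 2^-42, the step near 1024.0
step_large / step_small                   # => 1024.0
```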
Why This Happens
In floating point, a number is normalized before it is stored so that the integer part (the leading binary digit) is 1, and the exponent is adjusted accordingly.
- The mantissa gives you relative precision: you get ~15.95 decimal digits of precision, but that precision is scaled by the exponent.
- This means:
  - Small numbers have tight spacing between representable values.
  - Large numbers have wider spacing, even though the mantissa is still 52 bits.
Summary
- All normalized floats start with 1. in binary, yes.
- But binary precision is not uniform across the number line.
- The representable step size grows with the magnitude of the number due to exponent scaling.
It would be rare for all the coordinate values in a model to remain close to or below 1, where they could enjoy the same precision of, say, 14 or 15 decimal places. This just doesn’t happen in practice.
You just cannot expect to get 16 decimal places of accuracy, even if SketchUp displays 16 decimal places. We’ve already noted that these displays (as text) are often not accurate.
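A display rounded to a few places can hide what is actually stored; printing 0.1 to 20 decimal places in Ruby exposes the underlying double:

```ruby
# 0.1 cannot be represented exactly in binary; the stored double is
# slightly larger, which a long-enough decimal expansion reveals.
"%.20f" % 0.1   # => "0.10000000000000000555"
```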
Addressing the elephant in the room …
Following this logic …
My sister-in-law is a Physician’s Assistant (ARPN), and both my mother and sister are LPNs, all of whom I’ve often discussed medical issues with. I’ve been to the doctor many times in my 64 years for various ailments, and have been diagnosed many times. So I’ve been thoroughly introduced to medical knowledge.
Could it be that if anyone I meet is sick, I could diagnose them? Of course NOT.
I think you may be taking a very complicated approach to a very simple problem.
The problem is not simple. Your simple comparison just will not work at 16 decimal places, and the higher the coordinate value (to the left of the decimal point), the worse the precision gets.
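A sketch of why such a comparison breaks down once coordinates grow: near 1000.0 the smallest representable step is already about 1.14e-13, so a difference at the 16th decimal place simply cannot be stored.

```ruby
a = 1000.0
b = 1000.0 + 1.0e-16   # intended to differ at the 16th decimal place
a == b                 # => true: the tiny difference is rounded away

1000.0.next_float - 1000.0   # => ~1.14e-13, the best resolution near 1000
```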