Strange behavior in Length Unit conversion

Hello,

Does anyone have an idea why, on SketchUp 2026 only, I get the following results?

2520/25.4             => 99.21259842519686
2520.mm.to_inch       => 99.21259842519686
"2520mm".to_l.to_inch => 99.21259842519684  # Last digit is different

This behavior does not occur in SketchUp versions prior to 2026.
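For the curious, plain Ruby (no SketchUp needed) can measure the gap between the two printed values in units in the last place (ULPs) by reinterpreting the IEEE 754 bit patterns as integers. The value `b` below is simply the `"2520mm".to_l.to_inch` output quoted above:

```ruby
# Measure how far apart two Floats are in units of the last place (ULPs)
# by reinterpreting their 64-bit IEEE 754 bit patterns as integers.
def ulp_distance(x, y)
  bits = ->(f) { [f].pack('G').unpack1('Q>') }  # raw 64-bit pattern, big-endian
  (bits.call(x) - bits.call(y)).abs
end

a = 2520 / 25.4           # plain Ruby Float division
b = 99.21259842519684     # the value "2520mm".to_l.to_inch reportedly yields in 2026

puts ulp_distance(a, b)   # only a ULP or two: the values are nearly identical
```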

I see it also. But how could this hurt any building level computations?

The difference is 20 quadrillionths of an inch, or about half a femtometer. The AI says this distance is 20,000 times smaller than the thermal vibration of atoms themselves, and a little over half the diameter of a proton.
By comparison, if an atom were the size of a football stadium, this difference would be the size of a grain of sand in the middle of the field.

The AI also correctly identified this (without my telling it how I got the number) as the “leftover dust” from converting 252 cm to inches.


The AI also says “SketchUp’s .to_l parser is losing or rounding a single “least significant bit” compared to Ruby’s native / operator.” (Which is actually the Float#/() method.)


Thank you, but I don’t need AI to tell me that the difference is negligible. The point is that it is a sign of a change somewhere in the way 2026 converts this value, and a divergence from simple division.

As mentioned, this behavior doesn’t occur in previous SketchUp versions. Does AI know why :wink: ?


As I said above …

It also said this implies that the String#to_l API method is doing something different from what the core Ruby Numeric division method does on the C side.*

* Keep in mind that the AI cannot see SketchUp’s core code. It is making an educated guess.

But WHY CARE? If you use API comparison #== methods this difference is MEANINGLESS.

Floating Point Math on computers is not really “simple”. There are inherent errors and gaps in precision.

@bugra Do you have any idea?

Of course, and that’s why I don’t understand all the talk about AI here…
If I ask a question on a human forum, it’s not to get an AI response.

Because my extension stores this value in material attributes, and if the value was stored by a version prior to 2026, strict comparison of the Float value no longer works when the model is opened in 2026 or later.

I therefore wonder why this change was introduced in SU 2026, and I would like to point out that it is not even consistent across the API, since 2520.mm.to_f does not give the same result as "2520mm".to_l.to_f… even though the .mm method creates a Length object too.

2520.mm.to_f == "2520mm".to_l.to_f
=> false

And at the risk of repeating myself, this difference did not exist in previous versions of SketchUp…
So, does the preceding inequality constitute a desired behavior?

Expecting an exact match between calculated values in floating point arithmetic is always risky. One must test equality to within a tolerance.
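One common way to write such a tolerance in plain Ruby is a relative comparison scaled by Float::EPSILON (the gap between 1.0 and the next representable Float). A minimal sketch, with the second value taken from the output quoted earlier in this thread:

```ruby
# Compare two Floats within a relative tolerance of a few ULPs
# instead of using the exact == operator.
def nearly_equal?(x, y, ulps = 4)
  (x - y).abs <= [x.abs, y.abs].max * Float::EPSILON * ulps
end

a = 2520 / 25.4
b = 99.21259842519684   # value quoted earlier in this thread

nearly_equal?(a, b)     # true, even though a == b may be false
nearly_equal?(1.0, 2.0) # false, as it should be
```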

Unless you are a tech nerd, stop reading here!

I once had a long argument with a compiler writer at Microsoft about how a value assigned to a variable should not change spontaneously. I had one expression that set y = x and later tested y == x. No calculations involved. The compiled code returned false! He argued they were floating point values, so this was expected. I argued that it broke the semantics of the language, in effect causing x != x. He wouldn’t listen.

For the true tech nerds out there, it turned out to be a quirk of Intel’s floating point accelerator, which used an 80-bit representation for values held in its registers and an external 64-bit representation for values in RAM. The compiler was generating code that pushed y out into the external representation because it ran out of internal registers. The 64-bit value was truncated from the 80-bit one!


Misalignment of objectives

I know, but I didn’t expect this 6.0 feature to change :wink:

Agreed.

And again, as I said before …

So this means that using the Ruby core Float#== is the wrong pattern to use if you are going to save and compare to 14 decimal places.

Instead, round or truncate the value to say 6 decimal places before storing or comparing.

Or, after reading it back, convert the Float to a Length and use the Length#== method to compare within SketchUp’s internal tolerance.
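Both suggestions can be sketched in plain Ruby. Length#== exists only inside SketchUp, so the fixed tolerance below is a stand-in for what it does internally (the 1/1000-inch value is an assumption based on SketchUp's documented internal precision):

```ruby
# Option 1: round before storing or comparing.
stored    = (2520 / 25.4).round(6)         # what an older version might have saved
read_back = 99.21259842519684.round(6)     # the value as reparsed in a newer version
stored == read_back                        # true after rounding

# Option 2: compare within a fixed tolerance, mimicking what SketchUp's
# Length#== does internally (assumed here to be 1/1000 inch).
TOLERANCE = 0.001
def length_eq?(x, y, tol = TOLERANCE)
  (x - y).abs < tol
end

length_eq?(2520 / 25.4, 99.21259842519684) # true: well inside a thousandth of an inch
```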

As Steve pointed out, this can be caused by the compiler (or perhaps a compiler option used when compiling), by the CPU architecture on which the application is running, or by the C runtime being used.

But …

2520.mm == "2520mm".to_l
=> true

Not desired, but not totally unexpected given the limitations of floating point arithmetic in computers.

  • I doubt seriously that the SketchUp team will spend any time attempting to “fix” this.

I’m not going to type out the long explanation. Take note of the last paragraph and last sentence in the AI explanation.

  • TL;DR - 64-bit floating point numbers are limited to roughly 16 significant decimal digits, which means there are often errors and discrepancies in the last few digits. So in reality a programmer should not expect precision to the 14th decimal place. It may be safer to truncate or round to the 12th.
An AI-generated explanation of floating point discrepancies follows:

This discrepancy typically arises from Floating-Point Non-Determinism or subtle differences in the Floating-Point Environment. While the IEEE 754 standard defines how numbers are stored, it does not always guarantee identical results across different execution contexts.

1. Extended Precision and Register Spilling

Many processors, particularly x86 architectures, perform calculations in 80-bit “extended precision” internally, even when the target variable is a 64-bit double.

  • The Difference: If a value remains in a high-precision CPU register during one calculation but is “spilled” (written) to 64-bit memory in another, the rounding occurs at different stages.
  • Result: A value that was truncated or rounded to fit into 64 bits will differ slightly from one kept in an 80-bit register, often appearing in the 14th to 17th decimal place.

2. Compiler Optimizations and FMA

Modern compilers often use Fused Multiply-Add (FMA) instructions. Instead of performing $a \times b = \text{intermediate}$ and then adding $c$, the CPU computes $a \times b + c$ in a single step with only one final rounding.

  • If one version of your code uses FMA and another (perhaps due to a slight change in logic or optimization level) uses separate instructions, the intermediate rounding error in the latter produces a different bit pattern.

3. Library Implementation (Math.h)

Standard library functions like pow(), sin(), or string-to-double converters (atof, strtod) are not always identical across different versions of a C++ runtime or OS.

  • One environment might use an SSE2-optimized version of a function, while another uses a generic implementation.
  • Because these functions often use polynomial approximations, the specific coefficients used can lead to a difference of one unit in the last place (ULP).

4. Expression Evaluation Order

Floating-point math is not associative.

$$(a + b) + c \neq a + (b + c)$$

If the conversion involves multiple steps and the compiler reorders them—or if the values are processed in a different sequence—the accumulated rounding error will shift. In a 64-bit double, the precision limit is roughly 15–17 significant decimal digits; thus, the 14th or 15th place is the primary “battleground” for these rounding errors.
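The non-associativity point is easy to reproduce in Ruby itself, using nothing more exotic than 0.1, 0.2, and 0.3:

```ruby
a, b, c = 0.1, 0.2, 0.3

left  = (a + b) + c   # 0.1 + 0.2 rounds up to 0.30000000000000004 first
right = a + (b + c)   # 0.2 + 0.3 is exactly 0.5, so no early rounding error

left == right         # => false
left - right          # a one-ULP discrepancy, just like the one in this thread
```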




Trouble is, compilers and hardware are designed to optimize throughput, and they may rearrange expressions to accomplish this, resulting in tiny differences in floating point results. Just changing to a different compiler could cause differences like you are seeing.


I also see it.


It would be poor etiquette to post AI generated content without stating the source.

Secondly, when the subject matter is complex and requires an extremely verbose explanation, do you think it is fair to ask someone to type out such a long response, when the answer is already out there on the web and easily found by asking any available AI for an answer?

The difference seems to be due to a code change where we calculate a conversion factor, such as performing a multiplication rather than a division. These are bound to cause minute numerical differences in the outcome.
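That kind of change is easy to mimic in plain Ruby. This is only an illustration of the rounding effect, not SketchUp's actual code: dividing by 25.4 rounds once, while multiplying by a precomputed reciprocal rounds the reciprocal first and the product second, so the two paths can disagree in the last bit.

```ruby
MM_PER_INCH = 25.4

via_division   = 2520 / MM_PER_INCH          # one correctly rounded division
via_reciprocal = 2520 * (1.0 / MM_PER_INCH)  # reciprocal rounds first, product rounds again

p via_division
p via_reciprocal
# The two code paths may disagree in the last bit, yet both sit within
# a couple of ULPs of the mathematically exact 2520/25.4.
```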


Thank you very much for your answer @bugra!

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.