Accuracy and return values from inputbox when default value is a length

There is a problem with accuracy when I use floating_point_number.to_l as a default in an inputbox. The floating point number gets converted to a truncated length and is returned as an inaccurate version of itself in the indexed return value.
For example

input = UI.inputbox(["Float", "Text"], [12.123456789.to_l, "Jim"], "Basic Data")
puts input[0].to_f

If you just hit return on the input dialog box the number comes back wrong.

I want the dialog to show units in the user's format, but I want the return values on accepted inputs to come back exactly the same as they went in.

I get a different result on my Mac. The inputbox displays ~ 12 1/8", which is the standard representation of your value given my model units setting. The return value is then exactly 12 1/8" because the field is actually a string input box and inputbox converts it to the same type as originally passed: '~ 12 1/8"'.to_l gives back 12 1/8". Depending on your model's units settings, this sort of thing is probably happening to you.

That is exactly my point. I don't want the rounding, I want exactly what was passed to the field back again.

Sorry, but that's a problem that the existing UI.inputbox can't solve. It converts your original values to strings because the input fields are text inputs - they can only display strings, and the user can only type strings. Formatting a value for display as a string will lose precision, particularly if the value is a length, because not all float values have an exact string representation as a length. On return, it parses the string as a length because that's necessary to ensure that the user entered a valid length.
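
A plain-Ruby sketch of that round trip (SketchUp's Length and .to_l aren't available outside SketchUp, so format with a fixed precision stands in for the units formatting): formatting for display drops digits, and parsing the displayed string back cannot recover them.

```ruby
# Plain-Ruby sketch: `format` stands in for SketchUp's units display,
# and `to_f` stands in for the length parsing done on return.
original = 12.123456789

# Format as the dialog would display it (4 decimal places here, a
# stand-in for the model's length precision setting).
displayed = format("%.4f", original)   # "12.1235"

# Parse the displayed string back, as inputbox does on return.
returned = displayed.to_f

puts displayed
puts returned == original   # false - the extra digits are gone
```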

you just need to handle it yourself…

default = 12.123456789
shown = 12.123456789.to_l
input = UI.inputbox(["Float", "Text"], [shown, "Jim"], "Basic Data")
if input[0] == shown
  p default
end

It is a pity the inputbox loses precision in that way. I wanted to compare the value returned with another and if it had or hadn’t changed do different things. Thank you for your reply.

thank you for the idea which might work but not as illustrated. I tried it but the input[0] is truncated and so never == the shown value. I will play around with it tomorrow. It is getting a bit late here in Ireland. Thank you again.

`shown = 12.123456789.to_l.to_s`

locking it as a string seems to work with metric and architectural units here…



YeeeeH! That works! Thank you for that.

I think this problem must surely have affected other programmers. This will be a useful thread for them to read.

default = 12.123456789
shown = 12.123456789.to_l.to_s
input = UI.inputbox(["Float", "Text"], [shown, "Jim"], "Basic Data")
puts input[0]
puts shown
if input[0] == shown
  puts "input[0] #{input[0]} IS equal to shown #{shown}"
else
  puts "input[0] #{input[0]} is NOT equal to shown #{shown}"
end

I tried the above with 0 (zero) to the left of the decimal point and with 10000000 to the left of the decimal point. Both worked fine.

Thank you again

Oops! Having looked at it again, I realize the comparison works only because both numbers have now been truncated. Ah well. It looked so promising.

I could temporarily change the model's units accuracy; might that help? Best would be if inputbox left the original length at its full accuracy.

Thank you for trying nevertheless.

In my plugins I simply check if the string starts with "~", and if it does I assume the user didn't modify the value.
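
A minimal sketch of that check, assuming the returned field text is available as a string (value_unmodified? is a hypothetical helper name, not part of the API):

```ruby
# Hypothetical helper: if the returned text still carries the leading
# "~" that SketchUp adds to rounded values, assume the user accepted
# the default without editing it.
def value_unmodified?(returned_text)
  returned_text.strip.start_with?("~")
end

puts value_unmodified?('~ 12 1/8"')  # true  -> keep the stored original
puts value_unmodified?('13"')        # false -> user typed a new value
```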


The accuracy will differ between 32-bit and 64-bit machines, as well.

That would show some really odd values in the inputbox. You'd see things like "0.7187500000000214", "2.9999999999999982" and "9.999999999999998" all the time instead of values that make sense to the user.

I think it makes sense to have the inputbox show lengths consistently with the VCB, dimensions and the rest of the SU UI.


I like your idea of checking for the tilde, except there is the (low) possibility that a user might type it in. I only want the original value returned if accepted by the user; the user could be presented with a rounded version of the number. The problem is that when you convert numbers between unit systems you get long decimals, and if inputbox truncates these it is impossible to do comparisons.

I guess that would not matter since you are not passing the value between different machines but from inputbox to the ruby script on the same machine.

I tried changing the precision, but realized that running the code on later, more accurate machines would fail.

default = 10000000.123456789
shown = 10000000.123456789.to_l
puts "default is #{default}. shown is #{shown}"
options_manager = Sketchup.active_model.options
unitsOptions = options_manager["UnitsOptions"]
lengthPrecision = unitsOptions["LengthPrecision"]
unitsOptions["LengthPrecision"] = 16 #16 decimal places of accuracy

input=UI.inputbox(["Float", "Text"], [shown, "Jim"], "Basic Data")
puts "input[0] is #{input[0]}. shown is #{shown}"
puts "input[0].inspect is #{input[0].inspect}. shown.inspect is #{shown.inspect}"

unitsOptions["LengthPrecision"] = lengthPrecision #puts LengthPrecision back to what it was

output on my machine

default is 10000000.12345679. shown is ~ 254000.00m
input[0] is 254000.0031358024900000m. shown is 254000.0031358024900000m
input[0].inspect is 10000000.123456791. shown.inspect is 10000000.12345679

Interestingly .inspect produces different numbers. I wonder why?

Since SketchUp’s dimensions are only accurate to 1/1000", accepting input that’s more accurate is fruitless.
So why not always process the user's input rounded to that level of accuracy?


Thank you. The way to do it, therefore, is to change precision temporarily to 1/1000", compare the inputbox return value with the stored variable similarly rounded, then reset precision. I can do that.
Thank you everyone for your helpful suggestions.

Just one last question. Is SkectchUp’s precision limit held as a Constant on the system? Then I could use that instead of specifying it myself.

SketchUp has an innate accuracy of 1/1000"
Any points closer that that are seen as coincident and with those a tiny edge can’t be created.
However, a tiny edge can exist in SketchUp - but you need to create it [or its container group etc] larger, and then scale it smaller afterwards.
The displayed units as set in Model Info do not directly affect SketchUp inner-precision.
So if you set it to display in mm to 0 dp you see 25mm if its a line exactly 25mm long, but ~25mm if it’s actually 1.000" long [which is 25.4mm, so the approximation ~ is added and the valued rounded - in this case it rounds down since it’s < 0.5mm].
So if the user types in 25.3456789mm it’ll display as ~25mm afterwards and actually be 0.997861374015748032" although SketchUp will round it to 0.9979" ?

I guess the accuracy limit isn’t kept as a Consant like PI?

It isn’t really a question of accuracy. What happens is that after many kinds of operations SketchUp executes a “clean up” operation to assure that the geometry hasn’t degraded due to finite precision computer arithmetic. This operation detects vertices that are closer together than 1/1000" and merges them into one. This does not truncate or modify the position values of the kept vertex, rather, it relinks edges that use one of the vertices to use the other one instead.

The merging threshold value is internal to the SketchUp code and is not accessible via any API. Over the years, extension developers have pried the threshold value out of the SketchUp team, but it has never been documented officially anywhere. Nor has the exact capture shape been disclosed (is it a sphere, a cube, or what?).

The effects that have been discussed in this topic have to do with SketchUp’s handling of length types when they are displayed, which does truncate the displayed value to the number of places given in the user’s units selection, indicating it did so with a leading ‘~’. When such a value makes a round trip via an inputbox, the returned value is truncated to what was displayed.