Timers / estimates & abortable operations + cost/benefit feature

Two relatively simple changes that would save MILLIONS of user-hours:

1.) Before any operation, have a subroutine calculate / count the number of sub-units in question, and display an estimate of the time it would take.

2.) Allow the user to ABORT the operation if it’s taking way too long.
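
As a minimal sketch of idea (1) - purely illustrative; the per-unit cost would come from the benchmark I describe further down, and none of these names are SketchUp's:

```python
# Hypothetical sketch of the pre-flight estimate: count the sub-units the
# operation will touch and multiply by a measured per-unit cost.

def estimate_seconds(sub_unit_count: int, seconds_per_unit: float) -> float:
    """Rough ETA: number of sub-units times the measured cost per sub-unit."""
    return sub_unit_count * seconds_per_unit

if __name__ == "__main__":
    # e.g. a purge that has to inspect 12,000 component instances,
    # on a machine benchmarked at ~4 ms per instance
    eta = estimate_seconds(12_000, 0.004)
    print(f"Estimated time: ~{eta:.0f} s")  # "~48 s" - wait, or go get coffee?
```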

There are way too many situations where you start something like a cleanup process, or component purge, or some complex geometry transformation, and… you wait… and you wait… and you wait some more… and the worst part is, you do not know if it will finish in the next 5 seconds, 5 minutes, or 15 hours.

If it’s 5 seconds, I’ll wait.
If it’s 5 minutes, I’ll take a coffee break.
If it’s 15 hours, I’d rather abort it and schedule it for the end of the work day so it can run overnight, OR IN A DIFFERENT PROCESS.

There’s simply no way to know, and it’s very nerve-wracking and time-consuming.

Allowing users to abort an operation that’s taking too long, whether it’s because it’s frozen/stuck, or simply taking too long versus the benefit it’s bringing, would go a long way toward improving usability across the board.

“But how will the program know how long it will take? Every computer is different!”

Well, you could have a basic benchmark feature that would run through some tests and “learn” how long YOUR particular machine takes to perform a given number of operations of a given type.
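
As a rough sketch of what I mean - the timed workload here is just a stand-in; a real version would time representative modelling operations instead:

```python
# Rough sketch of "learning" a machine's speed: time a fixed synthetic workload
# and derive a per-unit cost that later estimates can be based on.
import time

def calibrate(units: int = 200_000) -> float:
    """Return the measured seconds per unit of a stand-in workload."""
    start = time.perf_counter()
    total = 0
    for i in range(units):          # placeholder work; a real benchmark would
        total += (i * i) % 97       # exercise geometry/transform code instead
    elapsed = time.perf_counter() - start
    return elapsed / units

if __name__ == "__main__":
    per_unit = calibrate()
    score = 1.0 / per_unit          # hypothetical "performance score": units per second
    print(f"~{per_unit * 1e6:.2f} µs per unit, score ≈ {score:,.0f} units/s")
```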

This brings me to another feature / side-benefit: a unified “Sketchup Performance Score” that would allow users to make more informed decisions regarding upgrading their hardware, operating system, graphics drivers, etc.

This would benefit the Sketchup userbase / ecosystem by creating a cost/benefit basis for hardware upgrades. If a better CPU / GPU combo would yield 500 additional SPS at a cost of, let’s say, $50, it’s worth it… if it would only yield an extra 25 SPS, it’s not.
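
To make that arithmetic explicit: $50 / 500 SPS works out to $0.10 per point gained, while $50 / 25 SPS is $2.00 per point - twenty times worse value for the same money.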

Currently, there’s no way to objectively benchmark the monetary value of hardware upgrades for this specific software. With a unified benchmark score - there would be.

Hardware vendors would love it because it’s an additional selling point (“our video cards outperform the other guys in Sketchup by 18% for the same money!”).

Trimble should love it because it’s more market exposure at no additional cost.

Users would love it because it lets them make informed decisions.

(P.S. I’m available for marketing consultations, in case you guys need someone with 20+ years of watching WordPerfect, CorelDRAW, AutoCAD, Illustrator, Photoshop, and other programs develop.)

Only if you add emulation for PlayStation and Xbox to your list of feature requests. Otherwise I can’t see any video card / hardware maker giving any thought to SketchUp.

Sketchup, like what we put on hamburgers? Ehhh, ¯\_(ツ)_/¯ why does my video card need to care about that?

Even basic games seem to get Nvidia’s Game Ready optimisations these days.
And sometimes a piece of software doesn’t have to be popular to catch the attention of hardware companies. Look at Cinebench or WinRAR or Blender (Cycles) as examples of how a good benchmark function within the software can become influential among reviewers despite the software being reasonably niche.
That said, I doubt SketchUp’s utilisation of hardware warrants a useful benchmark tool, as it doesn’t appear to fully utilise any hardware component.
But I like the idea of a more scientific way of choosing hardware for SketchUp. The current Test Time Display Benchmark doesn’t cut it.

This is usually said by someone who has no knowledge of the subject (in this case, coding).
I wouldn’t know whether these changes are simple, as I know nothing about coding, so I wouldn’t say such a thing…

Otherwise I agree with your feature request.


That would be great and thanks for sharing, but they are unfortunately not relatively simple changes.

8 years old but still very relevant: Progress Bars are Surprisingly Difficult

The abortable operation is also pretty hard to do well. You need to be able to perfectly roll back all the changes made so far to the model, and to make sure the rollback itself doesn’t take so long that the user would have been better off just waiting for the operation to complete.


If you know there’s a possibility of aborting / undoing, it’s actually simple from the software-architecture POV.

You save a state / checkpoint, and then start the operation. If it’s aborted, you unload whatever is in memory and load the checkpoint. If it’s completed (and confirmed by user), you save the new version as the next checkpoint.
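
Roughly this pattern - a pseudo-Python sketch of the idea, not SketchUp’s actual internals:

```python
# Sketch of the checkpoint pattern: operate on a copy, keep the original as
# the checkpoint, and only adopt the copy once the user confirms.
import copy

class CheckpointedModel:
    def __init__(self, state: dict):
        self.state = state

    def run(self, operation, confirmed_by_user) -> bool:
        working_copy = copy.deepcopy(self.state)     # checkpoint = current state
        try:
            result = operation(working_copy)         # heavy work happens on the copy
        except KeyboardInterrupt:                    # stand-in for a user abort
            return False                             # copy discarded, checkpoint untouched
        if confirmed_by_user(result):
            self.state = result                      # new version becomes the next checkpoint
            return True
        return False

if __name__ == "__main__":
    m = CheckpointedModel({"faces": 10})
    ok = m.run(lambda s: {**s, "faces": 5}, confirmed_by_user=lambda s: True)
    print(ok, m.state)
```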

Yes, it’s more expensive in terms of memory and storage, but this isn’t 1994, storage is cheap and users’ time is much more valuable.

The question of “is it going to take longer to load/unload the checkpoint or to complete the operation” would be resolved by the internal benchmarking - if the program knows it will take 8.2 seconds to load the file, versus 9 minutes to complete the operation, it’s a pretty straightforward call.

Or let the user decide:

“This operation is estimated to take 9 minutes to complete. If aborted, it will take <1 minute to reload the previous state. Proceed? [Yes / No].”

And then it’s not your problem, it’s the user’s.
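
Something like this, where the numbers would come from the internal benchmarking (purely an illustration):

```python
# Sketch of the "let the user decide" prompt, fed by benchmark-based estimates.
def confirm_operation(op_name: str, est_run_s: float, est_rollback_s: float) -> bool:
    minutes = est_run_s / 60
    prompt = (f"{op_name} is estimated to take ~{minutes:.0f} minutes. "
              f"If aborted, reloading the previous state takes ~{est_rollback_s:.0f} s. "
              "Proceed? [y/n] ")
    return input(prompt).strip().lower() == "y"

if __name__ == "__main__":
    if confirm_operation("Purge components", est_run_s=540, est_rollback_s=50):
        print("Running (abortable)...")
```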

I don’t know, maybe I’m spoiled by COBOL and Linux, but the general Windows mentality of “it’s gonna take as long as it’s gonna take, buddy, deal with it” irks me… Microsoft’s lackadaisical attitude has a cascade effect on pretty much everything developed under Windows. It’s not even a SketchUp issue, it’s a systemic issue, I’m not blaming you guys.

From the “Progress Bars” link:

“still useful, as established by Brad Allan Meyers in his 1985 paper (“The importance of percent-done progress indicators for computer-human interfaces”).”

Exactly what I’m talking about. Even back forty years ago (!!!), software architects understood the value of user awareness of machine processes. I just think that with the massively more powerful hardware we have today, “the edges are smoother” and it should be easier to run estimates.

Maybe it doesn’t have to be a definitive time estimate, fine. But at least a progress bar showing the completed work versus remaining, and percentage, so the user can guesstimate how long it’s taking. This is the approach that Epic Games takes - I can start an update on a game, and it’s not showing a time estimate, but if I see “1-2-3-4-5-6-7-8-9-10%” roll past in a few seconds, I know instinctively it’s going to be a minute. If I see “1%… … … … … 2%”, I know it’s going to be about 5 times slower. But at least I know SOMETHING.
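
Even that much could be a thin wrapper around the work loop - something like this rough sketch (purely illustrative, none of it SketchUp code):

```python
# Sketch of a rate-based progress readout: even without a reliable up-front
# estimate, extrapolating from completed work gives the user SOMETHING.
import time

def run_with_progress(items, process_one):
    start = time.perf_counter()
    total = len(items)
    for done, item in enumerate(items, start=1):
        process_one(item)
        elapsed = time.perf_counter() - start
        remaining = elapsed / done * (total - done)   # naive extrapolation
        print(f"\r{done * 100 // total:3d}% ({done}/{total}), "
              f"~{remaining:.0f} s left", end="", flush=True)
    print()

if __name__ == "__main__":
    run_with_progress(range(200), lambda _: time.sleep(0.01))
```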

This current situation with “Purge Components” (“… … ??? … ??? … ??? Nothing’s moving … … ??? … Hmm, is it frozen? … … ??? … … I wonder what the weather will be like tomorrow … … … Nope, still frozen … … … … Ok, let’s get some coffee … … … … Still nothing changed … … … Huh … … … … So should I… … … I don’t know, give it another minute … … …”) is much more nerve-wracking.

And that’s just the SketchUp internal functions. Doing things like Polyreduce or Cleanup^3 on a complex model could literally take HOURS. (My personal record is 14 hours and change, I have the screenshot somewhere. I knew it was going to take that long, so I just let it run in a separate process. Imagine if you have to sit there and guess at it for 14 hours…)

Just saying. There has to be a way to bring SOME enlightenment of the process to the user interface. And an internal benchmark subsystem is an excellent way to start.

Yes we do this already for the undo system.
It’s not going to be just a matter of a handful of lines of code to use it to make all operations abortable though!


Nobody disputes that this is valuable.
But it’s not simple.


One thing that Rendering packages (and gaming benchmark tools) often do well is to expose a small console so that the user can see which sub-operations are being undertaken, as they happen.
This is a great learning tool because it can show what is taking the longest, or if something has stalled… which could point to any number of things a user can go back and optimise, or work around differently, to avoid reaching a particular threshold of performance degradation.
In the case of rendering it could be caching, processing geometry, loading the environment, scattering instances, transforming, processing textures, raytracing, exporting image data, etc.
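
Even a very basic version - timestamped phase logging - would go a long way. A rough sketch (the phase names are just examples from the list above, not anything SketchUp actually exposes):

```python
# Sketch of a sub-operation console: log each phase with its duration so the
# user can see what is running and what is taking the longest.
import time
from contextlib import contextmanager

@contextmanager
def phase(name: str):
    print(f"[{time.strftime('%H:%M:%S')}] {name}...", flush=True)
    start = time.perf_counter()
    yield
    print(f"[{time.strftime('%H:%M:%S')}] {name} done "
          f"({time.perf_counter() - start:.1f} s)", flush=True)

if __name__ == "__main__":
    with phase("processing geometry"):
        time.sleep(0.3)          # placeholder work
    with phase("processing textures"):
        time.sleep(0.2)
```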


Thank you for replying - one of the reasons Sketchup is my favorite is because of the incredible community and support from the dev team!

There has to be a way to do this. I don’t know which language Sketchup is written in, but this seems more of a vision/approach issue than a limitation of the programming language/framework. Right now, it’s a monolithic process where everything happens under the assumptions that (1) every operation will take a reasonable time, and (2) every operation will complete successfully. So there are no layers or abstractions.

However, if it were refactored as a branched / layered process, where every operation is assumed to have a possibility of failure or user rejection - i.e. before “risky” ops, the program makes a copy of the process / data and operates on that, with a “supervisor” watching it and interacting with the user, both providing feedback (like @AK_SAM said) and offering an internal, safe method of aborting the “copy” if needed - it would be a wildly different experience.
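
As a very rough sketch of that supervisor idea (Python’s multiprocessing standing in here; every name is hypothetical, this is just the shape of it):

```python
# Sketch of the supervisor idea: run the risky operation on a copy of the data
# in a separate process, watch it, and keep a safe way to abort it.
import copy
import multiprocessing as mp
import time

def risky_operation(data, result_queue):
    time.sleep(2)                      # placeholder for the heavy work
    result_queue.put(sorted(data))     # placeholder result

def supervised_run(data, timeout_s: float):
    queue = mp.Queue()
    worker = mp.Process(target=risky_operation, args=(copy.deepcopy(data), queue))
    worker.start()
    worker.join(timeout_s)             # the supervisor waits, but only so long
    if worker.is_alive():
        worker.terminate()             # abort the copy; the original data is untouched
        worker.join()
        return None
    return queue.get()

if __name__ == "__main__":
    print(supervised_run([3, 1, 2], timeout_s=5.0))
```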

To be clear, I’m not posting all these because I have a problem with Sketchup. Quite the opposite. It’s been an absolute godsend and it’s allowed us to take our design business to an entirely new level - and I would love to contribute any thoughts or suggestions to make it even better :slight_smile:

I feel you, but I grew used to it.
The good thing is that these operations in SketchUp only keep a single core of your CPU busy.
Once models get heavy, you can still open the same file twice, run the CPU-heavy computation on one, and keep working on the second.
And once the computation is over, copy/paste the updated parts into the clean file.
It doesn’t help with knowing how long operations will take, but it helps avoid wasting any time waiting.
At least that’s how I do it.