Commit_operation takes too long

Because v2017 needs a better OpenGL version than before [>= v3.0]…
What OpenGL version do ALL of the various computers have?
Any differences?

The problem computer is running 4.5

The other computers I tested on are running 4.3 or 4.4.

Does anyone have a computer running OpenGL 4.5 to test this on? I don’t.

I’ve said it many times here on the forums, …

… but I'll never suggest upgrading anything older than 4th generation machines to Windows 10. And locally, we made the choice to leave all our 4th generation machines at Windows 8.1.

I believe this is the root of your issues. Old machines that should never have been upgraded to Win 10 are experiencing multiple performance issues, especially with selection sets. (See the Technical Problems category.)


Agreed! p() is a global method, and each time the interpreter encounters p it has to take extra time to determine whether you want to call the method or access a local object reference named "p".

Avoid using the names of global methods (from BasicObject, Object, Kernel, Comparable, or Enumerable) as local references.
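
A minimal sketch of the shadowing problem (the method and variable here are just for illustration):

  def shadow_demo
    p 'hello'          # parsed as a method call: prints "hello"
    p = [1, 2, 3]      # a local variable now shadows Kernel#p
    puts p.inspect     # a bare `p` resolves to the array, not the method
    p('still works')   # explicit parentheses still reach Kernel#p
  end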

Thanks for explaining why not to use p as a variable.

Except that in this case it’s the newest (supposedly fastest) computer I own!

I could stand 100 ms even… but 1200 ms is too sluggish.

It is definitely not new if it's using an old, obsolete CPU first released in the 1st quarter of 2011!

When it arrived at your door has nothing to do with its capabilities.

http://ark.intel.com/products/52213/Intel-Core-i7-2600-Processor-8M-Cache-up-to-3_80-GHz

That’ll teach me for not doing my research. :frowning:
So my CPU is the problem?

The CPU is at least 2 years older than any of the other computers I tested. :frowning:

that’s why the i5 had better results in the comparison I looked at…

time to buy a mac…

john

Well ya get what ya pay for. Sometimes cheap really does mean “cheap”.

I'd first see if you can send the item back (seems like it may be too late, as it's almost 3 months).
Second, I'd check the motherboard and see if its BIOS and chipset support a newer generation CPU. (Not likely, as they seem to use different packages and sockets each generation.)
If that fails, I'd look at a newer motherboard, CPU, and memory combination, and keep the SSD, case, power supply, and Nvidia card. (If you are going new, I cannot see buying less than a 6th generation.)

Or you could pass the "savings" :wink: along to someone else who needs a newer machine, but not a 3D modeling machine, and use the money to buy a newer machine the second time around.


All that said, I’d say it is a combo of such an old CPU & chipset coupled with Windows 10 slowness issues.

Thanks for your help, Dan. It's a little disappointing, but I think I might keep this computer around for testing. Clients don't always have the best computers, and sometimes there are issues that don't show up on faster machines.

I once had a similar issue - deleting a single group from the active entities collection took approx. 1 s to commit.
This worked for me (no idea why or what's going on behind the scenes, but it was reproducible in SketchUp 2016 and 2017): just before the line containing the commit operation, I added a second (transparent) Sketchup.active_model.start_operation(<opname>, true, false, true). Execution time for the commit went down to <50 ms in my script.
Maybe worth a try…
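
A rough sketch of that workaround, assuming a simple group-erase operation (the variable group and the operation name are placeholders):

  model = Sketchup.active_model
  model.start_operation('Delete Group', true)
  group.erase!  # the edit whose commit was slow
  # Reported workaround: open a second, transparent operation
  # (4th argument true) right before committing. No idea why it
  # helps, but it reportedly cut the commit from ~1 s to <50 ms.
  model.start_operation('Delete Group', true, false, true)
  model.commit_operation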

Thanks for the tip. In this case it made no difference.

Tips for optimum performance.

Workflow - Improving Performance | SketchUp Help

Hardware - SketchUp Hardware and Software Requirements | SketchUp Help

Now the problem has disappeared. Maybe I had gremlins. :slight_smile:

Ok I actually figured it out.

I had code that deleted components and recreated them. The problem was that the old definitions were not purged, so the code became progressively slower. Apparently new definitions take progressively longer to create as the number of definitions increases. This is probably due to a definitions observer I have that iterates the definitions whenever a definition is added.

At any rate, a purge_unused after removing components resolved the issue.
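
For reference, the cleanup is a one-liner once the instances are erased (a minimal sketch):

  # Purge definitions that no longer have any instances, so the
  # definition list doesn't grow with every delete/recreate cycle.
  Sketchup.active_model.definitions.purge_unused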

No seriously, now I actually figured it out. :wink:

The following code takes progressively longer each time it is run; by the 10th run it takes 8.56 seconds.

  def add_test
    Sketchup.active_model.start_operation('test', true, false, false)
    t = Time.now.to_f
    100.times do
      # Every definition gets the same base name, forcing SketchUp
      # to search for an unused numbered variant on each add.
      cd = Sketchup.active_model.definitions.add('test')
      cd.behavior.cuts_opening = true
      cd.behavior.is2d = true
      cd.entities.add_line([0, 0, 0], [0, 10, 0])
      tr = Geom::Transformation.new
      c = Sketchup.active_model.entities.add_instance(cd, tr)
      c.make_unique
      c.erase!
    end
    puts Sketchup.active_model.definitions.count
    puts "#{Time.now.to_f - t} seconds"
    Sketchup.active_model.commit_operation
  end

It turns out what is taking so long is that SU is iterating through 1000 definitions to find a unique name.

The following code runs in 0.04 seconds (213 times faster), simply because the name is already unique.

  def add_test
    Sketchup.active_model.start_operation('test', true, false, false)
    t = Time.now.to_f
    dc = Sketchup.active_model.definitions.count
    for i in (dc + 1)..(dc + 100)
      # The name already carries a unique numeric suffix, so
      # SketchUp doesn't have to hunt for one.
      cd = Sketchup.active_model.definitions.add('test' + i.to_s)
      cd.behavior.cuts_opening = true
      cd.behavior.is2d = true
      cd.entities.add_line([0, 0, 0], [0, 10, 0])
      tr = Geom::Transformation.new
      c = Sketchup.active_model.entities.add_instance(cd, tr)
      c.make_unique
      c.erase!
    end
    puts Sketchup.active_model.definitions.count
    puts "#{Time.now.to_f - t} seconds"
    Sketchup.active_model.commit_operation
  end

SketchUp needs a faster way to check definition names.

Using a hash to check the name takes 0.10 seconds. Allowing SU to do the checking takes 8.56 seconds. Yikes!

  @dnames = {}

  # Return a name that hasn't been handed out yet, mimicking
  # SketchUp's 'name#n' suffix scheme with an O(1) hash lookup.
  def get_def_name(name)
    unless @dnames.key?(name)
      @dnames[name] = true
      return name
    end
    cnt = 1
    cnt += 1 while @dnames.key?("#{name}##{cnt}")
    unique = "#{name}##{cnt}"
    @dnames[unique] = true
    unique
  end
  
  def add_test
    Sketchup.active_model.start_operation('test', true, false, false)
    t = Time.now.to_f
    100.times do
      # Same base name as before, but uniqueness is resolved by our
      # own hash lookup instead of SketchUp's linear search.
      cd = Sketchup.active_model.definitions.add(get_def_name('test'))
      cd.behavior.cuts_opening = true
      cd.behavior.is2d = true
      cd.entities.add_line([0, 0, 0], [0, 10, 0])
      tr = Geom::Transformation.new
      c = Sketchup.active_model.entities.add_instance(cd, tr)
      c.make_unique
      c.erase!
    end
    puts Sketchup.active_model.definitions.count
    puts "#{Time.now.to_f - t} seconds"
    Sketchup.active_model.commit_operation
  end

Yowch! That sounds like SketchUp is using a simple linear list and a text match test to find the definition names. If true, that is just plain bad design! Any list that needs to be searched should use a better scheme such as hash or tree!

It took a substantial amount of effort to pinpoint the problem. I’m actually surprised to find this as the result. It’s not the first thing you’d think to look at.

Edit: It's a long-standing issue. I tried it all the way back to Google SketchUp 8.

… and String comparison is notoriously slow in Ruby.

In the case of resolving a unique definition name when you create it, that's all happening in C++. But even there, a plain linear search is slow when the list is long.

This probably also explains some of the "groups are slow" threads we've seen, as creating groups never prompts for a name in the UI or API. But the definitions do get a name ("Group #x"), so they probably hit the worst-case scenario: traversing the full list to find a unique name.
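
A quick probe of that hypothesis (my own timing sketch, not from the thread; group_test is a made-up helper):

  # Each add_group creates a definition that SketchUp must give a
  # unique "Group #n" name, so creation should slow as groups pile up.
  def group_test
    model = Sketchup.active_model
    model.start_operation('group test', true)
    t = Time.now.to_f
    100.times do
      g = model.entities.add_group
      g.entities.add_line([0, 0, 0], [0, 10, 0])
    end
    puts "#{Time.now.to_f - t} seconds"
    model.commit_operation
  end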

There is probably something similar with material names.
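
The same kind of probe could be aimed at materials (speculative; assuming Materials#add also resolves duplicate names by suffixing):

  # Speculative sketch: adding many identically named materials
  # should show the same slowdown if name lookup is linear there too.
  model = Sketchup.active_model
  t = Time.now.to_f
  100.times { model.materials.add('test') }
  puts "#{Time.now.to_f - t} seconds"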

Back in the early days this was probably less of an issue - I recall that when I started using SU at version 6, a "large model" started at about 20-30K faces. Now we are talking at least 200-300K, and we see models of 1M+ faces more frequently.

So yea - certainly room for improvements there.