This is a bug that I believe started in SketchUp 2015 (possibly SketchUp 2014), but it's hard to track down, so it's only in the last few weeks that I've coincidentally received a few LightUp bug reports showing the same problem.
Calls to Face#mesh for geometry not in a Group/Component can be 1000 times slower than for the same geometry inside a Group.
Here are some timings for a test model exhibiting the problem:
If I break into the code, it appears that SketchUp throws away all the normals and topology info and recalculates them every time Face#mesh is called on a face that is not in a Group, resulting in horribly slow performance.
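Something along these lines shows the disparity (a minimal sketch, not my actual test harness; it assumes the active model contains a few thousand loose faces):

```ruby
require 'benchmark'

model = Sketchup.active_model

# Time Face#mesh over all loose (top-level) faces.
faces = model.entities.grep(Sketchup::Face)
loose = Benchmark.realtime { faces.each { |f| f.mesh } }

# Group the same geometry and time the identical calls again.
# (Entities#add_group with existing entities is only safe when they
# belong to the active editing context, as they do here.)
group = model.entities.add_group(model.entities.to_a)
faces = group.entities.grep(Sketchup::Face)
grouped = Benchmark.realtime { faces.each { |f| f.mesh } }

puts "loose: #{loose}s, grouped: #{grouped}s"
group.explode # restore the model to its original state
```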
Well, that makes total sense. And it seems like it's been that way forever. Think about it. Inside a group, the SketchUp engine need only worry about interaction with the group's (or component's) entities collection.
Outside, in the model at large, SketchUp would have to check for interaction with the entire model’s entities.
I’d think it is wise to do this inside a group, even temporarily, so as to reduce the work SketchUp has to do.
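Something like this could wrap the work temporarily (just a sketch; note the API caveat that Entities#add_group with existing entities must only be used on entities in the active editing context):

```ruby
model = Sketchup.active_model
entities = model.entities

# Temporarily wrap the loose geometry in a group...
group = entities.add_group(entities.to_a)

# ...do the Face#mesh heavy lifting inside the group...
meshes = group.entities.grep(Sketchup::Face).map { |f| f.mesh }

# ...then put everything back as it was.
group.explode
```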
So, specifically to your test, … did the test model have any (and if so, how many) model-level entities?
… and how does that compare to the group's entities (count-wise)?
ADD: Oh, and you just know that Thomas or Chris will ask for a test script and test model(s).
If I have 10,000 entities in the top-level model container or 10,000 entities in a Group container, it's the same amount of work.
Yet in the former case it takes 1000 times longer. I can edit the top-level model container just as easily as a Group container, so whatever is cached for the group should be cached for the model container too. But apparently it's not.
And it is a recent-ish behaviour. SketchUp 8 & 2013 don’t appear to have this problem.
Hm… this might be related to a fix for another issue where face.mesh would not yield correct vertex normals. I believe a change was made to force a vertex normal recalculation whenever face.mesh was called.
That being said - I see no reason for the same call on a similar mesh to perform differently depending on whether it is in model.entities or in a group/component.
Can you share a model and snippet for reproduction?
Sure, it's a model stripped down to just top-level entities. If you grep all the faces, run face.mesh on each, and time it, you'll see the problem. Group everything, re-run, and the problem goes away.
I will need to check with the customer if it's OK to share. However, it appears to be a generic problem, not a model-specific one, so any model with a few thousand top-level faces should exhibit it. FYI, this model had 14,000 entities.
I modified your tests to actually get positions, UVs, normals, edges, etc. to be more realistic, and it does increase the time, but nothing like what I'm seeing.
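Roughly what that "more realistic" extraction looks like (a sketch, not the actual test code; the flag bits passed to Face#mesh request front/back UVQs and vertex normals on top of the points):

```ruby
faces = Sketchup.active_model.entities.grep(Sketchup::Face)

faces.each do |face|
  # 7 = 1 (front UVQs) | 2 (back UVQs) | 4 (vertex normals)
  mesh = face.mesh(7)
  points   = mesh.points
  polygons = mesh.polygons
  # PolygonMesh point indices are 1-based.
  (1..mesh.count_points).each do |i|
    uv     = mesh.uv_at(i, true) # front UVQ
    normal = mesh.normal_at(i)
  end
  edges = face.edges
end
```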
If I break into my code, it is permanently inside the face.mesh call with this stack frame (does this give any hint as to what's going on?)
It all looks like something is triggering SketchUp to recalculate all normals - yet I'm not aware of modifying the model each time I process a Face. Is there anything else that could cause this?
There was an issue where sometimes vertex normals would not be returned correctly. Basically, they had been invalidated at some point and face.mesh was called before they'd been rebuilt.
I just looked at the code for face.mesh, and it triggers an update every time a mesh with vertex normals is requested.
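If that is the cause, the cost should only show up when vertex normals are requested. A quick sketch to check (assuming the default flag of 0 requests points only, and 4 adds vertex normals):

```ruby
require 'benchmark'

faces = Sketchup.active_model.entities.grep(Sketchup::Face)

points_only  = Benchmark.realtime { faces.each { |f| f.mesh(0) } }
with_normals = Benchmark.realtime { faces.each { |f| f.mesh(4) } }

puts "points only: #{points_only}s, with normals: #{with_normals}s"
```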
That explains the overall increase we see in my snippet above. But it still doesn't explain the original issue you described. I think we need an actual model and some code to reproduce it, to see what is going on. I don't see anything here that would warrant a difference between the root of the model and a definition.
First of all, a Geom::PolygonMesh object is a virtual object. (It is not an object that exists within a model, nor is it saved within the model.)
So creating or modifying them does not affect the model entities, nor fire observers attached to entities collections (because they are not members of any entities collection).
Likewise, creating or modifying them should not affect the undo stack, as no changes to the model occur. So it serves no purpose to wrap only their manipulation within an undo operation. (If their manipulation is mixed with other model changes, then an undo block is understandable.)
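To illustrate (a minimal sketch):

```ruby
mesh = Geom::PolygonMesh.new

# Build a triangle entirely in memory. Nothing is added to the model,
# no observers fire, and the undo stack is untouched.
i1 = mesh.add_point(Geom::Point3d.new(0, 0, 0))
i2 = mesh.add_point(Geom::Point3d.new(10, 0, 0))
i3 = mesh.add_point(Geom::Point3d.new(0, 10, 0))
mesh.add_polygon(i1, i2, i3)

# Only an explicit call such as Entities#add_faces_from_mesh would
# turn it into real (undoable) model geometry.
```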
But comparing one undo block with no "undoable" items against a block in which a token undoable statement is inserted is not a valid test, IMO. Of course the one with the "undoable" action should take longer, especially if there is any string conversion going on.
Add to that, the test attribute is being attached to the model object for both test loops. Why not the model.entities collection, and then the group.entities collection?
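For clarity, the pattern under discussion is roughly this (a hypothetical reconstruction; the dictionary and key names are made up):

```ruby
model = Sketchup.active_model
faces = model.entities.grep(Sketchup::Face)

model.start_operation('Mesh test', true)
faces.each { |f| f.mesh(7) }
# The token "undoable" action, attached to the model object:
model.set_attribute('mesh_test', 'touched', true)
model.commit_operation
```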
Is it possible some EntitiesObserver or EntityObserver is firing and slowing things down?
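A throwaway probe could check for that (a sketch; the class name is made up):

```ruby
# Logs any element-modification callbacks that fire while meshing.
class ProbeObserver < Sketchup::EntitiesObserver
  def onElementModified(entities, entity)
    puts "modified: #{entity}"
  end
end

probe = ProbeObserver.new
entities = Sketchup.active_model.entities
entities.add_observer(probe)
entities.grep(Sketchup::Face).each { |f| f.mesh(7) }
entities.remove_observer(probe)
```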
I see no difference between having the token attribute write in the block and not.
The average difference between working upon the model.entities and the definition.entities is 2 hundredths of a second. Often less, and occasionally up to 4 hundredths of a second.
Also, it seems random which of the two takes longer. In half the tests, processing the model.entities is the slower of the two; in the other half, the definition.entities processing is the slower.
But then I do not have LightUp installed, nor any heavy extension except for Dynamic Components.
Have you run your tests with no plugins installed, i.e. after renaming the "plugins" folder?