Huge File Size Question

I have a 1 GB .3ds file I imported. In SketchUp it's 1.4 GB. I'd like to cut down the size in SketchUp, but the file consumes about 75% of my 64 GB of RAM and is too slow to work with. I've been planning on adding another 64 GB of RAM to go to 128. If I do so, is it likely to help, or am I simply at the limit of SketchUp here, regardless of whether I have more than enough RAM?

Important Notes:

  • The CPU isn’t even being strained, so that’s not an issue.
  • I do not have access to 3ds nor do I have the option to ask the person who sent it to reduce the file size.
  • I cannot post the file publicly.

Thanks

Pretty hard to help without being able to see the file. Open Model Info > Statistics and screenshot the info there. Make sure to check "Show nested components".

If this is directly from a file conversion, purging will likely not help, but it's good practice to purge unused in any case. If it's purely geometry-heavy, then you need to find a simpler display style to maximize speed: profiles and perhaps edges off. Or cut the geometry into several files.

Do you need the original amount of data, or would a reduction while importing be an option?
https://skimp4sketchup.com/, Lindalë, …


Thanks. I did try that. Unfortunately I wasn't able to get "show nested components" to work, precisely because of the slowness of the file. I may give it another shot tonight.

Well, if there are no components in the model it does not matter, but what happened? Did it just stall when you checked the box, or did nothing happen? What is the edge count of the file?

The CPU will be being strained: one core is all SketchUp can really use, so you'll see one core maxed out.
That's why it is slow.

Once loaded, turn off View > Edge Style > Profiles.
Then start organizing the different objects with tags, and set visibility according to what you are focusing on.


A belated thank you for the replies! I will give this another shot tomorrow, and then I will be able to better answer the questions posed. In case it helps, I started with 32 GB of RAM and the import would always crash at the end. I then put in a new 64 GB pair and was able to import the file and open it. So I wondered if going to 128 might make it usable enough to cut the file size down. As for the CPU, even if SU is only using one core, I don't think I ever saw overall usage go past 10%.

Thank you Cotty, I will give that a try. I tried a few other online importers I had found, and they were limited to a 100 MB file size. But I wasn't aware of the two in your link.

In case anyone else runs into this, here is what I encountered. I tried the Transmutr program, and while it created a file 1/3 the size, the result was unusable. However, I am just learning SU and it's entirely possible that I made the wrong selections.

So today I upgraded from 64 GB to 128 GB of DDR5 RAM, which I've been wanting to do anyhow. It takes several minutes to load the 1.5 GB SU file, but after that I can now easily manipulate it. It went from unusable to being able to orbit etc. in real time with no delays.

The moral of the story is that what your PC shows for consumed RAM can be highly misleading: having 75% consumed was enough to make the file unusable. Here is what the specs look like now with the 1.5 GB SU file open and 128 GB of RAM:

FWIW the processor is an Intel i9-12900K and the GPU is an Nvidia RTX 3090.

And now with the 128 GB of RAM I am able to see all the statistics for the file with no issues either, which was not possible with 64 GB of RAM, since the file would freeze for several minutes the moment I clicked on anything.

[Screenshot: Model Info > Statistics]

Now it's time to learn how to work with this and perhaps cut it down in size. I'm starting with the suggestions from @endlessfix and @MikeWayzovski.

Thanks again everyone.

Wow, 47 million edges to calculate is enough to bring any machine to its knees. Doubling the RAM is certainly helpful. Keep in mind (as was said earlier) that it's all about clock speed and single-core performance. The CPU usage readout is spread across all the cores, in your case 16, so 100% divided by 16 means that your single-core process is completely locked up at around 6.25%. And as things heat up, the CPU will throttle the clock speed to keep the heat down, which may also slow things down. With a file this big it's also about how fast your drive can read and write to chew through that 1.5 GB of geometry; this is where SSDs really shine.
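(That 6.25% figure is just the pinned-core arithmetic; a quick Python sanity check, with the core counts as arbitrary examples:)

```python
# A fully saturated single-threaded process on an N-core machine shows up
# in Task Manager as only 100/N percent of overall CPU usage.
for cores in (8, 16, 24):
    print(f"{cores} cores: one pegged core reads as {100 / cores:.2f}% overall")
```

For 16 cores that prints 6.25%, which is why "only 10% CPU" can still mean SketchUp's one working core is completely maxed out.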

Without being able to see the file it's hard to give constructive advice. If there is geometry you can delete or simplify, that's the best option. Use tags to control visibility and start turning things off to take the load off the graphics pipeline. If necessary, or possible, copy half of the geometry into a fresh file and then delete that half from the original, so you have two files, each with half the geometry (see the sketch below for one way to script this outside SketchUp). Is this a point cloud, scan, or survey data for a very organic shape? I'm curious what file type you are importing that generates 47 million edges.
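If exporting out of SketchUp is an option (e.g. to OBJ), the halving could also be done outside SketchUp. A minimal sketch using Python's trimesh library; the file names and the single-mesh assumption are mine, not anything confirmed in this thread:

```python
import numpy as np
import trimesh

# Load the exported geometry as one mesh (file names here are hypothetical).
mesh = trimesh.load("house.obj", force="mesh")

# Partition faces by where each face centroid falls relative to the model's
# midpoint along the X axis, then write each half to its own file.
centers = mesh.triangles_center          # (n_faces, 3) array of face centroids
midpoint = centers[:, 0].mean()
left = np.flatnonzero(centers[:, 0] <= midpoint)
right = np.flatnonzero(centers[:, 0] > midpoint)

mesh.submesh([left], append=True).export("house_left.obj")
mesh.submesh([right], append=True).export("house_right.obj")
```

Each half can then be imported into its own SketchUp file.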


Thanks for all the helpful info. It's an import of a 1 GB Autodesk 3ds Max file of a custom-designed house. Once I imported it into SU, it became a 1.5 GB .skp file.

Really? That's an enormous number of edges for any piece of architecture (mostly flat surfaces). Generally speaking, we usually see these kinds of numbers with very organic shapes and extensive tessellation. It's actually pretty hard to achieve this with architecture, unless… the model is overpopulated with complex entourage for decorating: lots of house plants, overly detailed faucets and bathroom fixtures, 2 million edges in the folded bathroom towels or the pillows on the couch. That sort of thing is common for new users who indiscriminately populate their file with components from the Warehouse, paying no attention to file size and poly count.

Or… the import process has in fact tessellated your "flat" surfaces. This occurs with some types of file imports, depending on the format and import process used. With hidden geometry visible (View > Hidden Geometry) you might see many dotted spider-web lines all over your walls and surfaces. If this is the case, it would be good news, as there are some extensions you can run to merge coplanar faces. This is also an import option with .3ds (in the Configure button at the bottom of the import window) and can really reduce the file size and make it easier to work with. Really wish I could see this file…
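One way to test the tessellation theory without eyeballing the spider webs: export a sample of the model (say, one wall) and count near-coplanar adjacent faces. A rough sketch using Python's trimesh library; the file name and the OBJ round trip are my assumptions:

```python
import numpy as np
import trimesh

# Exported sample of the model; file name is hypothetical.
mesh = trimesh.load("wall_sample.obj", force="mesh")

# Angle between each pair of adjacent faces, in radians. Near-zero angles
# mean the shared edge is just splitting what is really one flat surface.
angles = mesh.face_adjacency_angles
coplanar = np.count_nonzero(angles < np.radians(0.1))

print(f"{coplanar} of {len(angles)} adjacent face pairs are coplanar "
      f"({100 * coplanar / len(angles):.1f}%)")
```

A high percentage would point at tessellated flat surfaces and make the merge-coplanar-faces route worth trying.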


My beginner bet would be on tessellation. There are a lot of very expensive and intricate finishes that are shown in a lot of detail: walls and ceilings with combinations of highly detailed custom wood finishes, complex custom tile designs, etc. And although the architecture itself is largely flat, the wood and tile designs themselves have tons of curves.

One of the bigger issues with the file is that every object in the model is unique - no instancing appears to have been preserved at all.

This is both why the file itself is so big and why it is so slow.

You may want to try some other method of getting the file out of 3ds Max and into SketchUp, one that maintains the instancing.
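For what it's worth, here is a rough way to measure how much instancing was lost, done outside SketchUp on an exported multi-object OBJ. A sketch in Python with trimesh (file name hypothetical); it only catches translated copies with identical vertex order, so treat the result as a lower bound:

```python
import hashlib
from collections import Counter

import numpy as np
import trimesh

# A multi-object OBJ loads as a Scene of named meshes (file name hypothetical).
scene = trimesh.load("house.obj")

def geometry_key(mesh, decimals=5):
    """Hash a mesh's shape independent of where it sits in the model."""
    verts = mesh.vertices - mesh.vertices.mean(axis=0)
    digest = hashlib.sha1(np.round(verts, decimals).tobytes())
    digest.update(mesh.faces.tobytes())
    return digest.hexdigest()

counts = Counter(geometry_key(m) for m in scene.geometry.values())
dupes = sum(n - 1 for n in counts.values() if n > 1)
print(f"{len(scene.geometry)} objects, {len(counts)} unique shapes, "
      f"{dupes} objects could be instances of another")
```

Many repeated shapes showing up as unique objects would confirm that the export path destroyed the instancing.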


Traditionally, component nesting reduces file size, but once the file is loaded into memory it doesn't affect performance, as the number of geometric entities SketchUp has to process doesn't vary. Deep nesting even has a small negative performance effect.


It increases the number of draw calls made, which gives the CPU extra work.

I'm sure there are other issues when you have such a high number of unique entities.
For example, if you took a large number of those instances, grouped them into a single mesh, and reduced the number of unique groups, you'd probably see an improvement, although the file size would remain the same.

If you broke the entire mesh of the model into 1,000 entities rather than 22,000, you'd notice a difference in how SketchUp handles it too.

Whether this would make a noticeable difference when there are 32 million faces is of course up for debate :grinning:
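To make the batching idea concrete outside SketchUp, here is a hedged sketch with Python's trimesh (again assuming an OBJ round trip; file names and the batch size of 100 are arbitrary), collapsing many small objects into a handful of combined meshes:

```python
import trimesh

# Hypothetical round trip: export the model to a multi-object OBJ first.
scene = trimesh.load("house.obj")
meshes = list(scene.geometry.values())

# Combine every 100 objects into one mesh: the triangle count is unchanged,
# but the number of objects to iterate over drops by two orders of magnitude.
# (Per-object materials may not survive the concatenation - a blunt tool.)
batch_size = 100
for i in range(0, len(meshes), batch_size):
    batch = trimesh.util.concatenate(meshes[i:i + batch_size])
    batch.export(f"batch_{i // batch_size:03d}.obj")
```

The geometry load is identical either way; what changes is how many separate entities have to be walked per frame.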


ThomThom’s CleanUp plugin would be helpful, but running it might take a while (days?) or even crash.


Indeed, the CleanUp plugin is the way to go. I did some stress tests with it once just out of curiosity and found that if you section off a large model into groups or components, run the plugin inside the instanced group or component, and only tell it to work on a small selection inside of it, the plugin runs a bit better, as it has less data to check. Of course, this is a bit more labor intensive, but I found that was a small price to pay.

Thankfully, your model data shows you are already using components, so you are already part of the way there.