Computer Game Graphics vs 3D Modeling Programs

Continuing the discussion from Hardware Improvements For Sketchup 2016:

… I just wondered, how is it that modern gaming engines actually use live rendering to make them look astonishingly real, yet programs like SU noticeably slow down with DWG files (which are just lines, no?) and even the most basic textures and an increase in polygons?

I don’t play games but I understand that a game is hugely different from a modelling application. Games often do not need to generate their environment from scratch in real time. Game design relies more on bitmap textures over quite simple underlying geometry.


I realise there are lots of tricks that game engines use; however, they certainly don’t skimp on polygons anymore. The bitmaps used nowadays are very high resolution too.

Perhaps in gaming terms ‘real-time rendering’, which is the norm in games these days, actually means something different from the 3D application terminology. But I do know that, for example, there are even raindrops with reflections in them rendered during the game. They use (again, generated in real time) HDR lighting, volumetric lighting and haze effects, displacement mapping… the list goes on. I just wonder how on earth they get all that performance out of the same hardware we use every day…

I believe real-time rendering means it actually frees up memory and processor power, so that anything not visible (off screen, or the backs of things) is not rendered. They also use techniques that reduce the number of polygons and the resolution of the textures on a model the further it is from the camera. And on top of that, the CPU / GPU (I don’t know which) has to cope with the huge amount of physics that modern games rely upon too. It’s a mystery to me! Maybe I should go search the web for other forums which might answer my question. I’d love to know. Maybe it’s that the money they have to throw at it is far larger than for 3D modelling programs?
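The distance-based trick described above is usually called level of detail (LOD). Here is a minimal sketch of the idea in Python; the distance thresholds and polygon budgets are made-up illustration values, not from any real engine:

```python
# Level-of-detail (LOD) selection: pick a cheaper mesh the further
# the object is from the camera. Thresholds are arbitrary examples.
import math

# Hypothetical LOD table: (max_distance, polygon_count)
LOD_LEVELS = [
    (10.0, 50_000),   # close up: full-detail mesh
    (50.0, 5_000),    # mid range: simplified mesh
    (200.0, 500),     # far away: very coarse mesh
]

def select_lod(camera_pos, object_pos):
    """Return the polygon budget for an object at this distance."""
    distance = math.dist(camera_pos, object_pos)
    for max_dist, polys in LOD_LEVELS:
        if distance <= max_dist:
            return polys
    return 0  # beyond the last threshold: cull the object entirely

# A nearby object gets the full mesh, a distant one almost nothing:
# select_lod((0, 0, 0), (3, 0, 0))   -> 50000
# select_lod((0, 0, 0), (120, 0, 0)) -> 500
```

The same distance check doubles as a crude form of culling: past the last threshold the object simply isn’t drawn at all.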

I’ve never actually used other 3D modelling programs, apart from Blender on an old machine, and wonder if other programs have the same issues with large numbers of polygons…

The kind of games you’re thinking of have handed all their geometry and textures off to the GPU, and use the GPU for lighting and particle effects. It would be possible for SketchUp to work that way; if you get the LightUp add-on, it does let you move around a real-time rendered-looking environment. Here’s a short example video:

Now, that is not rendered in the way Anssi was thinking, but it looks plenty good enough for a 3D editor.

LightUp drops out of the rendered appearance when you’re about to change geometry, but that’s probably because the rendered-looking view isn’t part of SketchUp itself. If you look at Unity, its Scene view looks as good as its Game view:


You likely killed the goose right there :wink:


Maybe I should delete that bit :slight_smile:

… done :wink:

I expect that only someone from the core SketchUp development team could give you a solid answer. The rest of us can only speculate. And, as it involves proprietary technology, it isn’t likely that a SU team developer will give you a lot of technical details. I’m neither a SketchUp team member nor a game programmer, just someone with a fair amount of computer experience. So, to speculate away…

I think it involves the fact that SketchUp provides an editable model based on a geometry database in the main memory of the computer. The graphics processor (“GPU”) can do things such as orbit, zoom, pan, and paint polygons that have been passed to it. It can handle visibility and shading variations. It can also gather user inputs such as mouse clicks, drags, etc. But it must pass these inputs back to the computer’s central processor (“CPU”) for interpretation in terms of what they mean to the geometry database. After editing the geometry, the CPU must pass the resulting polygons back to the GPU for display. The possible changes to the content of the model are almost infinitely arbitrary, so there isn’t much the GPU can do to help. The new set of polygons could have no similarity to the previous ones - but even if unchanged, it would be a complicated task for the CPU to tell the GPU which ones to keep vs replace. This division of responsibilities, and the round trips it requires, limits performance.
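That round trip can be sketched as a toy model in Python. Everything here (the class, the “upload” step) is an illustrative stand-in, not SketchUp’s actual pipeline: the point is that any edit invalidates the flattened polygon buffer, so the CPU rebuilds and re-sends the whole thing.

```python
# Toy model of the editor round trip: the authoritative geometry lives
# in CPU memory; the GPU only sees a flattened vertex buffer, which the
# CPU must rebuild after every edit. All names here are illustrative.

class EditableModel:
    def __init__(self):
        self.faces = []          # authoritative geometry database (CPU side)
        self.gpu_buffer = []     # stand-in for the vertex buffer on the GPU
        self.uploads = 0         # how many full re-uploads have happened

    def _upload(self):
        # Rebuild the flat vertex list and "send" it to the GPU.
        # Real engines try to update only what changed; a general
        # editor often cannot tell, so it re-sends everything.
        self.gpu_buffer = [v for face in self.faces for v in face]
        self.uploads += 1

    def add_face(self, vertices):
        self.faces.append(list(vertices))
        self._upload()           # every edit triggers a round trip

    def move_vertex(self, face_idx, vert_idx, new_pos):
        self.faces[face_idx][vert_idx] = new_pos
        self._upload()           # even a one-vertex tweak re-uploads all
```

In a game, by contrast, the same buffer would typically be uploaded once at load time and left alone for the rest of the session.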

In a game, I think that the contents of the model world are pre-determined to within scripted limits, and the resulting possibilities are mostly pre-rendered as textured polygons that are uploaded to the GPU. There are also some kinds of shape morphing that the GPU can do with a set of polygons for animation. Most of each scene is background that, other than camera location, is static. Effects such as explosions consist of pre-rendered clips that are blended into the scene by the GPU. As a result, the CPU’s task is largely limited to telling the GPU what to do with content that has already been passed to it.
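That division of labour can be sketched like this (again illustrative Python, not any real graphics API): the scene is uploaded once, and the per-frame CPU work is just a short list of draw commands, regardless of how many polygons each mesh contains.

```python
# Game-style frame loop: geometry is uploaded to the GPU once at load
# time; each frame the CPU only emits small draw commands. Names are
# illustrative, not a real graphics API.

class FakeGPU:
    def __init__(self):
        self.resident_meshes = {}    # mesh_id -> vertex count, uploaded once
        self.commands = []           # per-frame command stream from the CPU

    def upload(self, mesh_id, vertex_count):
        self.resident_meshes[mesh_id] = vertex_count  # done at load time

    def draw(self, mesh_id, transform):
        # A draw command is tiny: a mesh id and a matrix, not the vertices.
        self.commands.append((mesh_id, transform))

def render_frame(gpu, scene):
    """scene is a list of (mesh_id, transform) pairs to draw this frame."""
    gpu.commands.clear()
    for mesh_id, transform in scene:
        gpu.draw(mesh_id, transform)
    return len(gpu.commands)   # CPU cost scales with objects, not polygons
```

Note that drawing a two-million-vertex terrain costs the CPU exactly one command, the same as a 300-vertex crate; the heavy lifting stays on the GPU.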

These days a gaming GPU is actually quite a lot more powerful than a CPU, but not in ways that SketchUp can exploit.


… Speculation aside, that’s a great help, although I can’t say I understand everything you’ve said :smiley:

The other thing I wonder about, which isn’t in the thread subject header, is how other programs like Blender and the monster that is Z Brush perform. Z Brush in particular seems to handle sculpting with huge polygon counts very impressively, and without ever having tried it, I can’t help thinking there must be some fundamental differences in the architecture of the different programs ‘under the hood’, or rather, under the bonnet as we say in the UK.

SketchUp also has its inferencing, which complicates the situation slbaumgartner so well outlines even further.



Without getting into the very technical (and as @slbaumgartner mentioned) proprietary details of how SketchUp renders models, I can say that @slbaumgartner hit the nail on the head with this comment:

“I think it involves the fact that SketchUp provides an editable model…”

In games, much of the 3D rendered content is created in advance by artists and heavily optimized before the game is shipped to users. Yes, certain algorithms like lighting, particle systems, physics, etc. are dynamic, but the inputs to these algorithms - the character models, textures, scene representation, shaders, etc. - are built in advance and optimized offline (i.e. during the game build process, not during game play). Often, this optimization process can take many minutes or hours of processing time to convert a game’s models into representations that can be rendered as quickly as possible by the game’s engine during game play. Whenever an artist wants to change a model (edit vertex positions, apply new textures, change a shader, etc.) and see the final, high-quality results in the game, they must go through this content optimization process again.
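One small example of the kind of offline build-step optimization described above is vertex welding: collapsing duplicate vertices so the GPU stores and transforms each one only once. A minimal sketch in Python - an illustration of the idea, not any engine’s actual tooling:

```python
# Offline build step: weld duplicate vertices in a triangle soup into a
# shared vertex list plus an index list, the layout GPUs render fastest.
# Input: triangles as three (x, y, z) tuples each. Illustrative only.

def weld_vertices(triangles):
    vertex_index = {}    # vertex -> its slot in the shared list
    vertices = []        # unique vertices, stored once
    indices = []         # three indices per triangle

    for tri in triangles:
        for v in tri:
            if v not in vertex_index:
                vertex_index[v] = len(vertices)
                vertices.append(v)
            indices.append(vertex_index[v])
    return vertices, indices

# Two triangles sharing an edge: 6 input vertices weld down to 4.
quad = [
    ((0, 0, 0), (1, 0, 0), (1, 1, 0)),
    ((0, 0, 0), (1, 1, 0), (0, 1, 0)),
]
shared, idx = weld_vertices(quad)
```

Real build pipelines go much further (mesh simplification, texture atlasing, baked lighting), which is why they can take minutes or hours - and why an editor that changes the model constantly can’t afford to rerun them on every click.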

An application like SketchUp has a very different problem to solve since the user can potentially edit the model at any time. Because the model is always changing, an editing application like SketchUp does not have the same opportunities for rendering optimization as a game engine.


The same sort of thought got me to googling, and I came across this thread. Interesting read. Thanks for the explanation!

I was wondering the same. This sheds some light and satiates a bit of that curiosity. Thank you!

That’s a good game.