How to create architectural visuals efficiently?

Isn’t this post off topic? It should go in the petition post.


SketchUp is certainly fast, but you probably won’t get the results you can get from, say, 3ds Max and V-Ray, though you can get close. SketchUp doesn’t handle point clouds well; there is a plug-in called Undet that allows reasonable point cloud imports, but it costs four times what SketchUp costs, so we tend to work with the point clouds in AutoCAD and bring them in.
2k seems a lot of expenditure for what you’re building, especially if the job doesn’t even come off.
My advice would be to use SketchUp with Enscape: give a little detail to the residence,
but plough most of your energy into your structure and really make that pop. You’re not selling the residence, so why spend so much time making it look so real? It could potentially detract from your actual focus, which is your glass structure.


@kevin58 I’m not so clued up on Faro 3D, so no comment there. I feel that a combination of Photo Match and Twinmotion will do the job for you super efficiently and achieve all the nuances you have mentioned throughout this thread. Twinmotion is not a traditional ray-traced renderer, but it achieves almost identical results. The added bonus is that there is no rendering time required: it renders live, so once you import the model the render is instant. With a modicum of prep work in SketchUp, a direct link to Twinmotion, and some one-off effort in Twinmotion, you’ll have what you’re looking for instantly.

Foreseeable issues to resolve in this workflow are:

  1. Create a plane with your photo added as a texture to it in SketchUp. Twinmotion will then accept it.
  2. Sun angles - you could try geo-locating your model in SketchUp and matching the time the photo was taken, then play with the sunlight intensity and fog in Twinmotion to get a “matched by eye” similarity. I’m not sure Twinmotion actually honours SketchUp’s geo-location, so you may have to adjust the sun angle and time in Twinmotion as well to match any shading that exists in the photo.
  3. Use a single glass material in SketchUp, then swap it out in Twinmotion using their glass library. You only need to do this once per project. All updates in SketchUp, refreshed in Twinmotion, will then keep the Twinmotion materials that have already been assigned.
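
As a quick sanity check for step 2, the sun’s altitude for a given place and time can be estimated numerically. This is my own rough back-of-the-envelope approximation (not anything SketchUp or Twinmotion exposes), using the standard declination/hour-angle formula:

```python
import math

def solar_declination(day_of_year):
    # Rough approximation of the sun's declination, in degrees.
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

def solar_altitude(lat_deg, day_of_year, solar_hour):
    # Altitude of the sun above the horizon, in degrees.
    # solar_hour is local solar time (12.0 = solar noon).
    decl = math.radians(solar_declination(day_of_year))
    lat = math.radians(lat_deg)
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    sin_alt = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_alt))
```

If the altitude this gives for the photo’s timestamp disagrees wildly with the shadow lengths in the photo, the timestamp or the model’s geo-location is probably off, and it’s quicker to just eyeball the sun in Twinmotion.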

Throw in some Twinmotion filters and effects (fire pits and outdoor lights) as you see fit and as your creativity stirs you. I sincerely doubt you’ll find much else with this sort of efficiency (Lumion perhaps, but I’ve never used it), or go straight to game engines if you have time for a steep learning curve.


I actually think that for such simple models a realtime engine is a bit over the top: any engine will be fast, and realtime ones are more expensive. Twinmotion has a direct link, but you still have to export the model and open a separate application. I use Thea and Twinmotion, and Thea allows rendering inside the SketchUp window as you model. It also allows having a background photo and fine-tuning the sun position, either via the SketchUp sun or manually. I’d recommend something like that. For such simple models it is just as fast, the output is way better, and the integration with SketchUp is simpler.

Enscape also allows working inside the model.

If you have a SketchUp Studio subscription, V-Ray is also an option, but it’s not as well integrated as Thea imho.

For such simple photo-match models, the best is not the fastest, as all of them will be fast, but the one that is most tightly connected with SketchUp.


Who are you speaking to? If you are responding to me, no, it is not off topic. Just because you may not understand the point does not mean my post is off topic. More people would participate in forums if not for this type of snobbery.

I was honestly thinking you might have posted in the wrong thread. It happens to me a lot: I’m posting something, then read another thread for reference, and then post in the wrong thread when I hit the button and misread. In the case of your post, it was actually fitting for this thread.

I still don’t see the immediate connection to the current thread, but I do recognize it must be my flaw.

Thanks for pointing me to the right path. Written language is always difficult to understand. I wasn’t being snobbish, while you now seem aggressive, though you are probably not.


I am revisiting this thread after a few months hoping to pick your brains again :grinning:

I now use Photo Match and the V-Ray plugin to create quick visuals, and on the whole I am pleased with the results and the speed; however, the quality of the projected textures is a constant disappointment.

In the visual below you can see the difference between the projected textures and the V-Ray textures: the quality of the original house is awful compared to the photo.

Does anybody know if there is a way to improve the projected textures?


By default SketchUp limits texture image sizes to 1024 x 1024 pixels. Depending on your graphics card, this can be upped to 2048 x 2048 or 4096 x 4096 by selecting the “Use maximum texture size” box in the Window menu>Preferences>OpenGL dialog and restarting SketchUp.
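
To get a feel for how much detail that limit throws away, here is a small sketch of the clamping arithmetic (the per-side clamp with preserved aspect ratio is my assumption about how the resampling behaves):

```python
DEFAULT_LIMIT = 1024  # SketchUp's default per-side texture limit
MAX_LIMIT = 4096      # with "Use maximum texture size" on a capable GPU

def texture_import_size(width, height, limit=DEFAULT_LIMIT):
    # Approximate size a photo is clamped to on import: neither side may
    # exceed `limit`, and the aspect ratio is preserved.
    scale = min(1.0, limit / max(width, height))
    return round(width * scale), round(height * scale)
```

For a 6000 x 3000 photo, the default limit leaves only 1024 x 512 of texture, while the maximum setting keeps 4096 x 2048, which is why ticking that box noticeably improves projected textures.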


Yes. There are a few possibilities, though I don’t know the details of your process.

  1. Using SketchUp to produce the composited model and photo together works for quick-and-dirty output but doesn’t produce the best results. It’s better to composite the SketchUp or rendered output into the photo in Photoshop. Many of my Matchphoto scenes have two actual saved scenes: one with a style with the background turned on, and another with a style with no background, for export. When using a renderer it’s more complicated: you need to render the whole model and then create a mask for use in Photoshop that composites just the new part into the untouched photo. The scene with no background can help make that mask in Photoshop.

  2. Using “Project Photo” from the matched photo doesn’t give you the best textures. I generally shoot two kinds of photos on site: some are 2- or 3-point perspectives intended for use as Matchphoto material, and some looking straight on at the building or specific surfaces, intended to be imported as textures only.

With rendering, and especially with glass, the reflections are key, so you need things in the model that will be seen reflected in the glass. That gets very case-specific and may or may not turn out to be a lot of work.


Thanks very much :+1:

I’ve now ticked the box as per the image below and it definitely improved the projected texture quality but not to the standard of the original photo.

Do you think there is anything to be gained by increasing the Multisampling anti-aliasing to 16x?

That’s given me a great idea.

I will load the original photo and the render into Photoshop, with the original photo as the bottom layer, then erase the rendered projected textures so the original photo shows through.

I think that will be a great solution and shouldn’t take too long :grinning:


To make the masking job in Photoshop a little easier, you can:

  1. With the Matchphoto scene selected, hit the plus sign to make a new scene that’s a copy of it.
  2. Set that copy of the scene to a new style that has match photo background turned off.
  3. Output a TIFF or PNG of that scene at the resolution of the photo.
  4. With some manipulation in Photoshop, you can import that into your picture and turn it into a mask.
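
The mask then drives a straightforward per-pixel blend. As a miniature sketch of what Photoshop’s layer masking does (pure Python over lists of RGB tuples, just to show the arithmetic):

```python
def masked_composite(photo_px, render_px, mask_px):
    # Where the mask is 255 the render shows; where it is 0 the original
    # photo shows; values in between blend the two linearly.
    out = []
    for photo, render, m in zip(photo_px, render_px, mask_px):
        a = m / 255.0
        out.append(tuple(round(p + (r - p) * a)
                         for p, r in zip(photo, render)))
    return out
```

In practice you do this in Photoshop (paste the render over the photo and apply the exported scene as a layer mask); the code just makes explicit why a clean no-background export at the photo’s exact resolution gives a clean edge.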

Thank you.

I managed to create this in just a few minutes which I am pretty happy with. The projected textures are now replaced with the original photo so the house is now crystal clear.

The problem is that as I look closely I see other areas to improve (e.g. I need to choose a clearer glass material), which is probably a never-ending circle. In all seriousness, I can now create a decent visual for a client in a fraction of the time I used to take using Photo Match, which is brilliant.

Thanks for your help.


That’s a beautiful rendering. Who did that? SketchUp + V-Ray + Photoshop?

Cheers William.

It was created in SketchUp using Photo Match, then rendered in V-Ray. Photoshop was used to replace the projected textures from Photo Match with the original photo, as the Photo Match textures appeared to have relatively low resolution.


This is the workflow I need to master in my practice. I will pay for consulting if you have time.

In Thea Render it is possible to have a material projected by the camera. This means the whole material of the model where textures are projected can be replaced by the high-resolution photo, instead of the downgraded projected textures.

I bet Vray can do that too and that would allow you to avoid Photoshop.

@pyroluna ?

That’s a fabulous feature in Thea, and one that would suit my workflow in V-Ray perfectly if it existed, but sadly I don’t think it does.

I think it might be possible but complicated to set up.
You’d have to set projection type to screen… #challengeaccepted - I’ll be back