How to create architectural visuals efficiently?

I am interested in discussing whether anybody creates their own architectural visuals - and if so, how you go about creating them. We get asked by potential clients for visuals of our glass structures most weeks, and producing them has become a major drain on our time. I guess my question is – is there a quicker, more efficient way to create visuals than our current method?

To provide a little context, we used to outsource visuals at around £2,000 per project, which was reasonable (especially if we got the job), but the design was often wrong and the constant back and forth became tiresome.

Our current method is to scan a potential client’s house using our own Faro 3D laser scanner, then model the house and proposed glass structure from the resulting point cloud ourselves in SketchUp, and render various scenes using V-Ray for SketchUp. This usually takes two or three days per project, not including the scan capture, which is an awfully long time, and I would love to find a more efficient method.

I have recently read about various LiDAR scan apps for iPhone, iPad, Android, etc., which seem to capture geometry easily, and some of these scans can be imported into SketchUp to use in place of a point cloud.

I just wonder if anybody is using anything like this as a basis for creating their own visuals in SketchUp, whether it is a big time-saver, whether the scans can be used as the basis for a render, and whether this is potentially a game changer for creating visuals.

I am not looking for amazing photorealistic results because, from a client’s point of view, I believe the need for amazing visuals is secondary to just seeing what their proposed new design will look like in a timely manner. As you can see below, the visuals we currently produce are far from perfect, although I think they are suitable for their purpose.

One other question I thought of was whether anybody creates visuals from existing property photos rather than scans to save time?

Have you tried using a photo or two and Match Photo? Probably faster and would give you something useful to show the client without a ton of work. This is a preliminary model I did based on something the client was thinking of.

The building is a simple shape and the photo was used to provide the textures. Since the existing building’s geometry is grouped, making edits to the addition is a simple matter.

That’s very interesting, Dave. I would be very interested to learn how you did that from photos. Is there a tutorial somewhere I could check out? I find one of the hardest parts is applying the textures.

Start with this.

The important key is to make sure you have a suitable image to start with: an uncropped photo with clear lines at 90° to each other running off to two vanishing points on the horizon.
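As an aside, the geometry Match Photo relies on here can be sketched in a few lines: edges that are parallel in 3D appear in the photo as lines converging at a vanishing point, so intersecting two such image-space lines locates it. This is a standalone illustration (the coordinates are made up, and none of this is SketchUp’s own code):

```python
# Locate a vanishing point by intersecting two photographed edges
# that are parallel in 3D (e.g. two eave lines on the same wall).

def line_through(p, q):
    """Homogeneous line coefficients (a, b, c) for ax + by + c = 0."""
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def intersect(l1, l2):
    """Intersection point of two lines; None if (nearly) parallel in the image."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    d = a1 * b2 - a2 * b1
    if abs(d) < 1e-9:
        return None
    return ((b1 * c2 - b2 * c1) / d, (a2 * c1 - a1 * c2) / d)

# Two hypothetical edges in pixel coordinates; both slope toward the
# same vanishing point off to the right of the frame.
vp = intersect(line_through((100, 400), (300, 380)),
               line_through((120, 600), (320, 560)))  # -> (2140.0, 196.0)
```

If the photo is cropped, the optical centre no longer sits at the image centre, which is why an uncropped photo matters for the camera solve.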

The textures on the building itself come directly from the photo.

Here’s another example. In this case the project was the planters, which were built for a Boy Scout’s Eagle project.

The walls of the building have the texture from the photo. I did replace the snow with grass and added a roofing material for other scenes.

I guess it’s a trade-off between time (money) and realism. It’ll be interesting to find out where the sweet spot is.

This was one of our expensive outsourced visuals.

I guess what I am after is a way of getting somewhere near this with a solid day’s work.

Obviously impossible to match it but this is the target.

Give some thought to what needs detail and what can be treated like a set on a theatrical stage. In the case of that image, the house really doesn’t need tons of detail because it is in the background. You might still want to go to a rendering app for the final output to add things like the reflections and you can still do that if you want after using Match Photo.

I’m not trying to talk you out of doing the photorealistic rendering but I will suggest that you give strong consideration to the return on the investment. Some of my clients like to see the model looking like a photo but most don’t really care and don’t want to pay for that additional work.


Thanks Dave. I think you are 100% correct. Some clients have even been known to use our renders to get quotes elsewhere which is annoying but somewhat predictable I guess.

Yeah. That is annoying. With the furniture design I do, I tend to use fairly loose sketchy styles in the early phase and don’t bother getting too detailed with things until the client is signed on and committed. Those models generally have the detail because I’m thinking about buildability as I’m modeling but I don’t want to show all the cards up front.

That’s a sensible approach.

I usually take the view that although I know some clients will be dishonest - the majority will be honest and they’re the ones we end up working with.

Do you have an opinion on the new LiDAR apps (Canvas for iPhone, for instance) and whether they are worth exploring?

This looks interesting and mentions SketchUp on its home page.

Canvas iOS App

Just not sure if it gives me anything the Faro scanner does not - apart from not needing to take the scanner with me on every site visit, so it will hopefully last a lot longer.


Good point. You can weed out the clients you don’t want to work with.

I haven’t used the LiDAR apps much and don’t really have an opinion on them, but from what I’ve seen they tend to create huge entity counts, which can cause performance problems; that may or may not be a factor. It might increase the amount of time spent working on the model.
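To put a rough number on the entity-count concern, here is a back-of-envelope sketch. The figures are my own assumptions (roughly two triangles per grid cell of the scan, with each triangle becoming a SketchUp face), not anything published by the scan apps:

```python
# Rough estimate of how many faces a triangulated scan mesh imports
# into SketchUp, given the scanned surface area and the point spacing.

def scan_entity_estimate(area_m2, spacing_m):
    """Approximate face count: ~2 triangles per square grid cell."""
    cells = area_m2 / (spacing_m ** 2)
    return int(2 * cells)

# A 10 m x 8 m facade scanned at 20 mm point spacing:
faces = scan_entity_estimate(10 * 8, 0.02)  # -> 400000 faces
```

Even at modest phone-scan resolution the count lands in the hundreds of thousands of faces, which is well into the territory where SketchUp’s interactive performance suffers.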

Hi Dave. I am studying match photo today and it seems to have a lot of promise. I know this will come out as I learn but I notice you said “You might still want to go to a rendering app for the final output to add things like the reflections and you can still do that if you want after using Match Photo.”

So if I have a model with some textures taken from Match Photo, will those textures still appear if I render using V-Ray?

I only ask because I just read somewhere that they don’t.

I don’t use Vray so I can’t answer directly about that. The Match Photo image gets projected as a texture onto the surfaces you choose to use them on so it should show. I don’t know about the mapping of the image as a texture, though. It might need to have some attention given to it.

You’ll have to ‘transform’ the texture from Match Photo by making it unique (the Make Unique Texture command) in order to render in V-Ray without it appearing distorted.

Match Photo texture

Make Unique Texture


Match photo will show up in VRay.

The iPad LiDAR apps create a highly triangulated model, but some also create a point cloud.

There are some apps that do not triangulate, but they are mostly used for interiors and simplify the model too much, making it suitable for SketchUp modeling but far less realistic and without textures.

What I do for point cloud generation, though, is photo reconstruction using Agisoft Metashape and then Trimble Scan Essentials or Undet.

It’s not as accurate as a laser scan, but it works and any cellphone is enough. I work on more complex stuff than that; for what you need it might be very easy and fast.

If you want to skip the modelling part altogether Unreal Engine supports rendering of point clouds.

With SketchUp’s Trimble Scan Essentials you could model your simple structures around the point cloud and then render in Unreal or, if you don’t like the result, generate the photo reconstruction and render with your renderer.

Twinmotion, based on Unreal Engine and also owned by Epic, is working on supporting point cloud rendering as we speak.

So, there are a ton of options.

Match Photo was where I started. It is very cool, but bad for gardens and mostly suited for hard surfaces.

With glass, context reflections seem important to me, and those garden-context reflections matter for how the structure integrates.


I think you can also triangulate faces.

JQL, with triangulated faces but without the texture being made unique, it will still appear distorted in the render.

“With glass, context reflections seem important to me, and those garden-context reflections matter for how the structure integrates.”

I think that’s a big issue - I have been trying out Match Photo and it seems fairly intuitive, and as Dave mentioned earlier, it’s not too difficult to produce something that certain clients will be happy with. But our product is glass, and I am keen to get the context reflections in the visual if it’s possible without taking up too much time.

At the moment I cannot see how using Match Photo will give me any reflections or even realistic-looking glass.

It won’t by itself but that’s not Match Photo. That’s just SketchUp. To get the reflections you need a renderer.

Ok thanks. I prefer using Thea.
