How Was This AI Render Made from My SketchUp Model?

Hi everyone

I’m hoping someone here might be able to tell me how the AI image below was created. To give a bit of background: the first image is a point cloud I created earlier this year. The second is a SketchUp model I created from the point cloud. The third is simply a screenshot taken from SketchUp showing my model over the point cloud, which I included as an overview in my construction drawings. The final AI image was emailed to me by an architect. My assumption is that they fed my overview image into an AI renderer — but they won’t say which tool they used, and my curiosity has got the better of me! :slightly_smiling_face:

Does anyone recognise how this AI image might have been produced? Any help would be very much appreciated. Thanks as always.

I just tried it with MS 365 CoPilot. If you change your design… and the design of the house… and, uh, the landscaping elements… it nailed it! So, in other words: it was way off on almost everything.

What you shared looks very nice. You’ll have to pick their brains.

Believe me, James - I have tried and tried. Something about a magician never revealing his secrets :grinning_face:

To be fair, a lot of the details in the render are wrong, but it looks impressive.

Thanks for trying - yours is still a nice render :+1:


Out of interest - with MS 365 CoPilot, can you give it more detail to work with? Just thinking about it - my drawings to the architect contained a few close-ups of the new glass structure. Can those drawings be included in the information you give it for the render?

Thinking about how entitled we have all become - yours isn’t just a “nice” render - it’s absolutely amazing when you think about it.

There is only a small handful of models that exist and are readily available - the difference is likely in how the AI has been cajoled into producing the required output.


I’ve only tried AI rendering a limited number of times, so I’m at the Mairzy Doats level of understanding of LLMs and image generation. What you showed was impressive, so I screen-clipped a couple of images just to give it a whirl. At first I tried one image of the point cloud with the prompt “Try to make this look more realistic.” That one was… not so good. But not terrible. So I clipped your model and the point cloud and prompted, “I have an image of a model and a point cloud of the same subject. Will you use them to create an accurate image render?” (in the same chat) to produce what I posted here. So yes, you can give it more detail to work with.

I don’t use M365 CoPilot that much… but it does have a “Create” feature for image generation, plus Agents, which might be a better approach (M365 CoPilot). Look around for which language models people use for rendering, because the model used can make a huge difference in results (I use AI a lot, just not for rendering). I presume there are some rendering-tailored applications that have “Projects” (like ChatGPT Plus) or “Spaces” (like GitHub CoPilot); these can contain files (models, colors, materials, similar images, etc.) that the AI can access, and they can save “Instructions”.

And you’re not wrong: even the images I got weren’t terrible. They’re just not buildable/accurate.


Thank you

I am working on an interesting scan I did yesterday, and have set myself a goal of getting an impressive render to the client first thing on Monday, so I will experiment with prompts to Co-pilot, as I am an avid 365 fan.

First thing is to get the property modelled, then build the structure, so it’s going to be a late night :grinning_face:

Thank you James - this is really helpful.

Have you investigated the hidden info stored in the image? There might be some clues there. Maybe upload it to ChatGPT and ask it if it can find any clues?
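For anyone who wants to check for that hidden info locally before uploading the image anywhere, here’s a rough Python sketch (standard library only) that scans a file’s raw bytes for a few common provenance markers. The marker list is just my own shortlist of things worth looking for, not an exhaustive or guaranteed set:

```python
# Scan an image file's raw bytes for common provenance/metadata markers.
# Illustrative shortlist only: C2PA "content credentials" (embedded by
# several AI image generators), XMP packets, and EXIF blocks.

MARKERS = {
    "C2PA content credentials": b"c2pa",
    "XMP metadata packet": b"http://ns.adobe.com/xap/1.0/",
    "EXIF block": b"Exif\x00\x00",
}

def find_metadata_clues(data: bytes) -> list[str]:
    """Return the names of any known markers found in the image bytes."""
    return [name for name, marker in MARKERS.items() if marker in data]

def scan_file(path: str) -> list[str]:
    """Read an image file and report which markers appear in it."""
    with open(path, "rb") as f:
        return find_metadata_clues(f.read())
```

A hit on the C2PA marker would be a strong hint the image came from a generator that embeds content credentials; no hits just means the metadata was stripped, which email attachments and forum uploads often do anyway.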

Hi Max

Yes - I asked ChatGPT a few questions earlier and input the initial and output images - and this is what I got back.

Yes — I can definitely shed some light on what’s most likely going on here :slightly_smiling_face:

How the AI image was probably produced

Based on what you’ve shown and described, the architect almost certainly did not feed the point cloud directly into AI. Instead, the typical workflow is:

  1. Your SketchUp screenshot (the clean line / shaded model view)

  2. Fed into an image-to-image AI renderer

  3. Prompted with something like “photorealistic brick house, slate roof, glass extension, English countryside”

The AI then:

  • Keeps the geometry and perspective from your SketchUp view

  • Replaces materials with realistic brick, slate, glass, landscaping

  • Adds lighting, sky, reflections, shadows, and surface imperfections

That’s why the final image still matches your model very closely, but looks like a real aerial photograph.

This is now extremely common in architectural workflows.


AI tools that can produce results like this

These are the most likely candidates, based on the quality and style of the output:

:small_blue_diamond: D5 Render (with AI Enhancer)

  • Very popular with architects

  • Works directly with SketchUp

  • Has AI material enhancement and scene realism

  • Can take a basic model and make it look “finished” very quickly

:right_arrow: High probability


:small_blue_diamond: Lumion (AI / Style features)

  • Widely used in UK architecture practices

  • Strong vegetation, lighting, and realism

  • Can take a simple SketchUp model and produce very similar results

:right_arrow: High probability


:small_blue_diamond: Midjourney (Image-to-Image)

  • Upload SketchUp screenshot

  • Use image weighting + prompt

  • Excellent at brick, slate, glass realism

  • Very commonly used without telling collaborators :grinning_face_with_smiling_eyes:

:right_arrow: Very high probability if it was a single still image emailed to you
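Mechanically, the image-to-image step described above is just sending the screenshot plus a prompt (and usually a strength/weight knob) to a generation endpoint. A minimal Python sketch of building such a request — the payload shape and field names here are invented for illustration, not any particular tool’s real API:

```python
import base64
import json

def build_img2img_request(image_bytes: bytes, prompt: str, strength: float = 0.6) -> str:
    """Build a JSON payload for a hypothetical image-to-image endpoint.

    `strength` (0..1) is the usual knob: low values stay close to the
    source geometry (good for keeping the SketchUp perspective), high
    values let the model repaint more freely.
    """
    payload = {
        "prompt": prompt,
        "init_image": base64.b64encode(image_bytes).decode("ascii"),
        "strength": strength,
    }
    return json.dumps(payload)
```

With a low strength, the generator keeps the model’s geometry and perspective and mostly repaints materials, lighting, and sky — which matches what the emailed render appears to do.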


What about the diamond icon at the bottom right…?


I hadn’t noticed that Detective Paul - you may have solved it :clap::clap:

From ChatGPT

the render has a diamond icon at the bottom right - is that a clue?

Short answer: yes, it might be a clue — but it’s not a definitive one. Here’s how to read it.

What the diamond icon likely means

A small diamond / sparkle-style icon in the bottom-right corner is very typical of AI image generators, not traditional renderers.

That strongly suggests:

  • :cross_mark: Not Lumion, Enscape, V-Ray, Twinmotion, etc.
    (Those usually add text watermarks, or none at all.)

  • :white_check_mark: Very likely an image-to-image AI generator

Most likely candidates associated with a small diamond icon

:small_blue_diamond: Bing Image Creator / DALL·E (Microsoft)

  • Often adds a small diamond / sparkle icon in the corner

  • Commonly used by architects because it’s:

    • Free / easy

    • Shockingly good with architecture

  • Very often used quietly :grinning_face_with_smiling_eyes:

:backhand_index_pointing_right: High probability


I just created this using Bing Image Creator in 5 minutes, which is incredible, and although my glass structure is obviously wrong, I think with some better prompts it will produce very accurate renders.

It feels a bit like cheating though :disappointed_face:


I found this: “A Standardized Cue: Major tech companies like Google, Microsoft, and Meta have converged on using this symbol to indicate AI actions or generated content, making it an emerging industry standard for users”

So it’s used to show that something was made with AI, but it isn’t specific to one tool.

I did this one in Gemini from your image, by describing what I was looking at / what I wanted:

A high-resolution aerial three-quarter view of a traditional English country house built in red brick with decorative diamond-pattern brickwork. The house has steeply pitched grey slate roofs, multiple gables, and tall brick chimneys. Large timber-framed windows with stone surrounds. A modern glass conservatory extension with a black metal frame connects two wings of the house. Stone paving surrounds the building, with outdoor seating and potted plants. The house sits in a manicured green lawn in a rural setting. Soft natural daylight, clear sky, ultra-realistic architectural photography, sharp focus, natural colours.


Wow — that’s incredibly helpful!

I’m pretty sure that must be what they used. Thank you very much for sharing this, and especially for including the wording you used to generate the image — that’s hugely useful.

Thanks again, much appreciated.


I can see this as adding pressure for me in the years to come.

I haven’t provided realistic renders for clients because:

  1. I couldn’t face the learning curve – I don’t have the time.

  2. There isn’t enough in the fee to get a render done by others and if I adjusted the fee accordingly, I’d maybe lose the work.

  3. I felt my basic monochrome SketchUp models – with a curated style and careful use of colour or texture – were enough.

If I started using Image Generation Models for my work, my children would disown me!


Crikey – so that’s prompt engineering!

:astonished_face:

Some good sleuthing and prompting here. Just to add to the prompting and ‘instructions’ topic: keep a text file, or files, of your instructions and prompts. As you test and see results, you may be able to narrow in on a set of instructions that tells the AI how to return what you want. That can become your go-to set of instructions. Then, if you keep your prompt file with your projects, you’ll have a record of what did (and didn’t) seem to work. You’d end up with sets of prompts for ‘Modern’, ‘Brick’, ‘Stucco’… or more specific ones like ‘Glass: opacity, reflectance’, etc.
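To make the prompt-file idea concrete, here’s one possible shape for it — the file format and section names are purely my own convention, nothing standard: a plain-text file with bracketed headers per material or style, which a few lines of Python can load into a dict so you can grab the right prompt per project.

```python
def load_prompt_library(text: str) -> dict[str, str]:
    """Parse a plain-text prompt file with [Section] headers into a dict.

    Example file contents:
        [Brick]
        photorealistic red brick, slate roof, soft daylight
        [Glass]
        low-iron glass, slight reflectance, visible mullions
    """
    library: dict[str, str] = {}
    section = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            # Start a new named section.
            section = line[1:-1]
            library[section] = ""
        elif section and line:
            # Append continuation lines to the current section's prompt.
            library[section] = (library[section] + " " + line).strip()
    return library
```

Reading the file back with `load_prompt_library(open("prompts.txt").read())` gives you a lookup like `library["Brick"]`, so reusing or refining a prompt set across projects stays one copy-paste away.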
