SketchUp & AI - StableDiffusion, Automatic1111, Controlnet and segmentation colors

A week ago I stumbled on a YouTube video about ControlNet - a technique to feed drawings/shapes/colors/masks as a sort of guide for the AI when generating images. I'm still totally new to this, but the results are quite interesting for designers.

In short: generate a rough 3D mass of your design in SketchUp, assign specific colors to the 3D model that reference a type of object (wall, floor, tree, sky, etc.), export it as an image and just let the AI do its thing. I'm testing it to generate design options for inspiration. Sometimes you get stuck in the same design loop, and this might help open up your mind by showing some new combinations/colors/materials for your design.
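
I normally just use the Automatic1111 UI for this, but if you prefer scripting, the same idea can be sketched with the diffusers Python library. Treat this as a rough outline rather than a tested recipe - the model IDs and file names below are placeholders:

```python
# Rough sketch: Stable Diffusion 1.5 + the ControlNet segmentation model,
# driven by a flat-colored image exported from SketchUp.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# The segmentation-colored export from SketchUp (shadows off) - placeholder file name.
seg_image = load_image("sketchup_seg_export.png")

image = pipe(
    prompt="a photograph of a modern villa in the forest, sunny day, midday",
    negative_prompt="blurry, low quality",
    image=seg_image,
    num_inference_steps=25,
).images[0]
image.save("design_option_01.png")
```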

You can iterate really fast. Just add or change a few keywords in the prompt, switch to another model/dataset or just change some of the specific segmentation colors on the 3D model, and the results can be totally different. Also, the AI takes just 10 to 30 seconds (depending on your graphics card) for a new 1024x512 image.

Interesting times!



You can also just trace over an image by hand, add some things and feed that ‘scribble’ to the AI.
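
For the scribble route, only the ControlNet weights change in the sketch above (again, the names are placeholders, and white lines on a black background work best as far as I know):

```python
# Scribble variant: swap in the scribble ControlNet and rebuild the pipeline,
# then feed the hand-traced drawing directly (no preprocessor needed).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

scribble = load_image("hand_traced_scribble.png")  # placeholder file name
image = pipe(
    "a photograph of a modern villa in the forest, sunny day, midday",
    image=scribble, num_inference_steps=25,
).images[0]
```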



Reflection and horizon height in the background look kind of strange as if things don’t match. Unless it’s a hilly environment. (third image)

You are totally right - in the first three images the house was exported from SketchUp, and the horizon and trees were added by hand as segmentation colors in GIMP. The AI took it from there. I should have added a ground plane and sky in SketchUp.


Oh, this is awesome! And a bit scary.

@maxB So you were able to tell Stable Diffusion that green is vegetation, gray is concrete, etc.? Or does it just detect the color and interpret it as something different?
I'd be interested to know how you used ControlNet to do that.

Hi @petitclercj, the steps are fairly easy:

  1. In SketchUp, assign specific colors to all the faces. The RGB values for these segmentation colors can be found on Google Docs, for instance. Each color is a category (wall, grass, street, etc.).
  2. Export an image from SketchUp with shadows disabled. You will get an image with weird colors like in the pic below (there's a small check script after this list for catching stray colors).
  3. Download the segmentation model file from Hugging Face.
  4. Open your StableDiffusion app (Automatic1111 / InvokeAI / ComfyUI). They can all work with ControlNet, as long as you don't use the SDXL model (at this time).
  5. Enable ControlNet and open the image in the ControlNet section. Keep the preprocessor at 'none', because you already have an image with segmentation colors, but set the model to 'seg…'.
  6. Describe your image in the prompt. SD will generate a new image using the seg-colored image from SketchUp, the corresponding categories from the seg-colors table and your prompt.
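
As a side note, these are a few of the segmentation colors I use most often, plus a small convenience script of my own to catch stray colors in the export (double-check the RGB values against the color table from step 1):

```python
# A few ADE20K-style segmentation colors (verify against the full table)
# and a quick sanity check that the SketchUp export only contains them.
from PIL import Image

SEG_COLORS = {
    "wall":     (120, 120, 120),
    "building": (180, 120, 120),
    "sky":      (6, 230, 230),
    "floor":    (80, 50, 50),
    "tree":     (4, 200, 3),
    "grass":    (4, 250, 7),
    "road":     (140, 140, 140),
    "water":    (61, 230, 250),
}

def check_export(path):
    """Report pixels that don't match a palette color exactly - usually caused
    by anti-aliasing, shadows or textures left on in the SketchUp export."""
    img = Image.open(path).convert("RGB")
    palette = set(SEG_COLORS.values())
    stray = {color for _, color in img.getcolors(maxcolors=img.width * img.height)
             if color not in palette}
    if stray:
        print(f"{len(stray)} off-palette colors found, e.g. {sorted(stray)[:5]}")
    else:
        print("Export is clean.")

check_export("sketchup_seg_export.png")  # placeholder file name
```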

You can also have ControlNet generate the seg colors from a regular colored image by using a seg model as the preprocessor, but I prefer to create them myself to have total control.

See this nice explanation on Reddit.

It would be interesting to make a plugin for that purpose. This technology can really save a lot of time for 3D designers, as well as others who are looking for inspiration.

I guess the people who develop the Veras plugin could do that. The workflow I described above is completely free, though, and gives you all the flexibility to create and refine the results using AI.

Thanks for the clue. It's more obvious now why the SketchUp team has added Revit integration. AI is everywhere these days :)

You do know Veras has a SketchUp plugin as well?

Didn't know that until today. Not a big fan of AI-driven technologies yet. I use SketchUp for mm-scale working projects. Those AI things are good for designers, but those who do things with their own hands, plan every bit of space and integrate real-world objects (things you can buy in shops or make with your own hands, not just something modelled by a very creative designer) may not be able to rely on them (IMHO).

I'm no expert in AI at all, but I expect that AI will be at least as big a change in my profession (architect) as the introduction of computers or the internet. So I try to keep up with the developments in AI.

Even the current state of AI (using the tools described above) already helped me last week to generate some fresh ideas for the design of a high-end villa, just by letting the AI generate about 100 images using a SketchUp 3D mass outline image as input (ControlNet + MLSD preprocessor & model).
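
For those who would rather script that batch than click 'Generate' a hundred times, a rough diffusers/controlnet_aux sketch of the same idea could look like this (the repo and file names are assumptions on my side - the Automatic1111 route described above needs no code at all):

```python
# Rough sketch: MLSD line detection on a SketchUp mass export,
# then a batch of variations with the MLSD ControlNet.
import torch
from controlnet_aux import MLSDdetector
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

mlsd = MLSDdetector.from_pretrained("lllyasviel/Annotators")
mass_export = load_image("sketchup_mass_export.png")   # placeholder file name
control_image = mlsd(mass_export)                      # straight-line map of the 3D mass

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_mlsd", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Same geometry every time, a fresh random seed for each image.
for i in range(100):
    img = pipe(
        "a photograph of a high-end modern villa, sunny day, midday",
        image=control_image, num_inference_steps=25,
    ).images[0]
    img.save(f"villa_option_{i:03d}.png")
```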

Using AI in this way is like having a creative co-worker who comes up with all kinds of ideas; using your experience, you filter out the good ones and can start refining the 3D model or creating another interesting alternative.

Let me add some more info for those interested, as I got a PM asking about this.

Most of the time, I'm using a workflow with Automatic1111 + ControlNet.

For installation, I think I used a YouTube tutorial by Olivio Sarikas or Sebastian Kamph.
But this one is more recent and good as well.

For the AI model, I'm having great results (architecture) with RealisticVision 5.1 (safetensors). Find it here.

Keep the prompt very simple: A photograph of a modern villa in the forest, sunny day, midday.
When using seg colors, use the method I described a few posts above.
When using a line drawing as input, use MLSD or Canny. See the pic for an example of the line input & output.
And if you let it run for a while, or play around with the settings a bit, you get all kinds of options/ideas to consider. Not always perfect, but good enough to get the idea of the alternative.
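
If you want to prepare the Canny input outside the UI, something like this (plain OpenCV, just a little helper of my own) produces the white-on-black edge image; in Automatic1111 you can simply leave this to the canny preprocessor instead:

```python
# Turn a SketchUp line export into a Canny edge image for ControlNet.
import cv2
from PIL import Image

export = cv2.imread("sketchup_lines.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
edges = cv2.Canny(export, 100, 200)             # tweak the thresholds per image
Image.fromarray(edges).save("canny_input.png")  # use this as the ControlNet input image
```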

Edit: you need a dedicated graphics card with a minimum of 6 GB of VRAM for this to work at all. If you don't have something like that, you can look for a Google Colab workflow (I haven't tried that myself).



Here's an interesting video about using Google services for such tasks in case your graphics card is not good enough. Hope you can find some AI technology that can interpret it into your language on the fly :dizzy: It's very interesting how Google trains its models on its own servers with the help of the millions of people involved in such an interesting process.

This is awesome. Thanks a lot for the links!
I installed Stable Diffusion/ControlNet and did my first tests. My intention is to take a simple picture out of CAD and make a “rendering” out of it, instead of using Enscape or Lumion.

But when it comes to inpainting, it is not working. Which preprocessor is needed?
(I guess you used inpainting to control the facade and the grass in the foreground?)

Hi @Peter_B, have a look at the picture of the Automatic1111 interface in my last post just above; I don't use inpainting, I only use ControlNet. If you have a simple picture out of CAD, try out the Canny preprocessor and model in ControlNet, or MLSD. Both could work fine.

Thx maxB.
I used the same prompt as you did and added “pool”:
“a photograph of a modern villa, in the woods, spring day, midday, pool”
(and the same negative prompt as you used)

This is the result:

So now I'm asking myself how to get more control to produce a good picture. I've chosen “ControlNet is more important”. Changed to “Balanced”, I get this:

BTW, I also tested the Veras plugin. Some results follow.
Sadly, the demo version only allows 30 renders, which is IMHO too few to really test whether the software is worth $50/month. The results below are not good, but I like the workflow, because it really tries to reproduce the materials used in SU.

maxB, one more question about Stable Diffusion/ControlNet: what is “Seed”?
Its default setting is “-1”, but I noticed that you're using a large number.

The seed is a 'magic number'. If you hover over 'Seed' in the interface, you will get more info. Set it to -1 and every result will be different; set it to a fixed number and you will get similar/identical results. Maybe I was trying to refine the image by re-using the same seed number while just adding a few extra keywords to the prompt.
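
In case it helps to see it outside the UI, this is roughly what a fixed seed does in a diffusers script (the pipeline and control image here are the ones from my earlier sketches, and the number itself is arbitrary):

```python
import torch

# Fixed seed: re-creating the generator with the same number reproduces the same
# starting noise, so small prompt tweaks keep a similar overall composition.
generator = torch.Generator(device="cuda").manual_seed(1234567890)
image = pipe(prompt, image=control_image, generator=generator).images[0]

# Leaving the generator out (or setting the seed to -1 in the Automatic1111 UI)
# means a fresh random seed - and a different result - on every run.
```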

To get more control: first export a black & white image from SketchUp with shadows disabled. A clean input image works best for what we're trying to do here.

Also try MLSD in controlnet.

What model are you using for the Stable Diffusion checkpoint? (You can find this at the top of the interface.)
I get great results with RealisticVision_V5.1_safetensors. Many other models only give me 'meh' results.

If you could share a black & white, non-shadowed image, I could give it a try and share my findings if you like.