Ah thx a lot!
Ok, I changed the colored picture and used a black-and-white one instead.
Yes, I'm using “RealisticVision…”
I played around with the Control Weight from 1 to 1.2, and used both canny and mlsd.
And: “ControlNet is more important”
The input image should be 1024x512. If I'm not mistaken, the AI is trained on 512x512 images, so if yours is a multiple of that size you're good.
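In case it helps, here is a minimal Python/Pillow sketch (just an illustration, not part of anyone's workflow above) that snaps an exported image to multiples of 512 before feeding it to Stable Diffusion; the file names are placeholders.

```python
from PIL import Image

def snap_to_multiple(path_in, path_out, step=512):
    """Resize an exported image so both sides are multiples of `step`.
    Follows the 512-multiple advice above; many SD setups also accept multiples of 64."""
    img = Image.open(path_in)
    w = max(step, round(img.width / step) * step)
    h = max(step, round(img.height / step) * step)
    img.resize((w, h), Image.LANCZOS).save(path_out)

# Hypothetical file names
snap_to_multiple("sketchup_export.png", "sd_input.png")
```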
The problem here is that you only see a fragment of the pool. The AI isn't smart enough to know it's a pool if the whole shape isn't visible. In that case I use segment colors. See one of my previous posts from 19 days ago.
If I apply a few segment colors, the pool gets picked up nicely. See pics.
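For anyone who wants to reproduce this, a rough Python/Pillow sketch of painting a flat segment color over the pool area of the exported image. The polygon coordinates and the RGB value are placeholders; replace them with your own region and with the exact color from the ADE20K palette that your seg ControlNet model expects.

```python
from PIL import Image, ImageDraw

# Placeholders: trace the pool outline from your own view, and look up the
# exact ADE20K palette color your seg ControlNet model expects for a pool.
POOL_POLYGON = [(120, 300), (480, 300), (480, 420), (120, 420)]  # hypothetical
POOL_COLOR = (61, 230, 250)                                      # hypothetical

img = Image.open("sketchup_export.png").convert("RGB")
draw = ImageDraw.Draw(img)
draw.polygon(POOL_POLYGON, fill=POOL_COLOR)
img.save("seg_input.png")  # use this as the control image for the seg model
```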
Now I tried to get better results by using colored images like you did. The results are better, but still not at the quality of a rendering I could send to a client, because some geometry isn't rendered exactly and the materials aren't the way I want them to be…
And I noticed that you're not using canny anymore, but “…seg” instead. Why is that? I guess I should check some tutorials about ControlNet :sweat_smile:
See my second bullet point in my last post. There I explain why I use seg and where to find more info.
Personally, I use AI just to generate alternative ideas / materials, for fun and to keep up with the new tech. As you have noticed, the end result can be slightly different to what you designed in 3d. You could refine those parts by sending the image to Inpainting and take it from there (see YouTube for Inpainting & Automatic1111).
Yes, and I think at the moment it's too early to expect perfect “renderings” for marketing or clients made with AI in just a few clicks.
I also tested Veras, but I wasn't able to get a good result within the limits of the demo (30 renderings).
I have been hoping to get to use image generators for work for a while, but they have so far not proved useful for our projects. However, I’ve put together a SketchUp file you can use “as is” with the segmentation coding as materials, grouped as I see fit for architectural visualization.
I have not yet tested the seg model in ControlNet, but I hope it can prove useful for stuff that isn't easily modeled/rendered in 3d, while keeping our “precision” Revit/SketchUp models in the process.
Related to the original topic: a new (test) version of this technique was released recently, and it now generates images in real time. Just start typing and the image changes. Only 512x512 pixels for now, but this is really mind-blowing.
The software used is ComfyUI and the model is Stable Diffusion XL Turbo. See a small test here - the video plays at its original speed.
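For reference, the same model can also be tried outside ComfyUI. A minimal sketch using the Hugging Face diffusers library (this is not the ComfyUI graph from the video, just an easy way to test SDXL Turbo's one-step generation; the prompt is only an example):

```python
import torch
from diffusers import AutoPipelineForText2Image

# SDXL Turbo generates in a single step with guidance disabled.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

image = pipe(
    prompt="modern house with a pool, evening light, photorealistic",
    num_inference_steps=1,
    guidance_scale=0.0,
    width=512,
    height=512,
).images[0]
image.save("turbo_test.png")
```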
Does anyone know if someone has made an automatic integration with locally run Stable Diffusion? Some sort of plugin that automatically gets the data from SketchUp into one of those programs.
The online version of Stable Diffusion is rather bad compared to what you can produce locally. Inpainting, ControlNet and the other features make a world of difference.
I can see why this could be nice but I have not heard of something like that.
You would need a SketchUp plugin that duplicates some parts of the UI functionality of the AI program and injects the SketchUp image, prompt and inpainting data into those programs (which one? ComfyUI / Automatic1111 / …?)
I expect this to be a bit complex and hard to maintain because the AI programs are in constant development so the injection protocol could easily break.
Personally, I would not spend time creating this code, since the alternative - export an image from SketchUp, import it into the AI program, and do your ControlNet, inpainting and prompting there - is not that big of an issue (to me).
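To give an idea of what such an integration would talk to: the Automatic1111 web UI exposes a REST API when started with the --api flag, so a plugin (or a simple external script) could push an exported image into img2img roughly like the sketch below. The URL, file names and parameter values are assumptions, and the exact payload fields should be checked against your own install's /docs page.

```python
import base64
import requests

# Assumes Automatic1111 is running locally with the --api flag enabled.
A1111_URL = "http://127.0.0.1:7860"  # default address; adjust if needed

with open("sketchup_export.png", "rb") as f:  # hypothetical exported view
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [init_image],
    "prompt": "photorealistic architectural visualization, evening light",
    "negative_prompt": "cartoon, blurry, distorted geometry",
    "denoising_strength": 0.5,
    "steps": 25,
    "width": 1024,
    "height": 512,
}

r = requests.post(f"{A1111_URL}/sdapi/v1/img2img", json=payload)
r.raise_for_status()

with open("ai_result.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```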
I am wondering how many users would run Diffusion locally if we were to provide either support for standard frameworks (Comfy, Automatic, Fooocus…) or convenient methods to extract the data you need to develop your own pipelines.
Not saying we will do it, but I am interested in your thoughts.
At the moment, I prefer to run locally because the quality (using the RealisticVision model) and the speed (using a 4090) are much better.
It's not much effort to take an image out of SketchUp, use it in Comfy or A1111, and take it from there. In my workflow as an architect, I'm not sure a direct transition from within SketchUp into external AI solutions needs more integration. It's not what is holding me back from creating great results with AI and wanting to use it even more.
I do see another area that could use some improvements and where SketchUp's AI solution itself could help architects: a further refinement of the segmentation color workflow. The current implementation of segmentation colors (in A1111) is too limited and the results aren't always nice. There need to be many more categories and/or the option to manually link colors to objects and materials the AI should use.
If you could tag facades, floors, walls, windows, ceilings, kitchen islands etc., prompt all kinds of material options, styles and so on, and the AI would generate a high-quality image or even high-quality (tiling) materials on the 3d model, that would be really wonderful. By doing so, you could use the AI to quickly investigate design options for the materials AND re-use the results (as textures) in the 3d model.
However, in my case, I’ve got no time to pursue his workflow, even if I think it’s the most interesting I’ve seen yet.
I still rely on render engines for final production, so I'm not as concerned about fine-tuning AI to the limit as @maxB is. However, if you could get close to what he does, I'd love it. I'm not at all against it; it would be great to do it easily without having to go as deep as he did, or even without knowing almost anything about Stable Diffusion.
So, having this would be great:
Segmentation colors and a better way of integrating them into our SketchUp workflow
Inpaint
RealisticVision
Generate textures for renders based on our results
You're right - the AI output doesn't need to be perfect. I do my final output in Unreal Engine (stills and VR), and most materials are (automatically) replaced with good PBR materials in UE anyway.