A week ago I stumbled on a YouTube video about ControlNet - a technique to feed drawings/shapes/colors/masks as a sort of guide for the AI when generating images. Still totally new to this, but the results are quite interesting for designers.
In short: generate a rough 3d mass of your design in SketchUp, assign specific colors to the 3d model that reference a type of object (wall, floor, tree, sky etc.), export as an image and just let the AI do its thing. I’m testing it to generate design options for inspiration. Sometimes you get stuck in the same design loop, and this might help in opening up your mind by seeing some new combinations/colors/materials for your design.
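If you want to skip SketchUp for a quick test, you can build such a segmentation image in a few lines of Python with Pillow. A minimal sketch below - note that the exact RGB values are an assumption on my part: they follow the ADE20K palette that the common segmentation ControlNet models appear to be trained on, but check the documentation of whatever model you use.

```python
from PIL import Image, ImageDraw

# Assumed ADE20K-style palette (verify against your ControlNet model's docs)
COLORS = {
    "sky":   (6, 230, 230),
    "wall":  (120, 120, 120),
    "floor": (80, 50, 50),
    "tree":  (4, 200, 3),
}

W, H = 1024, 512
img = Image.new("RGB", (W, H), COLORS["sky"])   # start with sky everywhere
draw = ImageDraw.Draw(img)

draw.rectangle([0, 384, W, H], fill=COLORS["floor"])       # ground plane
draw.rectangle([300, 160, 724, 384], fill=COLORS["wall"])  # building mass
draw.ellipse([80, 200, 260, 384], fill=COLORS["tree"])     # tree blob

img.save("segmentation_map.png")  # feed this image to the seg ControlNet
```

Swap a rectangle's fill color and you have a different brief for the AI - that is basically the whole trick.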
You can iterate really fast. Just add or change a few keywords in the prompt, switch to another dataset or just change some of the specific segmentation colors on the 3d model, and the results can be totally different. Also, the AI takes just 10 to 30 seconds (depending on your graphics card) for a new 1024x512 image.
Interesting times!
You can also just trace over an image by hand, add some elements, and feed that ‘scribble’ to the AI.