SketchUp Diffusion is all about creativity, imagination, and sharing your vision with others. Generative AI is an incredibly powerful medium, but as with any new tool, there is an associated learning curve for those of us looking to master it. Learning is a journey that sometimes includes observing and being inspired by the work of others, while also looking for ways to receive feedback on our own work.
That’s why we’ve created this space.
We would love for you to share your creations with the rest of the SketchUp community. Include not only the final output but, if you’re open to it, also the original SketchUp viewport and the prompts and settings you used to generate each image. We’re hoping this space can evolve into an educational resource for folks who want to better understand what the tools are capable of, as well as the process involved in achieving inspiring results.
Below is an example of an image I just created along with the original image and the parameters I used.
Preset Style: Watercolor
Prompt: modern cozy interior
Respect Model Geometry: 0.9
Prompt Influence: 9
Have impressive, unexpected, or delightful results? We’d love to see how you use Diffusion and what you can come up with. Share your Diffusion images with us here on the SketchUp Community, or on social media with #SketchUpLabs.
I don’t remember the exact prompts, but the first one was probably something like “old meeting room, wood floor, wooden walls, tall windows.” The second was something like “old machine shop, stone floor, tall windows.”
Diffusion seems to have difficulty with holes through the model. These aren’t bad, but most of the holes in the flywheel are lost in the second one. There’s also a problem with the area to the right of the flywheel. I haven’t figured out whether changing the prompts would fix those issues or not.
Ahh great - thanks! Curious to see how Diffusion landed on the final outcome. Do you think being able to see a history of your prompts might be interesting/useful?
Definitely try tweaking your prompt and keep us posted on your concerns/results with holes through the model.
The AI clearly has issues with recognizing and preserving people, and it keeps trying to turn random objects into familiar pieces of furniture, like suitcases into chairs and blocks into couches, as if it has a strong architectural interior design bias. But it is very interesting, and I am impressed at how it responds to different specific lighting prompts.
I’d really like to be able to inpaint part of an image and also to have a material replacement list. I know these are possible with Stable Diffusion, and it’d be great if you could integrate them into the UI.
That would give us much greater control.
Right now, we cannot get all that close to what we’re imagining, but we’re not far off either. Even so, this is a very useful and creative process, as we get a lot of surprises that we can use if we are clever.
Here is one from almost the same perspective, but with a different prompt. Classical architecture was involved, and somehow Diffusion interpreted this as an interior.
This is something that should be taken care of. I wouldn’t mind having an option to auto-save all images, along with their corresponding prompts, to a folder. I understand that not all users would want that, but I certainly would.
I would also want to be able to carry on working where I left off in a new session, even a month later. So Diffusion should keep a history of the prompts, a record of which prompt corresponds to each image, and an ever-growing list of images generated from a model. All of this should be saved with the model and into that folder structure, as long as the user has toggled it on.
I imagine the amount of data that will be stored will be huge, but that would be very useful for me, otherwise I’d have to get a lot more organized, and that is not the typical state of mind of a creative process.
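To make the idea concrete, a per-model history like this could be as simple as a JSON manifest sitting next to the saved images. Here is a minimal sketch of what such a record could look like — the function name, folder layout, and every field are hypothetical, not part of SketchUp Diffusion or its API:

```python
import json
import tempfile
from pathlib import Path

def log_generation(history_dir, image_name, prompt, settings):
    """Append one generation record to a per-model JSON manifest.

    Everything here (the directory layout, "history.json", and the
    record fields) is a hypothetical sketch of how a prompt-history
    feature could store its data, not an actual Diffusion format."""
    history_dir = Path(history_dir)
    history_dir.mkdir(parents=True, exist_ok=True)
    manifest = history_dir / "history.json"
    records = json.loads(manifest.read_text()) if manifest.exists() else []
    records.append({"image": image_name, "prompt": prompt, "settings": settings})
    manifest.write_text(json.dumps(records, indent=2))
    return records

# Example: two generations from the same model, logged to a temp folder
base = Path(tempfile.mkdtemp()) / "model_history"
log_generation(base, "render_001.png",
               "modern cozy interior",
               {"preset": "Watercolor", "respect_geometry": 0.9,
                "prompt_influence": 9})
records = log_generation(base, "render_002.png",
                         "old machine shop, stone floor, tall windows",
                         {"preset": "Watercolor"})
print(len(records))
```

Appending to one small text file per model keeps the storage overhead tiny compared to the images themselves, and the manifest could travel alongside the .skp file so a session picked up a month later still has its full prompt record.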
I will play with this once I clear a deadline… but I agree - having a history / recipe book with links / snapshots to images would be really helpful. But I imagine that as the engine gets better old prompts would work differently through time - which could be good / bad.
A few different runs with the same prompt.
“Old brick factory setting. High windows behind camera. neon “open” sign on wall. two gray cushions on bench. gray stucco patches on brick wall”
I agree. As architects, many times we already have an idea of the overall shape / restrictions to shapes so being able to just inpaint parts and quickly generate material options / shape options for some parts would be really helpful.