SketchUp Diffusion Feature Requests

I’m liking where this is going. As with all AI, I’m still frustrated by the relative lack of creative control. I’ve been able to approximate my ideas with other applications, and I’m very happy to see SketchUp getting involved and enabling AI diffusion to maintain my desired geometry, which gets lost elsewhere.
However, I’m constructing very novel ‘habitats’ with atypical features; think fantastical and sci-fi. AI, being trained on existing content, gets confused about which components in my model are which elements, such as windows or doors, but also (it may wonder) why there are legs or wings on a habitat. Perhaps it’s something else. Yes, prompt engineering is key, however…

I’d like it to be able to consider my different components as they have been saved in the model (i.e. identification name, material type, and colour) alongside the prompt I have entered, so it can generate the output with more specificity.

I can then maintain both geometry and function while allowing the AI to experiment with embellishments and decorative styles (saving a great amount of time). I can then also identify particular components (or shapes) by name for particular attention within the generation.
I appreciate this almost sounds like I could simply render my model, and admittedly it is a step closer to that end. It’s really not, though: the fine details I’m imagining would take a huge amount of effort to model.
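
As a rough illustration of what I mean, here is a sketch in Python. None of this is SketchUp’s actual API; the component records and the prompt builder are hypothetical stand-ins for metadata the model already stores:

```python
# Hypothetical sketch: fold saved component metadata into the text prompt.
# The records below stand in for data SketchUp already stores per component
# (name, material, colour); this is not SketchUp's real API.

def build_prompt(base_prompt, components):
    """Append per-component hints so the diffusion step can tell a wing
    from a window instead of guessing from pixels alone."""
    hints = [f"{c['name']} ({c['colour']} {c['material']})" for c in components]
    return base_prompt + ", " + ", ".join(hints)

components = [
    {"name": "Wing-Left",   "material": "canvas", "colour": "ochre"},
    {"name": "Porthole-01", "material": "brass",  "colour": "gold"},
]

print(build_prompt("fantastical sci-fi habitat at dusk", components))
# fantastical sci-fi habitat at dusk, Wing-Left (ochre canvas),
# Porthole-01 (gold brass)
```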

Could this be possible?

Would like to see a version of this for Diffusion, similar to Refine in Revit Veras.

Thanks for a fantastic piece of software.

There are some functions I miss, though:

  • To choose the render frame / aspect ratio
  • Often I get images that are 90% perfect, but with a small glitch or a material that I would like to change. I miss the opportunity to make smaller adjustments, take the render in a certain direction, and build further on that (perhaps visibility/adjustment of seeds? See the sketch after this list.)
  • The possibility to override parts of images: would it be possible to identify the objects in the scene somehow (wall, bookshelf, sofa, window) and make adjustments locally while keeping the rest of the rendering as it is? Or even better: to be able to click an element in the render and change just that, more like generative fill in Photoshop? Say, if you would like to change the color of a wall and keep the rest as it is.
  • Obviously, the function to choose the output (resolution, file format)
  • It would make a HUGE difference if you could also input reference images
  • The possibility to keep some geometry as true as possible to the model (like the geometry of the room) and let other sections be improvised by the AI (like the furniture)
  • To be able to save your own prompts as a custom style
  • The possibility to keep a consistent style of renderings within a file. If you make an interior, you could then make multiple views in the same style
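
For what it’s worth, both the seed idea and the local-override idea already exist in open-source Stable Diffusion tooling. Here is a minimal sketch using the Hugging Face diffusers library; the file names, prompts, and mask are placeholders, and this is not how SketchUp Diffusion works internally, just proof that the building blocks exist:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionInpaintPipeline

# Fixing the seed makes a run reproducible, so a "90% perfect" result can
# be revisited and nudged instead of regenerated from scratch.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
viewport = Image.open("sketchup_export.png").convert("RGB")  # placeholder
generator = torch.Generator("cuda").manual_seed(1234)        # the exposed seed
render = pipe("scandinavian living room, oak floor",
              image=viewport, strength=0.6, generator=generator).images[0]

# Local override: an inpainting pipeline repaints only the white region of
# a mask (here, one wall) and leaves the rest of the render untouched --
# essentially generative fill.
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
wall_mask = Image.open("wall_mask.png").convert("RGB")       # white = repaint
adjusted = inpaint("sage green painted wall",
                   image=render, mask_image=wall_mask).images[0]
adjusted.save("adjusted_render.png")
```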

Thanks again, I really look forward to following the development of this project.

Sorry, just to clarify: I meant that if Trimble ever decides to charge extra for what it currently offers around Stable Diffusion (as a subscription separate from its others), SketchUp Diffusion at this stage does not offer anything that would justify an extra subscription on top of the standard Pro.

Even if they improve what it can do, they shouldn’t charge extra for it if the same thing can easily be run locally. It’s more of a foreshadowing.

But anyway, back to Stable Diffusion: I thought they could also automatically produce a ControlNet conditioning image and use the texture names and component/group names to guide it. That could go a long way toward making it more accessible for people who are not so tech-savvy.
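
To illustrate what that could look like under the hood, here is a rough sketch with the open-source diffusers library. The checkpoints are real public models; the colour-coded export, where each SketchUp group would be painted a flat colour keyed to its name, is a placeholder, and none of this is anything Trimble has announced:

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# A segmentation ControlNet is conditioned on a flat-colour image whose
# regions mean "wall", "window", "door", and so on. An export that paints
# each component/group a colour keyed to its texture or group name would
# be exactly such an image, produced with no effort from the user.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

seg_map = Image.open("groups_as_colours.png")  # placeholder colour-coded export
image = pipe(
    "weathered brick habitat, brass portholes, overcast light",
    image=seg_map,
    num_inference_steps=30,
).images[0]
image.save("controlnet_render.png")
```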

Hi everyone,

Once again I want to thank you all for the incredibly constructive feedback offered in this thread.
In fact, we are currently running a closed alpha testing program that addresses some of the issues and requests mentioned above (unfortunately, I cannot say which ones).
If you are interested in joining, please DM me and include the email address that is associated with your Trimble ID.

Best,
Aris

Feature Request: I’d love to click on a previous render from Diffusion and see (and possibly copy) the text prompt that was used to get the render! After some leads and dead ends, I’d love to go back and use a previous prompt that seemed to be giving me the results I was looking for. Right now, I have to rely on memory.

Orthographic render has already been mentioned, but I would also request a camera match to two-point perspective instead of kicking to three-point perspective.

It’s MADDENING that Diffusion does not “honor” the materials and colors I applied in my model. How do I get it to respect the model’s materials and colors?

You could just turn off the foreground image in the style.

Options to save a prompt and settings I want to reuse, then apply those saved instructions to other scenes in the model.

Allow us to use our model’s colors and textures. Not being able to do this pushes our vision too far away from what we intend. At least allow us to use Diffusion both for standard rendering (use our current model as-is) and for creative rendering (let SketchUp help us envision greater things).

Hi everyone,

Thanks again for all your feedback around Diffusion. We are excited to see so many ideas, and we’ll try our best to address them in future versions.
If you are interested in having a look at what we are currently working on, and letting us know whether we’re heading in the right direction, you can join the SketchUp Advanced Workflow Alpha Project by clicking this link (where you will find Diffusion along with some other goodies…)

Thanks,
Aris

It would be an incredible feature if, given a massing model created in SketchUp, say with 3 buildings and a community park in the center, where the buildings and the park are each their own group, you could associate reference images with each of those groups to inform Diffusion of how you would like each element to look and feel. For example: I drag a reference of a modern glassy building onto one of my groups; the other two have a brick, mixed-use residential feel, so I drag images of buildings like that onto those; then I drag an image generated in MidJourney of how I want the park to look and feel, and SketchUp Diffusion takes all of this into account when producing its results. I think it could use something similar to the Describe function in MidJourney to come up with a prompt for each group, and then apply that prompt only to the selection area that encompasses the group. Sorry for the long post, but this was a thought I had.
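
Here is a rough sketch of how the per-group idea could work with today’s open-source tooling, using sequential inpainting passes. The file names, masks, and prompts are placeholders; the per-group prompts are where a Describe-style caption of each reference image would be plugged in:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Each group exports a mask of the pixels it covers; each mask gets its own
# prompt (in the imagined feature, captioned from the group's reference
# image); an inpainting pass then repaints only that group's region.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

render = Image.open("massing_render.png").convert("RGB")  # placeholder
groups = [
    ("building_a_mask.png", "sleek modern glass office tower"),
    ("building_b_mask.png", "brick mixed-use residential building"),
    ("building_c_mask.png", "brick mixed-use residential building"),
    ("park_mask.png",       "lush community park with winding paths"),
]

for mask_path, prompt in groups:
    mask = Image.open(mask_path).convert("RGB")  # white = this group's pixels
    render = pipe(prompt, image=render, mask_image=mask).images[0]

render.save("per_group_render.png")
```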

Amazing addition. All the AI renderings do, however, tend toward an overemphasized dark side; multiple runs lighten up the view. Really impressed with the ease of use on Mac, running the latest 2024 update, BTW.

Unfortunately I don’t have an iPad, so I have to read between the lines as to what differences there are between the iPad markup features and the Windows markup features.
What I think I’m seeing is that the iPad at least lets you set the markup thickness and pick the color. Am I correct?
On Windows these two are very tedious to achieve with the 2D and 3D annotations.