I’m liking where this is going. Like with all AI I’m still frustrated by the relative lack of creative control. I’ve been able to approximate my ideas with other applications and I’m very happy to see SketchUp getting involved and enabling AI diffusion to maintain my desired geometry which gets lost elsewhere.
However, I’m constructing very novel ‘habitats’ with atypical features. Think fantastical and sci-fi. AI trained on existing content gets confused about which components of my model are which elements, such as windows or doors, but also why (it may wonder) are there legs or wings on a habitat? Perhaps it’s something else. Yes, prompt engineering is key, however…
I’d like it to be able to consider my components as they have been saved in the model, i.e. identification name, material type, and colour, alongside the prompt I have entered, so it can generate the output with more specificity.
I could then maintain geometry and function while allowing the AI to experiment with embellishments and decorative styles (saving a great amount of time). I could also call out particular components (or shapes) by name for particular attention within the generation.
I appreciate this almost sounds like I could simply render my model, and admittedly it is a step closer to that end. It’s really not, though. Fine details such as those I’m imagining would take a huge amount of effort to model.
Could this be possible?
I’d like to see a version of this for Diffusion, similar to Refine in Veras for Revit.
Thanks for a fantastic software.
There are some functions I miss, though:
- The ability to choose the render frame / frame ratio
- Often I get images that are 90% perfect, but with a small glitch or a material I would like to change. I miss the opportunity to make smaller adjustments, take the render in a certain direction, and build further on that (perhaps through visibility/adjustment of seeds?)
- The possibility to override parts of images: would it be possible to identify the objects in the scene somehow (wall, bookshelf, sofa, window) and make adjustments locally while keeping the rest of the rendering as it is? Or even better: to be able to click an element in the render and change just that element, more like Generative Fill in Photoshop? Say, if you would like to change the colour of a wall and keep the rest as it is.
- Obviously, the ability to choose the output (resolution, file format)
- It would make a HUGE difference if you could also input reference images
- The possibility to keep some geometry as true as possible to the model (like the geometry of the room) and let other sections be improvised by the AI (like furniture)
- The ability to save your own prompts as a custom style
- The possibility to keep a consistent style of renderings within a file. If you make an interior, you could then create multiple views in the same style
Thanks again, I really look forward to following the development of this project.