Diffusion works great and makes rendering easier, except it doesn't read the applied lighting or the materials very well. It doesn't seem to understand the prompts well either, which should be seamless for a smart AI tool.
It's a robot trying to hallucinate what you want based on a picture, which it learned to interpret by being shown labeled images, and a line of text, which it learned to read using algorithms that predict which word is likely to come after another.
In even simpler terms, it barely knows what you're talking about, but it's trying really hard, and it's not too difficult to make even the most sophisticated AI art generator available p00p the bed on the daily.
SketchUp Diffusion is definitely not the most sophisticated AI art generator.
But it does seem to have some extra backend processing that lets it follow an image's architectural lines better than anything I've seen so far, so I'll give it that.
If you want realistic renders faithful to the model, its textures, and its lights, you're using the wrong tool. There are other engines meant to do exactly that, like V-Ray, Enscape, Thea, D5, Twinmotion, Brighter 3D, and a lot more.