Apologies if this question has been asked before, I did a quick search and couldn’t find the answer, but I’m also still trying to figure out how to navigate these community forums, so please go easy on me. Is it possible to do a basic photorealistic render with the new Diffusion extension? Without any of the materials changing?
No. Not really. That’s not what Diffusion is intended to do.
Adding to what Dave said, in particular the materials you’ve used are not yet part of the query that gets sent in. Lots of people have requested that, so hopefully it will be possible some day.
Thanks Dave. I appreciate that it’s more to do with AI, although because it does a quick photorealistic render, having that as an option with the already specified materials is appealing!
Thanks Colin. That’s great to hear that others are requesting the same functionality.
There’s a rendering engine called Veras that respects the geometry and materials of your model more closely to create photorealistic renders. It’s a paid engine, but it has a free trial period.
Diffusion generates “creative” presentations of your model based on what was typical in its training data. It does not actually do any sort of rendering; rather, it imitates what appeared in other images that might themselves have been rendered.
Thanks for your reply. The output is a rendered image, right? I guess that, in the absence of a quick and easy photorealistic rendering extension, those of us who see a need for one (within SketchUp) got a little excited that Diffusion might help fill this void… It looks fabulous from an AI perspective!
Thanks for your suggestion; it looks good, and I’ll investigate further. It’s a shame that it comes with an additional cost and that the functionality is only available via a third-party application, though. Additional cost is a barrier for sole traders and for clients who aren’t willing to pay more for it.
So you want an AI-based photorealistic renderer that respects your geometry and your materials, and that is also integrated in SketchUp for no additional cost…?
Because it is convenient, doesn’t cost you money, doesn’t cost your client money and doesn’t cost you any time or effort…
Maybe tell your clients that if they want a nicer picture than your SketchUp model, they can google “free photorealistic rendering program” and invest a lot of time and effort to do it themselves, so it won’t cost you time and money just because they don’t want to pay for it…
Maybe in the future SketchUp Diffusion will be able to do what you want. For free, or maybe for an additional cost, who knows… Don’t forget it is still in the early “SketchUp Labs” stage!
No, Diffusion creates an image, but it is not rendered. A renderer looks at light sources and material properties and creates an image that would be true to life based on that. AI guesses what things might look like based on existing imagery. So it’s really not a renderer and, though it can make images that resemble photography, it is not intended to be photorealistic.
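To make the distinction concrete: a renderer derives each pixel from actual scene data, not from learned imagery. Below is a minimal, illustrative sketch of the simplest such calculation, Lambertian (diffuse) shading; the function name, geometry, and light values are made up for the example, not taken from any real engine.

```python
# Minimal sketch of what a renderer computes at one surface point:
# color = material albedo * light intensity * cos(angle between normal and light).
# All names and values are illustrative, not from any real rendering engine.

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def lambert_shade(albedo, normal, light_dir, light_intensity):
    """Diffuse reflection: brightness falls off with the cosine of the
    angle between the surface normal and the light direction."""
    n = normalize(normal)
    l = normalize(light_dir)
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * light_intensity * cos_theta for c in albedo)

# A red surface lit head-on receives full intensity...
print(lambert_shade((1.0, 0.0, 0.0), (0, 0, 1), (0, 0, 1), 1.0))
# ...while a nearly grazing light contributes almost nothing.
print(lambert_shade((1.0, 0.0, 0.0), (0, 0, 1), (1, 0, 0.01), 1.0))
```

The point is that every output value here is traceable to the model’s actual geometry and materials, which is exactly the information Diffusion does not use.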
You need to charge more or find different clients.
Don’t undervalue your work and your time.
Just a thought - export your textured SketchUp file to a raster format. Upload to MidJourney and “ask” it to alter that image…see what happens.
From my experience, Veras changed the geometry of my model completely.
It would be nice if it wasn’t so artistic. My boss thought the image of our cart on the beach was fun, but I doubt it will be usable like this. Is there a SIMPLE render software out there that I don’t need to spend days learning?
This is off-topic for the Diffusion category, but there are many discussions already open on this subject:
Great question @david.espinoza. Rendering is a pretty complex topic no matter what program you are using, and what you want as an end result will determine how you best approach it.
If all you need is a simple render that is not photorealistic, you might be able to get away with just throwing your model into something like Lumion without any major adjustments or fancy effects and call it good enough. Lumion is pretty fast at spitting out a basic render and is fairly simple to learn if that is all you need to do.
Of course, if you want a more photorealistic or stylized render, you are going to need to do a bit more to make sure everything comes together the way you want. At that point you might start getting into high-resolution textures and subjects like bump maps, and even some optimization of your model to reduce render times. That expands your workflow significantly and will require some time commitment and some learning.
With SketchUp Diffusion still in development at this point, we may end up getting closer to what you are looking for, but it will most likely be a number of years before AI can start to compete with a manual render, assuming that happens at all.
My best advice to you at the moment is to determine what you consider “good enough” as a render and to look at different programs or extensions for SketchUp to see what will get you closest to what you need. I’ll leave a few links here to hopefully get you started.
I’m inclined to believe it will never happen within the current conception of AI. In effect, Diffusion is doing a cut-and-paste/blend of things it found in its training data. Where render-like effects appear, it is because the training image was rendered, not because Diffusion has any notion of why the image looked that way. It works with the appearance of the model geometry, not with the geometry itself, and true rendering depends on the real geometry.
That said, it may be able to fake quite convincing effects.