As far as rendering with SketchUp is concerned, I think the issue is that the SketchUp developers mismanaged the implementation of AI by allowing it too much control over the output. They have since given some control back to the user, but not nearly enough to make it a viable resource. As a designer, I don't move into the rendering stage of design to get ideas. I have ideas. I already know what I want. I understand the parameters of my designs, and they aren't something I can easily convey to an AI.
I work in other programs that have implemented AI with much better success. Photoshop is one. Instead of using AI as an idea generator, Adobe uses AI as a tool. That proves to me that Adobe has a much better understanding of their users' workflow than Trimble does.
What I would have liked Trimble to do is implement AI with much more discretion: break it up into individualized tools with specific functions. Whereas Diffusion renders the entirety of a scene, I would much rather the AI work piecemeal.

For instance, lighting is an important aspect that needs its own tool. I've always felt that render programs were too technical with their lighting interfaces. I shouldn't need a background in lighting design just to add a few interior lights to my model. Let AI handle the technical side and let me simply describe the lighting scheme I want. Once the lighting is to my liking, maybe I move on to materials in a separate AI-controlled tool where I can tell the AI that a certain material should be glossier or have more texture. I will concede that Trimble appears to be working in that direction with materials, but they are still far off.

Another tool I would like is something for what I call "fluff": books on a bookshelf, flowers in flowerbeds, glassware on tables. Filler that makes a scene feel lifelike. It's taxing to track down these items in the 3D Warehouse, and those added components really clutter the tags and materials, not to mention bloat the file size. This is one aspect I am loving about Photoshop's AI: just circle a shelf and say "add some books," and it puts books on the shelf. Easy. Done.
Without individualized controls, I have found that AI (especially Diffusion) is just guess-and-check. But it's so rarely correct that I typically get frustrated and quit. Better to show a client an unrendered image than one infested with hallucinations.
The problem with AI is reproducing a chosen option consistently. I used Diffusion last year to show a customer some options on some townhomes, and they picked one out, but I couldn't reproduce it from different angles or for the different elevations, and that was frustrating for them. So until AI can produce different views with a consistent theme, humans still have a job.
Trimble implemented Stable Diffusion, a customizable image generator, as an add-on. It wasn't specifically created for SketchUp. I'm not sure how deeply the Trimble team trained it to recognize SketchUp attributes (e.g., can it tell what's a material versus an edge or a shadow? Can it detect scale properly?).
As a designer, I don’t move into the rendering stage of design to get ideas. I have ideas. I already know what I want.
Truth!
Photoshop is one. What Adobe did was, instead of using Ai as an idea generator, they use Ai as a tool.
Generative Fill in Photoshop is one of the more advanced image-generation AIs I've seen, in my experience with it. It not only draws from training material, it also samples its context extremely accurately. So it really is more like having an AI assistant who uses Photoshop.
Another tool I would like is something for what I call “fluff”.
"Entourage" or "environment" - the landscape architects, interior designers, etc., HATE when people call it fluff.
I asked AI for all these things (see the conversation screens above), and while the result isn't perfectly controllable, it satisfies me well enough here.
Yes, this is a flaw in the current tech, and I understand there are projects that require very precise control over the final rendering, but I expect that's coming.
The Twinmotion 2025 update just added Nanite as a feature, and it's a game changer in the competition among real-time rendering engines. I just tested a huge file with millions of entities, and converting everything to Nanite makes it run smoothly without any lag. It's awesome. Even in SketchUp in monochrome mode, without any textures, I can't orbit or move around without some lag.