Poll: Who still wants to learn rendering with AI in the mix?

OK, so the topic is this question:

Do people still want to learn to render these days? Learning V-Ray, for example (geometry, materials, lighting, composition, output, and post-processing)? I know AI can mimic the ‘look’ of a rendering, and to many, that might be enough. Is there still value in knowing the process for how renders are made, rather than just seeing how fast we can get to a finished product?

4 Likes

I think the process is changing. At least in my area which is arch-viz.

AI-enhanced rendering is what people and software are moving to.

The use of Twinmotion, Unreal, etc. is making rendering more of an assembly process, which I don’t think is a bad thing. More like a film director as opposed to a set designer. Less emphasis on building objects, materials, and environments, and more emphasis on adjusting lighting, camera, and overall scene composition.

I’ve been rendering with raytracing engines for decades, and there is a lot of science and a setup process that is frankly a barrier to creativity (and productivity). Thankfully it’s becoming unnecessary in all but the most experimental rendering projects.

The exact same thing happened with digital photography, where automatic mode is now so good that it’s used for 99.999% of photos, and then images are adjusted (in camera or in external apps/filters) after being captured.

1 Like

I wrote this as a reply to a similar AI rendering post on another platform:

What I need to see before I’ll take ANY AI architectural renderer seriously:

I’m tired of seeing AI render comparisons that show dozens of variations of the same design - none of which actually match what was intended. Instead of showcasing creative “interpretations,” here’s what would actually prove an AI renderer works:

The Test:

  1. 5 different render types - exterior modern office, interior lounge, residential development, etc.

  2. Perfect accuracy - Each render must match the source model EXACTLY. No bonus railings, missing chimneys, chair substitutions, or phantom table legs. If the model has 4 windows, the render shows 4 windows. Period.

  3. Single modification - Make one specific change to the source model (move a window, change wall color, add a door, etc.)

  4. Exact consistency - Re-render all views and they must match the originals down to every blade of grass, with ONLY the requested change visible.
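A check like point 4 doesn’t need human eyeballs; it can be automated. Here is a minimal sketch, assuming the two renders are available as same-sized pixel grids (a real pipeline would load the images with PIL or NumPy; `diff_ratio` is an invented name, plain lists keep the sketch self-contained):

```python
# Hypothetical consistency check: given two renders as 2D pixel grids,
# report the fraction of pixels that differ between them.

def diff_ratio(render_a, render_b):
    """Fraction of pixels that differ between two same-sized renders."""
    total = 0
    changed = 0
    for row_a, row_b in zip(render_a, render_b):
        for px_a, px_b in zip(row_a, row_b):
            total += 1
            if px_a != px_b:
                changed += 1
    return changed / total

# Two tiny 2x3 "renders" that differ in exactly one pixel:
original = [[0, 0, 0], [0, 0, 0]]
revised  = [[0, 0, 0], [0, 255, 0]]
print(diff_ratio(original, revised))  # -> 0.16666666666666666 (1 of 6 pixels)
```

A renderer that passes the test would score 0.0 everywhere except the region of the one requested change.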

Until an AI renderer can nail this level of precision and consistency, it’s just an expensive hallucination generator.

Stop showing me 20 “creative variations” that all miss the mark. Show me ONE renderer that can actually follow instructions and maintain design integrity.

13 Likes

For now, there’s no AI rendering engine that gives 100% accurate results. Veras and Rendair are, in my opinion, the best options for AI rendering, but you can’t ask for results like V-Ray’s, with all the materials and geometry 100% accurate. They’ll probably be able to do that in the near future, but for now, as I’ve mentioned in other threads, they’re tools focused on the early stages of a project: from a simple draft you can get pretty nice and interesting ideas.

What I can see disappearing in the near future is V-Ray. Real-time rendering engines are getting better, the learning curve is gentle, and you can get quite nice results even as a beginner. If you compare renders made with Twinmotion in 2020 and now, it’s like comparing a render made by a novice with Twilight and one made by an advanced user of V-Ray.

1 Like

Generative AI models are getting more and more accurate. Look what I was able to get today in no time from a simple image of a SketchUp model:

Input:

Output:

And in the future, I guess the output will be interactive, not just a still image.

1 Like

Personally, yes, I want to learn rendering. In particular, I want to leverage what I know from real world photography in the realm of rendering. We talked a bit about it at 3D Basecamp, and it’s still on my agenda to do some basic exercises.

I did resort to a little AI (in Photoshop, not SU) in this rendering, but it doesn’t scream “AI Generated!”

3 Likes

I have to agree with RTCool; I want to improve my rendering skills so that I have ultimate control. Only I, with human eyes and perception, can determine exactly the look I want. I do not want to give up my creative nuances to an AI generator. Added to which, in my line of work designing and modelling performing arts venues and the like, AI cannot and should not replace the subtleties of architectural and performance lighting within a venue.

1 Like

You make a good point. Lighting design cannot be done by an application that just takes guesses as to what kind of image you would like to see.

On the other hand, many of the people who post their rendering issues seem to have an idea of the kind of image that they want, and to get to that, they add a proliferation of hidden and other lights to their models that are quite impossible to realise in real life. They call what they want “realism”. This kind of “render”, I think, can more easily benefit from AI.

1 Like

Render programs are already adding AI features to their workflows. I started modeling and rendering as a hobby, so I will probably continue using the old methods, even if that sector is no longer a job. But our tools are changing, so I think the old methods will be gone soon.

1 Like

Very nice work!

Other than the colours, the plants in the pots, the background, and the assumption that the walls are stacked stones, I can’t see any differences between the model and the render … that’s one of the best I’ve seen so far.

Now here’s the real test - change ONE element (delete a plant pot or something) and generate a render that matches EXACTLY the same lighting, camera angle, and composition as above, but with just that one change.

I’m currently working on a run-down listed property restoration where the client struggles to visualise colours and furnishings from traditional drawings - she needs to ‘see it’ to perceive it. So I’m building an accurate model room by room, allowing her to experiment with different finishes and see the restored spaces come to life.

Each room requires multiple renders that must be identical except for camera position/view. When finishes get specified, they’re rendered again. When revisions come in, they’re rendered again. The key requirement: AT NO POINT should anything be added, deleted, or “creatively interpreted” in the renders.

This is where current AI rendering falls short - it can’t maintain that pixel-perfect consistency while making targeted changes. Once it can reliably do this, I’ll embrace it wholeheartedly.

I’m not anti-AI at all - I have a paid Claude subscription that’s proven invaluable for my work, and ironically, a big part of our architectural practice involves designing the massive data centres that house and run these AI systems. But for client presentation work requiring precise control, traditional rendering workflows still reign supreme.

The moment AI can guarantee “same scene, different sofa colour” without randomly adding a houseplant or changing the lighting mood (or, if it does change something, applying that same change to the other xx renders of the same space), it’ll revolutionise our industry.

I used Claude to rewrite this and the above post I made, I’ve no shame or guilt in that as ‘words and stuff’ ain’t my strong suit.

1 Like

That’s what AI rendering is meant to be, at least for now. It’s a great tool for the early stages of a project: you can create some basic volumes or even just sketches, and with the right prompt you can get a lot of inspiration or new ideas. Those renders are just for the designer, not the client; for the client you still need a classic rendering engine.

2 Likes

This line says it all: “I’m tired of seeing AI render comparisons that show dozens of variations of the same design - none of which actually match what was intended.”

First, with a test prompt about a house of cards, not once did I get a similar result, even when asking it to recreate the same result. When I finally got one that looked nice, even a slight colour change produced a completely different image. So I tried using SU with a design as the base, but not once, even with no changes, did I ever get the same result twice.
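For what it’s worth, that run-to-run drift is baked into how these generators work: a diffusion sampler starts from random noise, and unless the tool exposes and pins a seed, every run starts from different noise and so lands on a different image. A toy sketch of that mechanic (`fake_latent` is a plain-Python stand-in, not a real engine):

```python
import random

def fake_latent(seed=None):
    """Stand-in for the random starting noise a diffusion sampler uses.
    Real engines that allow reproducibility expose a seed parameter
    that plays exactly this role."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(4)]

# No seed: two "renders" of the same prompt start from different noise.
a, b = fake_latent(), fake_latent()
print(a == b)  # almost certainly False (fresh entropy each call)

# Pinned seed: identical starting noise, hence a reproducible result.
c, d = fake_latent(42), fake_latent(42)
print(c == d)  # True
```

So a tool that never lets you pin the seed can never give you the same result twice, no matter how precisely you repeat the prompt.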

By the way, it’s the same with writing code. For a test project I supplied the variable names and function parameters, and even with a paid subscription and memory on, when fed its own first result it changed its own code, with notifications that the idea (the supplied code) was wrong. I hadn’t even pointed out what I wanted changed; I was just using its own result.

Like you said ‘expensive hallucination generator’.

1 Like

I tried to add people to a still-image render using Google Veo 2. Not bad at all.

8 Likes

The shadows and reflections of the people look very natural.

2 Likes

Now, there’s a good use for AI. It ‘enhances’ your results instead of imagining an alternative.

Nice result!

2 Likes

Thanks everyone for your insights. It’s nice to see both new and innovative AI uses that are actually useful and aiding in the creative process…as well as a desire to still learn and understand the aspects that go into rendering the ‘old school’ way.

As the wise Ferris Bueller once put it best: ‘Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it.’ That was my intent with this post… to pause for a moment and reflect on where we are at this given moment/shift in time.

3 Likes

No, AI cannot do pixel-perfect, accurate FINAL renders. But what it can do is create many WIP and concept-iteration sketches. Good for designers and architects for ideation and concept work, as well as the look and feel for final renders.

1 Like

What you’re talking about is prompt-based image generation, like diffusion. The use case for that is limited, though diffusion at least references a model (really an image… lame). So there’s hope.

I think the better use case (for now) is for AI tools to generate assets and make adjustments WITHIN a rendering engine (e.g. Enscape, V-Ray), or within SketchUp. This means I can quickly change the scene from summer to winter, or add rocks, or generate procedural background environments… So in this case control and repeatability are possible.
An AI helper using parametric tools similar to Live Components would replace the 3D Warehouse, for example. I could say “I want a 1982 Honda Civic in mustard yellow with a convertible roof” and it should be able to produce that accurately.
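A request like that is really a structured spec rather than free-form prose, which is exactly what makes it controllable and repeatable: the same spec always describes the same asset. A hypothetical sketch of what the AI helper would fill in from the prompt (all names are invented for illustration; no real SketchUp or Live Components API is used):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssetRequest:
    """Hypothetical parametric asset spec an AI helper could extract
    from a natural-language prompt, then hand to a procedural modeler."""
    category: str
    model: str
    year: int
    colour: str
    options: tuple

# "I want a 1982 Honda Civic in mustard yellow with a convertible roof"
req = AssetRequest(
    category="vehicle",
    model="Honda Civic",
    year=1982,
    colour="mustard yellow",
    options=("convertible roof",),
)
print(req.year, req.colour)  # -> 1982 mustard yellow
```

Because the spec is frozen data, re-running the same request (or changing one field) is deterministic - unlike re-rolling a text prompt through an image generator.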

AI hasn’t been trained to use software yet. Right now the LLMs and image generators are procedurally creating content based on sampling. Once AI agents get exposed to more software tools, they’ll be able to create more stuff we can use.

For Arch Viz and other rendering, there are a few areas like adding traffic or moving people where AI can really shine, because those aspects take a LOT of time and processing power to do in the traditional manner. Same with backgrounds - we might model a great building, but we’re not being paid by our client to model half a city block around it.

Going forward, I’d like to see FAR more integration between real design data/parameters (traffic flows actually being realistic, people moving in real ways) and aerial photos used to generate places, with AI agents (NPCs) simulating real activity within those places. E.g. a person sits down at a seat and watches a movie, or waits for traffic before crossing the road. As a design tool this will allow us designers to create simulations of the real world.

The data behind this is pretty heavy though - a few companies (e.g. Arup) have created crowd simulation models, plus we have traffic models, weather models, etc… Those databases are all significantly complex and draw from data that’s held within locked commercial servers. AI might make “lite versions” of them accessible to typical commercial projects. The AI grunt needed to create models is also high and currently subsidised (OpenAI loses $100 billion or so a year).

Gaming is what’s moving this forward (I can’t wait to see the new Grand Theft Auto VI game, not for the gameplay but for the city environment it creates, complete with roaming AI NPCs).

2 Likes

This is fantastic. What AI did you use? I have tried a few, but though I beg them to preserve geometry, they insist on remodeling it a little. I use Enscape, and Enscape offers the use of AI to photographically enhance the result.

1 Like

I use D5 Render > SU Livelink. When in D5, I really enjoy adding assets to all parts of the structure, but when it comes to landscape I find the AI tools a blessing and a time saver. I always want to put in more hours than the budget allows, so it lets me spend more time on interiors. So it’s a helpful tool, but I don’t care for it designing the project.

1 Like