Hiya, I am new to the vray rendering system. Would someone be able to tell me what the rendering time is on average (approx.)
Thank you very much
It's completely different for everyone and depends 100% on the hardware setup you have.
I am using Thea instead of V-Ray, but the principle is the same. Nowadays you specifically want to be rendering on the GPU rather than the CPU (ideally both) if you want to go fast. Each of my GPUs has 2816 cores, far more than my CPU, which has 6, for example.
It also depends on how the renderer is set up for your hardware and on what you are rendering (once you add lights, volumetrics, and reflections, it will eat up even more processing power).
Here is an example: I rendered this image first on my PC using a GTX 680 card, and it took 26 hours at 8000 px. I re-rendered it using my MSI 6 GB GTX 980Ti and it took only 5 hours.
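To put numbers on that comparison, here is a back-of-the-envelope sketch. The two render times come from the post above; everything else is just simple arithmetic, not anything specific to V-Ray or the cards themselves:

```python
# Rough speedup arithmetic for the GTX 680 -> GTX 980 Ti example above.
# The times are from the post; this is just illustrative maths.
old_time_h = 26  # GTX 680, 8000 px render
new_time_h = 5   # GTX 980 Ti, same scene and settings

speedup = old_time_h / new_time_h
saved = old_time_h - new_time_h
print(f"Speedup: {speedup:.1f}x")      # roughly 5.2x
print(f"Time saved: {saved} hours")    # 21 hours
```

The same kind of quick ratio is useful when deciding whether a GPU upgrade (or a render farm node) is worth the cost for your typical scene.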
If you are on a laptop you can expect much lower speeds than on a desktop, and if you are on a Mac you can generally expect much lower speeds than on a PC (though in both cases it's possible to spend a very large amount of money upgrading them).
If you want to take it a step further, you can even build a small render farm or use a cloud solution. But everybody's times will differ (by hours or even days) depending on the hardware they have.
Like liamk887 said, it depends… A simple architectural scene takes approximately 3-4 hours on a CPU (i7 4790) at 4000 px on the long edge.
Actually, in the long run the render time stays the same; it's the quality that keeps getting better, depending on the system.
Is that really the case? I am not disputing it, just wondering how that's so.
Only five years ago I would not have been able to render in real time, whereas now I can render a scene at the same quality as five years ago, but instantly and continuously, allowing me not only to preview it but also to navigate it.
For example, the image I posted earlier was rendered to a preset target (i.e. stop rendering when 'x' parameters are met). With one card it reaches that level in almost a day, but with another it hits the same benchmarks in just under five hours.
The quality stays the same but the time taken is reduced in this example.
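The "stop when 'x' parameters are met" idea can be sketched as a toy loop. This is a minimal illustration of progressive rendering with a noise-threshold stop condition, not any real engine's API; the `sample()` function is a stand-in that assumes noise falls off roughly as 1/sqrt(passes), as in Monte Carlo rendering:

```python
# Toy sketch of rendering to a preset quality target: keep adding
# sample passes until estimated noise drops below a threshold.
NOISE_THRESHOLD = 0.01  # hypothetical target, not an engine default

def sample(passes):
    # Stand-in noise estimate: Monte Carlo noise shrinks ~ 1/sqrt(N).
    return 1.0 / (passes ** 0.5)

passes = 1
while sample(passes) > NOISE_THRESHOLD:
    passes += 1

print(f"Converged after {passes} passes")  # 10000 passes
```

The pass count needed for a given quality is the same on any card; a faster GPU simply completes those passes in less wall-clock time, which is why quality stays constant while render time drops.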
Here is a good example, with FStorm render and a decreasing render time:
https://www.behance.net/gallery/45983669/Varenna-meets-Natuzzi(-Updated-Animation-Teaser-)
With some tweaks in Lighting and Settings, the scene was rendered amazingly fast.
3-5 mins for a 2000 px wide render. I use a GTX 1080 and a 980Ti.
5 minutes with depth of field
The point is not that there are examples in which the render time for a picture of a given quality and pixel count is reduced. I am referring to human nature, which gets used to a certain amount of time for processing an idea or a setup.
So you have this idea, and now you stop fiddling with different settings one hour before the deadline, where five years ago you would have stopped a day before the deadline.
The physical render time may be an hour, but the whole process still takes two days.
Oh yes, I can certainly relate to that. Also, I read the article after I posted my response, hoho.
This topic was automatically closed 91 days after the last reply. New replies are no longer allowed.