GPU Rendering - CUDA cores / RT Cores - what matters most?

Started by smalldogstudio, May 18, 2020, 04:13:16 AM


smalldogstudio

Hey everyone

So, related to mafrieger's topic, I'm interested to know how important CUDA cores and Turing RT cores are to render times, and how they relate to each other. Can anyone from Luxion share any knowledge from their own testing, please? Is one more important than the other?

For instance, at present I have an MSI GeForce RTX 2060: 6 GB, 1920 CUDA cores, 30 RT cores, around £330 in the UK. (It's been fantastic so far.)

I am considering upgrading my system: either the CPU (from a Ryzen 2700X to a 3950X, +£700), adding a second card identical to the above, or buying a GPU with more cores. I'd like to know the relative benefit of any such upgrade. I realise the performance increase isn't linear, and I'm not sure I could afford a card with NVLink capability, but perhaps I should stretch to one, as the GPU performance on my system is many times the CPU performance (CPU benchmark 1.89 vs GPU 16.80).

Would I see a benefit from simply adding an identical RTX 2060 - faster renders, but no shared memory? My other thought is to upgrade to a card with NVLink now, so I get more render power from the extra cores straight away, and then add another identical card in the future to double the memory.

Most of my scenes are currently under 6 GB total (with one recent exception which ran to over 28M polys). I'm now rendering mostly at 4K to 300-500 samples, and most images take between 20 and 60 minutes. I'm also increasingly getting into animation, so frame render times are important.

Any guidance is very welcome - I've got some money burning a hole in my pocket!

best wishes

David (aka small dog studio)

Sune

Hi David.

We have not tried the RTX 2060 here at Luxion, but we have tested a lot with the RTX 2080 and the Quadro RTX 5000.

We have also tried comparing a GTX 1080 with an RTX 2080 and found that while the RTX card only has about 15% more CUDA cores, the RT cores make it 4 times as fast in KeyShot.

Using two identical cards did in fact almost halve the rendering time in some of our tests.
For example, we rendered a scene that took 474 seconds on one 2080, and 251 seconds with two.

Right now I only have an RTX 5000 here, which gets a 28.91 in our benchmark tool - very similar to how the RTX 2080 performs.
Two RTX 5000 GPUs get a 51.63, meaning two GPUs give about 1.79 times the performance of one.
Assuming that scaling factor applies to all multi-GPU setups, you could expect a result around 30 with two RTX 2060 GPUs. That would be slightly above a single RTX 2080, but the 2080 does have more memory plus NVLink capability.
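
For anyone who wants to sanity-check that extrapolation, here is a minimal sketch of the arithmetic in Python (it assumes benchmark scores behave as pure throughput multiples, which is a simplification):

```python
# Rough back-of-the-envelope check of the multi-GPU scaling described above.
# Assumption: benchmark scores scale as pure throughput multiples.

single_rtx5000 = 28.91    # one Quadro RTX 5000 (score from this post)
dual_rtx5000 = 51.63      # two Quadro RTX 5000 cards

scaling = dual_rtx5000 / single_rtx5000            # ~1.79x from adding a second card
print(f"Dual-GPU scaling factor: {scaling:.2f}")

single_rtx2060 = 16.80    # David's GPU benchmark score from the first post
dual_rtx2060_est = single_rtx2060 * scaling        # assumes the same scaling holds
print(f"Estimated score with two RTX 2060s: {dual_rtx2060_est:.1f}")   # ~30

# The render-time example earlier in the post tells a similar story:
one_card_s, two_card_s = 474, 251
print(f"Measured speedup with two RTX 2080s: {one_card_s / two_card_s:.2f}x")
```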

So yes, adding a second 2060 would increase your GPU render speed, but you would not be able to benefit from more memory, as pooling memory requires NVLink.

smalldogstudio

Thanks Sune

That's really useful information. I'm leaning heavily towards a new GPU - either 8 GB, or 11 GB with NVLink, which would make sense given the better performance. Incidentally, do you know how Tensor cores affect anything? I've been making a comparison spreadsheet to look at cost per CUDA / RT / Tensor core and RTX-OPS, based on prices from one vendor (SCAN) and priced in £ UK.

So what you're saying effectively is that RT cores contribute to faster rendering times more than CUDA cores do?

The cost per core of any type seems pretty linear. (See attached table.)
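
As a rough illustration of that cost-per-core calculation, here's a minimal sketch; only the RTX 2060 figures come from this thread, and the attached table itself isn't reproduced here:

```python
# Cost-per-core comparison in the spirit of the spreadsheet described above.
# Only the RTX 2060 row uses numbers quoted in this thread; add further cards
# with your own vendor pricing (e.g. SCAN, in GBP) to compare.

cards = {
    "RTX 2060": {"price_gbp": 330, "cuda_cores": 1920, "rt_cores": 30},
    # add more cards here with your own prices and core counts
}

for name, spec in cards.items():
    per_cuda = spec["price_gbp"] / spec["cuda_cores"]
    per_rt = spec["price_gbp"] / spec["rt_cores"]
    print(f"{name}: £{per_cuda:.3f} per CUDA core, £{per_rt:.2f} per RT core")
```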

best wishes

David


Sune

Hi David,

I'm glad you could use the information.

We only use the tensor cores for denoising, and the difference between a 2060 and 2080 in that case would be down to milliseconds.

Both the number of CUDA cores and the number of RT cores affect rendering time, but yes, the RT cores contribute more.

joseph

Hi Sune,
Thanks for pointing out Tensor cores (Nvidia mixed precision compute branding as I did a search on it) do for denoising. I am  :-[ as to go in here to this topic as I am not a Nvidia hardware user. The denoising feature is really a must, it also demonstrated that other than Nvidia solution can take advantage of. The entry level 5500XT did a good job on the Contemporary Bathroom Interior for my budget. I wasn't able to test it with my WX4100 as I didn't know that feature before on KS9 when I replaced it with the current 5500XT.
regards - joseph

Jon-213

Hi Sune,
"We only use the tensor cores for denoising"
This means that the denoising "look" of a 1070 (it has no Tensor cores) will be different from that of a 2070?
I am asking because on my system the CPU denoising works like magic, while the GPU denoising looks blown out and smeared.
I assumed it was a GPU thing and not specific to the non-RTX cards.
Edit: I must clarify that I only see the GPU denoising problem when the scenes are very complex (millions and millions of polygons, lots of materials, very complex lighting).
:)

Sune

So I am not the developer behind the GPU mode, but to my understanding Tensor cores are only used to improve the speed of denoising.
The end result will be the same, whether your GPU has Tensor cores or not.

Joseph, the GPU mode is not available on AMD graphics cards, but the denoising in CPU mode is independent of your GPU.

joseph

Hi Sune,
If I untick "Use GPU (enable effects)", the Denoise status on the Heads-Up Display stays at "Waiting". If I tick "Use GPU (enable effects)", the Denoise status cycles between about 5 seconds of idle and a burst of running that lasts less than 2 seconds. GPU-Z's sensors also show the GPU clock and memory clock ramping up during the running cycle and dropping back down when idle. So thank you, KeyShot team, for this, and please don't put any restriction on non-Nvidia hardware for this denoising feature.

DerekCicero

Our Chief Scientist, Dr. Henrik Wann Jensen, wrote a blog post today about internal tests he's been running on GPU vs CPU and how they are utilized by KeyShot that you may find informative:

https://blog.keyshot.com/keyshot-rendering-performance-amd-cpu-nvidia-gpu

Jon-213

Wow.
Very interesting article!
It explains several of the characteristics I was seeing in GPU rendering.
Thanks.

mafrieger

Quote from: DerekCicero on May 20, 2020, 02:36:22 PM
Our Chief Scientist, Dr. Henrik Wann Jensen, wrote a blog post today about internal tests he's been running on GPU vs CPU and how they are utilized by KeyShot that you may find informative:

https://blog.keyshot.com/keyshot-rendering-performance-amd-cpu-nvidia-gpu

superb!

figure1a

Quote from: DerekCicero on May 20, 2020, 02:36:22 PM
Our Chief Scientist, Dr. Henrik Wann Jensen, wrote a blog post today about internal tests he's been running on GPU vs CPU and how they are utilized by KeyShot that you may find informative:

https://blog.keyshot.com/keyshot-rendering-performance-amd-cpu-nvidia-gpu
Great article. Thank you. One thing that I find very interesting is that the Quadro RTX 6000 is only posting a 34.73, whereas my RTX 2080 Ti is posting a 34.43. The 2080 Ti is about $1,200, whereas the 6000 is a $4,000+ card.

One question on the benchmark. What does the number represent?

DerekCicero

This blog post talks more about the Benchmark Tool, and the section that answers your specific question is here:

https://blog.keyshot.com/how-to-use-keyshot-benchmark-tool

The results are multiples based on render time. Higher scores are better, and scores higher than 1.0 are faster than the reference system. The reference system is an Intel Core i7-6900K CPU @ 3.20 GHz (2601 MHz, 8 cores). A score of 1.0 matches the speed of the reference system; a score of 2.0 would be double the speed of the reference system.

The GPU benchmark uses the same CPU baseline, based on that same machine. Typically, you'll get much higher numbers with a GPU test. For reference, you can check out this document from NVIDIA (PDF). For example, you can see a big difference between the power of having two Quadro RTX 6000 cards versus the CPU baseline of 1.0.
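
To make the scoring concrete, here's a minimal sketch of how a score works out as a multiple of the reference system; the render times below are invented purely for illustration:

```python
# A KeyShot benchmark score is a multiple of the reference system's speed,
# i.e. how many times faster than the reference i7-6900K the test machine is.

reference_time_s = 1200.0   # hypothetical render time on the reference system
your_time_s = 600.0         # hypothetical render time on the machine under test

score = reference_time_s / your_time_s
print(f"Benchmark score: {score:.2f}")   # 2.00 -> double the reference speed
```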