What are the real parameters that reduce CPU rendering time?

Started by Trixtr, June 27, 2020, 08:05:46 AM

Trixtr

I use network rendering, and I've noticed that different CPUs render at different speeds. Not so strange, but... Some rendering is done on our 5-year-old server, a Xeon at about 2.3 GHz, where I allow KeyShot to use 4 virtual cores (virtualized server). My laptop is a 7th-gen i7 at 3.6 GHz with 8 logical cores (4 physical + 4 hyperthreading) and 32 GB RAM, where I allow 100% core use. The server always renders more tasks than my laptop.

This got me thinking that there must be more to rendering speed than simply GHz. Could it be L3 cache, additional instruction sets on the Xeon, or what am I missing? Furthermore, will Xeon Bronze, Silver, Gold, and Platinum at a similar clock and core count render at the same speed, or will the higher grade always be faster? I'm planning a dedicated render network station and have been struggling to figure out what I should implement. Due to other rendering/simulation software needs, I have opted to stay with Intel rather than AMD, but I'm "free" to pick within the Intel range (although Platinum is too pricey).
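To frame the question, here is a rough back-of-envelope score, not anything from KeyShot itself: it assumes throughput scales with physical cores times sustained all-core clock, plus a partial hyperthreading bonus. The ht_bonus of 0.25 and the 2.8 GHz sustained laptop clock are guesses for illustration only.

# Rough, illustrative model of relative CPU render throughput.
# Assumption: ray-traced rendering scales roughly with
# (physical cores) x (sustained all-core clock), with hyperthreading
# adding only a partial bonus. All constants are guesses, not
# measured KeyShot numbers.

def render_score(physical_cores, sustained_ghz, ht_threads=0, ht_bonus=0.25):
    """Return a unitless throughput estimate for one machine."""
    base = physical_cores * sustained_ghz
    ht = ht_threads * sustained_ghz * ht_bonus
    return base + ht

# Hypothetical figures for the two machines above:
# - Xeon VM: 4 vCPUs treated as ~4 cores at ~2.3 GHz.
# - Laptop i7: 4 physical + 4 HT threads; the 3.6 GHz figure is a boost
#   clock, so the sustained all-core clock is assumed lower (2.8 GHz here).
server = render_score(physical_cores=4, sustained_ghz=2.3)
laptop = render_score(physical_cores=4, sustained_ghz=2.8, ht_threads=4)

print(f"server score: {server:.1f}")  # ~9.2
print(f"laptop score: {laptop:.1f}")  # ~14.0

If an estimate like this disagrees with the task counts you actually see, the gap is exactly where the other factors live: sustained clocks under thermal throttling on the laptop, instruction-set support, cache, and how the network render scheduler hands out tasks.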

PerFotoVDB

Hey Trixtr,

Go for the highest core count you can get. The Threadripper 3990X is a beast for rendering; I don't think anything can top it. There's a KeyShot benchmark roundup worth reading over at CG Channel.
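Purely to illustrate why core count dominates, here is the same rough cores x clock estimate applied to a 64-core 3990X versus the 4-core Xeon VM mentioned above. The ~2.9 GHz base clock and the assumption of linear scaling are simplifications; real KeyShot scaling won't be perfectly linear.

# Same illustrative cores x GHz estimate as above, applied to a
# 64-core Threadripper 3990X (~2.9 GHz base) versus a 4-core VM at 2.3 GHz.
threadripper = 64 * 2.9  # ~185.6
xeon_vm = 4 * 2.3        # ~9.2
print(f"rough speedup: {threadripper / xeon_vm:.0f}x")  # ~20x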

Cheers
Per