GPU, CPU, or both?

Started by hugo, May 19, 2014, 11:28:06 AM

hugo

Where should Developers be heading?

My Example:  Keyshot was not used.

An older quad-core CPU with 8 GB RAM and a GeForce GTX 760 graphics card.

It took 3 hours to render one 1280 x 720 picture.
This program needed to interpret a lot of 3D data, and it did NOT use the GPU.

It took 42 min. to render an entire 1 min. 1280 x 720 MP4 movie.
This animation program also needed to interpret a lot of 3D data, and it used the GPU.

In fact, in the time it took to render that one picture, I produced three 1 min. movies. I then used another program to quickly assemble them into a 3 min. movie with transitions, opening and closing credits, and sound.
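For scale, the per-frame numbers above work out roughly like this. A quick sketch, assuming the movie was rendered at 30 fps (the post doesn't say) and ignoring any quality differences between the still and the animation frames:

```python
# Rough per-frame comparison from the numbers in the post.
# ASSUMPTION: 30 fps for the 1-minute movie (not stated above).
fps = 30
cpu_seconds_per_frame = 3 * 60 * 60              # 3 hours for one still
movie_frames = 1 * 60 * fps                      # frames in a 1-minute clip
gpu_seconds_per_frame = 42 * 60 / movie_frames   # 42 minutes / 1800 frames

print(cpu_seconds_per_frame)                     # 10800
print(round(gpu_seconds_per_frame, 2))           # 1.4
```

So under that assumption the GPU pipeline was turning out a frame every second and a half, versus three hours for the CPU-only still.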

comments welcome!

Speedster

And the question is...? 

KeyShot is totally CPU-centric and doesn't care about the GPU or the graphics card. On my 32-core, 64 GB BOXX, a 1280 x 720 image would render in maybe 15 seconds, or perhaps a bit more depending on the materials. It's all about CPUs. All the smart developers and users are migrating to KeyShot.

Bill G

DriesV

As far as I know, KeyShot is extremely well optimized to make the best use of the CPU.
I have not found a faster renderer (CPU or GPU) for product shots than KeyShot.

There are a few situations where, at the current state of the art, GPUs will be faster (e.g. indirectly lit interior rendering with GPU path tracing). However, there's no reason to doubt that KeyShot might add algorithms to deal with this in the future.

It's all about clever and efficient use of computing resources, and I think CPUs have the edge over GPUs in this regard. (Just a hunch; I'm not a programmer...)

Dries

ddolezal

From my point of view, the GPU approach to rendering is currently totally en vogue, and the hardware vendors love to sell you bundles of video cards to improve render speeds.

There are a few "buts" that nobody mentions:

- In many cases a rendering engine only supports a certain type of GPU, e.g. NVIDIA. If you have a machine with a superfast graphics card from another vendor, you simply cannot use GPU rendering, because your card is not supported by the renderer. (And many render engines currently support only CUDA.)

- You always have to buy the most recent video card in addition to the most recent workstation.

- In many cases the fast graphics card only speeds up your rendering; the general speed of your workstation for "mainstream" tasks is not improved in any positive way.

So the CPU approach, albeit currently not very sexy, offers many advantages IF the general design of the software is smart and the algorithms in the render software are well optimized.


Performance-wise (I am running only on an iMac at 60 fps), I can say that KeyShot beats all the other renderers I have tried so far, so I am quite happy with the approach Luxion took.

Dieter

hugo

Quote from: Speedster on May 19, 2014, 05:14:54 PM
And the question is...? 

If they have figured out how to share the computing workload between multiple processors, then why not figure out how to share the rendering workload between the CPU and the GPU?
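For what it's worth, the workload-sharing idea can be sketched very roughly. This is a toy illustration only, not how KeyShot or any real engine works: the frame's rows are split between one simulated "GPU batch" and a pool of CPU workers, with the 75% split ratio being an arbitrary assumption (a real hybrid renderer would balance it dynamically):

```python
# Toy sketch of hybrid CPU+GPU scheduling for one 720-row frame.
# ASSUMPTIONS: render_row_cpu / render_rows_gpu are stand-ins, and the
# fixed 75% GPU share is arbitrary. Threads are used here for simplicity;
# a real renderer would use native threads or processes for CPU work.
from concurrent.futures import ThreadPoolExecutor

HEIGHT = 720
GPU_SHARE = 0.75  # assumed fraction of rows handed to the GPU

def render_row_cpu(y):
    # stand-in for tracing one scanline on a CPU core
    return ("cpu", y)

def render_rows_gpu(rows):
    # stand-in for a single GPU kernel launch covering many rows
    return [("gpu", y) for y in rows]

def render_frame():
    split = int(HEIGHT * GPU_SHARE)
    gpu_rows = range(split)            # big batch for the GPU
    cpu_rows = range(split, HEIGHT)    # remainder for the CPU pool
    results = render_rows_gpu(gpu_rows)
    with ThreadPoolExecutor() as pool:
        results += list(pool.map(render_row_cpu, cpu_rows))
    return results

frame = render_frame()
```

The hard part in practice is that the two halves finish at different speeds and need the same scene data in two memory spaces, which is presumably why most engines pick one side or the other.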

Also, I don't have $10K to buy a BOXX. But I have picked up several computers at the local recycling center, which are great for network rendering. Your BOXX will probably end up there in about 5 years; let me know when you're throwing it out :)

I still remember spending $800.00 for an 8087 math co-processor, back in my foolish spendthrift days, to make AutoCAD run faster.

cheers!