GPUs used in parallel processing

Started by hugo, October 31, 2014, 11:41:10 AM



hugo

http://www.wired.com/2014/10/future-of-artificial-intelligence/

Interesting article about AI and how GPUs are changing parallel processing.
If you don't have time to read the entire article, scroll to:

1. Cheap parallel computation

Which once more raises the question: when will KS recognize that CPU rendering may not be the only way forward? :)




DriesV

I don't agree.
Look at the complete mess that happens with GPU renderers when new generations of Nvidia GPUs arrive: bad performance because code isn't optimized for the new architecture, bad performance because of CUDA quirks, discontinuation of support for older-generation hardware...
I think there is this myth that GPUs are faster than CPUs in ALL compute conditions. That argument doesn't hold up. In certain workloads GPUs CAN be faster, but it's fair to say that a highly optimized CPU renderer can outperform any GPU renderer in speed and functionality.

Also note that most GPU renderers are 'simple' path tracers. That's the algorithm that is the easiest to implement on GPUs. I hope Henrik can confirm, but I don't expect a renderer like KeyShot to be able to run on a GPU.
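
To make that concrete: the reason path tracing maps so naturally onto GPUs is that every pixel is an independent Monte Carlo estimate, so the work is embarrassingly parallel. Here is a minimal, purely illustrative sketch of that structure in plain Python (the "scene" and the noise model are made up; no real renderer looks like this):

import math
import random

random.seed(0)

def radiance_sample(x, y):
    # Stand-in for tracing one random light path through a scene;
    # returns a noisy estimate around a fake ground-truth value.
    true_value = 0.5 + 0.5 * math.sin(0.1 * x) * math.cos(0.1 * y)
    return true_value + random.gauss(0.0, 0.3)

width, height, samples_per_pixel = 64, 64, 16

# Every pixel (and every sample) is independent of every other one,
# which is exactly the kind of loop that maps one-thread-per-pixel onto a GPU.
image = [[0.0] * width for _ in range(height)]
for y in range(height):
    for x in range(width):
        samples = [radiance_sample(x, y) for _ in range(samples_per_pixel)]
        image[y][x] = sum(samples) / samples_per_pixel

The hard part is everything beyond this simple loop: complex materials, large scene data and more advanced algorithms don't fit that one-thread-per-pixel model nearly as cleanly.
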
One very important benefit of CPUs over GPUs, that is all too often overlooked, is flexibility:
*no ultra-specific hardware requirements
*no driver dependencies (!)
*much more straightforward network rendering (with GPUs, hardware and drivers often need to be matched across nodes!)
*'easier' to implement complex algorithms
*much easier to deploy on multiple platforms, on servers, in the cloud...
*...

It all boils down to the workload and context at hand:
Some tasks/algorithms run faster on GPUs, others run faster on CPUs.
GPUs are more finicky, CPUs are more flexible.
I think for rendering specifically, GPUs aren't the be-all and end-all solution.

Also remember that, contrary to what Nvidia wants you to believe, CPU development is as strong as ever.
We're going to see 72-core Xeon CPUs in 2015. Let's talk again then. :)

Dries

hugo

Quote from: DriesV on October 31, 2014, 12:25:35 PM
I don't agree.
_snip_
Also remember that, contrary to what Nvidia wants you to believe, CPU development is as strong as ever.
We're going to see 72-core Xeon CPUs in 2015. Let's talk again then. :)

Dries

Well, maybe we can agree to disagree! I don't think 72-core CPUs are the answer either, without considering the bottleneck between memory and the CPU. It's as phony as thinking we are all going to use cloud computing. The massive amount of electricity required to keep these clouds alive is not sustainable.

https://www.youtube.com/watch?v=jcmsby8jDKE HP seems to be working to create "The Machine", which is really what's required going forward.

thomasteger

We have run some tests on 72-core machines and are seeing a 50% increase in performance over 48-core machines.
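
To put those numbers in perspective, the arithmetic is simple (this rough check uses only the core counts and the 50% figure quoted above, and ignores any clock-speed differences between the machines):

cores_before, cores_after = 48, 72
speedup = 1.50                            # "50% increase in performance"

core_ratio = cores_after / cores_before   # 1.5x more cores
efficiency = speedup / core_ratio         # 1.0 would be perfectly linear scaling
print(f"{core_ratio:.2f}x cores, {speedup:.2f}x speed -> {efficiency:.0%} scaling efficiency")

In other words, a 50% speedup from 50% more cores is essentially linear scaling.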

hugo

Quote from: Thomas Teger on November 04, 2014, 01:20:24 PM
We have run some tests on 72-core machines and are seeing a 50% increase in performance over 48-core machines.

I won't be going out and buying a 72-core CPU any time soon, and maybe never. There have to be practical, affordable alternatives.
It's not hard to create software, and in this example a plugin, that uses both CPU & GPU, with the ability to choose one or the other.
Note the attached pic:

DriesV

CFD is the textbook example where GPUs make the most sense. Mainstream GPU acceleration for CFD was available much earlier than for rendering.
You can't compare rendering to simulation algorithms.

I get the idea that these days some developers try to shoehorn every problem into a GPU solution. I think the CPU in many cases, and for many reasons, is still the best approach.

If GPUs are so much more powerful, then why is KeyShot still unrivalled in terms of rendering speed?

Dries

hugo

Quote from: DriesV on November 06, 2014, 06:53:34 AM
If GPUs are so much more powerful, then why is KeyShot still unrivalled in terms of rendering speed?
Dries

Should I believe everything I read?
I think GPUs have a definite place in the market; maybe it's as simple as giving consumers a choice.
It's cheaper to buy a 4 GB graphics card than to buy a new 72-core rig.
I don't know why software companies make certain decisions, but I have been on this planet long enough to keep asking questions. I hope discussions like this make us all a bit smarter in selecting what to buy next. :)

thomasteger

You will need several expensive graphics cards to match the performance of KeyShot on a 72-core machine.

hugo

I don't know if Pixar is in bed with Nvidia or vice versa :) but the presenter at this conference makes a strong case for what GPUs offer alongside CPUs to enable real-time rendering for Pixar's animators.

https://www.youtube.com/watch?v=dk-0L6oAJqI&list=PL6Oz_zLeNQ0ksobrAtnyNaZGwkOs-u6_J&index=9

KeyShot

GPUs are great for preview rendering, as shown in the video. Pixar does not use GPUs to render their movies. All the major movie studios use CPUs for high-quality rendering, and some use GPUs for previewing. GPUs are great at running simple algorithms in parallel, but they are not good at running the advanced algorithms used in KeyShot. Path tracing, which is commonly used on GPUs, is an example of a simple algorithm that is great for fast, noisy results, but it takes a long time to become noise-free.
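
The "takes a long time to become noise-free" part is basic Monte Carlo behaviour: pixel noise falls off roughly with the square root of the sample count, so halving the noise costs about four times the samples. A rough illustration of that relationship, assuming idealized 1/sqrt(N) convergence (real scenes only approximate this, and the 64-sample starting point is arbitrary):

import math

# Idealized Monte Carlo convergence: pixel noise ~ 1 / sqrt(samples per pixel).
base_samples = 64                            # arbitrary starting point

for fraction in (0.5, 0.25, 0.1):            # target: 50%, 25%, 10% of the starting noise
    needed = base_samples / fraction ** 2    # noise ~ 1/sqrt(N)  =>  N ~ 1/noise^2
    print(f"{fraction:.0%} of the noise -> ~{needed:,.0f} samples per pixel")

That quadratic cost of removing the last bit of noise is why "fast but noisy" previews and clean final frames are very different problems.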