KeyShot Forum

Other => Wish List => Topic started by: syfer on April 01, 2014, 07:44:24 AM

Title: gpu support
Post by: syfer on April 01, 2014, 07:44:24 AM
Hi, I think it would be cool if we could add some GPU support. I know KeyShot is CPU rendering, but it could also use the GPU. I would like to see support for the GPU and CPU working together to get the job done even faster, kind of like double-teaming the production for faster results and faster rendering times. If you could somehow integrate the CPU and GPU to talk to each other, it might even render faster with the help of the GPU cores.
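To make the idea concrete, here is a minimal C++ sketch (not KeyShot code) of that kind of double-teaming: CPU worker threads and a GPU worker pull tiles from one shared queue, so whichever device is free takes the next tile. The render_tile_on_* functions are hypothetical stand-ins for real shading work.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

constexpr int kTiles = 64;  // pretend the frame is split into 64 tiles

// Hypothetical stand-ins for real work. In an actual hybrid renderer the
// GPU version would enqueue a CUDA/OpenCL kernel and copy the tile back.
void render_tile_on_cpu(int /*tile*/) {}
void render_tile_on_gpu(int /*tile*/) {}

int main() {
    std::atomic<int> next_tile{0};  // one shared work queue for both devices

    // GPU worker: grabs the next free tile, same as everyone else.
    std::thread gpu_worker([&] {
        int tile;
        while ((tile = next_tile.fetch_add(1)) < kTiles)
            render_tile_on_gpu(tile);
    });

    // CPU workers: pull from the same counter, so CPU and GPU
    // "double-team" the frame instead of working on fixed halves.
    std::vector<std::thread> cpu_workers;
    for (unsigned i = 0; i < std::thread::hardware_concurrency(); ++i)
        cpu_workers.emplace_back([&] {
            int tile;
            while ((tile = next_tile.fetch_add(1)) < kTiles)
                render_tile_on_cpu(tile);
        });

    gpu_worker.join();
    for (auto& w : cpu_workers) w.join();
    std::printf("all %d tiles rendered\n", kTiles);
}
```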
Title: Re: gpu support
Post by: thomasteger on April 01, 2014, 08:10:44 AM
No it will not. That is a common myth.
Title: Re: gpu support
Post by: DriesV on April 01, 2014, 09:44:41 AM
I dabbled in GPU rendering for a while, about 2 years ago.
I gave up once I figured out that for every new release of software X I had to update my graphics driver too, just to keep things working! That, and stability issues.
I decided to call it a toy. I don't know firsthand if things have changed a lot since then...

Dries
Title: Re: gpu support
Post by: Despot on May 28, 2014, 06:52:16 AM
I used the Octane beta years ago and it was OK... it did the job, although it nearly fried my graphics cards, which at the time were dual 9800 GTs, I think... Now I use a lowly laptop that KeyShot works fine on (well, V3 anyway).
Title: Re: gpu support
Post by: DriesV on May 29, 2014, 01:31:05 PM
Just wait until Intel releases their 60-to-72-core (4 threads/core!) Xeon Phi CPUs (http://www.realworldtech.com/knights-landing-details/) (codenamed Knights Landing) next year.
Not many details are available in the wild yet, but basically it will be an HPC-centric CPU that plugs into your Xeon-capable server/workstation. In a multi-socket system, it could be paired with a regular Xeon for higher single-threaded performance.
Supercomputers are already being designed around this new chip at this very moment (http://goparallel.sourceforge.net/nersc-supercomputer-first-use-intels-next-gen-knights-landing/).

Like most Xeon components it will not be cheap. However, the benefits over GPUs (and the current Xeon Phi, for that matter...) are huge: no memory limitations, a direct link to the host CPU, easy data sharing between CPUs, and highly familiar, highly flexible programming (no pushing data over PCIe, etc.).
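As a rough sketch of what that "familiar programming" means (a generic OpenMP loop, not anything Intel has published): the exact same shared-memory code runs on a 6-core desktop CPU or a 60+-core Phi, with no device buffers or PCIe copies.

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<float> image(1920 * 1080);

    // One pragma spreads the loop over however many cores the chip has,
    // whether that's 12 threads on a desktop or 240+ on a Phi.
    // Compile with e.g. g++ -fopenmp.
    #pragma omp parallel for
    for (long i = 0; i < (long)image.size(); ++i)
        image[i] = 0.5f;  // stand-in for per-pixel shading work

    std::printf("shaded %zu pixels\n", image.size());
}
```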

In my (humble and non-expert) opinion, it's going to be huge. It could be a blow for the relevance of GPUs. At least (or most definitely?) for supercomputing.
As we know from history, mainstream follows suit. :)

Dries
Title: Re: gpu support
Post by: jiyang1018 on July 13, 2014, 08:08:03 PM
Myth busted by the MythBusters:
http://www.youtube.com/watch?v=-P28LKWTzrI
Title: Re: gpu support
Post by: jiyang1018 on July 13, 2014, 08:28:34 PM
Until Xeon Phi arrives, adding more GPUs to a computer is easier and more cost-efficient than adding CPUs.
For most of the computers you have right now, adding one more graphics card should not be a problem. How many people's computers can take another CPU?
Phi could completely change the game, but at what cost? I'm afraid that can't be answered until Phi is officially released.
Title: Re: gpu support
Post by: Arn on July 24, 2014, 09:16:22 PM
Personally I see CPU-based rendering as a little more robust, but GPU-based rendering as a little cheaper and more powerful. The cards are made for these kinds of things, so it figures you can buy a lot of speed for little money. On the other hand, they are a bit more finicky to get working properly and optimally.

I would love to see what KeyShot can do with GPUs, but at the same time I think focusing the effort on doing one thing well is the best strategy. I would rather have another company make a good GPU renderer so we have both technologies competing.
Title: Re: gpu support
Post by: Despot on July 24, 2014, 10:34:36 PM
Quote from: Arn on July 24, 2014, 09:16:22 PM
On the other hand, they are a bit more finicky to get working properly and optimally.

That's not the case at all... just ensure your NVIDIA card has CUDA support and that you have up-to-date drivers installed, and away you go.
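For illustration, a minimal stand-alone check along those lines, using the CUDA runtime API (a generic device query, nothing Octane- or KeyShot-specific; build with nvcc or link against cudart):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // Fails cleanly if there is no CUDA-capable card or the driver is too old.
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU found (or driver too old).\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("GPU %d: %s (compute capability %d.%d)\n",
                    i, prop.name, prop.major, prop.minor);
    }
}
```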
Title: Re: gpu support
Post by: edwardo on July 25, 2014, 04:44:36 AM
I think most of the issues arise from trying to network GPU renderers.
Title: Re: gpu support
Post by: Angelo on July 25, 2014, 05:55:50 AM
I tested CPU vs. GPU in Blender's Cycles renderer. I have an Intel Core i7-3930K (6 cores/12 threads) and a Titan Black 6GB. The CPU renders 12 tiles at once while the GPU renders 1 tile at a time, but faster. The CPU finished 6-15 seconds ahead of the GPU almost every time, and I ran this around 8 times.

I think for the GPU to come out faster you would have to have multiple cards, but that's not cost-efficient for those on a budget.
Title: Re: gpu support
Post by: Arn on July 25, 2014, 07:47:41 PM
Quote from: edwardo on July 25, 2014, 04:44:36 AM
I think most of the issues arrive from trying to network GPU renderers
Indeed, that is one of the problems. I also understand it is quite a bit more work for the company developing the software to support the various chips that are out there, although CUDA and especially OpenCL should mitigate that a bit. Don't quote me on that though.
Title: Re: gpu support
Post by: DriesV on July 26, 2014, 05:06:50 PM
I recently purchased a license of Octane Render (I already had a GTX 670, which is sort of okay-ish for GPU rendering), mainly because it is cheap enough to justify fooling around with.

Note: personally I find performance per watt to be the most important factor when comparing CPU and GPU rendering performance. Luckily my CPU (3930K @ 4.6GHz) and GPU (GTX 670) draw about the same amount of power: +/-170W. This makes comparing them quite straightforward. Both parts were released no more than 6 months apart and were state-of-the-art when I bought them.
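As a toy illustration (the samples-per-second figures below are invented placeholders, not my measurements): with both parts at the same wattage, the performance-per-watt ranking reduces to the raw performance ranking.

```cpp
#include <cstdio>

int main() {
    const double watts_cpu = 170.0, watts_gpu = 170.0;  // power draw from the post
    const double samp_cpu = 1.0e6, samp_gpu = 1.2e6;    // hypothetical samples/s

    std::printf("CPU: %.0f samples/joule\n", samp_cpu / watts_cpu);
    std::printf("GPU: %.0f samples/joule\n", samp_gpu / watts_gpu);
    // With equal wattage, the perf/watt ranking equals the raw speed ranking.
}
```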

Here are my 2cts so far:

Closing comment:
In my opinion, many developers of GPU renderers are falsely assuming that CPU performance scaling has reached its final station. They're assuming raw CPU performance will only improve marginally from one generation to the next, not in big jumps (like it did in the past).
There's enough CPU-centered tech in the works to prove them wrong. I think multi-CPU systems with heterogeneous chips (e.g. a 10-core general-purpose CPU + a 70-core parallel-processing CPU) are going to be very important in the near future.

Dries