
GPU support

Started by syfer, April 01, 2014, 07:44:24 AM


syfer

Hi, I think it would be cool if we could add some GPU support. I know this is CPU rendering, but it could use the GPU as well. I'd like to see support for the GPU and CPU working together to get the job done even faster, kind of like double-teaming the production for an even faster result and faster rendering times. If you could somehow get the CPU and GPU to talk to each other, it might render even faster with the help of the GPU cores.
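To make the idea concrete, here is a rough Python sketch (not KeyShot code, just an illustration) of that kind of double-teaming: one shared pile of tiles that a CPU worker and a GPU worker both pull from, so the faster device simply ends up finishing more tiles. The two render functions are made-up placeholders.

Code:
import queue
import threading
import time

# Split the frame into tiles and put them all in one shared queue.
tiles = queue.Queue()
for tile_id in range(64):
    tiles.put(tile_id)

def render_tile_cpu(tile_id):
    time.sleep(0.05)   # placeholder for real CPU path-tracing work

def render_tile_gpu(tile_id):
    time.sleep(0.02)   # placeholder for a real GPU kernel launch

def worker(render_fn, name):
    done = 0
    while True:
        try:
            tile_id = tiles.get_nowait()
        except queue.Empty:
            print(name, "rendered", done, "tiles")
            return
        render_fn(tile_id)
        done += 1

threads = [
    threading.Thread(target=worker, args=(render_tile_cpu, "CPU worker")),
    threading.Thread(target=worker, args=(render_tile_gpu, "GPU worker")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

Because both workers drain the same queue, the split between CPU and GPU adjusts itself automatically to whichever device is faster.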

thomasteger

No it will not. That is a common myth.

DriesV

I dabbled in GPU rendering for a while, about two years ago.
I gave up once I figured out that for every new release of software X I had to update my graphics driver too to keep things working! That, and stability issues.
I decided to call it a toy. I don't know firsthand whether things have changed a lot since then...

Dries

Despot

I used the Octane beta years ago and it was OK... it did the job, although it nearly fried my graphics cards, which at the time were dual 9800 GTs, I think... now I use a lowly laptop that KeyShot runs fine on (well, V3 anyway).

DriesV

#4
Just wait until Intel releases their 60-to-72-core (4 threads/core!) Xeon Phi CPUs (codenamed Knights Landing) next year.
Not many details are available in the wild yet, but basically it will be an HPC-centric CPU that plugs into a Xeon-capable server/workstation. In a multi-socket system, it could be paired with a regular Xeon for higher single-threaded performance.
Supercomputers are already being designed around this new chip at this very moment.

Like most Xeon components it will not be cheap. However, the benefits over GPUs (and the current Xeon Phi, for that matter...) are huge: no memory limitations, a direct link to the host CPU, easy data sharing between CPUs, and highly familiar, highly flexible programming (no pushing data over PCIe, etc.).

In my (humble and non-expert) opinion, it's going to be huge. It could be a blow to the relevance of GPUs, at least (or most definitely?) for supercomputing.
As we know from history, the mainstream follows suit. :)

Dries


jiyang1018

Before Xeon Phi arrives, adding more GPUs to a computer is easier and more cost-efficient than adding CPUs.
For most of the computers people have right now, adding one more graphics card should not be a problem. How many people's computers can take another CPU?
Phi should completely change the game, but at what cost? I'm afraid that can't be answered until Phi is officially released.

Arn

Personally I see CPU-based rendering as a little more robust, and GPU-based rendering as a little cheaper or more powerful. The cards are made for these kinds of workloads, so it figures you can buy a lot of speed for little money. On the other hand, they are a bit more finicky to get working properly and optimally.

I would love to see what KeyShot can do with GPUs, but at the same time I think focusing the effort on doing one thing well is the best strategy. I would rather have another company make a good GPU renderer so we have both technologies competing.

Despot

Quote from: Arn
On the other hand, they are a bit more finicky to get working properly and optimally.

That's not the case at all... just make sure your NVIDIA card has CUDA support and that you have up-to-date drivers installed, and away you go.
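If you want to check that quickly, a small script like this (assuming an NVIDIA card and the nvidia-smi tool that ships with the driver) will tell you whether the driver is installed and which version you're on:

Code:
import shutil
import subprocess

# nvidia-smi ships with the NVIDIA driver; if it's missing, the driver likely isn't installed.
if shutil.which("nvidia-smi") is None:
    print("nvidia-smi not found - NVIDIA driver probably not installed")
else:
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,driver_version", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    print(result.stdout.strip())   # e.g. "GeForce GTX 670, 331.82"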

edwardo

I think most of the issues arise from trying to network GPU renderers.

Angelo

I tested CPU vs. GPU in Blender's Cycles renderer. I have an Intel Core i7-3930K (6 cores / 12 threads) and a Titan Black 6GB. The CPU renders 12 tiles at a time while the GPU renders 1 tile at a time, but faster per tile. The CPU finished 6-15 seconds ahead of the GPU almost every time, and I ran the test around 8 times (rough setup sketched below).

I think for the GPU to come out faster you would have to have multiple cards, but that's not cost-efficient for those on a budget.
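For anyone who wants to repeat the test, here's roughly what the setup looks like as a Blender Python script (this is Cycles, not KeyShot; the property names follow newer Blender releases and may differ slightly in older ones):

Code:
# Run with: blender -b yourscene.blend -P benchmark.py
import time
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Enable the CUDA devices (newer-style API; older Blender used bpy.context.user_preferences).
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()            # populate the device list
for device in prefs.devices:
    device.use = True

# Render the same scene once on CPU and once on GPU and compare wall-clock time.
for mode in ('CPU', 'GPU'):
    scene.cycles.device = mode
    start = time.time()
    bpy.ops.render.render(write_still=False)
    print(mode, "took", round(time.time() - start, 1), "seconds")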

Arn

#11
Quote from: edwardo on July 25, 2014, 04:44:36 AM
I think most of the issues arrive from trying to network GPU renderers
Indeed, that is one of the problems. I also understand it is quite a bit more work for the company developing the software to support the various chips that are out there, although CUDA and especially OpenCL should mitigate that a bit. Don't quote me on that, though.
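For what it's worth, OpenCL at least makes it easy to see every vendor's devices through one API. A quick sketch (assuming the pyopencl package and at least one installed OpenCL driver):

Code:
import pyopencl as cl

# List every OpenCL platform (NVIDIA, AMD, Intel...) and its devices.
for platform in cl.get_platforms():
    print(platform.name, "-", platform.version)
    for device in platform.get_devices():
        print("   ", device.name,
              "|", device.max_compute_units, "compute units",
              "|", device.global_mem_size // (1024 ** 2), "MB")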

DriesV

#12
I recently purchased a license of Octane Render (I already had a GTX 670, which is sort of okay-ish for GPU rendering), mainly because it is cheap enough to justify fooling around with.

Note: personally, I find performance per watt to be the most important factor when comparing CPU and GPU rendering performance. Luckily my CPU (3930K @ 4.6GHz) and GPU (GTX 670) draw about the same amount of power, roughly 170W each, which makes comparing them quite straightforward. Both parts were released no more than six months apart and were state-of-the-art when I bought them.
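As a back-of-the-envelope way to compare, you can simply divide whatever benchmark number you use (samples per second, say) by the power draw. The numbers below are made-up placeholders, not measurements:

Code:
# Performance-per-watt comparison; the samples_per_s values are placeholders,
# substitute your own benchmark results.
benchmarks = {
    "i7-3930K @ 4.6GHz": {"samples_per_s": 1.0e6, "watts": 170},
    "GTX 670":           {"samples_per_s": 1.2e6, "watts": 170},
}

for name, numbers in benchmarks.items():
    ratio = numbers["samples_per_s"] / numbers["watts"]
    print(name, "->", round(ratio), "samples/s per watt")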

Here are my 2cts so far:

  • For HDRI-lit product shots KeyShot is faster and WAY more efficient than any GPU path tracer (unidir, bidir, MLT...). Path tracers take the brute-force approach, no matter how complex the scene is. For product shots it's a terrible waste of resources.
  • However, currently GPU path tracers ARE significantly faster than KeyShot for realtime complex interiors (or any other scenes that require complex multi-bounce indirect illumination). Here the peak compute power of GPUs definitely helps.
  • Still, I don't believe path tracing is the best one-size-fits-all solution. A new GPU renderer called Redshift is supposedly capable of handling a multitude of modern biased algorithms (supposedly hard to achieve on GPUs). It's very, very fast, but it feels like V-Ray, with all the settings finicking to boot. Any renderer that feels like V-Ray is not trying to compete with KeyShot. :)
  • From a rendering technology point of view I find KeyShot to be the most impressive renderer. Ultimately, once KeyShot runs on Knights Landing Xeon Phi CPUs (it HAS to someday, right!? ;)), it will blow the doors off all GPU renderers. Maybe as soon as 2015?
  • "GPUs are so much more flexible. You can add GPUs and quadruple rendering speed!!" This is a silly argument. A great number of GPU rendering demos are running on 4 (or more) GPUs. With the current state-of-the-art that amounts to 1000W (or more) (!) for the GPUs alone! That's an insanely high number. True, it's much easier to add GPUs to an existing system than CPUs. However, do you really want your workstation to double as a furnace??
  • GPU rendering is still hardware and driver hell. As pointed out, getting network rendering to work isn't trivial in most GPU renderers. You often need to match GPU generations, models, drivers (!)... Network rendering with CPUs is child's play in comparison.

Closing comment:
In my opinion, many developers of GPU renderers are falsely assuming that CPU performance scaling has reached the end of the line. They're assuming raw CPU performance will only improve marginally from one generation to the next, rather than by the big jumps it made in the past.
There's enough CPU-centered tech in the works to prove them wrong. I think multi-CPU systems with heterogeneous chips (e.g. a 10-core general-purpose CPU + a 70-core parallel-processing CPU) are going to be very important in the near future.

Dries