KS9 GPU & CPU - why not both at same time?

Started by rfollett, August 01, 2019, 12:25:18 AM

rfollett

Surely two is better than one when it comes to the final result? I don't understand why using the CPU and GPU at the same time isn't the fastest option. Can you explain?
Surely it is all about crunching the numbers for the final render... is it not possible to utilise the GPU and CPU at the same time?

INNEO_MWo

That is a nice idea, and maybe in a future release it will be possible?!
At KeyShot World (the user meeting hosted by INNEO in Germany), Hendrik showed a comparison of GPU and CPU results across different material types, and they looked quite good.
So I guess Luxion will collect experience and look at the next steps.

I am looking forward to the new release, whenever it arrives.

Cheers
Marco

Eric Summers

I'm curious about that too. If it's possible, it would definitely benefit everyone!

RRIS

From the comment section of the KS9 announcement:
"Q: Will it be possible to use a hybrid with gpu and cpu at the same time like blender render system?"
"A: Not at this time. So far, we're finding better performance with the option between one or the other."
https://blog.keyshot.com/keyshot-support-nvidia-rtx-ray-tracing-ai-denoising

mattjgerard

I wondered about this a long time ago, when GPU rendering first started coming around to the mainstream, and when a certain unnamed render engine really broke through to the smaller 3D shops like the one I was at. A friend of mine, who is something of a savant when it comes to understanding the coding and technical side of this issue, did a very good job of talking about 3" over my head, where I could still understand why it's so hard to get them to work together. It basically boils down to having two very different toolsets to try to get to the same image. You can use the tools that the CPU or GPU manufacturer provides you with, but they are totally different toolboxes.

Imagine toolbox A only having Phillips-head screwdrivers and toolbox B only having flatheads. A having left-handed hammers, B having right-handed. A having cordless drills, B having corded drills that only work when held upside down with your left foot, and you can only pull the trigger with a peeled banana. If you were trying to build a birdhouse with either toolbox, sure, you could modify your method of building to suit the toolset you have. But if you had two people trying to use two different toolboxes to build the same birdhouse, there would be conflict, some nasty words tossed back and forth, and likely a couple of punches thrown. And the end result would be very difficult to make look the same as the one built with a single toolbox.

The same goes for the CPU and GPU working together: two different toolboxes. The CPU is just a number cruncher; you can write whatever code you want, it's a blank slate. You are required to build your own toolset and stock your own toolbox with whatever you want. Whereas, as far as I understand (and I could be totally off base), with the GPU you are given a toolbox with only certain tools in it, and you can't go to the pawn shop and buy the missing tools that would make your job easier. So trying to get the GPU to match what you have already programmed the CPU to do forces you to use a much more limited toolset to create the same-looking image that came from your own toolset on the CPU.

That being said, from what I understand, KeyShot is taking a novel approach, in that they are: 1) relying on the CPU as the main method of creating an image (if all else fails, the CPU will diligently output the image it always has, based on the code that has already been developed), and 2) looking at the GPU to take on only certain tasks where they can get exactly the same results as from the CPU.

If the GPU toolset can't calculate the color of the shadows in a certain pass, OK, let the CPU keep handling that (shadow color cast probably isn't a huge thing for gaming). But the GPU is really good at calculating ray bounces for the GI pass? Sweet, move that function to the GPU, which frees the CPU to move on to other stuff. This will only work for the calculations that the GPU is good at and has the tools for, and only if the code magicians at Luxion can get the same results from it.
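Purely to make that division of labor concrete, here is a hypothetical sketch (this is not Luxion's code; every name in it is made up for illustration) of a scheduler that hands a render pass to the GPU only when its output has been verified to match the CPU reference:

```cuda
// Hypothetical pass scheduler (illustrative only, not KeyShot's code).
// Each pass is routed to the GPU only if its GPU implementation has been
// validated offline to match the CPU reference closely enough to composite.
#include <functional>
#include <map>
#include <string>
#include <vector>

struct PassBuffer { std::vector<float> pixels; };  // one framebuffer per pass

struct RenderPass {
    std::string name;
    bool gpuMatchesCpu;                 // verified: GPU output == CPU reference
    std::function<PassBuffer()> cpuImpl;
    std::function<PassBuffer()> gpuImpl;
};

// Run every pass, picking the backend per pass; the trusted CPU path is the
// fallback, exactly as described above.
std::map<std::string, PassBuffer> renderFrame(const std::vector<RenderPass>& passes) {
    std::map<std::string, PassBuffer> results;
    for (const auto& p : passes)
        results[p.name] = p.gpuMatchesCpu ? p.gpuImpl() : p.cpuImpl();
    return results;  // passes are later composited into the final image
}

int main() {
    std::vector<RenderPass> passes = {
        {"shadow", false, [] { return PassBuffer{}; }, [] { return PassBuffer{}; }},
        {"gi",     true,  [] { return PassBuffer{}; }, [] { return PassBuffer{}; }},
    };
    auto frame = renderFrame(passes);  // "shadow" stays on the CPU, "gi" goes to the GPU
}
```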

This is all conjecture and educated guessing, but it's how I understand it, with my limited knowledge of coding, as it's been dumbed down for me by people much smarter than I am.

jet1990

I wonder if it would be possible to split the workload so the CPU takes care of one set of tasks and the GPU takes care of another?

mattjgerard

Quote from: jet1990 on August 02, 2019, 07:38:14 AM
I wonder if it would be possible to split the workload so the CPU takes care of one set of tasks and the GPU takes care of another?

I think that's their goal: to use the GPU to enhance CPU rendering speed by offloading the processes the GPU can handle. I can't imagine it's easy to do that and still get acceptable results to merge back into the final image.

jet1990

I'm not sure if it goes against the principles of how KeyShot works, but if we could bake lighting in the scene... oh man. That would take interior renderings to a whole new level.

Tamerlin

Quote from: jet1990 on August 02, 2019, 07:38:14 AM
I wonder if it would be possible to split the workload so the CPU takes care of one set of tasks and the GPU takes care of another?

It depends a lot on how the renderer is implemented. There are some renderers that do this, but the trick here, I think, is that KeyShot will be using OptiX to leverage the GPU. I read a bit about this in one of the docs for the upcoming Cycles update, which provided an overview.

The rather oversimplified upshot is that OptiX essentially provides a ray server: you set up the scene geometry with OptiX's data structures and then let IT do the ray-hit calculations (I believe you, as the developer, get to control how the rays are generated). OptiX uses whatever hardware is available, which means that if you're running with RTX GPU(s), it will use the RTX hardware to calculate ray hits, and if you have more than one RTX GPU, it will distribute the load between them. You define what to do on a ray hit (i.e. which shaders to call, etc.) via a callback, and on a ray hit OptiX calls that... callback.
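To make the "callback" idea concrete, here is a minimal sketch in the style of the OptiX 7 device API (not KeyShot's actual code; the program names, the toy camera, and the Params layout are my own illustration). The host side builds the acceleration structure from the scene and calls optixLaunch(); OptiX then invokes these programs as rays traverse the scene:

```cuda
// Illustrative OptiX 7-style device programs. OptiX owns ray traversal
// (on RTX cards, the RT cores do the intersection work); our code only
// runs at the ray-generation, closest-hit, and miss points.
#include <optix.h>

struct Params {                         // wired up host-side as the launch params
    OptixTraversableHandle handle;      // scene geometry owned by OptiX
    float3*      frameBuffer;
    unsigned int width;
};
extern "C" __constant__ Params params;

// Ray generation: the developer decides which rays to shoot (one per pixel here).
extern "C" __global__ void __raygen__pinhole() {
    const uint3  idx       = optixGetLaunchIndex();
    const float3 origin    = make_float3(0.f, 0.f, -1.f);  // toy camera
    const float3 direction = make_float3(0.f, 0.f, 1.f);
    unsigned int hit = 0;                                   // payload register
    optixTrace(params.handle, origin, direction,
               0.f, 1e16f, 0.f,                             // tmin, tmax, time
               OptixVisibilityMask(255), OPTIX_RAY_FLAG_NONE,
               0, 1, 0,                                     // SBT offset/stride, miss index
               hit);
    params.frameBuffer[idx.y * params.width + idx.x] =
        hit ? make_float3(1.f, 1.f, 1.f) : make_float3(0.f, 0.f, 0.f);
}

// Closest-hit "callback": OptiX found the nearest intersection; shading goes here.
extern "C" __global__ void __closesthit__shade() { optixSetPayload_0(1u); }

// Miss "callback": the ray escaped the scene.
extern "C" __global__ void __miss__background()  { optixSetPayload_0(0u); }
```

The design point is that the developer never writes the traversal loop: OptiX decides where rays hit, and your code only supplies what happens at those points.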

Optix also provides hardware based noise reduction that doesn't share any compute load with the general purpose compute resources, so it leaves the entire GPU and CPU compute capacity available for things like running shaders, calculating physics, loading geometry, saving images, etc -- that's the big thing that Turing brings, dedicated tensor and RT hardware that don't take up any of the compute resources available.