KeyShot - but with GPU rendering?

Started by Reuben J, February 06, 2018, 04:16:17 PM


Reuben J

Is there another program that can do everything KeyShot does, but renders using the GPU?

I need it to be a standalone program, capable of importing multiple file formats, easy to use, and able to produce photorealistic images.

Appreciate anyone's input.

TGS808

As easy to use? Can't think of any.

Why does it have to be GPU-based? If you're looking for a "standalone program, capable of importing multiple file formats, easy to use and able to produce photorealistic images", well, you just described KeyShot, so why not use it? Why get hung up on CPU vs. GPU?

DMerz III

This probably isn't going to be the best place to find the answer to your question, Reuben. If there were a GPU-based equivalent of Keyshot that could match the photorealism and 'easy' workflow, I don't think Keyshot would have much of a selling point in the long run.

AFAIK, GPU rendering CAN be much faster, but you give up certain calculations that model real-world physics (at least, that's how it has been explained to me). Updating your GPU or buying multiple cards is also cheaper in the short term than rebuilding a machine around a new CPU (cost of the motherboard, plus potentially everything else for compatibility), although the recent crypto-mining craze and what it has done to GPU prices has debunked that somewhat.

However, a slow CPU + a fast GPU still results in a bottleneck... so there's that. Also, the on-board memory on a GPU is still quite small compared to the RAM capacity of most CPU/motherboard setups at the moment (and if your scene has a high polycount or lots of high-resolution textures, that memory goes quickly!).
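
For a rough sense of how quickly it goes, here's a back-of-the-envelope sketch (the per-triangle byte count is my own loose guess, not a number from any real engine, which would also add BVH, framebuffer, and driver overhead on top):

    # Rough vRAM estimate for a GPU render scene (back-of-envelope only).
    def scene_vram_gb(triangles, textures_4k, bytes_per_tri=100, bytes_per_texel=4):
        # ~100 bytes/triangle is a loose guess covering vertices, normals, UVs, BVH nodes
        geometry = triangles * bytes_per_tri
        # an uncompressed 4K RGBA texture: 4096 * 4096 texels * 4 bytes each
        textures = textures_4k * (4096 * 4096 * bytes_per_texel)
        return (geometry + textures) / 1024**3

    # A 19-million-triangle model plus twenty 4K textures:
    print(round(scene_vram_gb(19_000_000, 20), 1), "GB")  # ~3.0 GB before overhead

So one heavy CAD model already eats a big chunk of an 8GB card before the engine's own overhead is counted.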

All that being said, at home I have a modest i7 6700K and a pretty decent GTX 1070. Do I wish I could use the power of the 1070 in Keyshot when I'm doing test renders? Absolutely.
Good luck in your search, please let us know if you find the pot of gold!  ;)

mattjgerard

Quote from: DMerz III on February 07, 2018, 07:34:46 AM
This probably isn't going to be the best place to find the answer to your question, Reuben. If there were a GPU-based equivalent of Keyshot that could match the photorealism and 'easy' workflow, I don't think Keyshot would have much of a selling point in the long run. [...]

Ditto all of this. Coming from a GPU renderer, and having done dozens of hours of research into one versus the other, CPU is still the most reliable. The pain of keeping app versions matched to NVIDIA driver versions was a time-waster for just me; I can't imagine doing that for a crew of artists, much less a render farm! It's a huge headache, which is why all the major players are still rendering mainly on CPU. They are testing GPU stuff, but it's still so fragile.
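
If anyone wants to script part of that babysitting, checking the installed driver before a render kicks off is at least easy (a minimal sketch; it assumes nvidia-smi is on the PATH, and the minimum version is a made-up placeholder you'd take from your engine's release notes):

    # Check the installed NVIDIA driver against a minimum before launching renders.
    import subprocess

    REQUIRED_MAJOR = 390  # hypothetical minimum from your render engine's release notes

    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        text=True,
    ).strip()
    major = int(out.splitlines()[0].split(".")[0])  # first GPU's driver version
    if major < REQUIRED_MAJOR:
        raise SystemExit(f"Driver {out} too old; engine needs >= {REQUIRED_MAJOR}.x")
    print(f"Driver {out} OK")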

I got sucked into GPU rendering solutions last year, and while it CAN work well for small shops and single players, Merz is totally right about the RAM limitations. There are people out there who build 14-card systems, and they completely crank until a card overheats, or something updates and tosses the whole system into chaos.

CPU is more reliable and easier to maintain: you don't have to babysit it or lock down updates and drivers, and it makes network rendering a simple thing. More expensive on the front end, yes.

I'll be interested to hear what you find, and whether you decide to go GPU. There are certainly places for it, and it's not a bad thing. Keyshot just offers so much simplicity and depth for what it does. It would be neat to hear from an engineer at KS on the matter. If they did get GPU rendering to work in KS, would it immediately be 10 times faster? Is there that much of a gap between GPU and CPU times, or is it much smaller?

Reuben J

Thanks for the responses...really appreciate it.

My main reason for going GPU rather than CPU was mainly speed, but also cost. I can buy a single capable system and easily add two or three more cards in SLI without needing a new machine, whereas with a CPU-based system I need to buy an entire new system to add more power.

If I was going to go CPU, what is the most cost-efficient CPU to go for? I was thinking of trying to find a cheap server, maybe a Dell R910 (4x 8-10 core CPUs) or something.

Furniture_Guy

I hear good things about the AMD Ryzen Threadripper 1950X 3.5 GHz 16-core processor...

Perry

Reuben J

I also hear things about it being insanely expensive, not only the processor itself, but all the other hardware (mobo, cooling, etc.).

Mario Stockinger

We built a Threadripper workstation with an air-cooled TR 1950X, 32GB of RAM, an M.2 SATA SSD, and a Quadro P2000: ~2500€.

DMerz III

Another thing to mention, which I did not know when I thought about GPU integration, is that even if you tie (SLI) multiple cards together, you're not increasing your GPU vRAM availability (at least this is the case for the GPU render engines in my experience). So, for instance, if you have three GTX 1070s with 8GB of on-board vRAM each, you do not have 24GB of vRAM available to render your scene; it can actually only use 8GB max.

The limitation gets even more serious when you have mixed cards. Let's say you have two 16GB-capable cards and one 4GB card... do you get 16GB of vRAM available? Nope, it limits itself to the smallest card; in this scenario, you only get 4GB. A lot of people don't think about this limitation when they go GPU. Yeah, you can upgrade your system by slapping in another card, but if you started with a 4GB card and want to go to 8GB, you need to get rid of the 4GB card altogether.

Perhaps this limitation will change (or already has); I honestly haven't kept up with it. Perhaps the work you do isn't very memory-intensive, but for my work I am usually going high-poly and need high-resolution textures, so having a high memory capacity is important.
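
If you want to see what your own effective budget is, the cards are easy to query (a sketch using the third-party pynvml bindings to NVIDIA's NVML library; the "smallest card wins" rule is engine behavior, so the min() here just mimics it):

    # Effective per-scene vRAM budget when an engine can't pool memory
    # across cards: the smallest card wins. Requires the pynvml package.
    import pynvml

    pynvml.nvmlInit()
    totals = []
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older pynvml versions return bytes
            name = name.decode()
        totals.append(mem.total)
        print(f"GPU {i}: {name} - {mem.total / 1024**3:.1f} GB")
    print(f"Effective scene budget: {min(totals) / 1024**3:.1f} GB")
    pynvml.nvmlShutdown()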

mattjgerard

Quote from: DMerz III on February 09, 2018, 08:37:02 AM
Another thing to mention, which I did not know when I thought about GPU integration, is that even if you tie (SLI) multiple cards together, you're not increasing your GPU vRAM availability (at least this is the case for the GPU render engines in my experience). So, for instance, if you have three GTX 1070s with 8GB of on-board vRAM each, you do not have 24GB of vRAM available to render your scene; it can actually only use 8GB max. [...]

Did you use Octane too? Because this is exactly what happens. And with heavy polycount scenes, the load times can be limiting.

SLI is for gaming; I don't think any of the render engines use it. I know Octane doesn't. SLI is starting to go away as well: as GPUs get faster and bigger, there's little need to link them that way. The code is getting smarter and can optimize without ganging cards together. SLI is a fairly fragile thing anyway.
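
Conceptually, the engines just hand work to each card on its own. Something tile-shaped like this toy sketch is all the "linking" that's needed (not any engine's real scheduler, and the device names are made up):

    # Toy illustration of why SLI isn't needed: an engine can just deal
    # render tiles to each GPU independently, round-robin.
    from itertools import cycle

    def assign_tiles(image_w, image_h, tile=256, gpus=("cuda:0", "cuda:1", "cuda:2")):
        """Yield (gpu, tile_rect) pairs; each card renders its tiles on its own."""
        devices = cycle(gpus)
        for y in range(0, image_h, tile):
            for x in range(0, image_w, tile):
                rect = (x, y, min(tile, image_w - x), min(tile, image_h - y))
                yield next(devices), rect

    for gpu, rect in assign_tiles(1920, 1080):
        print(gpu, rect)  # a real engine would enqueue the tile on that device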

Each solution has its place, but I built a PC for GPU rendering and spent about $2,300 USD. But that gets you a high-end GPU with a just-OK motherboard, CPU, RAM, etc., so all my other apps suffered a bit just to get the monster GPU. I would now rather put that money into the CPU and better RAM, so that every app that uses the CPU can run faster, plus a decent GPU that will handle most stuff just fine.

DMerz III

:) I have not tried Octane. I have some friends who use it for less photorealistic, more abstract VFX-art type of stuff, and they love it, of course. They have never used Keyshot though ;)

I was referring to GPU rendering in Cycles; the difference there is that Cycles can also render on the CPU if you want, so your scene is not a total loss. I use Cycles as a test engine when I am building stuff in Blender that I eventually bring over to Keyshot as geometry. The GPU capability is nice to have just for testing some things, but I don't really go through material shaders and such at that point.
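
For reference, flipping Cycles between CPU and GPU is a couple of lines in Blender's Python console (the preference path below is from the 2.79-era builds I've used and may differ in newer versions):

    # Run inside Blender's Python console: switch Cycles between CPU and GPU.
    # (Blender 2.79 path; newer builds renamed user_preferences to preferences.)
    import bpy

    prefs = bpy.context.user_preferences.addons['cycles'].preferences
    prefs.compute_device_type = 'CUDA'       # or 'NONE' to force CPU only
    bpy.context.scene.cycles.device = 'GPU'  # per-scene toggle: 'GPU' or 'CPU'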

Will Gibbons

Quote from: Reuben J on February 08, 2018, 03:20:49 PM
[...] If I was going to go CPU, what is the most cost-efficient CPU to go for? I was thinking of trying to find a cheap server, maybe a Dell R910 (4x 8-10 core CPUs) or something.

I realize I forgot to create a post for the Threadripper workstation I built back in December (so I can't link to its component list at this time).

I spent about $6K (liquid-cooled Threadripper, two 1080 Ti Hybrid cards, 64GB RAM, two M.2 SSDs, and some cosmetic upgrades), but I could have skimped on everything except the CPU and cooler and ended up with the exact same KeyShot experience for less than $2,000 USD. The downside of multi-tower configs is the cost of Network Rendering licenses, so if you're looking for cost savings, I'd recommend going with a single machine/OS with as many chips as you can, so you can use a single KeyShot license on it.

I built the Threadripper 1950X and it is quite fast; 32 threads running at 4.1GHz is pretty quick in KeyShot. It just depends on what you'll be rendering. I built an all-round machine that is fast for KeyShot, and I figure any jobs my machine is too slow for can just be sent out to a render farm.
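
If anyone wants to sanity-check how a chip scales before buying, a dumb Monte Carlo loop is roughly render-shaped, embarrassingly parallel work. This is just a homemade benchmark, nothing to do with KeyShot's internals:

    # Crude check of how a render-like workload scales with core count.
    import random, time
    from multiprocessing import Pool

    def trace(samples):
        hits = 0
        for _ in range(samples):
            x, y = random.random(), random.random()
            hits += (x * x + y * y) <= 1.0  # stand-in for a ray/bounce computation
        return hits

    if __name__ == "__main__":
        total = 8_000_000  # fixed total work, split across workers
        for workers in (1, 4, 8, 16, 32):
            start = time.perf_counter()
            with Pool(workers) as pool:
                pool.map(trace, [total // workers] * workers)
            print(f"{workers:2d} workers: {time.perf_counter() - start:.2f}s")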

The way David explains SLI is how I've experienced it too. I tested mine in SLI on a Cycles render and it wasn't that fast; I disabled SLI and got better results.

bshapiro

Just saw this post while looking for other info and thought you guys might find this of interest.

Last year I did some extensive testing with SolidWorks Visualize, and I disagree with some of the comments above. The actual rendering on a Quadro M4000 was very fast, and the newer Pascal cards would smoke any CPU. The cost to set up a machine is way less than a tricked-out dual-CPU box.

So you might ask why I went with KeyShot. While GPU has great potential, it has some pitfalls, as Matt G pointed out. The video driver must always be updated to match the version of the program. Secondly, I was getting odd hanging issues when changing materials, selecting bodies, etc., and it got painfully slow with some files. Most of my files were large, upwards of 18-19 million polygons, nothing like the sample file they give you to try. I never resolved those issues.

Also, the interface was unrefined, whereas KeyShot is a very refined and mature program. It's not just about pushing the render button for the actual rendering; the setup often takes way more time, at least for us.

So yes, the GPU can do wonders with thousands of cores, and many of the newer cards have plenty of RAM. BTW, if you use multiple video cards, they only use as much RAM as the smallest card. So if you have one card with 4 gigs of RAM and one with 8, both only use 4.

And one of my favorite aspects of Keyshot is that there's no need for a separate HDRI editor.

So maybe GPU someday, but not yet.

Barry


Reuben J

Quote from: DMerz III on February 09, 2018, 08:37:02 AM
Another thing to mention, which I did not know when I thought about GPU integration, is that even if you tie (SLI) multiple cards together, you're not increasing your GPU vRAM availability (at least this is the case for the GPU render engines in my experience). So, for instance, if you have three GTX 1070s with 8GB of on-board vRAM each, you do not have 24GB of vRAM available to render your scene; it can actually only use 8GB max. [...]

Have you seen the AMD Radeon Pro Duo? I think that'd solve any VRAM issues!

mattjgerard

Quote from: Reuben J on April 02, 2018, 10:03:23 PM

Have you seen the AMD Radeon Pro Duo? I think that'd solve any VRAM issues!

While that does seem to remove one roadblock to GPU rendering, the others are still a big issue. If AMD can get a handle on drivers and versioning better than NVIDIA has, then this might start to take over. If anything, this whole process is really interesting to watch from the outside.