Computer Specs cost comparison

Started by andy.engelkemier, October 04, 2016, 10:14:53 AM



I was noticing that a beast of a machine that's a few years old, with a higher-end Xeon processor (high end at the time), more memory which is also likely faster, and other specs that Should make it much faster than another computer, was being outperformed Noticeably by someone's Dell laptop, which happens to have a 6th-gen i7.

That got me thinking...
People are spending $3k+ on render machines. An extra 32 cores costs $480 per year. Well, you can buy a few cheaper computers networked together to get more rendering power, or you can try to pack that all into one machine.

Now, I know what you're thinking (some of you at least). If it's your machine, you'll just want it as fast as possible. I suppose that may be true. But there's always a cost/benefit analysis that should happen, even if it's just an estimate.

I'm not questioning FPS here. I'm just looking at the actual final render speed. Throw in some advanced materials, a fairly complex scene, and render the thing out at 12,000 x 8,000. Something that is just going to take a long time no matter how you spin it.

The question is, for the same amount of money, what's more efficient? Having a few fairly cheap computers as a mini-farm, or having one or two machines that cost more than my car (I drive an old car, so that isn't difficult).

I was thinking something like a decent i7, a regular SSD (120GB or so should be plenty, roughly $70), and 16GB of memory.

That should do the trick, and it doesn't really need much of a graphics card if it's just a render node, right? I'm not sure about that, but in most software I've used, rendering doesn't really use the card for much beyond a few tasks like casting certain types of shadows and other non-GI work.

But then I had to ask: why is my coworker's laptop actually faster at rendering than a machine that, according to its specs, should be twice as fast? That's where I'm hoping some of you could chime in.
I was hoping to see some of this info in the benchmark area, but it seems people just like to post their builds there to show why their $8k computer is slightly better than your $4k computer. I'm actually talking about building sub-$1k computers and using them as a farm.

And some people will tell you, similar to Quadros vs. gamer cards, "but Xeons are built for this and will last." I've got news for you: we've been using gamer cards for quite some time, and none of them have gone bad (not to say others wouldn't). Our two oldest computers are also gaming machines, while lots of our Dell and HP workstations have gone bad. Having something with an industrial claim doesn't necessarily make it better. In my career I've also had two Quadros go bad, and zero consumer cards (not overclocked).

Has anyone made a small, cheap render farm? Assuming 32 cores is probably at least 3 computers' worth at that level, it's $160 per year per computer. So if you build a $1K computer that lasts 3 years, you're looking at a total of just shy of $1,500 per computer, or $4,500 for three. Is a $4,500 computer faster than three $1K computers put together? If not, where's the sweet spot? If the expensive computer Is just plain faster, then I guess I'll peruse the benchmark pages and figure out where I think we should put some money.
One big advantage of the small computers, though, is that if one breaks you still have the rest and just render a little slower. If your one expensive computer breaks, you can't render at All.
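To sanity-check the arithmetic above, here's a back-of-envelope sketch in Python. All the figures are the assumptions from this post ($1K build, ~$160/year per machine, 3-year lifespan), not measured costs:

```python
# Back-of-envelope check of the numbers above. All figures are assumptions
# from the post: ~$160/year running cost per machine, 3-year lifespan.
YEARS = 3
YEARLY_COST = 160          # per machine, per year (assumed)
BUILD_PRICE = 1000         # one budget render node (assumed)
N_NODES = 3

per_node = BUILD_PRICE + YEARS * YEARLY_COST    # 1000 + 480 = 1480
farm_total = N_NODES * per_node                 # 3 * 1480 = 4440

print(f"per node over {YEARS} years: ${per_node}")
print(f"farm total: ${farm_total}")   # just shy of $4,500
```

So the real question becomes whether one ~$4,500 machine out-renders three $1,480 nodes over the same period.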


Having fewer computers is always faster, because you are reducing if not eliminating the data transfer between machines.

Your concern about multiple small computers in case of one going down is a valid one.


I don't think it would Always be faster. Always seems like a strong word. Yes, you're eliminating data transfer between machines, but for a 6-hour render, the time spent on data transfer is negligible compared to the render itself, isn't it?
Back when I had a render farm, we noticed that some jobs really only benefited from a few machines, while others benefited from all 16. It depended greatly on how long each bucket took. When buckets took a long time, the scaling was nearly linear. When they were Really fast, there was actually a slowdown, because only the first machine did most of the rendering. Some of the others would only pick up buckets Just at the end, and that's where the extra overhead of compiling everything back together and the extra network traffic comes in.
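That bucket-time effect can be sketched with a toy model. This is not how any particular engine actually schedules work; the function name, numbers, and the flat per-bucket overhead are all made up just to illustrate why long buckets scale nearly linearly and short ones don't:

```python
import math

# Toy model (illustrative only): machines greedily pull buckets, and each
# networked bucket pays a fixed transfer/merge overhead that a purely
# local render does not.
def speedup(n_buckets, bucket_secs, n_machines, overhead_secs):
    local = n_buckets * bucket_secs                      # one machine, no network
    rounds = math.ceil(n_buckets / n_machines)           # waves of parallel buckets
    networked = rounds * (bucket_secs + overhead_secs)   # farm wall-clock time
    return local / networked

# Long buckets: the 1s overhead barely matters, scaling is near-linear.
print(speedup(256, bucket_secs=60, n_machines=16, overhead_secs=1))   # ~15.7x

# Very short buckets: overhead dominates, most of the win evaporates.
print(speedup(256, bucket_secs=0.5, n_machines=16, overhead_secs=1))  # ~5.3x
```

With 60-second buckets the 16-machine farm gets ~15.7x; with half-second buckets the same farm only manages ~5.3x, which matches the "nearly linear when buckets are slow, slowdown when they're fast" experience above.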

I'm not really sure how KeyShot is sending data, though. The other engines I've used send the entire scene to every computer, which then talk back and forth about which buckets they're working on, and something compiles it all at the end. Another method is for each machine to render a completely predetermined sector, which is Not a good method because it doesn't account for one machine being faster or slower than another.
And then there's the newer method that real-time rendering uses. You can use multiple machines there too, but I have no idea how that works, so I won't even share my thoughts on it. I do know that when I render real-time in V-Ray, adding just one machine makes it faster by quite a bit, but it's Not linear.

I haven't actually tried KeyShot network rendering yet. I'm trying to get in touch with Jeff to see if I can get a demo so I can put some numbers together for management. I'd like to see if we can use KeyShot exclusively. We'd still be animating in Maya and Max, but the render engine would stay consistent. I think it might work for us, but I do need to test it out.


Hey Andy, I just went through a similar experience. I had a limited budget (no more than 3K) but needed a way for three industrial designers (including myself) to start rendering some short animations for marketing purposes. We all have Dell M6800 laptops, but wanted to offload the renders so we could continue to work while the animations were rendering.
I had originally spec'd one fast PC, but one of our IT guys recommended getting some older Xeon servers off of eBay and using those. We went that route, and I was able to buy four of the servers in the link below.
I had originally intended to use three and keep the fourth as a parts machine, as I wasn't too sure how reliable they would be. I am happy to report that all four servers fired up and ran well, so we now have 96 cores (with Hyper-Threading). The four servers were just a bit over $1,200, then I had to buy four Windows licenses ($139 each). We bought the Win7 seats from Newegg even though I found cheaper licenses at other sites (my IT guys didn't have a good feeling about some of the sites I was finding licenses at) :-)
We went slightly over budget, as we had originally planned on buying the 64-core Network Render add-in, but since we had 96 cores we sucked it up and bought that.
We benchmarked the render farm against the laptops we had been rendering on: a scene that took 1 hour and 47 minutes on my laptop took 14 minutes on the network. I was very happy with that, and on top of it, while those renders are going we have our laptops back at full power. My guys didn't realize how big a deal that would be.
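For reference, the quick math on that benchmark (using only the two times quoted above):

```python
# Speedup from the quoted benchmark: laptop 1h47m vs. farm 14 minutes.
laptop_min = 1 * 60 + 47   # 107 minutes on the laptop
farm_min = 14              # minutes on the four-server farm

print(f"speedup: {laptop_min / farm_min:.1f}x")   # ~7.6x
```

Roughly a 7.6x speedup, before even counting the freed-up laptops.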

We did lose the farm for two days while the building's network was being upgraded, and having to render on our local machines again is such a bad workflow. I know these servers use a fair bit of power, which is a negative, but compared to the time we save by being able to keep working while our renders finish, I think we come out ahead.

Here's that link. There are newer-gen Xeon chips at similar prices occasionally on eBay, but this guy (I have no affiliation whatsoever) seems to have a steady supply, and he emailed me to say he can build as many as I might need at a slight discount from the eBay listing.


So this is Definitely where I end up needing to look at benchmarks, because the Xeon X5690 at 3.47GHz in my old machine renders considerably slower than my current i7-6700HQ at 2.6GHz. The only thing I can really attribute that to is the newer chipset and processor architecture using smarter paths and software.
I noticed this giant discrepancy when a coworker bought a laptop. I think it cost $1,500 or so, so definitely not high end; it's just a nice laptop. We made sure it had a current processor, though. His machine was just Killing mine: I was doing 20-minute renders, and he thought I was crazy because he was pulling off the exact same render in 5 minutes. So I did some testing. Some things were a Lot closer. A couple of 7-minute renders were nearly the same, and some Really quick small renders were actually faster on the older Xeon. But the majority of everything was considerably faster on his newer consumer laptop from Costco. It also uses a lot less power, btw.

My point here is, I question the need for Xeon processors for this, especially if you are going the budget route. And benchmarking only one scene doesn't really cut it either. You get very different results with the real-time render and with the advanced render, and if you use lots of advanced materials there's no question the advanced render will save you time once you tweak the settings a bit. Although, for animations, you'll likely get more ugly noise that way, noise you don't perceive until you see it moving and realize the GI doesn't look good.

So I'm leaning more toward getting the newest chipset and processor I can at a budget price instead of buying already-old hardware. I may just start with one, assuming I can get support for it, and compare its speed with other machines in the building. That should tell us whether a newer build is beneficial.

I totally agree with keeping renders off your own machine. We currently have only a couple of seats of KeyShot, on shared machines dedicated to rendering. That ensures personal machines are always free from rendering, plus it's cheaper than floating licenses.


Yep, we use the maximum sample setting for most renders, and tweak the samples to suit the materials and the output quality needed.
I only listed one of my benchmarks, but the servers beat my laptop (Dell M6800, i7-4710MQ @ 2.5GHz) in every test I threw at them, by a pretty significant margin.

Just thought I'd mention the used server route, as I hadn't thought of it before IT mentioned it.
BTW, I was very against it before trying it. Ultimately, it just made better sense from a budget/support standpoint to try it, and it ended up fitting our needs. Keep us posted on what you end up with.


Yeah, I bet a 6th-gen i7 would likely be faster than your server machines too. That's what I was finding here. We were joking around about getting some of those tiny computers that are basically a laptop without a monitor... until the laughter subsided and we realized that actually isn't a terrible idea given the speed vs. cost. Now, if you needed a good GPU or something like that, that's another story. And I'm not exactly sure how much slowdown you'd get from some of the memory pipeline issues you might hit. But it really isn't a bad idea to try out.
The only issue is finding a way to test it out. It's not like you can just try it and then send it back.
Once we get a network render test going we'll test a variety of the computers we have, and see what we can learn from that. We do have quite a few different types of machines because we have a wide mix of needs.