Hello,

Right now I've got 3 computers with 4 GPUs each: Titan X, Titan and 970.
One problem when putting in as many GPUs as possible, like 8 for example, is
the limit of PCIe lanes that the CPU supports.
For now I've found the sweet spot is 4 GPUs and a 5930K or similar CPU
that supports 40 PCIe lanes. The power supply for that system is also a bit
easier to spec than for 8 GPUs.
If you want to go with more GPUs you would probably have to look into
dual Xeon setups as well, which increases the price significantly.
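
Just to illustrate the lane math, here's a tiny back-of-the-envelope sketch
in Python (the lane counts are assumptions based on the numbers above, not
anything Redshift itself cares about):

# Rough PCIe lane budget: why a 40-lane CPU (e.g. a 5930K) pairs well with 4 GPUs.
cpu_lanes = 40          # PCIe 3.0 lanes on a 5930K-class CPU
gpus = 4
lanes_per_gpu = 8       # running each card at x8; x16 gave little extra render speed

used = gpus * lanes_per_gpu
print("GPUs use {} of {} lanes, {} left for storage/NIC".format(
    used, cpu_lanes, cpu_lanes - used))

# 8 GPUs at x8 would need 64 lanes, more than a single 40-lane CPU provides,
# which is why you end up looking at dual Xeon (or PLX switch) boards.
print("8 GPUs at x8 need {} lanes -> over the {}-lane budget".format(
    8 * lanes_per_gpu, cpu_lanes))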
Depending on the scenes you render, besides GPU memory it is important to
have enough system RAM as well. When all GPUs render the same frame, scaling
is around 2.9x with 4 GPUs compared to a single GPU. The solution is simple:
using Deadline or similar render management you can assign 1 or 2 GPUs per
frame, but that also runs multiple SI instances, so extra RAM is always good
to have. So far all my machines are at 32 GB RAM, but I am considering
upgrading to 64 GB soon just to give some more breathing room when allocating
a single GPU per frame, i.e. running 4 instances at once.
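
To put the scaling point in numbers, here's a quick sketch in Python (the
2.9x figure is from my own tests above; the frame time is just an example
value):

# Throughput: 4 GPUs on one frame vs 1 GPU per frame across 4 instances.
single_gpu_frame_time = 10.0   # minutes per frame on one GPU (example value)
scaling_4_gpus = 2.9           # observed speedup with all 4 GPUs on the same frame

# All 4 GPUs on one frame: frames finish sooner, but throughput is only ~2.9x.
frames_per_hour_combined = 60.0 / (single_gpu_frame_time / scaling_4_gpus)

# One GPU per frame via Deadline: 4 instances, near 4x throughput,
# at the cost of roughly 4x the RAM footprint (hence looking at 64 GB).
frames_per_hour_split = 4 * (60.0 / single_gpu_frame_time)

print("4 GPUs per frame: {:.1f} frames/hour".format(frames_per_hour_combined))
print("1 GPU per frame : {:.1f} frames/hour".format(frames_per_hour_split))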

Probably sounds a bit messy, but to sum up, what I've found works nicely so
far:
- 4 GPUs per computer, with reference (blower) coolers that exhaust heat out
the back of the case,
- any motherboard that supports 4 GPUs at PCIe 3.0 x8 (4x x16 is nice but
didn't show much improvement in performance, while the price is much higher),
- a CPU with 40 PCIe lanes (3930K, 4930K, 5930K or 5960X and similar),
- 32+ GB RAM,
- ~1500 W PSU (4x Titan X I saw drawing around 900-1200 W at the UPS when
rendering; better to leave some extra headroom, of course; see the rough
power-budget sketch below),
- a well-ventilated case (the Corsair 750D is a nice example, much smaller
than the 900D but with pretty good ventilation),
and in an air-conditioned room there are no issues with overheating. Liquid
cooling for 4-GPU setups is rather expensive and doesn't justify the cost
in my opinion.
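
On the PSU sizing, here's a rough power-budget sketch in Python (the
per-component wattages are assumptions; only the 900-1200 W total at the UPS
is something I actually saw):

# Rough power budget for a 4x Titan X node; component wattages are assumptions.
gpu_power = 250        # W per Titan X (board power spec)
gpus = 4
cpu_power = 140        # W for a 5930K-class CPU
rest = 100             # W for motherboard, RAM, drives, fans (rough guess)

peak = gpus * gpu_power + cpu_power + rest
print("Estimated peak draw ~{} W, ~{} W headroom on a 1500 W PSU".format(
    peak, 1500 - peak))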

There is also the option of going with server rack cases and putting
everything in a cabinet, of course, but that depends on whether you are going
to use them as workstations as well or as pure rendering nodes.

As for GPUs, in most cases the 970 is just a bit behind the Titans, but in a
couple of scenes where I had some fur it was probably hitting memory limits
and ended up several times slower, in the range of 3-4 times slower than the
Titans with more RAM.

On Wed, Aug 5, 2015 at 12:05 PM, Morten Bartholdy <[email protected]>
wrote:

> I know several of you are using Redshift extensively or only now. We are
> looking into expanding our permanent render license pool and are
> considering the pros and cons of Arnold, Vray and Redshift. I believe
> Redshift will provide the most bang for the buck, but at a cost of some
> production functionality we are used to with Arnold and Vray. Also, it will
> likely require an initial investment in new hardware as Redshift will not
> run on our Pizzabox render units, so that cost has to be counted in as well.
>
>
>
> It looks like the most price-efficient Redshift setup would be to make a
> few machines with as many GPUs in them as physically possible, but how have
> you guys set up your Redshift renderfarms?
>
>
> I am thinking a large cabinet with a huge PSU, lots of cooling, as much
> memory as possible on the motherboard and perhaps 8 GPUs in each. The GTX 970
> probably offers the most power per price point, while Titans would make sense if
> more memory for rendering is required.
>
>
> Any thoughts and pointers will be much appreciated.
>
>
>
> Morten