Hi Morten,

You may find it helpful to register in the Redshift3D.com forums; AFAIK you'll need at least one registered license to get access to the "Registered users only" forum area.

There are a few threads there about hardware and multi-GPU systems, some user reports comparing single-GPU vs. multi-GPU rendering, plus some developer info about roadmaps and such.

Personally, I'm a big fan of Redshift 3D.

Still, here are a few things to consider that you may find useful:

- Compared to Arnold, there is no HtoA or C4DtoA equivalent, i.e. no direct C4D or Houdini support.
- Compared to Arnold, rendering Yeti is not yet supported in Redshift3D; it's being looked at, but there's no ETA.
- Maya Fluids, volume rendering, FumeFX, i.e. fire, smoke, dust and the like, aren't in Redshift3D so far.

- Multitasking: compared to CPU-based multitasking and task switching (e.g. switching between rendering in Maya or Softimage while simultaneously comping in Nuke and painting textures in Photoshop or Mari), having multiple applications fight over very limited GPU VRAM can pose GPU-specific limitations. Redshift3D can use system RAM to supplement VRAM, but it can become a headache when other, "dumber" apps simply block VRAM for their own caching. It's well worth running a good few hard tests in typical workflow scenarios. Maya, Substance Painter/Designer, Nuke and Photoshop all offer one kind of GPU caching or GPU acceleration option or another. My personal feeling is that such things never get tested in real-world, multiple-applications-running scenarios.
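If you want hard numbers for those tests, a quick way is to just poll nvidia-smi while you work. Here's a rough Python sketch (it only assumes nvidia-smi is on your PATH); it prints per-GPU VRAM use every few seconds so you can see which app is actually grabbing memory while Redshift renders:

    # Minimal VRAM monitor: polls nvidia-smi and prints per-GPU memory use,
    # so you can see which of your running apps is grabbing VRAM.
    # Assumes nvidia-smi is on PATH.
    import subprocess
    import time

    def gpu_memory():
        """Return a list of (used_mb, total_mb) tuples, one per GPU."""
        out = subprocess.check_output([
            "nvidia-smi",
            "--query-gpu=memory.used,memory.total",
            "--format=csv,noheader,nounits",
        ]).decode()
        stats = []
        for line in out.strip().splitlines():
            used, total = (int(x) for x in line.split(","))
            stats.append((used, total))
        return stats

    if __name__ == "__main__":
        while True:
            for i, (used, total) in enumerate(gpu_memory()):
                print("GPU %d: %5d / %5d MB" % (i, used, total))
            print("-" * 24)
            time.sleep(5)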

At a glance, it sounds easy enough to have separate, dedicated GPUs running headless for rendering while reserving one GPU for viewport display and other apps. But to be honest, all of this is still so new that, great as it is, it keeps pushing against grown legacy workflows and their boundaries, and in doing so it may sometimes hurt.
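One generic way to try the "headless render GPUs" idea is to restrict which GPUs a render process can see via the standard CUDA_VISIBLE_DEVICES environment variable. This isn't Redshift's own mechanism (it has its own GPU selection in its preferences); it's just a CUDA-level sketch in Python, and "my_render_command" is a placeholder for whatever batch command you actually use:

    # Sketch: launch a render job that only sees GPUs 1 and 2, leaving GPU 0
    # free for viewport display and other apps. CUDA_VISIBLE_DEVICES is a
    # standard CUDA environment variable; "my_render_command" is a
    # placeholder, not a real Redshift CLI.
    import os
    import subprocess

    def launch_render(cmd, gpu_ids):
        env = os.environ.copy()
        env["CUDA_VISIBLE_DEVICES"] = ",".join(str(i) for i in gpu_ids)
        return subprocess.Popen(cmd, env=env)

    if __name__ == "__main__":
        proc = launch_render(["my_render_command", "scene_file"], gpu_ids=[1, 2])
        proc.wait()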

My very personal suggestion is:

- Step 1: a starter kit is just one GPU, optimally a Titan X with 12 GB VRAM.
- Step 2: add a second GPU, running headless, reserved for rendering.
- Step 3: add a third GPU and compare speed to step 2.
- Step 4: price/performance balancing, comparing a 1-, 2- or 3-GPU GTX 970 render rig with the above (see the rough comparison sketch below).

It could be that you find you like running 1 Titan X for viewport display and multiple apps, and 2 GTX 970s for render jobs.
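For step 4, a back-of-envelope price/performance comparison is all you really need. A small Python sketch below; all the prices and render times are placeholders you'd swap for your own benchmark numbers:

    # Back-of-envelope price/performance comparison for step 4.
    # All prices and render times are placeholders -- replace them with
    # your own shop prices and benchmark results.
    configs = {
        "1x Titan X": {"price_usd": 1000, "render_secs": 300},
        "2x GTX 970": {"price_usd":  700, "render_secs": 260},
        "3x GTX 970": {"price_usd": 1050, "render_secs": 190},
    }

    for name, c in configs.items():
        speed = 1.0 / c["render_secs"]             # relative frames per second
        per_dollar = speed / c["price_usd"] * 1e6  # scaled for readability
        print("%-12s  %4d s/frame  %6.2f (speed per dollar, scaled)" % (
            name, c["render_secs"], per_dollar))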


Another thing.

Multi-socket CPU boards and PCIe slots: it seems easier to get solid single-socket CPU boards with lots of PCIe slots.

Again, from my personal experience running a current-generation dual-socket Xeon rig, it is annoying how many CPU cycles I see wasted idling in most of my daily chores. Except for pure rendering with Arnold or the like, I mostly find one whole CPU, and even most of the other CPU's cores, simply not being used properly by software.
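If you'd rather check that for your own workload than take my word for it, something like this (it assumes the third-party psutil package is installed, pip install psutil) will log how many cores are actually busy while you work:

    # Quick check of how many CPU cores your day-to-day apps actually use.
    # Assumes the third-party psutil package is installed.
    import psutil

    if __name__ == "__main__":
        for _ in range(12):  # sample for roughly one minute
            per_core = psutil.cpu_percent(interval=5, percpu=True)
            busy = sum(1 for p in per_core if p > 50)
            print("busy cores (>50%%): %2d / %d  %s" % (
                busy, len(per_core), per_core))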

I think a good sweet spot would have been to just go for one fast, solid 6-core (budget) or 8-core (current) CPU, unless of course it's for a dedicated render slave...


Cheers,

tim


On 05.08.2015 at 12:05, Morten Bartholdy wrote:

I know several of you are using Redshift extensively or even exclusively now. We are looking into expanding our permanent render license pool and are considering the pros and cons of Arnold, Vray and Redshift. I believe Redshift will provide the most bang for the buck, but at the cost of some production functionality we are used to with Arnold and Vray. Also, it will likely require an initial investment in new hardware, as Redshift will not run on our pizza-box render units, so that cost has to be counted in as well.

It looks like the most price-efficient Redshift setup would be to build a few machines with as many GPUs in them as physically possible, but how have you guys set up your Redshift render farms?


I am thinking of a large cabinet with a huge PSU, lots of cooling, as much memory as possible on the motherboard, and perhaps 8 GPUs in each. The GTX 970 probably offers the most power per price point, while Titans would make sense if more memory for rendering is required.


Any thoughts and pointers will be much appreciated.



Morten



