Hi Peter,

First of all, let me apologize for taking forever to respond.  We've had a
pretty crazy last couple of days with the alpha launch.

You're absolutely right that speed is worth very little or even nothing if
you can't actually get the image you or the client wants out of the damned
thing, whether it's due to missing features, stability issues, limitations
on content complexity, lack of flexibility or ease of use.  We're very
sensitive to that and while I can't claim that we're there yet, we do plan
to have all the bells and whistles, stability, flexibility and ergonomics
to make Redshift a legitimate choice for production rendering.

That being said, speed can be important for a number of reasons.  A big one
is iteration time.  Everything else being equal, faster rendering results
in better images because you have more opportunity to iterate, experiment,
tweak and generally be creative.
Another one is cost.  This will vary a lot for different types of users,
but if you suddenly don't need a render farm because your workstation
renders just as fast, you've saved money.  Or if you need a farm with only
100 nodes instead of 1000, you've saved some more money.

I should point out that Redshift doesn't just do basic raytracing and GI
but actually already supports many of the features you mentioned.  We do
point-based SSS, motion blur (not deformation blur yet, but we're working
on that right now), instancing and refractions.  For a third-party renderer,
I would say that our support for the native Softimage shaders is probably
on par with, or possibly better than, the others'.

And we're not done yet!  Proper ICE support is a big one, as is proper
support for AOV/render channels.  Hair is another.  These are all in the
plan and have already had some significant thought (and in some cases
initial work) put into them.

-Nicolas


On Thu, Mar 14, 2013 at 1:21 AM, <[email protected]> wrote:

>   > Please also bear in mind that we're still just in alpha and
> constantly improving performance.  We're kind of obsessed with speed :)
>
> Speed is great, of course – but IMO it’s not the most important factor.
>
> Over the years we have all been doing productions with rather long
> rendertimes, running into hours per frame and more. The bottom line was
> rarely “it has to be rendered in X amount of time” – clients couldn’t care
> less. It has to be good enough first, and rendered in time for delivery.
>
> For a long time now I’ve been looking forward to a viewport/GPU mental ray
> replacement in Softimage.
> Hopefully staying below 5 minutes for complex HD images and within 1
> minute for simpler stuff – but more importantly, it should have the
> bells and whistles of a modern raytracer and deliver production-quality
> rendering that can be very precisely tweaked by the user.
>
> It’s very frustrating to get a promising image very fast but not be able
> to make the image truly final – some remaining artifacts, a sampling
> problem, no ability to fine-tune this or that effect, or simply the lack of
> a feature you really require – so in turn you have to bite the bullet and
> go back to good old offline rendering, and the corresponding rendertimes
> will be twice as frustrating.
> What I want is very extensive support for lighting features – not just GI /
> AO / soft shadows / soft reflections – but also SSS, raytraced refractions,
> motion blur, volumetrics, ICE support, instancing, hair – and a good set of
> shaders, plus support for the rendertree and as many of the factory shaders
> as possible.
>
> Mental ray never became the standard it is because of speed – but because
> of what one can achieve with it. (And then you have to turn off a few
> things left and right for final renders in order to keep rendertimes
> acceptable.)
> Obviously in this day and age its features are getting long in the tooth
> as well, which leaves the door wide open for others – but it remains a
> reference for what a renderer should at least aspire to.
>
> Just some thoughts and hints on what matters to me when considering a new
> renderer.
>
