Hey guys,

I'm going to respond to the last few messages regarding the importance of
speed later, but in the meantime here is a video of some live rendering in
Softimage.

http://youtu.be/fjCguRdSlV0

-Nicolas



On Thu, Mar 14, 2013 at 1:17 PM, <[email protected]> wrote:

>   you are right of course, as always.
>
> what is really needed is a fine balance between quality and speed,
> at a price point that is affordable yet high enough to sustain
> development, and available before my retirement.
>
>
>  *From:* Andy Moorer <[email protected]>
> *Sent:* Thursday, March 14, 2013 9:02 PM
> *To:* [email protected]
> *Subject:* Re: Announcing Redshift - Biased GPU Renderer
>
>  Well said, but speed is still important. Deadlines are tight, and
> particularly in the iterative direction phase, re-rendering often takes
> much more time than making a directed change. "Dailies" reflect this: a
> series of directed tweaks to a shot can stretch over several days, in part
> to allow time to make changes and get them rendered. That's a major
> limitation of working with rendered VFX elements versus composited
> effects, which can often be altered in near real time.
>
> Sent from my iPad
>
> On Mar 14, 2013, at 4:21 AM, <[email protected]> wrote:
>
>    > Please also bear in mind that we're still just in alpha and
> constantly improving performance.  We're kind of obsessed with speed :)
>
> Speed is great, of course, but IMO it’s not the most important factor.
>
> Over the years we have all been doing productions with rather long
> render times, running into hours per frame and more. The bottom line was
> rarely “it has to be rendered in X amount of time”; clients couldn’t care
> less. It has to be good enough first, and rendered in time for delivery.
>
> I’ve been looking forward to a viewport/GPU mental ray replacement in
> Softimage for a long time now.
> Hopefully it stays below 5 minutes for complex HD images and within 1
> minute for simpler stuff. But more importantly, it should have the bells
> and whistles of a modern raytracer and deliver production-quality
> rendering that can be very precisely tweaked by the user.
>
> It’s very frustrating to get a promising image very fast but not be able
> to make it truly final: some remaining artifacts, a sampling problem, no
> ability to fine-tune this or that effect, or simply the lack of a feature
> you really require. Then you have to bite the bullet and go back to good
> old offline rendering, and the corresponding render times will be twice as
> frustrating.
> I’d want very extensive support for lighting features: not just GI / AO /
> soft shadows / soft reflections, but also SSS, raytraced refractions,
> motion blur, volumetrics, ICE support, instancing and hair, plus a good
> set of shaders, support for the render tree, and as many of the factory
> shaders as possible.
>
> Mental ray never became the standard it is because of speed, but because
> of what one can achieve with it (and then you have to turn off a few
> things left and right for final renders to keep render times acceptable).
> Obviously, in this day and age its features are getting long in the tooth
> as well, which leaves the door wide open for others. But it remains a
> reference for what a renderer should at least aspire to.
>
> just some thoughts and hints of what matters to me when considering a new
> renderer.
>
>
