On 2/9/2017 1:27 PM, Gabor L. Toth wrote:

> Hi Nathan, thanks for the clarification! But then what is the 
> preferred workflow with this plugin? I assume many companies are 
> using it with deep input. We can't keep the comp fully in deep until 
> the defocus node; we are putting together the passes before that, 
> with all kinds of color corrections and other nodes... Any advice?

Well, the "preferred" workflow would be to render and composite entirely 
with deep images, even though that's almost guaranteed to be 
impractical. :) If you think about it from the plugin maker's 
perspective, it's not really their job to design a deep compositing 
workflow that makes it easier for people to use deep data when 
defocusing... that will always be the end-user's responsibility.
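In practice, the most common compromise is to do the flat comp as usual 
and then push the graded 2D result back into the original deep samples 
with a DeepRecolor just upstream of the deep-aware defocus. Here's a 
rough Python sketch using only stock Deep nodes (the file path and knob 
values are placeholders, and it's worth double-checking DeepRecolor's 
input order in your Nuke version):

    import nuke

    # Deep render straight off disk.
    deep = nuke.nodes.DeepRead(file='/shots/sh010/fx_deep.%04d.exr')

    # Flatten a copy for the normal 2D work (grades, merges, etc.).
    flat = nuke.nodes.DeepToImage(inputs=[deep])
    graded = nuke.nodes.Grade(inputs=[flat], white=1.2)

    # Redistribute the graded flat color across the original deep
    # samples so the defocus node downstream still sees per-sample
    # depth. Input 0 is the deep stream, input 1 the flat color here.
    recolor = nuke.nodes.DeepRecolor(inputs=[deep, graded])

The caveat is that DeepRecolor spreads the flat color across the 
samples according to their alpha, so any per-sample color variation 
within a pixel is lost. For most defocus work that's acceptable, but 
it's exactly the kind of imprecision I mean below.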

To be honest, deep rendering and compositing as it exists in 
off-the-shelf software is pretty imprecise, and the toolset (which only 
really exists in Nuke) has stagnated. In some cases, things like OpenDCX 
(http://www.opendcx.org/) can provide drop-in functionality and 
efficiency improvements, but unless facilities are keeping a whole lot 
of crazy evolutionary tech in house, there hasn't been a lot happening 
on the deep front (no pun intended). It would be a nice surprise to see 
some improvements to Nuke's Deep toolset sometime in the near future, 
but my feeling (and most of my experience) points to deep being 
relegated to a position similar to that of the once-lauded end-to-end 
stereo workflow: more trouble than it's worth, except in some very 
specific scenarios.


-Nathan