I'm not sure we mean the same thing by "sample" here! I'll try to explain 
myself with a picture :P

[Image: http://i.imgur.com/p8bzPTP.jpg ]

The 3 objects contributing to the pixel in the example are opaque except for the 
green one (0.5 transparent), and I'm computing the alpha this way:

point0-material-alpha = 1
point1-material-alpha = 0.5 (transparent object)
point2-material-alpha = 1

alpha-point0 = 6 / (6+8+5) * 1 = 0.316
alpha-point1 = 8 / (6+8+5) * 0.5 = 0.421 * 0.5 = 0.211
alpha-point2 = 1 * 1 = 1

Am I doing that right?
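
In code, the weighting I'm applying to point0 and point1 is roughly this (just a 
hypothetical Python sketch with made-up names, not any real API; the 6/8/5 
coverage counts are from the picture, and the last sample I simply leave at full 
material alpha as in the numbers above):

# Rough sketch of the per-sample alpha weighting from the example above.
# Names and data layout are made up purely for illustration.
samples = [
    {"name": "point0", "coverage": 6, "material_alpha": 1.0},
    {"name": "point1", "coverage": 8, "material_alpha": 0.5},  # green, 0.5 transparent
    {"name": "point2", "coverage": 5, "material_alpha": 1.0},
]

total_coverage = sum(s["coverage"] for s in samples)  # 6 + 8 + 5 = 19

for s in samples[:-1]:
    # fraction of the pixel this object covers, times its material alpha
    alpha = s["coverage"] / float(total_coverage) * s["material_alpha"]
    print(s["name"], round(alpha, 3))  # point0 -> 0.316, point1 -> 0.211

# The last sample I just leave at full material alpha, as above:
print("point2", samples[-1]["material_alpha"])  # -> 1.0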

Another question:

Can deep compositing in Nuke work with only one whole-scene deep render? Or is 
it mandatory to render each object/object-group separately, depending on what 
(fog, other deep renders...), where and how we want to composite between the 
deep pixels?

I'm asking because I'm trying to get the deep render from my engine to be a 
single one-pass deep render that's ready for deep comp ;), so the Nuke user 
doesn't have to care about splitting the render.
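
Just to make the question concrete, I'm picturing the two workflows on the Nuke 
side roughly like this (a hypothetical Nuke-Python sketch; node classes are from 
the standard Deep toolset as I understand it, file paths are placeholders, and 
the exact knob names may differ):

# Rough node-graph sketch of the two workflows I'm asking about.
# Placeholder file paths; for illustration only.
import nuke

# Option A: one whole-scene deep render in a single deep stream
scene = nuke.nodes.DeepRead(file="/renders/scene_deep.####.exr")

# Option B: separate per-object deep renders merged in the comp
fg = nuke.nodes.DeepRead(file="/renders/character_deep.####.exr")
env = nuke.nodes.DeepRead(file="/renders/environment_deep.####.exr")
merge = nuke.nodes.DeepMerge()
merge.setInput(0, fg)
merge.setInput(1, env)

# Either way, flatten back to a 2D image at the end
flat = nuke.nodes.DeepToImage()
flat.setInput(0, merge)  # or flat.setInput(0, scene) for option A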


