I am implementing Deep output for a render engine, and I have some doubts about 
how Nuke deals with RGBAZ data.

The alpha channel is my main source of problems, and I suspect I am building 
the alpha information of the deep samples incorrectly.

Imagine I have a sphere in front of a plane. The 2D border between them is not 
perfectly sharp, of course, so I have pixels with 2 samples: 1 for the sphere 
and 1 for the plane.

I'm computing the alpha from the number of samples the pixel receives from 
each object, so if I have 5 samples from the sphere and 15 from the plane, I 
set the alpha of the sphere sample to 5/(15+5) = 0.25. Should I leave the 
alpha for the plane sample at 1 (no transparency, of course), or set it to 
0.75?
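
To make the question concrete, here is a minimal Python sketch of the 
arithmetic I mean, assuming the flattened pixel is produced by a standard 
front-to-back over of the samples (which is my understanding of what 
DeepToImage does; flatten_alpha is just my own helper, not a Nuke call):

    def flatten_alpha(sample_alphas):
        # Front-to-back "over" of sample alphas, nearest sample first.
        out = 0.0
        for a in sample_alphas:
            out = out + (1.0 - out) * a
        return out

    sphere_alpha = 5.0 / (5 + 15)               # 0.25, from the sample counts

    # Option A: leave the plane sample fully opaque
    print(flatten_alpha([sphere_alpha, 1.0]))   # 0.25 + 0.75 * 1.0  = 1.0

    # Option B: give the plane sample its coverage fraction instead
    print(flatten_alpha([sphere_alpha, 0.75]))  # 0.25 + 0.75 * 0.75 = 0.8125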

Forgetting transparency again (only DOF), how should I compute a sample's 
alpha for a pixel when I have 3 or more samples? Let's do the math:

Pixel with color of 2 object borders (i.e. cube and sphere) in front of a 
background plane:

cube = 5 samples
sphere = 10 samples
background plane = 10 samples
cube alpha = 5/(5+10+10) = 0.2
sphere alpha = 10/(5+10+10) = 0.4
background plane alpha = 10/(5+10+10) = 0.4? = 1.0?
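
The same sketch for this three-sample case (again assuming a front-to-back 
over flatten; over() is my own helper, not anything from Nuke):

    def over(front, back):
        # One step of front-to-back "over" on alpha.
        return front + (1.0 - front) * back

    cube_alpha   = 5 / 25.0    # 0.2
    sphere_alpha = 10 / 25.0   # 0.4
    plane_cov    = 10 / 25.0   # 0.4, coverage fraction for the plane sample

    # All three samples carry their coverage fraction:
    print(over(over(cube_alpha, sphere_alpha), plane_cov))  # 0.2 -> 0.52 -> 0.712

    # Background sample left fully opaque instead:
    print(over(over(cube_alpha, sphere_alpha), 1.0))        # 0.2 -> 0.52 -> 1.0

So with coverage fractions on every sample the flattened pixel only reaches 
alpha 0.712 instead of 1.0.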

If I do that, I get artifacts when merging with other deep images (between the 
sphere/cube and the background), or when DeepCrop'ing the background, for example.

Any thoughts/advice?

Abraham


