Hi Abraham,

You don't want the alpha values to be the same for each of the multiple samples 
you're collating for each object.  Rather, you want each sample's contribution 
(colour and alpha) to be the same after taking into account the transmission 
(1 - alpha) through every sample in front of it; that is what gives unbiased 
box filtering.

In the case of the transparency of your background plane, I assume this is a 
pixel taken from the edge of the plane.  If it were inside the plane then you 
should get a sample for every jitter value.  Say you're using 4x4 
multi-sampling: then you should get 16 samples of the opaque plane, which 
gives 16/16 * 1.0 = 1.0 alpha.  If it's from the edge of the plane and you've 
only got 15 samples, then one jitter missed the plane, implying that the 
missing 1/16 of the colour contribution should come from behind the plane.  
This gives 15/16 * 1.0 = 0.9375 alpha.

Now you have 15 samples which need to accumulate to the plane alpha of 0.9375.  
These may have different colour and depth values.  For unbiased box filtering, 
each of these samples should make an equal contribution: you want each sample 
to contribute the same amount (1/16) of the total alpha.  The front sample has 
nothing in front of it to attenuate it when the deep data is flattened, so its 
alpha is simply 1/16 * 1.0.  The sample behind it, however, is attenuated by 
the front sample: 

        (1 - alpha(0)) * alpha(1) = 1/16 * 1.0
        alpha(1) = 1/16 * 1.0 / (1 - alpha(0))
        alpha(1) = 1/16 * 1.0 / (1 - 1/16)        (alpha(0) = 1/16, from above)
        alpha(1) = (1/16) / (15/16)
        alpha(1) = 1/15

In fact, for an opaque plane with n subsamples the values are:

        alpha(0) = 1/n
        alpha(1) = 1/(n-1)
        alpha(2) = 1/(n-2)
        ...
        alpha(n-1) = 1/1 = 1

which produces an opaque plane if the pixel has samples for every jitter value.

For unbiased sampling this produces an iterative equation allowing each 
sample's alpha to be determined from the previous one:

        alpha(k+1) = alpha(k) / (1 - alpha(k))
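
If it helps, here's a minimal Python sketch (my own illustration, not Foundry 
code) that builds the per-sample alphas this recurrence produces and checks 
that flattening them with the usual front-to-back over operation recovers the 
expected coverage:

        def deep_alphas(hits, total):
            # Per-sample alphas for `hits` samples out of `total` jitter
            # positions.  The closed form 1/(total - k) is exactly what the
            # recurrence alpha(k+1) = alpha(k) / (1 - alpha(k)) produces
            # when started from alpha(0) = 1/total.
            return [1.0 / (total - k) for k in range(hits)]

        def flatten(alphas):
            # Standard over: each sample's alpha is attenuated by the
            # transmission of everything in front of it.
            out, transmission = 0.0, 1.0
            for a in alphas:
                out += transmission * a
                transmission *= 1.0 - a
            return out

        print(deep_alphas(2, 16))            # [0.0625, 0.0666...] = 1/16, 1/15
        print(flatten(deep_alphas(15, 16)))  # ~0.9375 (= 15/16)
        print(flatten(deep_alphas(16, 16)))  # ~1.0, fully opaque

Each flattened contribution comes out to 1/16 (up to floating point), which 
is exactly the unbiased box weight we were after.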

This recurrence is nice and easy and will produce good results when merging.  
(Consider the case of rendering all the planes separately and merging the 
resulting deep output.)  Unfortunately it requires identifying which plane 
each sample belongs to, even when samples from different planes are 
interleaved in depth, and applying the alpha values on a per-plane basis.

It is possible to avoid identifying the planes by computing the desired final 
contribution of each of the samples (colour and alpha) taken at each jitter 
position.  This is similar to the standard 2D render except that, instead of 
applying the contributions by performing the multiplication on each of the 
samples (colour and alpha) immediately, the multipliers are stored and only 
applied when the final collated deep pixel data is built.  This method has 
the added advantage that it works for filters other than unbiased box.
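
As a rough sketch of that approach (again my own illustration, with 
hypothetical names, not how Nuke does it internally): store the desired 
flattened contribution (filter weight) of each depth-sorted sample, then 
convert the weights to deep alphas in one pass once the pixel is collated.  
The stored alpha just divides out the transmission of everything in front:

        def weights_to_alphas(weights):
            # `weights` are the desired flattened contributions of the
            # depth-sorted samples: 1/n each for an unbiased box filter, or
            # arbitrary (positive, summing to <= 1) filter weights.  We want
            #   alpha(i) * product over j < i of (1 - alpha(j)) == weights[i]
            # and the product telescopes to 1 - sum of the weights in front.
            alphas = []
            transmission = 1.0  # transmission through the samples in front
            for w in weights:
                alphas.append(w / transmission)
                transmission -= w
            return alphas

        # 15 box-filtered samples out of 16 jitters recovers 1/16, 1/15, ...
        print(weights_to_alphas([1.0 / 16] * 15))

The nice property is that nothing here cares which plane a sample came from, 
and the weights needn't be equal, which is what makes non-box filters work.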

I hope that makes sense.

Thanks,  Ben


On 1 Apr 2013, at 13:08, Brany wrote:

> I am implementing Deep output for a render engine, and I have some doubts 
> about how Nuke deals with rgbaz data.
> 
> The alpha channel is my main source of problems, and I suspect that I am 
> wrong with the way I create the alpha info of the deep pixels.
> 
> Imagine I have a sphere in front of a plane. The 2D border between them is 
> not perfect of course, so I have pixels with 2 samples: 1 for the sphere and 
> 1 for the plane.
> 
> I'm computing the alpha taking into account the number of samples the pixel 
> receives from each object, so if I have 5 samples from the sphere and 15 from 
> the plane, I set the pixel's alpha for the sphere sample as 5/(15+5) = 0.25. 
> Should I leave the alpha for the plane at 1 (no transparency, of course), or 
> 0.75?
> 
> Forgetting transparency again (only DOF), how should I compute a sample's 
> alpha for a pixel when I have 3 or more samples? Let's do the math:
> 
> Pixel with color of 2 object borders (i.e. cube and sphere) in front of a 
> background plane:
> 
> cube = 5 samples
> sphere = 10 samples
> background plane = 10 samples
> cube alpha = 5/(5+10+10) = 0.2
> sphere alpha = 10/(5+10+10) = 0.4
> background plane alpha = 10/(5+10+10) = 0.4? = 1.0?
> 
> If I do that I get artifacts when merging with other deep images (between 
> the sphere/cube and the background), or when DeepCrop'ing the background, 
> for example.
> 
> Any thoughts/advice?
> 
> Abraham

-- 
Ben Woodhall
Software Engineer
The Foundry, 6th Floor, The Communications Building,
48 Leicester Square, London, UK, WC2H 7LT
Tel: +44(0)20 7968 6828 - Fax: +44(0)20 7930 8906
Web: www.thefoundry.co.uk
Email: ben.woodh...@thefoundry.co.uk

The Foundry Visionmongers Ltd.
Registered in England and Wales No: 4642027


