If the depths are floating-point values, they can simply be in real-world units and do not need to be normalized.
However, 16-bit floats are not really sufficient for real-world depth data with units in feet, meters, or centimeters. When such a channel is used as input to something like a depth-based fogging node in Nuke or Shake, you'll see noticeable banding for a scene at the scale of a building or larger. If you can arrange for the 'Z' plane to be 32-bit float, with R, G, B, and A at whatever precision you need, it will improve the "depth-based fog in comp" kind of usage considerably.

If you do want to normalize the depths, you could put the near and far values into the header's metadata.

On Wed, Mar 14, 2012 at 1:40 PM, Paul Miller <p...@fxtech.com> wrote:

> On 3/14/2012 1:51 PM, Christian Bloch wrote:
>
>> 'Z' layer seems to be the de-facto standard, analog to the single channel
>> 'A' for Alpha, but as full-float buffer. That's what I scripted into the
>> output pipeline here at EdenFX, and it drops right into the correct slot
>> in Fusion.
>
> Thanks. Already have it hooked up in my I/O module. I don't suppose there
> is any standard to the depth range? Or is it common to have a scale/bias
> somewhere in each application?

-- 
I think this situation absolutely requires that a really futile and stupid
gesture be done on somebody's part. And we're just the guys to do it.
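The banding is easy to quantify: a half float has a 10-bit mantissa, so at a depth of 100 meters adjacent representable values are already 0.0625 m apart. A quick numpy sketch (using numpy's float16 as a stand-in for EXR's HALF, which shares the same bit layout):

```python
import numpy as np

# 1000 evenly spaced depth samples across one meter, 100 m from the
# camera -- roughly the scale of a building.
depths = np.linspace(100.0, 101.0, 1000)

# Stored as HALF (float16), the samples collapse onto a coarse grid:
# the half-float step size at 100.0 is 2**-4 = 0.0625 m.
half_levels = np.unique(depths.astype(np.float16))
print(len(half_levels))    # 17 distinct depth values -> visible banding

# Stored as FLOAT (float32), every sample survives.
float_levels = np.unique(depths.astype(np.float32))
print(len(float_levels))   # 1000 distinct depth values
```

Seventeen fog levels per meter is exactly the kind of stair-stepping a depth-fog node makes visible.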
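If you do normalize, the reader needs the mapping back, which is what stashing near/far in the header buys you. A minimal sketch of the round trip -- the attribute names `depthNear` and `depthFar` are hypothetical, not any standard; EXR headers simply let an application attach arbitrarily named float attributes:

```python
import numpy as np

# Hypothetical metadata a writer could stash in the EXR header
# (with the real OpenEXR API this would be two float attributes;
# the names here are made up for illustration).
header = {"depthNear": 0.5, "depthFar": 250.0}

def normalize_depth(z, near, far):
    """Map real-world depth onto [0, 1] for storage."""
    return (z - near) / (far - near)

def denormalize_depth(zn, near, far):
    """Recover real-world depth from a normalized Z channel."""
    return near + zn * (far - near)

z = np.array([0.5, 100.0, 250.0])            # depths in meters
zn = normalize_depth(z, header["depthNear"], header["depthFar"])
back = denormalize_depth(zn, header["depthNear"], header["depthFar"])
assert np.allclose(back, z)
```

Note that a linear scale/bias like this is the simplest choice but squanders float precision at the near plane; storing unnormalized depth in a 32-bit 'Z' channel sidesteps the problem entirely.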
_______________________________________________ Openexr-devel mailing list Openexr-devel@nongnu.org https://lists.nongnu.org/mailman/listinfo/openexr-devel