I'm not sure of the best way to word this question, which may be why my 
searches haven't turned up many answers.  Hopefully someone here can 
suggest the right terminology and/or point me to an answer.

Assuming we want to store depth in an image in unnormalized world-space 
distance units, there are two main ways to do it:
A) Distance from the point location of the camera (i.e. if the camera is 
facing directly at a flat plane, the depth value is highest at the corners 
and lowest in the middle)
B) Distance from the image plane (i.e. if the camera is facing directly at 
a flat plane, the depth value is constant)
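
(If I have the geometry right, the two differ per pixel by a fixed factor: 
for a sample seen through a given pixel, with theta the angle between that 
pixel's ray and the camera's view axis, depth_A = depth_B / cos(theta).  At 
the center of the image theta is 0 and the two agree; toward the corners A 
grows relative to B.)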

The depth channel in an OpenEXR image is by convention named Z, which 
suggests interpretation B), where depth is measured along the camera's Z 
axis, orthogonal to the pixel X/Y location.

I tried looking through the document "Interpreting OpenEXR Deep Pixels" for 
a suggestion one way or the other, but all I could find was: "Each of these 
samples is associated with a depth, or distance from the viewer".  I'm not 
sure how to parse this: either it defines depth as "distance from the 
viewer", which suggests A), or it is saying you could use either A) or B).

Is there a convention for this in OpenEXR?  The two renderers I currently 
have convenient access to are Mantra, which does B), and 3delight, which 
does A).  I'm wondering whether I should try to pressure 3delight to switch 
to B), or whether our pipeline needs to support both and convert between 
them.  Converting back and forth shouldn't be hard (it's a per-pixel scale 
factor, sketched below), but it's one more confusing thing that can go 
subtly wrong when moving data between renderers.
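
In case it's useful, here's a rough sketch of what I mean by the 
conversion, for a simple pinhole camera.  The names here (radialFactor, 
fovX, and so on) are mine, not from any renderer's API, and it assumes 
square pixels and an image plane centered on the view axis:

    #include <cmath>

    // Length of the camera-space ray through the center of pixel
    // (px, py) when the ray's forward component is 1.  fovX is the
    // full horizontal field of view in radians.  Multiplying a
    // plane-parallel depth (B) by this factor gives the radial
    // distance (A); dividing converts back.
    double radialFactor(int px, int py, int width, int height, double fovX)
    {
        // Half-width of the image plane at unit distance from the camera.
        double tanHalf = std::tan(0.5 * fovX);

        // Camera-space x/y of the pixel center at unit forward distance.
        double x = ((px + 0.5) / width  * 2.0 - 1.0) * tanHalf;
        double y = ((py + 0.5) / height * 2.0 - 1.0) * tanHalf
                   * (double(height) / width);   // square pixels

        // The ray direction is (x, y, 1); its length is the A/B ratio.
        return std::sqrt(x * x + y * y + 1.0);
    }

    // radialDepth = planarZ * radialFactor(px, py, w, h, fovX);
    // planarZ     = radialDepth / radialFactor(px, py, w, h, fovX);

The factor depends only on the pixel position and the camera, so it could 
be precomputed once per image.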

-Daniel