On Apr 15, 2011, at 2:28 AM, Kevin Wheatley wrote:

> Bob Friesenhahn wrote:
>> On Thu, 14 Apr 2011, Florian Kainz wrote:
>> 
>>> 
>>> At ILM we want to implement a workflow where a computer graphics artist
>>> can bring up an OpenEXR image of, say, a scene from Rango on his or her
>>> screen, point to a pixel, and find out that the object seen at that
>>> pixel is called "Beans/dress/button3."
>> 
>> How would you deal with composited pixels which are built from several
>> different objects?
> 
> we've attempted to use OID passes before and run into this exact
> problem, motion blur, transparency, etc all got in the way of being
> able to use simple per pixel data like this, so we're looking into a
> more 'deep' pixel implementation. We've embedded only IDs and use
> other 'databases' to store human meaningful data.

  If you have access to all the samples that make up a composited pixel,
 you can use some of the OID space to encode "combinations". For example, if
 OIDs 1, 56, 128, and 243 are the 4 samples in a pixel, create a unique hash,
 say 322435, that represents all 4, and create the "human" version that
 reads:

 "Beans/dress/button3:Beans/dress/pocket:Beans/dress/coat:Beans/dress/stitch"

 that represents the combined hash value.  It does use up the 2^32 ID
 space more rapidly, but 2^32 can still be adequate. Having sub-pixel
 IDs is quite useful when motion blur and transparency come into play, but
 it can also be confusing without a way to mask out depth layers.
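 A minimal sketch of the scheme above, in Python. Everything here is
 illustrative, not part of OpenEXR: OID_NAMES stands in for the external
 "database" of human-meaningful names, register_combination assigns a fresh
 combination ID per unique set of sample OIDs, and the choice to start
 combination IDs at 2^16 (reserving low values for single objects) is an
 arbitrary assumption.

 ```python
 # Hypothetical names database: single-object OID -> human-readable path.
 OID_NAMES = {
     1:   "Beans/dress/button3",
     56:  "Beans/dress/pocket",
     128: "Beans/dress/coat",
     243: "Beans/dress/stitch",
 }

 combo_ids = {}            # frozenset of OIDs -> combination ID
 combo_names = {}          # combination ID -> combined human-readable string
 next_combo_id = 2**16     # assumed: low IDs reserved for single objects

 def register_combination(oids):
     """Map the set of per-sample OIDs in a pixel to one stable ID.

     Each distinct set of OIDs gets one combination ID, so the same mix
     of objects always yields the same ID regardless of sample order.
     """
     global next_combo_id
     key = frozenset(oids)
     if len(key) == 1:
         return next(iter(key))       # a single object keeps its own OID
     if key not in combo_ids:
         combo_ids[key] = next_combo_id
         combo_names[next_combo_id] = ":".join(
             OID_NAMES[o] for o in sorted(key))
         next_combo_id += 1
     return combo_ids[key]

 # A pixel whose 4 samples are OIDs 1, 56, 128, 243:
 cid = register_combination([1, 56, 128, 243])
 print(combo_names[cid])
 # Beans/dress/button3:Beans/dress/pocket:Beans/dress/coat:Beans/dress/stitch
 ```

 The registry only grows with the number of distinct combinations actually
 seen in the frame, which is why the 2^32 space, while consumed faster, can
 still be adequate in practice.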

   --Wayne



_______________________________________________
Openexr-devel mailing list
Openexr-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/openexr-devel
