This isn't a question about the OpenEXR file format or library per se, but 
since it's the main way everybody stores deep images...

If you're generating deep images out of a render, and you're starting with 
large numbers of samples per pixel, a "raw" deep image can be unnecessarily 
huge. I'm assuming that it's common practice to "compress" them somehow by 
decimating/combining the samples in each pixel. I've tried combining samples 
that were closer than a certain threshold in depth, as well as dropping samples 
that are below some opacity threshold (combining their slight opacity with the 
next one, say). But I'm still not fully satisfied with this; I haven't hit on 
threshold values that give the accuracy vs. file size tradeoff I'm aiming 
for.
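
For concreteness, here's roughly the shape of what I've been trying. This is 
just an illustrative sketch, not real code: the Sample struct, the 
premultiplied-alpha assumption, and both threshold parameters are hypothetical 
stand-ins, and it assumes each pixel's samples arrive sorted front to back.

    #include <vector>

    struct Sample {
        float depth;       // Z of the sample
        float r, g, b;     // premultiplied color
        float alpha;       // opacity in [0,1]
    };

    // Composite farther sample b under nearer sample a ("over"),
    // yielding one merged sample. Keeping a's depth is arbitrary;
    // an alpha-weighted average would be another option.
    static Sample
    mergeSamples (const Sample &a, const Sample &b)
    {
        Sample m;
        float t = 1.0f - a.alpha;
        m.depth = a.depth;
        m.r     = a.r + b.r * t;
        m.g     = a.g + b.g * t;
        m.b     = a.b + b.b * t;
        m.alpha = a.alpha + b.alpha * t;
        return m;
    }

    // Decimate one pixel's front-to-back sample list: merge a sample
    // into the previously kept one if the two are closer than zThresh
    // in depth, or if the kept one's opacity is below alphaThresh (so
    // its slight opacity gets folded into the next sample).
    std::vector<Sample>
    decimate (const std::vector<Sample> &in, float zThresh, float alphaThresh)
    {
        std::vector<Sample> out;
        for (const Sample &s : in) {
            if (! out.empty () &&
                (s.depth - out.back().depth < zThresh ||
                 out.back().alpha < alphaThresh)) {
                out.back() = mergeSamples (out.back(), s);
            } else {
                out.push_back (s);
            }
        }
        return out;
    }

Note that because "over" is associative, merging adjacent samples this way 
preserves the pixel's final flattened composite exactly; what it loses is 
where in Z the opacity accumulates, which is exactly the accuracy question.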

Is anybody inclined to comment on what sample decimation strategies they have 
found to work well?

--
Larry Gritz
l...@larrygritz.com



