I can't think of a realistic reason why data can't be written
'SORTED' rather than 'MESSY': sorting the samples isn't
particularly expensive compared to writing them, and it adds no
extra data. It's possible you might want to sort the samples by
some scheme other than depth order, but I can't think why!
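As a rough illustration (the tuple layout and field names here are my own, not the EXR API), depth-sorting a pixel's samples is a one-line key sort:

```python
# Hypothetical sample layout: (z_front, z_back, alpha).
# Writing 'SORTED' rather than 'MESSY' just means applying this
# key sort before the samples go out.
def sort_samples(samples):
    """Return the samples ordered by front depth, then back depth."""
    return sorted(samples, key=lambda s: (s[0], s[1]))

messy = [(3.0, 3.0, 0.50), (1.0, 2.0, 0.25), (2.0, 2.5, 0.10)]
print(sort_samples(messy))
# [(1.0, 2.0, 0.25), (2.0, 2.5, 0.1), (3.0, 3.0, 0.5)]
```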
Note that merging two tidy deep images does not give back a tidy
result, so tidying separate elements before writing them to EXR
files could be wasted effort.
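A minimal sketch of why that is, using a made-up (z_front, z_back, alpha) sample layout: each input pixel is tidy on its own, but interleaving their samples produces overlapping depth intervals, so the merged pixel has to be re-tidied anyway.

```python
def is_tidy(samples):
    """True if samples are depth-sorted and non-overlapping."""
    return all(a[0] <= b[0] and a[1] <= b[0]
               for a, b in zip(samples, samples[1:]))

pixel_a = [(0.0, 1.0, 0.50)]        # tidy on its own
pixel_b = [(0.5, 1.5, 0.25)]        # tidy on its own
merged = sorted(pixel_a + pixel_b)  # depth-sorted, but intervals overlap

print(is_tidy(pixel_a), is_tidy(pixel_b), is_tidy(merged))
# True True False
```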
Florian's document gives examples where overlapping is necessary:
tidying images containing overlapping samples with different motion
vectors or object identifiers causes loss of data.
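To make the data-loss point concrete with a hypothetical example (sample layout and IDs invented here, not from Florian's document): tidying overlapping samples means splitting them at every interval boundary, and the shared segment then belongs to two objects at once, which a single tidied sample cannot record.

```python
# Hypothetical: two overlapping volumetric samples tagged with
# different object IDs, laid out as (z_front, z_back, object_id).
sample_a = (0.0, 1.0, 101)
sample_b = (0.5, 1.5, 202)

# Tidying splits the samples at every interval boundary:
boundaries = sorted({sample_a[0], sample_a[1], sample_b[0], sample_b[1]})
print(boundaries)
# [0.0, 0.5, 1.0, 1.5]

# The middle segment [0.5, 1.0) is covered by both objects, but a
# tidied sample can carry only one ID, so either 101 or 202 is lost.
```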
On 30/10/13 12:43, Christopher Horvath wrote:
Hey Larry,
It's common to pre-comp work you're doing while
compositing, or to work in stages. Not requiring a Deep
Pixel (in Nuke, or in EXR2) to be "tidy" makes merging
deep pixels trivial, and it keeps the amount of data
significantly lower. By way of example, look at how the deep
pixels in the paper have more points of data after being
tidied. Peter Hillman had some specific examples showing how
tidied volume renders became significantly larger than their
untidied versions. Since the major downside, or limitation,
of deep workflows is their increased data usage, every step
which can prevent additional data bloat is one we should take!
Given that the paper provides a performant code
sample which tidies and merges samples, one that client
applications can use, is this a major concern?
Chris
_______________________________________________
Openexr-devel mailing list
Openexr-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/openexr-devel