Hi Thorsten,
I think you are very close to being able to make an example file
yourself: you've already read the deep image into a flat/regular image
buffer for display. You can write a scanline-image representation of
that buffer into a new file using the OutputFile API, then combine the
original deep image and your new flat representation into a single
multi-part file with 'exrmultipart'.
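For what it's worth, here is a minimal sketch of writing that flat buffer
out with the RGBA convenience API (assuming your composited pixels already
sit in an Imf::Rgba array; the function and variable names are just
illustrative):

#include <ImfRgbaFile.h>

// Write the composited (flat) pixels to a plain scanline EXR.
// 'pixels' is assumed to hold width * height Imf::Rgba values,
// e.g. the buffer you already fill for display.
void
writeFlat (const char fileName[], const Imf::Rgba *pixels,
           int width, int height)
{
    Imf::RgbaOutputFile file (fileName, width, height, Imf::WRITE_RGBA);
    file.setFrameBuffer (pixels, 1, width);  // xStride = 1, yStride = width
    file.writePixels (height);               // write all scanlines at once
}

The two files can then be merged with something like
'exrmultipart -combine -i flat.exr deep.exr -o combined.exr'
(check the tool's usage message for the exact option spelling in your build).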
I do suspect some software will behave incorrectly when reading
mixed-type EXRs. Tools should only read the first occurrence of a
channel name (within the same view) unless some override is in place to
change that behaviour such as an "only read deep" checkbox. So putting
the flat image as part 0 and the deep as part 1 should mean that the
deep is ignored. However, some tools may instead read the /last/
occurrence of a channel name, or perhaps read both parts as if they
contain different channels, rather than different representations of the
same channels. This might mean they accidentally read channels from the
deep instead of just the flat image.
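As a rough illustration of that "first occurrence wins" policy (just a
sketch, not what any particular tool actually does):

#include <ImfMultiPartInputFile.h>
#include <ImfChannelList.h>
#include <map>
#include <string>

// Map each channel name to the first part it appears in. By default deep
// parts are skipped; with preferDeep set (an "only read deep" override),
// flat parts are skipped instead.
std::map<std::string, int>
firstOccurrence (const Imf::MultiPartInputFile &file, bool preferDeep = false)
{
    std::map<std::string, int> channelToPart;

    for (int p = 0; p < file.parts (); ++p)
    {
        const Imf::Header &h = file.header (p);

        bool isDeep = h.hasType () &&
                      (h.type () == "deepscanline" || h.type () == "deeptile");

        if (isDeep != preferDeep)
            continue;

        const Imf::ChannelList &channels = h.channels ();

        for (Imf::ChannelList::ConstIterator i = channels.begin ();
             i != channels.end (); ++i)
        {
            // std::map::insert keeps an existing entry, so the lowest
            // part number wins for each channel name
            channelToPart.insert (std::make_pair (std::string (i.name ()), p));
        }
    }

    return channelToPart;
}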
My understanding is that reading a mixed-type file in Nuke with
'DeepRead' will work (the non-deep parts get skipped) but I'm not sure
what happens if you try to read just the regular flat image part with a
'Read' node.
Peter
On 30/05/17 19:28, Thorsten Kaufmann wrote:
Hey Peter,
thanks for the in-depth feedback. I was not aware that there can be
both flat and deep data in a single image. Are you aware of any
implementations that I could use to give this a try? E.g. does Nuke
read these correctly, and is there a way to generate such images? I
have a very strong use case for this, depending on how well this works.
Cheers,
Thorsten
---
Thorsten Kaufmann
Production Pipeline Architect
Mackevision Medien Design GmbH
Forststraße 7
70174 Stuttgart
T +49 711 93 30 48 661
F +49 711 93 30 48 90
M +49 151 19 55 55 02
thorsten.kaufm...@mackevision.com
www.mackevision.com
Managing Directors: Armin Pohl, Joachim Lincke, Jens Pohl
HRB 243735 Amtsgericht Stuttgart
---
From: Openexr-devel
[mailto:openexr-devel-bounces+thorsten.kaufmann=mackevision...@nongnu.org]
On Behalf Of Peter Hillman
Sent: Monday, 29 May 2017 23:48
To: openexr-devel@nongnu.org
Subject: Re: [Openexr-devel] Slow deep exr
Hi Gonzalo,
Apologies for the confusion in my reply. Yes, this file only contains
deep image data, not flat. If you use the standard EXR API to read the
file, it will composite the samples together to provide a
representation of the image (see ImfCompositeDeepScanLine.h in the
OpenEXR source for more details). This is exactly what's happening in
your viewer under the hood: all the deep RGBA data is being composited
to give you a single 'beauty' image representation. This compositing
operation is slow because it has to process every deep sample in every
scanline before it can be output to your framebuffer.
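In code terms, that "standard API" path looks roughly like this (a sketch,
assuming a deep RGBA file; the compositing happens inside readPixels, which
is why it is slow):

#include <ImfRgbaFile.h>
#include <ImfArray.h>
#include <ImathBox.h>

// Read a deep RGBA file through the ordinary RGBA API: the library
// composites all deep samples per pixel behind the scenes (see
// ImfCompositeDeepScanLine.h), producing a flat 'beauty' image.
void
readComposited (const char fileName[],
                Imf::Array2D<Imf::Rgba> &pixels,
                int &width, int &height)
{
    Imf::RgbaInputFile file (fileName);
    Imath::Box2i dw = file.dataWindow ();

    width  = dw.max.x - dw.min.x + 1;
    height = dw.max.y - dw.min.y + 1;
    pixels.resizeErase (height, width);

    file.setFrameBuffer (&pixels[0][0] - dw.min.x - dw.min.y * width, 1, width);
    file.readPixels (dw.min.y, dw.max.y);  // every deep sample is composited here
}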
Storing deep and flat within the same file is possible and supported.
In my previous reply I assumed this was exactly how this file had been
written, with the deep and flat parts representing the same data. The
main reason to do this is for speed: the deep and flat parts would
represent the same data, but the flat part is much faster to read when
deep samples are not required. Since the deep file is already quite
large, storing the flat part in the file as well is probably
justifiable. The flat image would be stored as part 0, so an image
viewer would read that by default, and ignore the deepscanline
representation of the same channels in part 1. A viewer might offer a
"deep image mode" to read the deep instead of the flat. In that case
it would likely use the dedicated deep pixel API in OpenEXR so it can
generate useful analytical data such as Thorsten's sample count
visualisation or a 2.5D/3D representation of the data.
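For that "deep image mode", here is a sketch of pulling just the per-pixel
sample counts with the dedicated deep API (the part number and buffer names
are illustrative), which is already enough for a sample-count visualisation:

#include <ImfMultiPartInputFile.h>
#include <ImfDeepScanLineInputPart.h>
#include <ImfDeepFrameBuffer.h>
#include <ImfArray.h>
#include <ImathBox.h>

// Read only the sample counts of a deep scanline part (no sample data),
// e.g. to drive a sample-count heat map.
void
readSampleCounts (const char fileName[], int partNumber,
                  Imf::Array2D<unsigned int> &counts)
{
    Imf::MultiPartInputFile file (fileName);
    Imf::DeepScanLineInputPart part (file, partNumber);

    Imath::Box2i dw = part.header ().dataWindow ();
    int width  = dw.max.x - dw.min.x + 1;
    int height = dw.max.y - dw.min.y + 1;

    counts.resizeErase (height, width);

    Imf::DeepFrameBuffer frameBuffer;
    frameBuffer.insertSampleCountSlice (
        Imf::Slice (Imf::UINT,
                    (char *) (&counts[0][0] - dw.min.x - dw.min.y * width),
                    sizeof (unsigned int),            // xStride
                    sizeof (unsigned int) * width));  // yStride

    part.setFrameBuffer (frameBuffer);
    part.readPixelSampleCounts (dw.min.y, dw.max.y);
}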
As Thorsten suggests, it may be a more common practice to write that
non-deep representation of the data as a completely separate EXR file.
There are pros and cons to doing this: separate files take no more disk
space than a single combined file, and they make it easy to delete the
deep image if it is later deemed unnecessary; on the other hand, they
mean more files in the system and possible confusion about which deep
image goes with which flat one, particularly if only one of the two is
later overwritten.
Peter
On 30/05/17 02:37, Thorsten Kaufmann wrote:
Hey Gonzalo,
deep and flat do not live in different parts of the file. It is
essentially the same data. With deep files, each sample for every
pixel in every channel is saved separately, whereas in a flat file
the samples are merged into a single value per pixel in every channel.
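(Roughly speaking, "merged" here means a front-to-back "over" composite
per pixel; a toy illustration, not the library's actual code:)

// Toy example: collapse one pixel's depth-sorted deep samples into a
// single flat RGBA value with a front-to-back "over" composite.
struct Sample { float r, g, b, a; };  // assumed premultiplied, nearest first

Sample
flatten (const Sample *samples, int count)
{
    Sample out = {0.f, 0.f, 0.f, 0.f};

    for (int i = 0; i < count && out.a < 1.f; ++i)
    {
        float t = 1.f - out.a;  // remaining transparency
        out.r += t * samples[i].r;
        out.g += t * samples[i].g;
        out.b += t * samples[i].b;
        out.a += t * samples[i].a;
    }

    return out;
}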
I have seen people do things like saving a flat beauty plus deep
"other" channels such as position or the like, but that does not
really make much sense to me.
My guess would be that you’d have to write a separate non-deep
version of the EXR if you want to be able to choose.
Cheers,
Thorsten
From: Gonzalo Garramuño [mailto:ggarr...@gmail.com]
Sent: Monday, 29 May 2017 14:20
To: Thorsten Kaufmann <thorsten.kaufm...@mackevision.com>;
openexr-devel@nongnu.org
Subject: Re: [Openexr-devel] Slow deep exr
Hi, Thorsten.
On 29/05/17 at 04:21, Thorsten Kaufmann wrote:
Hey there,
so there is no "deep vs. beauty". It's simply a deepscanline
image. I don't think it is even possible to mix
deep and non-deep images, is it?
No, but my understanding was that the deep and beauty info live in
different parts of the image.
Deep is not an additional set of information; it stores all of
the regular information in a more granular way. So there is both
more data to read and more "work to do" (blending the individual
samples), which makes reading deep images slower. The number of
additional samples depends a lot on the type of image you have
(motion blur and depth of field make things much worse) and, of
course, on the renderer's implementation and whether and how
samples are merged.
So you mean that the beauty picture is composed of the deep samples?
There's no decoupling of the deep and beauty data? That doesn't
sound right. Keep in mind that I only want to load the beauty
picture, not the deep data (which I have no use for right now).
Thank you for looking into this,
--
Gonzalo Garramuño
_______________________________________________
Openexr-devel mailing list
Openexr-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/openexr-devel