Thanks Karim for this great explanation. I will share this with my students
in future because it is one of the best explained answers on this topic.
Are you a fan of ProPhoto RGB instead of sRGB or Adobe RGB when further
editing will be applied?

On Sat, 5 Sep 2020 at 03:04, Top Rock Photography <
[email protected]> wrote:

>  Can I ask, when you export an image from DT for further editing in
>> programs like GIMP would you suggest 32 bit FP…?
>
>
> Yes. There is a de facto industry-standard intermediary known as OpenEXR,
> which is used across the imaging industry. It was developed by Industrial
> Light & Magic (ILM) for the film industry, for high-dynamic-range imagery,
> for doing all their post-processing, including grading, blending,
> compositing, etc.
>
> It was originally a 16-bit fp format, then they added a 32-bit fp format
> (or was that the other way around?), and it now also includes a 32-bit
> integer format. It also uses a linear colour space model.
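>
> To make the two fp flavours concrete, here is a rough Python sketch (it
> only assumes numpy, nothing OpenEXR-specific): both keep linear values
> well above 1.0; they only differ in precision.
>
>   import numpy as np
>
>   # Linear, scene-referred values: nothing clips at 1.0, so a bright
>   # highlight can simply be a big number.
>   scene = np.array([0.0, 0.18, 1.0, 16.5, 4000.0])
>
>   half = scene.astype(np.float16)    # like OpenEXR "half" channels
>   single = scene.astype(np.float32)  # like OpenEXR 32-bit float channels
>
>   print(half)    # same values, roughly 3 significant decimal digits
>   print(single)  # same values, roughly 7 significant decimal digits
>
>   # Largest finite values: ~65504 for half, ~3.4e38 for single float,
>   # both far beyond "white" at 1.0.
>   print(np.finfo(np.float16).max, np.finfo(np.float32).max)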
>
> …whether exporting in a - wider than AdobeRGB - colorspace would be more
>> beneficial too ? (Of course if that other photo editing program supports
>> the wider colorspace and can export into AdobeRGB or sRGB.)
>
>
> The *final* output colour space ought to be chosen based on the final
> usage. Since most images will either be shown on an sRGB-compatible screen,
> or printed on a colour printer which understands sRGB (but natively speaks
> some form of CMYK), it makes sense that the final image (at the very end
> of the workflow) be output in sRGB. *Intermediaries, however*, need to
> keep as much colour information as possible, so the “*working*” colour
> space must be equal to or greater than the colour space of the original
> file.
>
> Too long; won't read →
> For intermediaries, use either a linear colour space or the widest-gamut
> colour space your other program will allow. For final images, I suggest
> sticking with sRGB.
>
> Technical details →
>
> As long as the output of one's application is intended to be the input of
> another application, use a wide-gamut colour space. If the output is
> intended for final viewing/printing, then use sRGB (for viewing) or, if one
> knows what printer will be used, an appropriate CMYK colour model.
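>
> For that very last step, encoding linear values for an sRGB display, here
> is a minimal sketch (just numpy; the two constants come from the sRGB
> transfer curve):
>
>   import numpy as np
>
>   def linear_to_srgb(x):
>       """Apply the sRGB transfer curve to linear values in [0, 1]."""
>       x = np.clip(x, 0.0, 1.0)  # final output: brighter than "white" clips
>       return np.where(x <= 0.0031308,
>                       12.92 * x,
>                       1.055 * np.power(x, 1.0 / 2.4) - 0.055)
>
>   linear = np.array([0.0, 0.0031308, 0.18, 0.5, 1.0])
>   print(linear_to_srgb(linear))  # linear 0.18 encodes to about 0.46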
>
> As for OpenEXR, it has resisted using ICC standards for its internal
> colour representation, as it cares more about how these values interact
> with other values than about what the values actually represent. A pixel
> can have any number of channels, from one (representing, say, luminosity,
> for monochrome images) to ten or more (e.g., CcMmYyKLlG: cyan, light cyan,
> magenta, light magenta, yellow, light yellow, black, grey, light grey, and
> gloss, as used in some Epson printers), and as long as you can tell the
> system what each channel is supposed to be, the system can manipulate it.
> When one is ready to make an output image, that is when a colour gamut is
> applied, with the appropriate gamma, etc., and definitions of black, white,
> and middle grey.
>
> If no definition of the channels is given, OpenEXR makes assumptions based
> on standard names. It assumes that a channel named ‘Y’ is luminosity, or
> that channels named ‘R, G, B & A’ are Red, Green, Blue, & Alpha, etc. It
> also assumes that a value of 0 is black, but it does not assume that a
> value of 1 is white. What becomes black or white is decided at export time.
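>
> As a hedged sketch of what channel naming looks like in practice (this
> assumes the classic Python 'OpenEXR'/'Imath' bindings, and the exact call
> names may differ between versions):
>
>   import numpy as np
>   import OpenEXR, Imath
>
>   height, width = 4, 6
>   # Linear luminance data; values above 1.0 are perfectly legal.
>   y = np.linspace(0.0, 2.5, height * width).astype(np.float32)
>
>   header = OpenEXR.Header(width, height)
>   # A single channel named 'Y', stored as 32-bit float; by convention
>   # OpenEXR will treat it as luminance because of the name.
>   header['channels'] = {
>       'Y': Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))}
>
>   out = OpenEXR.OutputFile('luminance.exr', header)
>   out.writePixels({'Y': y.tobytes()})
>   out.close()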
>
> Here is what the format specifies:
>
> *Scene-Referred Images*
>
> By convention, OpenEXR stores scene-referred linear
> values in the RGB floating-point numbers. By this we mean that red, green
> and blue values in the pixels are linear relative to the amount of light in
> the depicted scene. This implies that displaying an image requires some
> processing to account for the nonlinear response of a typical display
> device. In its simplest form, this is a power function to perform gamma
> correction, but processing may be more complex. By storing linear data in
> the file (double the number, double the light in the scene), we have the
> best starting point for these downstream algorithms. With this linear
> relationship established, the question remains, what number is white? The
> convention employed by OpenEXR is to determine a middle gray object, [i.e.,
> the user tells the system, “this object is my middle grey”], and assign it
> the photographic 18% gray value, or 0.18 in the floating point scheme.
> Other pixel values can be easily determined from there (a stop brighter is
> 0.36, another stop is 0.72). The value 1.0 has no special significance (it
> is not a clamping limit, as in other formats); it roughly represents light
> coming from a 100% reflector (slightly brighter than white paper). However,
> there are many brighter pixel values available to represent objects such as
> fire and highlights. The range of normalized 16-bit floating-point numbers
> can represent thirty stops of information with 1024 steps per stop. We have
> eighteen and a half stops above middle gray, and eleven and a half below.
> Denormalized numbers provide an additional ten stops at the low end, with
> gradually decreasing precision.
>
>
> Like I said, it works in linear space, but it will also happily export
> (the final image) to a non-linear, gamma-encoded space. That is where the
> colour space is chosen.
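>
> To make the 0.18 convention concrete, a quick numpy sketch (the stop
> counts are simply the ones from the quote above):
>
>   import numpy as np
>
>   MIDDLE_GREY = 0.18  # the OpenEXR convention for an 18% grey object
>
>   def stops_from_grey(stops):
>       """Linear value N stops brighter (+) or darker (-) than middle grey."""
>       return MIDDLE_GREY * (2.0 ** np.asarray(stops, dtype=float))
>
>   print(stops_from_grey([0, 1, 2]))  # 0.18, 0.36, 0.72 -- as in the quote
>   print(stops_from_grey(2.47))       # ~1.0: roughly a 100% reflector
>   print(stops_from_grey(18.5))       # ~6.7e4, near the half-float maximum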
>
> CMOS Sensors
> Assuming the sensor has 100% efficiency (which does not exist), then
> zero photons on a pixel generates a value of zero, while 16,383 photons
> generate a value of 16,383. This is a linear correlation. Is 16,383 white?
> No; it is merely ‘saturation’. For a 14-bit sensor, that is the maximum
> value which it can store. It may have been a very dark scene, shot at *f*/1.4
> with an exposure time of 30 seconds, just to saturate that one pixel.
> Nothing was really ‘white.’ Almost everything was black. The photographer
> chooses what they want the viewer to see as black, white, or middle grey.
>
> On the other hand, it may have been a rocket launch on a very bright day
> on the beaches of the Space Coast of Florida. The image may have been taken
> at *f*/45, and at ¹/8000 seconds, and nothing was black, nor even middle
> grey. It was just one, big, blob of whiteness. Again, the photographer
> chooses what the viewer sees as black, middle grey, and white, and now we
> see a grey rocket against a dark sky, with a white rocket blast.
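>
> That choice is, at its core, nothing more than picking two numbers; here
> is a minimal sketch (the sensor values and chosen points below are made
> up purely for illustration):
>
>   import numpy as np
>
>   rng = np.random.default_rng(0)
>   raw = rng.integers(0, 16384, size=(4, 4)).astype(np.float32)  # 14-bit DNs
>
>   # The photographer's choice: which DN is "black" and which is "white".
>   black_point = 800.0    # e.g. just above the noise floor
>   white_point = 12000.0  # e.g. the brightest part of the plume
>
>   out = (raw - black_point) / (white_point - black_point)
>   out = np.clip(out, 0.0, 1.0)  # everything outside the choice clips
>   # 'out' is still linear; gamma/encoding comes later, at export.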
>
> We basically have a mere 14 stops of linear data on this 14-bit sensor. On
> a 10-bit sensor, the values only go from 0 to 1,023, representing a mere
> 10 stops of linear data, but on a 16-bit sensor, from 0 to 65,535,
> representing 16 stops. Why do many 14-bit DSCs claim a mere 12.5 stops of
> dynamic range? Because no sensor has 100% efficiency, and a lot of the
> low values are indistinguishable from noise.
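>
> The arithmetic behind that claim, as a sketch (the read-noise figure is
> invented purely for illustration):
>
>   import numpy as np
>
>   full_scale = 2 ** 14 - 1  # 16,383: saturation on a 14-bit ADC
>   read_noise = 2.8          # hypothetical noise floor, in DN
>
>   ideal_dr = np.log2(full_scale)              # ~14 stops if noise were zero
>   real_dr = np.log2(full_scale / read_noise)  # ~12.5 stops once noise eats the bottom
>
>   print(round(ideal_dr, 1), round(real_dr, 1))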
>
> I.e., no pixel can be given, with any degree of certainty, a value of
> zero, or one, or two, etc. A pixel which registers zero may have been hit
> by a photon or two (or more) which simply did not register a reading,
> while a pixel which was not hit at all may show a reading of one or more,
> simply due to noise from heat or electronic interference. The more
> efficient the sensor can be made (where one photon always generates one
> electron), and the less susceptible the sensor can be made to heat and
> electronic interference, the better one is able to get a 14-stop dynamic
> range from a 14-bit sensor.
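>
> A small simulation of that uncertainty (the photon counts and noise level
> are illustrative, not measured from any real sensor):
>
>   import numpy as np
>
>   rng = np.random.default_rng(1)
>   true_photons = np.array([0, 1, 2, 5, 50, 500])  # photons actually arriving
>
>   shot = rng.poisson(true_photons)                 # photon shot noise
>   read = rng.normal(0.0, 2.0, true_photons.shape)  # hypothetical read noise
>   measured = np.clip(np.round(shot + read), 0, None)
>
>   print(true_photons)
>   print(measured)  # at the bottom end, 0, 1 and 2 are lost in the noise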
>
> Of course, one can “cheat” the system using a logarithmic interpretation,
> or some other “curve.” Yes, that is what “curve” tools do, bending the
> interpretation of the linear graph (usually at the extremes, near black
> and near saturation) to “create more dynamic range,” or to “create more
> shadow/highlight detail.” (In reality, one cannot create what was never
> there in the first place. It is just an illusion.)
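>
> A sketch of what such a “curve” does to the linear data (a generic
> log-style curve, not any particular tool's):
>
>   import numpy as np
>
>   def log_curve(x, black=0.001):
>       """Re-map linear values so the shadows get more of the output range."""
>       x = np.maximum(x, black)
>       return (np.log2(x) - np.log2(black)) / -np.log2(black)  # black->0, 1.0->1
>
>   linear = np.array([0.001, 0.01, 0.18, 0.5, 1.0])
>   print(log_curve(linear))  # the same data, just spread differently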
>
> Hope I was not too technical. (I also hope that I got nothing wrong. 😉 )
>
> Sincerely,
>
> Karim Hosein
> Top Rock Photography
> 754.999.1652
>
>
>
>

-- 
Dr Terry Pinfold
Cytometry & Histology Lab Manager
Lecturer in Flow Cytometry
University of Tasmania
17 Liverpool St, Hobart, 7000
Ph 6226 4846 or 0408 699053

