Martin Ling <martin-uf...@earth.li> writes:

> On Sat, Feb 20, 2010 at 11:27:28PM +0100, Pascal de Bruijn wrote:
>> 
>> When I apply UFRaw's internal 'Color Matrix' with gamma/linearity 1/1
>> I get a darkish image, and when I apply my color profile (which in
>> essence is only a color matrix as well), with gamma/linearity 1/1, I
>> get a bright "normal looking" image.
>> 
>> Can anybody explain this?
>
> Yes, I think so. But it's a bit of a long answer.

I agree with pretty much everything Martin said, but here is an
additional comment (rant?) on "how the world should be":

The word "gamma" gets used to mean at least three things, and they are
often blurred.

The first is an actual property of a device or process (such as a CRT
or film), describing a luminance/voltage or luminance/exposure
relationship.  This is a physical reality that needs to be modeled;
profiles do this.

The second is a property of an encoding, such as sRGB, or whatever
encoding a profile expects.  This is just math, and is about the
luminance that a particular value is *defined to mean*.
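
To make the "just math" point concrete, here is a minimal sketch of an
encoding function: the standard sRGB transfer curve mapping linear
luminance to code values.  This is only an illustration, not ufraw
code.

#include <math.h>

/* sRGB encoding: pure math that defines which linear luminance each
 * code value stands for.  Constants are the standard sRGB ones. */
static double srgb_encode(double linear)
{
    if (linear <= 0.0031308)
        return 12.92 * linear;
    return 1.055 * pow(linear, 1.0 / 2.4) - 0.055;
}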

The third is a way to express a tone scale transformation.  This
happens implicitly when one takes data in a particular encoding and
chooses to interpret it as being in a different encoding.  I think this
is best avoided.
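
As an illustration of that third, implicit use (again just a sketch,
not ufraw code): if data was written assuming encoding gamma g_written
but is later decoded as if its encoding gamma were g_assumed, the net
effect on linear luminance is a single power curve with exponent
g_assumed / g_written.  With g_written = 1 (linear data) decoded as if
it were gamma 2.2, midtones come out darker, which I believe is
essentially the "darkish image" Pascal describes.

#include <math.h>

/* Net tone transform caused by an encoding mismatch (illustrative). */
static double implicit_tone_transform(double linear,
                                      double g_written, double g_assumed)
{
    double encoded = pow(linear, 1.0 / g_written); /* how it was written */
    return pow(encoded, g_assumed);                /* how it gets read   */
}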

All that said, I realize of course that encodings are chosen to reduce
quantization noise in perceptual terms, to line up with physical
properties, and to have some perceptual benefit.  For example, CRTs
have a native gamma of about 2.5, but we use sRGB, which is roughly
gamma 2.2, to get good rendering (and I don't really understand it any
more deeply than that...).  But once we create a profile, I think we're
back to photometric accuracy between the intended sRGB luminance and
the actual luminance.

So, in an ideal world, there would be no gamma settings in ufraw.
Input raw data would be presumed linear, and when an input profile
expressed in a different encoding is used, the input data would be
transformed to the encoding that profile specifies.
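
A sketch of that ideal pipeline, under the simplifying assumption that
the encoding a profile expects can be summarized by a single gamma
exponent (the profile_gamma parameter is hypothetical, not an existing
ufraw variable):

#include <math.h>

/* Raw data is presumed linear; only if the chosen input profile expects
 * a non-linear encoding do we re-encode to match it.  No user-visible
 * gamma knob needed.  Sketch only. */
static double prepare_for_input_profile(double linear_raw,
                                        double profile_gamma)
{
    if (profile_gamma == 1.0)
        return linear_raw;                       /* profile wants linear */
    return pow(linear_raw, 1.0 / profile_gamma); /* match its encoding   */
}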

The problem, besides the lack of round tuits, is as Martin points out:
shadows look unpleasing when one is pedantic about all this, and look
better when steps that are arguably incorrect are used.  Since this is
all about making nice photographs, that's where we still are.

It may just be that some amount of shadow tweaking is appropriate;
displays and prints do not have as much dynamic range as the real
world, and there's a perceptual mapping going on anyway.  If this were
expressed as a tone scale transform that is part of the baseline
processing, instead of as write-with-gamma-function-x followed by
read-with-gamma-function-y, then we could leave the gamma confusion
behind.
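
For what it's worth, writing with gamma x and then reading with gamma y
is just a disguised power curve with exponent y/x applied to the linear
data.  An explicit, named tone-scale step would do the same job without
overloading the word "gamma".  The curve and exponent below are purely
illustrative placeholders, not a proposal for the actual rendering
curve:

#include <math.h>

/* Explicit baseline tone-scale step applied to linear data.  The
 * exponent is an illustrative shadow-lifting value, nothing more. */
static double baseline_tone_scale(double linear)
{
    const double exponent = 0.45;  /* placeholder, not a recommendation */
    return pow(linear, exponent);
}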

(I don't mean to criticize anyone for not doing this yet; I certainly
haven't had time to be useful, and this is clearly a very difficult
area.)
