On Wed, Feb 24, 2010 at 11:57:07PM -0600, Udi Fuchs wrote:
> 
> > Actually we know pretty well what the problem is - colour management
> > systems seem to generally optimise for the assumption that data will be
> > spaced roughly linearly in perceptual space, rather than in absolute
> > luminosity. I looked into this quite a while back. The same issue
> > affects e.g. both lcms and Argyll, and my impression from Graeme Gill's
> > comments on the subject was that it was likely to be a widespread
> > assumption for CMS code.
> 
> I think that this is a specific issue with shadow areas and sRGB. On
> the one hand, the sRGB curve is extremely steep and on the other hand
> the linear data from camera sensors is not very accurate. The CMS
> standard was not made with digital cameras in mind, and this is one
> place where it shows.

I'm not sure I follow what you mean here. It's not something specific
to sRGB; this happens when transforming to any perceptual-gamma
colour space. The issue lies in the internal implementation of current
CMS code. I don't see that it's a fundamental flaw of the CMS
(i.e. ICC) standard.

Graeme Gill explains the problem in the case of Argyll here:

http://www.freelists.org/post/argyllcms/icclink-G-and-source-gamuts-profiles,5

I think that the LCMS problem is similar, because the two show the same
pattern of results.

> > We can just work around this. It doesn't stop us being pedantic and
> > handling gamma correctly. It just means that we need to get data into a
> > roughly perceptually linear encoding before giving it to the CMS. The
> > exact gamma function used to do this has negligible effect and does not
> > need to be exposed as a user control.
> 
> We already do our best to get raw linear data.

I said "perceptually linear". We do our best to get data which is linear
in luminosity.  Human perception of brightness is not linearly
proportional to luminosity. This is why we widely use output devices and
encodings with a gamma of around 2.2. This is what the CMS code is built
to expect - it is roughly linear in perceptual space.
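To make that concrete, here is a minimal sketch (a hypothetical helper,
not UFRaw's actual pipeline code) of what "get into a perceptually
linear encoding" means for 16-bit samples:

    /* Sketch: re-encode 16-bit linear samples with gamma 2.2 so that
     * equal steps in the data correspond roughly to equal perceptual
     * steps. Hypothetical helper, not actual UFRaw code. */
    #include <math.h>
    #include <stddef.h>
    #include <stdint.h>

    static void encode_gamma22(uint16_t *samples, size_t count)
    {
        size_t i;
        for (i = 0; i < count; i++) {
            double linear = samples[i] / 65535.0;
            samples[i] = (uint16_t)(pow(linear, 1.0 / 2.2)
                                    * 65535.0 + 0.5);
        }
    }

As I said above, the exact exponent barely matters - anything near 2.2
puts the precision where the CMS expects it.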

> > For the common case where the Adobe matrices are used, I think we should
> > do the following:
> >
> > - Not display any gamma/linearity settings.
> >
> > - Generate a profile on the fly based on the RGB->XYZ matrices and an
> >  assumed fixed input gamma of 2.2. I posted patches to do this a long
> >  time back, and can bring them up to date if required.
> >
> > - Transform the linear data to a gamma-2.2 encoding after demosaicing,
> >  and before passing to the CMS. This can use the existing gamma/linearity
> >  application code.
> >
> > - Let the CMS do the rest.
> 
> You actually don't need to apply the gamma curve at all. If the input
> profile has just a matrix and the output profile is sRGB, lcms will
> apply an sRGB gamma curve for you. The problem is that you get "hazy"
> output.

Udi, this is exactly the problem I am talking about, and this is how to
deal with it - change to a perceptually linear (i.e. gamma ~2.2)
representation before giving the data to the CMS. This *shouldn't* be
necessary, because the CMS is supposed to handle all of this
internally, but the current implementations do a poor job of it, giving
us the shadow haze. So we need to do it ourselves.
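For lcms, the matching step is to describe that encoding in the input
profile's TRC, so the CMS undoes exactly what we applied. A sketch
using the lcms2 API (the primaries and white point below are sRGB
placeholders; in UFRaw they would instead come from the Adobe camera
matrices):

    /* Sketch, lcms2 API: build an input profile on the fly whose TRC
     * is a plain 2.2 gamma, matching the encoding applied to the data.
     * Primaries/white point are sRGB placeholders only. */
    #include <lcms2.h>

    static cmsHPROFILE build_input_profile(void)
    {
        cmsCIExyY white = { 0.3127, 0.3290, 1.0 };   /* D65 */
        cmsCIExyYTRIPLE primaries = {
            { 0.6400, 0.3300, 1.0 },                 /* red   */
            { 0.3000, 0.6000, 1.0 },                 /* green */
            { 0.1500, 0.0600, 1.0 }                  /* blue  */
        };
        cmsToneCurve *gamma22 = cmsBuildGamma(NULL, 2.2);
        cmsToneCurve *curves[3] = { gamma22, gamma22, gamma22 };
        cmsHPROFILE profile =
            cmsCreateRGBProfile(&white, &primaries, curves);
        cmsFreeToneCurve(gamma22);  /* profile keeps its own copy */
        return profile;
    }

With the data and the profile agreeing on gamma 2.2, the CMS's internal
sampling happens in a roughly perceptually uniform space, which is
where its precision is concentrated.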

> If we get rid of the gamma/linearity controls, there should be
> some replacement that controls the shadows and it should have a
> reasonable default.

The right place to do this is in the luminosity curve, where you can see
what you're doing. Hacking around with the input gamma is just a
perverse way of applying a luminosity curve, which makes no sense at
all.
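(To see why: if the gamma control encodes the data as y = x^(1/g) and
the rest of the pipeline assumes g = 2.2, the net effect on luminosity
is just x^(2.2/g) - a blind power-law tone curve, applied where you
can't see what it does to the shadows. That is my reading of the
controls' semantics, so take the exact exponent with a grain of salt.)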

> Eventually, my goal is to have an option to have "16-bit linear"
> output, with a CMS profile attached, so that the image would look
> correct in any CMS aware software. Any patch that would get us closer
> to this goal is welcome.

This doesn't help! The problem is in LCMS. If you load a 16-bit linear
image into a viewer that uses LCMS, you will get the same hazy result
when it's transformed to your display colour space as you would have
got by doing the transform in UFRaw.

The only implementation I know of which doesn't have this problem is the
slow floating-point version of the Argyll engine (cctiff -p).


Martin
