Graeme Gill wrote:
Wolf Faust wrote:
1. Fault tolerance: I haven't used the latest lprof version with the new spline regression code. But I got some error values from Argyll CMS users using the same regression routines. The extremely low error values reported by ProfileChecker make me a bit sceptical that everything got better. While the new spline regression routine surely brings many advantages, has anybody looked at the fault tolerance of the approximation in practice? That is, how does the new routine behave if the target scan or reference file is slightly faulty for whatever reason: serious dust/scratches in the scan, scanner noise, reproduction faults of the target, reproduction faults of the measurement? Seeing reports from Argyll CMS users with a mean dE < 0.35 on batch-averaged slide film targets, I wonder whether bad noise is being incorporated into the profile.

Basically, the "avgdev" parameter, which controls the trade-off between smoothness and fitting error, can be specified by the *user* of the profiler according to the "quality" of his particular device/measurements (a low avgdev for a low-noise device, and a higher avgdev for a device which is noisier and/or has worse reproducibility). The user can thus explicitly control the smoothness/dE trade-off according to his individual needs. So a very low reported dE may simply indicate that the *user* has chosen too low a value for the parameter. It looks like many users tend to believe that the profile with the lowest achievable dE is the "best" profile, though this is rather not the case when noisy data are being fitted.

But I also find the current default smoothness too low for typical devices (as Graeme explains below), and personally I have always chosen higher values, which looked more reasonable to me.
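To make the trade-off concrete, here is a toy sketch in Python (this is *not* Argyll's rspl code; the quadratic "device response", noise level, and moving-average smoother are all made up for illustration). A model that reproduces every noisy measurement exactly reports zero fit error, while a smoothed model reports a higher fit error but is actually closer to the true response:

```python
import math
import random

def rms(values):
    return math.sqrt(sum(v * v for v in values) / len(values))

random.seed(1)
n, sigma = 400, 0.05
x = [i / (n - 1) for i in range(n)]
true = [xi * xi for xi in x]                       # a smooth "device response"
meas = [t + random.gauss(0, sigma) for t in true]  # noisy measurements

# Model A: interpolate every measured point exactly -> reported error is zero.
interp = meas[:]

# Model B: a moving average, a crude stand-in for the spline's smoothness
# penalty (a wider window plays the role of a higher avgdev).
w = 10
smooth = [sum(meas[max(0, i - w):i + w + 1]) / len(meas[max(0, i - w):i + w + 1])
          for i in range(n)]

fit_err_interp  = rms([a - b for a, b in zip(interp, meas)])  # 0.0 by construction
true_err_interp = rms([a - b for a, b in zip(interp, true)])  # ~ sigma: all noise kept
fit_err_smooth  = rms([a - b for a, b in zip(smooth, meas)])  # non-zero reported error
true_err_smooth = rms([a - b for a, b in zip(smooth, true)])  # but closer to the truth
```

The smoothed model "loses" on the reported fit error yet wins on the error against the true response, which is exactly why the lowest achievable dE is a poor criterion when the data are noisy.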

I'd advise making some adjustments to the spline code before doing any serious
testing. In particular, you should ensure the following:

  in Argyll/rspl/scat.c line 1119, change
    double rwf[4] = { 0.1, 0.1, 0.1, 0.1 };
  to
    double rwf[4] = { 1.0, 1.0, 1.0, 1.0 };

  For the arguments to fit_rspl():

    Make sure that the default smoothing factor is 1.0

    Make sure that the default avgdev is 0.005 (0.5%)

  (I believe the above two are mapped to two sliders in LPROF).

otherwise the smoothing will be too low. (These are the changes
I've made for the V0.54 Argyll release, in light of more recent testing.)
(Note that the results of LPROF using just the Argyll scatter data
spline code will not result in exactly the same profiles as Argyll
produces, since Argyll also generates per device curves, as well as offering
the option of matrix/shaper profiles, etc.).

Right, lprof still uses the old code for the prelinearization tables, which converts device RGB to gamma 3.0 (or is it 2.2?) RGB (using a gamma/gain/offset model) before applying the RGB->Lab CLUT. So the profiles are indeed not the same as Argyll's, but the overall difference is nevertheless not that big; the RSPL seems to compensate for the differences in the prelinearization tables pretty well. There seems to be a small problem with lprof's prelinearization tables, though: in conjunction with some particular measurement data, I think they slightly clip dark colors (which cannot be brought back by the CLUT). I'll try to investigate when I find a free minute ...


In order to test and compare profilers, I would strongly recommend generating test data that covers most extreme cases. Let me make a suggestion: I am willing to produce five individually measured 35mm slides. I would suggest the new Velvia films with their extreme color gamut. One slide is a standard IT8 target, and the other four slides are >1000 test patches spread all over the RGB space, also covering tricky areas (highly saturated colors, colors near Dmin/Dmax, ...).

A mechanism I used to test profiling in Argyll was to generate device data with a
relatively large number of test patches (say 6000), and then use splitcgats
<http://www.argyllcms.com/doc/splitcgats.html> to split the test set into
two parts, one part for generating a profile, and the other set for verifying
the profile against (http://www.argyllcms.com/doc/profcheck.html). Various sized splits
were used to test behaviour with different chart sizes. This is very
similar to the type of testing used for genetic algorithms.

This is useful, but not perfect, as it led me to end up with smoothing
factors that were somewhat too low (hence the subsequent adjustments above!).
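The split-and-verify mechanism can be sketched in a few lines of Python (the integer "patches" and the 4000/2000 split are invented for illustration; splitcgats does this on real CGATS measurement files):

```python
import random

def split_patches(patches, n_fit, seed=0):
    """Randomly split a patch list into a fitting set and a verification set,
    analogous to what splitcgats does with a CGATS measurement file."""
    rnd = random.Random(seed)
    shuffled = list(patches)
    rnd.shuffle(shuffled)
    return shuffled[:n_fit], shuffled[n_fit:]

patches = list(range(6000))             # stand-ins for 6000 measured patches
fit_set, verify_set = split_patches(patches, 4000)
# fit_set feeds the profiler; verify_set is only ever used to check the
# resulting profile (as profcheck does), so the verification error is an
# honest estimate rather than a measure of how well the fit memorized noise.
```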

I have used this method here for testing a number of issues. If the approximation is smooth, has good fault tolerance, and the 1000 patches show low error values... then I guess one can assume the profiler works very well... but this is not easy to achieve with slide films ;-)

A wrong patch when generating a LUT-based profile will almost always cause
noticeable disturbances, because the nature of the LUT is to try to fit
each patch. A matrix/shaper type profile will be much more resistant
to such an error. Bumping up the smoothing factors used in the scattered
data fit will also improve robustness against such an error, but at the
expense of color accuracy.
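A one-dimensional caricature of that difference (Python, with made-up numbers): a table that reproduces every patch swallows a bad patch whole, while a low-parameter fit, standing in for a matrix/shaper model, dilutes its influence:

```python
def linear_fit(xs, ys):
    """Least-squares line y = a*x + b: a low-parameter stand-in for a
    matrix/shaper model."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

xs = [i / 20 for i in range(21)]
ys = [2 * x for x in xs]          # true response is y = 2x
ys[10] = 5.0                      # one badly measured patch (true value is 1.0)

lut = dict(zip(xs, ys))           # "LUT": reproduces every patch, bad one included
line = linear_fit(xs, ys)

err_lut_at_bad  = abs(lut[xs[10]] - 2 * xs[10])   # the full error is baked in
err_line_at_bad = abs(line(xs[10]) - 2 * xs[10])  # outlier's pull spread over all patches
```

The LUT carries the outlier's full 4.0 error into the profile, while the fitted line is pulled off by only a fraction of that, which is the resistance being described above.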

A profile fit report is usually the best way of picking up such a problem.

The best thing of course, is to not have such a wrong patch in the test set!

Dealing with outliers is indeed difficult with non-parametric regression. Outliers originating from e.g. scratches or dust particles can be removed by computing a "robust mean" of the captured pixels instead of a simple average. Argyll's scanin tool already does that. Lprof's color picker currently does not use a robust mean, but it should not be too hard to implement it.
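A minimal sketch of such a robust mean in Python (this uses a median/MAD rejection rule, a common scheme, but not necessarily the exact one scanin implements; the pixel values are invented):

```python
def robust_mean(values, k=2.5):
    """Mean after rejecting values more than k scaled MADs from the median.
    1.4826 scales the MAD to a standard deviation for Gaussian data."""
    s = sorted(values)
    n = len(s)
    med = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    mad = sorted(abs(v - med) for v in values)[n // 2]
    if mad == 0:
        return med
    kept = [v for v in values if abs(v - med) <= k * 1.4826 * mad]
    return sum(kept) / len(kept)

# Patch sample: mostly ~100, plus three bright "dust" pixels.
pixels = [100.0] * 50 + [101.0] * 50 + [255.0] * 3
plain  = sum(pixels) / len(pixels)   # dragged upward by the dust
robust = robust_mean(pixels)         # dust pixels rejected
```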

The smoothing nature of the spline is designed to cope with
random measurement error, and the avgdev parameter was intended as
a means to adjust the smoothness to match the level of reproduction
and measurement error (if it is known, or can be estimated.)

This struck me as a very good idea, and I would like to pursue it. Is anyone here interested in assisting with this by providing high quality scans from the custom slides that Wolf is willing to produce for this effort?

I don't have a specialized slide scanner, but I do have an Epson 4990,
if that is of any interest.
I have an Epson 3170, which I expect to be even noisier than the 4990. Basically I'm interested too, but I have doubts whether my scanner is good enough for Velvia's Dmax (I could only try to average e.g. 16 scans to gain two more bits of SNR).
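The sqrt(N) arithmetic behind that estimate is easy to check with a quick simulation (Python, purely synthetic Gaussian noise; the pixel level and noise sigma are made up):

```python
import math
import random

def stddev(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((v - m) ** 2 for v in xs) / len(xs))

random.seed(7)
true_level, sigma, n_pix, n_scans = 0.20, 0.04, 5000, 16

# One scan per pixel vs. the mean of 16 independent scans per pixel.
single = [true_level + random.gauss(0, sigma) for _ in range(n_pix)]
averaged = [true_level + sum(random.gauss(0, sigma) for _ in range(n_scans)) / n_scans
            for _ in range(n_pix)]

# Averaging 16 independent scans cuts the noise by sqrt(16) = 4x,
# i.e. log2(4) = two extra bits of SNR, assuming the noise is
# uncorrelated between scans (fixed-pattern noise would not average out).
```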

Regards,
Gerhard



_______________________________________________
Lcms-user mailing list
Lcms-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lcms-user
