Cory Papenfuss wrote:
Interesting thought. Especially since the gamut of the fit color space is much wider than what is measurable from the target. It's impressive how much larger the space is than the target, so if you've got a "super-accurate" fit to the data, it could well be goofy outside.

Yes, in RGB device space, the target covers only a subset of the RGB cube. Only the colors within this subset can be reasonably characterized using the target as reference; everything outside needs to be extrapolated. Argyll's smoothing splines extrapolate pretty smoothly, so the result should be quite pleasing, but you can't expect a really accurate mapping in the extrapolated areas. And even though the extrapolation eventually maps the complete RGB cube to CIELAB space, there may exist RGB combinations which are never returned by the camera at all - i.e. the camera does not necessarily utilize the complete extrapolated space.
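Just to make the interpolation/extrapolation distinction concrete, here is a quick Python sketch (with made-up patch data, not anything from Argyll) that tests whether a given device RGB value lies inside the convex hull of the target patches, i.e. whether the profile interpolates there or has to extrapolate:

  import numpy as np
  from scipy.spatial import Delaunay

  # Hypothetical device-RGB values of the target patches, scaled to 0..1.
  target_rgb = np.random.rand(288, 3)  # e.g. an IT8-like target

  hull = Delaunay(target_rgb)

  def inside_target_gamut(rgb):
      # True where the profile interpolates, False where it extrapolates.
      return hull.find_simplex(np.atleast_2d(rgb)) >= 0

  print(inside_target_gamut([0.5, 0.5, 0.5]))  # mid grey: almost certainly inside
  print(inside_target_gamut([1.0, 0.0, 1.0]))  # saturated magenta: likely outside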

Just a thought about the noise. I know that the whole Bayer thing throws a wrench in the works, but maybe some median filtering rather than simple averaging?

At least the noise filters used in cameras seem to perform median filtering rather than averaging (or even more sophisticated methods, e.g. http://www.fmrib.ox.ac.uk/~steve/susan/susan/node18.html). Depending on the amount of filtering done, this results in the typical look: homogeneous, clean areas (with a loss of detail) where the level differences are small, but nevertheless sharp edges where the level makes a larger jump.
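To see that edge-preserving behaviour in isolation, here is a small Python sketch (a synthetic 1-D signal, purely for illustration) comparing a median filter with a plain average across one sharp edge:

  import numpy as np
  from scipy.ndimage import median_filter, uniform_filter1d

  rng = np.random.default_rng(0)
  signal = np.where(np.arange(100) < 50, 10.0, 200.0)  # one sharp edge
  noisy = signal + rng.normal(0, 3, size=100)          # small-level noise

  med = median_filter(noisy, size=5)     # flat areas cleaned, edge stays sharp
  avg = uniform_filter1d(noisy, size=5)  # edge gets smeared over ~5 samples

  # Count samples stuck between the two levels, i.e. the width of the edge:
  print("median transition width:", np.sum((med > 30) & (med < 180)))
  print("average transition width:", np.sum((avg > 30) & (avg < 180)))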

On the other hand, the basic task of the Bayer interpolation is not noise filtering, but rather the reconstruction of the original image from subsampled color planes which have only a quarter of the resolution of the original image (or half the resolution for green). According to the sampling theorem, only a blurred, band-limited image could be reconstructed from each subsampled color plane on its own. Since that's not satisfactory, more sophisticated algorithms attempt to use the correlation between the color planes, additional a priori knowledge about typical images, heuristics, maximum likelihood methods, etc. in order to reconstruct the image at the full resolution.
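For comparison, the most naive approach - plain bilinear interpolation of each color plane separately - can be written in a few lines of Python (RGGB pattern assumed, a sketch only, nothing a real camera would ship):

  import numpy as np
  from scipy.ndimage import convolve

  def bilinear_demosaic(raw):
      # raw: 2-D RGGB Bayer mosaic; returns an (H, W, 3) image.
      h, w = raw.shape
      r = np.zeros((h, w)); g = np.zeros((h, w)); b = np.zeros((h, w))
      r[0::2, 0::2] = raw[0::2, 0::2]  # red: a quarter of the samples
      g[0::2, 1::2] = raw[0::2, 1::2]  # green: half of the samples
      g[1::2, 0::2] = raw[1::2, 0::2]
      b[1::2, 1::2] = raw[1::2, 1::2]  # blue: a quarter of the samples
      # Averaging the known neighbours fills in the missing samples:
      k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
      k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
      return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])

This already shows the problem: red and blue are effectively reconstructed at half resolution, which is exactly where the smarter algorithms exploiting the cross-plane correlation come in.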

I suspect that it's mostly evident at the low levels where the SNR is lower. That's where the "hot" pixels matter most.

Basically, the noise floor is always the limiting factor for the dynamic range that can be captured by a camera/scanner. The resolution of the ADC is usually chosen such that one LSB falls below the noise level, so that it is still the noise, and not the ADC resolution, which eventually remains the limiting factor.
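With some made-up numbers (hypothetical full-well capacity and read noise, not from any particular sensor) this is easy to put in figures:

  import math

  full_well = 40000.0  # hypothetical full-well capacity, electrons
  read_noise = 5.0     # hypothetical noise floor, electrons RMS

  dr_stops = math.log2(full_well / read_noise)     # dynamic range in stops
  dr_db = 20 * math.log10(full_well / read_noise)  # ... and in dB
  bits = math.ceil(dr_stops)                       # bits so that 1 LSB < noise

  print(f"DR: {dr_db:.1f} dB = {dr_stops:.1f} stops -> a {bits}-bit ADC suffices")

Here a 13-bit ADC already puts one LSB (40000/8192, about 4.9 e-) just below the 5 e- noise floor; more bits would only digitize the noise more finely.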

Btw, when I talked about errors I basically mentioned noise, but there are other sources of error as well, including systematic errors like uneven lighting or vignetting. The latter apply in particular to camera profiling. Even allegedly good shots of a target often turn out not to be completely homogeneously illuminated - open the image in an image editor and the color picker will show spatial differences of several RGB units across the grey IT8 border (even though the image looks visually homogeneous). I'm currently also investigating a compensation for uneven illumination, using a lighting model based on one or several point sources which are assumed to illuminate the target, and whose individual positions and intensities are estimated by optimization.
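For the curious, the idea looks roughly like this (a Python sketch with hypothetical sample data, not the actual implementation): model the illumination as a point source with unknown position and intensity, fit it to grey-border samples with least squares, and divide it out:

  import numpy as np
  from scipy.optimize import least_squares

  # Hypothetical (x, y) positions on the target and the grey levels read there.
  xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.0]])
  measured = np.array([118.0, 122.0, 115.0, 119.0, 121.0])

  def illumination(params, xy):
      # Inverse-square falloff from a point source at (sx, sy, height sz).
      sx, sy, sz, intensity = params
      d2 = (xy[:, 0] - sx) ** 2 + (xy[:, 1] - sy) ** 2 + sz ** 2
      return intensity / d2

  def residuals(params):
      return illumination(params, xy) - measured

  fit = least_squares(residuals, x0=[0.5, 0.5, 2.0, 500.0])
  gain = illumination(fit.x, xy)
  print(measured / (gain / gain.mean()))  # lighting divided out

With several sources one would simply sum their contributions and fit all positions and intensities at once.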

Regards,
Gerhard


