On Wed, Jan 25, 2006 at 10:48:06AM -0600, Gonz wrote:
> John Francis wrote:
> >
> > Anyway, just ignore the big words and consider the
> > example I gave. If a colour patch illuminated by two
> > different lights maps to the same tristimulus value
> > for a given sensor (such as, say, the RAW readings)
> > then there's nothing you can do from then on to find
> > out whether the illuminant was a pure monochromatic
> > source or a broad-spectrum light source, so you can't
> > decide how a different sensor, with rather different
> > sensitivities, would respond to that subject.
>
> One other thing I should mention is a company in Florida some years back
> called Laser Photo or something similar, that would take your K25 slides
> and scan them, make an enlarged internegative on color film using
> tri-color lasers and a mapping that took into account the
> absorptive qualities of all the films and papers involved, and finally
> produce a print that was as close as I have ever seen to real life on
> paper. So there may be cases where some metamerism is involved,
> but an actual application of the simple concept of mapping RGB values
> sure did produce some unbelievable results.
That's the main idea behind colour-managed workflows. And it really makes no difference whether you do it before or after combining RAW RGB sensor values; to a very close approximation the value of a single RAW sensor is the value of the corresponding component of the colour value at that pixel (although sometimes you may have to undo the white balance corrections).

It works very well because most sensor designs choose primary colours that are fairly well separated (one R, one G, one B), and the single largest thing the human eye notices is where the peak colour sensitivity of a sensor lies. So if one 'R' sensor is a little more blue-sensitive than another, the way to mimic the behaviour of that sensor is to add in a little bit of the reading from the 'B' sensor as well.

In fact that's how most colour-space corrections work: to go from one colour space to another, the conversion is a 3x3 matrix giving the weights to be applied to each channel. In particular, that's how the white balance correction in your Pentax DSLR is done. The matrices are close to diagonal, and are non-singular, so the inverse is easy to calculate (and well-conditioned).
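To make the matrix idea concrete, here is a minimal sketch in Python/NumPy. The matrix values below are purely illustrative, not taken from any real camera profile; the only properties they share with real conversion matrices are that they are close to diagonal, non-singular, and have rows summing to 1 so that a neutral grey stays neutral. The white balance step is shown as the diagonal special case, with its inverse undoing it exactly.

```python
import numpy as np

# Hypothetical camera-RGB -> output-RGB matrix (illustrative values only).
# Each output channel is a weighted sum of the input channels; note the
# off-diagonal terms, e.g. the output R "borrows" a little from G and B
# to compensate for the sensor's slightly different spectral peaks.
CAM_TO_OUT = np.array([
    [ 1.80, -0.60, -0.20],   # R out
    [-0.30,  1.50, -0.20],   # G out
    [ 0.05, -0.45,  1.40],   # B out
])

def convert(rgb, matrix=CAM_TO_OUT):
    """Apply a 3x3 colour-space conversion to an RGB triple."""
    return matrix @ np.asarray(rgb, dtype=float)

# Rows sum to 1, so a neutral colour maps to itself:
print(convert([1.0, 1.0, 1.0]))        # -> [1. 1. 1.]

# White balance is the (near-)diagonal special case: per-channel gains.
wb = np.diag([2.0, 1.0, 1.5])

# Non-singular, so the correction can be undone exactly:
raw = np.array([0.2, 0.5, 0.3])
balanced = wb @ raw
recovered = np.linalg.inv(wb) @ balanced   # equals raw again
```

Chaining conversions is just matrix multiplication, which is why it makes no practical difference at which stage of the pipeline the mapping is applied.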

