Sigh.  You really can't see that you're wrong, can you?

Here's an extremely simple example.

On the table in front of you are two yellow objects.
The first sensor sees these two objects as the same colour,
because they both elicit the same response from the red and
green channel.

But the first object appears yellow because it is reflecting
equal quantities of red and green light; the second appears
yellow because it is reflecting actual spectral yellow light.
The two spectra are metamers with respect to that sensor.

Now suppose you want to take the resulting image, and post-
process it to show how it would appear when photographed
with a slightly different film: one whose response matches
the first sensor's at the red and green wavelengths reflected
by the first object, but whose red channel is slightly more
sensitive to the spectral yellow of the second object.
On such film, the two objects would appear different colours;
the second one would be slightly more orange than the first.

There is absolutely no way to reproduce this result as a
post-process step, because all you have is the measurement
made by the first sensor, which produced identical results
for the two objects.
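The argument above can be sketched numerically. All the
sensitivity and spectrum values below are made up purely for
illustration (they are not real sensor or film data); the point
is only that two spectra can produce identical responses on one
pair of channels but different responses on another.

```python
# Model light as intensity at three wavelengths:
# [red, spectral yellow, green].

# Spectra of the two yellow objects:
obj1 = [1.0, 0.0, 1.0]   # equal red + green light, no spectral yellow
obj2 = [0.0, 2.0, 0.0]   # pure spectral yellow light

def respond(sensitivity, spectrum):
    """Channel response = dot product of sensitivity and spectrum."""
    return sum(s * i for s, i in zip(sensitivity, spectrum))

# First sensor: each channel responds fully to its own wavelength
# and half as much to spectral yellow.
sensor = {"R": [1.0, 0.5, 0.0], "G": [0.0, 0.5, 1.0]}

# Hypothetical film: identical to the sensor at red and green, but
# its red channel is slightly more sensitive to spectral yellow.
film = {"R": [1.0, 0.6, 0.0], "G": [0.0, 0.5, 1.0]}

for name, device in [("sensor", sensor), ("film", film)]:
    r1 = {ch: respond(s, obj1) for ch, s in device.items()}
    r2 = {ch: respond(s, obj2) for ch, s in device.items()}
    print(name, "obj1:", r1, "obj2:", r2)
```

Running this, the sensor reports identical R and G values for both
objects, while the film reports a higher R value for the second
object, so the distinction exists only in data the first sensor
never recorded.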



> if the smaller space is within the other two, it won't.
> 
> Herb....
> ----- Original Message ----- 
> From: "John Francis" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Friday, December 26, 2003 1:48 AM
> Subject: Re: What do you think?
> 
> 
> > You've got a multi-dimensional input space; the intensity of illumination
> > at all frequencies (which we can assume bandwidth-limited to remain well
> > within the gamut of the device in question).  The sensor projects this
> > to a smaller-dimensional output space - one (or three) intensity values.
