On Mon, Apr 11, 2011 at 12:36:58PM +0800, Sandy Harris wrote:
> The usual sensor uses basically three types of element -- R, G and B
> -- in a particular layout.
> Why not X Y Z where X = R+G, Y = R+G+B, Z = G+B ?
> 
> You can get RGB from XYZ easily enough:
> 
>   Y-X = (R+G+B) - (R+G) = B
>   Y-Z = (R+G+B) - (G+B) = R
> 
>   X+Z-Y = (R+G) + (G+B) - (R+G+B) = G
> 
> But the total light you are accepting is 2+2+3 = 7 rather than
> 1+1+1=3, so you are getting more photons overall. Isn't that
> beneficial?

Not really.

If all you are interested in is the total number of photons arriving
at the sensor, you end up doing B&W photography.

If you want the chroma components, though, you need to be able to
approximate a measure of the stimuli that will excite the human eye.
And, in particular, you want to be able to measure the individual
R, G & B intensities, because that's how humans see colour.

If you're measuring some linear combination of the RGB components,
you don't know whether a change in the measured value of R+G, say,
is due to a change in G or to a change in R.
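To make that concrete (a toy sketch with ideal, noiseless sums, not a
model of any real sensor): the full X, Y, Z triple from the proposed
scheme does invert algebraically, but any single combined reading is
ambiguous on its own.

```python
# Proposed scheme: X = R+G, Y = R+G+B, Z = G+B.
def xyz_to_rgb(x, y, z):
    """Invert the three combined readings back to the channels."""
    return y - z, x + z - y, y - x   # R, G, B

# With all three sums available, the inversion is exact...
assert xyz_to_rgb(9, 13, 11) == (2, 7, 4)   # from R=2, G=7, B=4

# ...but a single sum is ambiguous: very different (R, G) pairs
# produce the identical X = R+G reading at one sensor site.
assert 2 + 7 == 7 + 2 == 9
```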

The typical Bayer matrix - RGGB - is a good compromise for that
purpose; it provides greater spatial resolution for G, because human
vision is more sensitive at those wavelengths than it is at R or B.
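As a toy illustration (just counting elements, not demosaicing), one
2x2 tile of the RGGB pattern shows the doubled green sampling density:

```python
# One 2x2 tile of the Bayer RGGB pattern; count samples per channel.
tile = [["R", "G"],
        ["G", "B"]]
counts = {}
for row in tile:
    for element in row:
        counts[element] = counts.get(element, 0) + 1
assert counts == {"R": 1, "G": 2, "B": 1}   # G is sampled twice as densely
```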


-- 
PDML Pentax-Discuss Mail List
[email protected]
http://pdml.net/mailman/listinfo/pdml_pdml.net
to UNSUBSCRIBE from the PDML, please visit the link directly above and follow 
the directions.
