On 17 Jul 2005 at 21:52, Anthony Farr wrote:

> I wrote a long explanation of the process, and it was a mess so I deleted
> it.
> 
> The simple explanation is that each pixel location on the sensor only gets
> light of the colour that is assigned to it.  It comes off the sensor as
> luminance only, but it is not the same as a greyscale image.  Converted to
> RGB without interpolation, it would have a luminance value ONLY for its
> assigned colour, the remaining colours would have zero luminance at any
> pixel.
> 
> A white field from the Bayer sensor, converted with no interpolation,
> would look like: 255,0,0 - 0,255,0 - 0,0,255 - 0,255,0 - and so on.
> After interpolation it looks like:
> 255,255,255 - 255,255,255 - 255,255,255 - and so on.
> 
> The luminance data for the unrepresented colour channels is taken from the
> neighbouring pixels that represent the missing colours.  That data has
> spread from its own position to the adjacent pixel.

Thanks for writing this. The often-promoted suggestion that the individual 
colour pixels in a Bayer pattern capture provide broadband luminance 
information is flawed alright: broadband luminance can't be extracted from a 
Bayer capture without interpolation via demosaicing.
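The process Anthony describes can be sketched in a few lines of Python. This is a hypothetical toy, not any camera's actual pipeline: it builds a tiny RGGB mosaic of a white field (each site holds one luminance sample for its assigned colour only) and fills in the two missing channels at each pixel by averaging the nearest neighbours of that colour, which is the simplest form of bilinear demosaicing.

```python
# Toy sketch (assumed RGGB layout, simple neighbour averaging):
# each sensor site records ONE channel; the other two are interpolated
# from adjacent sites that carry the missing colours.

def demosaic_white_field(size=4, value=255):
    # Raw mosaic of a white field: every site reads full luminance
    # behind its own colour filter.
    raw = [[value] * size for _ in range(size)]

    def channel_at(y, x):
        # RGGB repeating pattern.
        if y % 2 == 0:
            return 'R' if x % 2 == 0 else 'G'
        return 'G' if x % 2 == 0 else 'B'

    rgb = []
    for y in range(size):
        row = []
        for x in range(size):
            # Average each channel over the 3x3 neighbourhood:
            # data "spreads" from neighbouring sites into this pixel.
            sums = {'R': [0, 0], 'G': [0, 0], 'B': [0, 0]}
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < size and 0 <= nx < size:
                        c = channel_at(ny, nx)
                        sums[c][0] += raw[ny][nx]
                        sums[c][1] += 1
            row.append(tuple(sums[c][0] // sums[c][1] for c in 'RGB'))
        rgb.append(row)
    return rgb

result = demosaic_white_field()
print(result[1][1])  # -> (255, 255, 255): full white recovered everywhere
```

Before interpolation each site is effectively (255,0,0), (0,255,0) or (0,0,255); after interpolation every pixel comes out (255,255,255), exactly as in the sequence quoted above.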


Rob Studdert
HURSTVILLE AUSTRALIA
Tel +61-2-9554-4110
UTC(GMT)  +10 Hours
[EMAIL PROTECTED]
http://members.ozemail.com.au/~distudio/publications/
Pentax user since 1986, PDMLer since 1998
