Resend.  The original post is in the archive but otherwise seems to be
taking the scenic route into mailboxes.
----------------------------------------------------------------------------

I wrote a long explanation of the process, and it was a mess so I deleted
it.

The simple explanation is that each pixel location on the sensor only gets
light of the colour that is assigned to it.  It comes off the sensor as
luminance only, but it is not the same as a greyscale image.  Converted to
RGB without interpolation, it would have a luminance value ONLY for its
assigned colour; the remaining colours would have zero luminance at every
pixel.

A white field from the Bayer sensor, converted with no interpolation, would
look like: 255,0,0 - 0,255,0 - 0,0,255 - 0,255,0 - and so on.
After interpolation it looks like:
255,255,255 - 255,255,255 - 255,255,255 - and so on.

The luminance data for the unrepresented colour channels is taken from the
neighbouring pixels that do represent the missing colours.  That data is
spread from its own position to the adjacent pixels.

That's interpolation.
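The process above can be sketched in a few lines of Python/NumPy.  This is a
minimal illustration, not how any real camera does it: I assume an RGGB
Bayer layout and the simplest possible neighbour-averaging (bilinear-style)
fill; actual demosaicing algorithms are far more sophisticated.

```python
import numpy as np

# Toy 4x4 sensor read-out of a uniform white field.
# Each photosite records luminance for ONE assigned colour only
# (RGGB layout assumed for illustration).
H, W = 4, 4
raw = np.full((H, W), 255, dtype=np.uint8)

r_mask = np.zeros((H, W), dtype=bool)
g_mask = np.zeros((H, W), dtype=bool)
b_mask = np.zeros((H, W), dtype=bool)
r_mask[0::2, 0::2] = True                       # R: even rows, even cols
g_mask[0::2, 1::2] = g_mask[1::2, 0::2] = True  # G: the other two diagonals
b_mask[1::2, 1::2] = True                       # B: odd rows, odd cols

# RGB with NO interpolation: luminance only in the assigned channel,
# zero in the other two -- not the same as a greyscale image.
no_interp = np.zeros((H, W, 3), dtype=np.uint8)
for ch, mask in enumerate((r_mask, g_mask, b_mask)):
    no_interp[..., ch][mask] = raw[mask]
print(no_interp[0, 0], no_interp[0, 1])  # [255 0 0] [0 255 0]

# Simple demosaic: each channel value is the average of the known
# samples of that colour in the 3x3 neighbourhood.
def fill(mask):
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            ys = slice(max(y - 1, 0), min(y + 2, H))
            xs = slice(max(x - 1, 0), min(x + 2, W))
            out[y, x] = raw[ys, xs][mask[ys, xs]].mean()
    return out.round().astype(np.uint8)

interp = np.dstack([fill(m) for m in (r_mask, g_mask, b_mask)])
print(interp[0, 0])  # [255 255 255] -- the white field is recovered
```

Before interpolation each pixel is pure red, green, or blue; after the
neighbour averaging, every pixel of the white field comes out 255,255,255.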

regards,
Anthony Farr 

> -----Original Message-----
> From: Godfrey DiGiorgi [mailto:[EMAIL PROTECTED]
> 
(previous message snipped)
>
> Spatial resolution, captured as luminance, is not interpolated. Color 
> value, captured and rendered by the Bayer, in interpolated. DOF has to 
> do with spatial resolution, not color value.
> 
(snip)
> 
> Godfrey
