A white field read from the Bayer array and converted with no interpolation would look
like: 255,0,0 - 0,255,0 - 0,0,255 - 0,255,0 - and so on.
After interpolation it looks like:
255,255,255 - 255,255,255 - 255,255,255 - and so on.
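
To make that concrete, here is a toy Python/numpy sketch (my illustration, not any
camera's actual pipeline) of a white field seen through an RGGB mosaic, first raw and
then filled back in. The averaging "demosaic" at the end is a shortcut that only makes
sense for a uniform field; real converters interpolate from neighboring sites.

import numpy as np

H, W = 4, 4
white = np.full((H, W, 3), 255, dtype=np.uint8)  # the scene: pure white

# RGGB mask: each sensor site records only one channel.
mask = np.zeros((H, W, 3), dtype=bool)
mask[0::2, 0::2, 0] = True   # R sites
mask[0::2, 1::2, 1] = True   # G sites
mask[1::2, 0::2, 1] = True   # G sites
mask[1::2, 1::2, 2] = True   # B sites

raw = np.where(mask, white, 0)          # no interpolation: one channel per pixel
print(raw[0, 0], raw[0, 1], raw[1, 1])  # [255 0 0] [0 255 0] [0 0 255]

# Fill each missing channel with the average of that channel's known samples.
# For a uniform white field this recovers 255,255,255 at every pixel.
demosaiced = np.empty_like(white)
for c in range(3):
    known = raw[..., c][mask[..., c]]
    demosaiced[..., c] = np.where(mask[..., c], raw[..., c], known.mean())
print(demosaiced[0, 0])  # [255 255 255]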

A white field will look like that, and a black field will look like the converse. Those are the only two points in the tonal range where you don't need the surrounding pixels to determine how an individual pixel's value is rendered. Translating the monochromatic value recorded at each sensor site into luminance and chrominance through the Bayer matrix, transforming to LAB color space, and then discarding the chrominance will provide the spatial-resolution intensity map (luminance) that the chip's pixel sites suggest, provided the algorithms are done properly. Properly done algorithms account for the effect of the color filters on the intensity map.
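
As an illustration of that last step, here is the generic sRGB-to-CIE-L* conversion
that keeps only lightness and throws the chrominance away. It's the textbook formula,
not the *ist D's actual processing:

import numpy as np

def srgb_to_linear(c):
    """Undo the sRGB transfer curve (input in 0..1)."""
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def lightness_L_star(rgb):
    """CIE L* (0..100) of an sRGB image; chrominance is discarded."""
    lin = srgb_to_linear(rgb.astype(np.float64) / 255.0)
    # Relative luminance Y from the Rec.709 primaries (the Y of XYZ, D65 white).
    Y = lin @ np.array([0.2126, 0.7152, 0.0722])
    # L* = 116 f(Y) - 16 with the CIE cube-root compression.
    eps = (6 / 29) ** 3
    f = np.where(Y > eps, np.cbrt(Y), Y / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116.0 * f - 16.0

# White maps to L* = 100, black to L* = 0; everything between is the intensity map.
print(lightness_L_star(np.array([[[255, 255, 255], [0, 0, 0]]])))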

Of course this is not simple to visualize.

But to get back to the actual thing being discussed: all you have to do is look at pictures taken with 35mm film and an *ist D/DS, using the same lens or lenses that provide the same field of view, and you can see the difference in DoF. And it will follow the calculations roughly posed in this thread. So to say that it is "impossible to compare" them, and by implication impossible to calculate them, is hogwash.
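
For anyone who wants the numbers, here is the usual hyperfocal-distance arithmetic in
Python. The circle-of-confusion values (0.030 mm for 35mm film, 0.020 mm for the
*ist D/DS's APS-C frame) are the commonly quoted ones, and the focal lengths are just
one example pairing for roughly the same field of view, so treat the figures as a
back-of-the-envelope comparison rather than measurements:

def dof(focal_mm, f_number, subject_m, coc_mm):
    """Return the (near, far) limits of acceptable sharpness in metres."""
    f = focal_mm
    s = subject_m * 1000.0                   # work in millimetres
    H = f * f / (f_number * coc_mm) + f      # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = float("inf") if s >= H else s * (H - f) / (H - s)
    return near / 1000.0, far / 1000.0

# Same field of view, subject at 3 m, f/2.8:
print(dof(50, 2.8, 3.0, 0.030))  # 50 mm on 35mm film: roughly 2.7 m to 3.3 m
print(dof(33, 2.8, 3.0, 0.020))  # ~33 mm on the *ist D/DS: roughly 2.6 m to 3.5 m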

Godfrey
