On Mar 11, 2004, at 5:21 AM, David Kay wrote:


I know the CCD is an analogue device and it's here where voltages are
created to represent the various light levels. However, in a capture device,
the A/D converter is also of primary importance. It must be capable of
converting every voltage at each pixel from analogue to digital data.

Correct, but if the bit depth is low, all that happens is that a range of voltages is represented by one number. In other words, the digital capture can't distinguish between small differences in light that the sensor can distinguish. The low bit depth limits the precision available from the chip, but it does not limit the range. You seem to think that if the bit depth is low, the sensor's response is still read with a precision equal to the sensor's ability to resolve small changes, thus leaving unused dynamic range. While I suppose such a system could be made, I can't imagine why anyone would.
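To make the point concrete, here is a minimal sketch of an idealised A/D converter (the names, the 0..1 V full-scale range, and the test voltages are all my own assumptions, not anything from the hardware being discussed). At low bit depth, two nearby voltages collapse onto the same digital code; at higher bit depth they stay distinct. Either way the converter still spans the full voltage range:

```python
# Hypothetical ideal A/D converter: map an analogue voltage onto
# one of 2**bits digital codes across a fixed full-scale range.
FULL_SCALE = 1.0  # assumed sensor output range, 0..1 V

def quantize(voltage, bits):
    """Return the nearest of 2**bits codes for the given voltage."""
    levels = 2 ** bits
    clamped = min(max(voltage, 0.0), FULL_SCALE)
    return round(clamped / FULL_SCALE * (levels - 1))

# Two voltages closer together than one 8-bit step:
v1, v2 = 0.500, 0.501
print(quantize(v1, 8), quantize(v2, 8))    # same code: the difference is lost
print(quantize(v1, 12), quantize(v2, 12))  # distinct codes at higher depth
```

The 8-bit converter and the 12-bit converter both cover the whole 0..1 V range; only the fineness of the steps changes, which is exactly the precision-versus-range distinction above.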


This is all pretty much a moot point now, as even relatively low-end gear makes use of higher bit depth systems. Image quality improvements in higher-end gear typically come from improved noise characteristics in the circuitry and chip design; they are not gained by simply throwing more bits of processing at the system.

It wasn't all that long ago that memory was dramatically more expensive and high bit depth meant markedly higher-cost systems. It was not all that uncommon ... especially among scanners of maybe eight to ten years ago ... to find 8-bit systems that performed significantly better than lesser systems with high bit depth processing. This was particularly true when prices first really started to fall and high bit depth processors started showing up in lower-end gear while there was still lots of top-notch 8-bit gear in operation.

It was a great marketing device. For a relatively low cost, a scanner manufacturer could throw more bits into the processor, and people would buy exactly the logic that you are proposing: more bits equates to some better performance characteristic. High bit depth is a good thing, but not by itself. It only adds a degree of precision to the digital representation of a captured image. By itself it doesn't change the other characteristics of the system... noise, dynamic range and so forth.
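A quick simulation illustrates why extra bits alone don't buy much once analogue noise dominates (the noise level, full-scale range, and helper names here are my own assumptions for the sketch, not measurements of any real scanner). We add Gaussian noise to a clean signal before quantizing, then compare the overall RMS error at 8 and 16 bits using the same noisy samples:

```python
import random

random.seed(0)
FULL_SCALE = 1.0   # assumed sensor output range, 0..1 V
NOISE_RMS = 0.01   # assumed analogue noise, roughly 2.5 LSB at 8 bits

def quantize(v, bits):
    """Ideal A/D conversion: snap a voltage to the nearest of 2**bits levels."""
    levels = 2 ** bits
    v = min(max(v, 0.0), FULL_SCALE)
    return round(v / FULL_SCALE * (levels - 1)) / (levels - 1) * FULL_SCALE

# One shared set of (clean, noisy) voltage pairs for a fair comparison.
samples = []
for _ in range(10000):
    clean = random.random()
    samples.append((clean, clean + random.gauss(0.0, NOISE_RMS)))

def rms_error(bits):
    """RMS difference between the quantized noisy reading and the clean value."""
    total = sum((quantize(noisy, bits) - clean) ** 2 for clean, noisy in samples)
    return (total / len(samples)) ** 0.5

e8, e16 = rms_error(8), rms_error(16)
print(e8, e16)  # both sit near the analogue noise floor
```

Both figures come out close to the 0.01 V noise floor: once the quantization steps are finer than the analogue noise, the extra eight bits barely move the error. Better noise performance, not more bits, is what improves the system.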

Bob Smith

===============================================================
GO TO http://www.prodig.org for ~ GUIDELINES ~ un/SUBSCRIBING ~ ITEMS for SALE
