Warning: There'll be computer speak in this reply. ;-)
On Aug 21, 2005, at 1:03 PM, Shel Belinkoff wrote:
> The D and the Ds have 12-bit sensors and, through the magic of some
> algorithms or whatnot, by the time the images are converted to a
> RAW file, they are considered 16-bit files. Some cameras have
> 14-bit sensors.
A 12-bit sensor reports the intensity of light falling on each
photosite as a number which is bounded by the range from zero to
2^12 - 1, or 4095. That means it quantizes the light into 4096
discrete steps. A 14-bit sensor would quantize intensities
similarly, but into 16,384 steps. The conclusion is that greater
bit depth nets you finer potential tonal resolution.
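If you want to see the arithmetic, here's a little Python sketch
(my own illustration, nothing from any camera's firmware) that just
prints the step counts:

    # Back-of-the-envelope: discrete intensity levels per bit depth
    for bits in (8, 12, 14, 16):
        print(f"{bits}-bit sensor: {2 ** bits} discrete steps")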
The data captured by the sensor (it's not an image yet) is stored
to a RAW format file "mostly" untouched. The Pentax D/DS bodies do
virtually nothing to the sensor's data other than wrap it into a tag-
structured file format (TIFF), write the camera's metadata to the
file (time, date, camera type, resolution, parameters for in-camera
JPEG rendering, blah blah blah), and also add the thumbnail and
preview JPEG renderings to the file. The D files are larger than
the DS files because the D files have absolutely nothing done to
the sensor data, whereas I think the DS strips the extra zero bits
from every photosite's output (those four unused bits are the
result of storing 2 bytes of data instead of 1.5 bytes of data for
every photosite). John
Francis will likely point out what it actually does if I've gotten
that incorrect, but the essence is that very very little has been
done with the data from sensor to RAW format file in the Pentax
bodies: it has not been "converted", just written out along with
ancillary data. (Canon and Nikon DSLR bodies evidently do more
processing on the RAW data, including, I've heard, some sharpening
and lossless compression.)
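For the curious, here's a rough Python illustration of that
bit-packing idea ... my own sketch, not Pentax's actual file
layout. It stores each pair of 12-bit photosite values in 3 bytes
instead of the naive 4:

    def pack12(samples):
        # Two 12-bit values fit in 3 bytes (1.5 bytes each) rather
        # than the 2 bytes apiece an unpacked layout would use.
        out = bytearray()
        for i in range(0, len(samples), 2):
            a = samples[i]
            b = samples[i + 1] if i + 1 < len(samples) else 0
            out.append(a >> 4)                        # top 8 bits of a
            out.append(((a & 0x0F) << 4) | (b >> 8))  # low 4 of a, top 4 of b
            out.append(b & 0xFF)                      # low 8 bits of b
        return bytes(out)

    print(len(pack12([4095, 0, 2048, 1023])))  # 6 bytes, vs. 8 unpacked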
It's only when RAW conversion is performed and the data is written
out to an RGB rendered file format (TIFF or .PSD) that the data has
been transformed to a "16bit" representation. This conversion is
somewhat more complicated to describe, but essentially the sensor
is just a photon counter with a linear response ... your eye
perceives light nonlinearly, expanding and compressing tones
adaptively based on illumination level and intent ... so the RAW
conversion process applies a gamma transform to shape the sensor
data in a similar way. It
also uses the Bayer matrix of RGB values that the data was collected
with to interpolate an approximate color value, in RGB primary
colors, for each picture element (pixel). Each color is considered a
'channel', thus we have the notion of "16 bits per channel" value for
every pixel. So each pixel is actually represented as three 16bit
numbers or 48 bits of data.
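To make the tonal part of that concrete, here's a toy Python
version of just the gamma step ... the 2.2 exponent and the
rounding are illustrative assumptions of mine, not any particular
converter's recipe, and the Bayer interpolation is ignored
entirely:

    def linear12_to_gamma16(v, gamma=2.2):
        linear = v / 4095.0                # normalize the linear count
        encoded = linear ** (1.0 / gamma)  # simple power-law gamma curve
        return round(encoded * 65535)      # quantize into 16bit space

    for v in (0, 64, 1024, 4095):
        print(v, "->", linear12_to_gamma16(v))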
Depending upon the RAW converter and editing software, the 16bits
per channel representation renders the 4096 or 16,384 steps from
12- and 14-bit sensors into the larger, 16bit quantization space:
numbers from 0 to 65,535 (or 0 to 32,767, if the particular
converter is designed to use only the positive signed numbers ...).
The larger data space contains all possible values of the two
smaller data spaces, and interpolation accuracy, even using just
the positive signed values, is very very close to perfect.
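One plausible way to do that expansion, sketched in Python (real
converters each have their own method):

    def expand_12_to_16(v):
        return v * 65535 // 4095  # 0 stays 0, 4095 maps to 65,535

    print(expand_12_to_16(0), expand_12_to_16(2048), expand_12_to_16(4095))
    # prints: 0 32775 65535 -- every 12-bit step gets a distinct value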
> What kind of improvement might one see when using a camera with a
> 14-bit sensor compared to one with a 12-bit sensor, all else being
> equal? I have heard that dynamic range is improved, i.e., more
> shadow detail is available and highlights don't fry as easily with
> more bits in the sensor. Of course, all else isn't usually equal,
> so what other factors play a significant role in determining image
> quality, apart from lenses?
Total dynamic range is dependent upon the analog capability of the
sensor to record light from minimum activation to total saturation as
well as the ability of the digital system to represent those
intensity values accurately. 14bits might not buy any more dynamic
range, but it should allow more accurate modeling of tonal values.
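A quick way to see the tonal-accuracy point: linear sensor data
spends half of its code values on the brightest stop, and each stop
farther down gets half as many, so deep shadows are described by
very few discrete levels. A little Python toy of my own to print
this:

    levels = 2 ** 12  # 4096 code values in 12-bit linear data
    for stop in range(1, 7):
        levels //= 2  # each stop down gets half the code values
        print(f"stop {stop} below saturation: {levels} levels")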
Of course, the practical reason that 14bit sensors might provide more
dynamic range is that all 14bit sensors to date are much more
expensive than 12bit sensors and only available on much more
expensive digital capture devices (like multi-thousand-dollar
scanning backs, etc.) that include substantially better supporting
circuitry, better noise isolation, more accurate reportage of actual
intensity values, etc.
The ultimate question here, from a practical point of view, is
"how many bits of quantization are enough?" The more the merrier,
assuming you can afford it. 14- and 16-bit sensor systems are
wonderful, but for most pictorial photographic work you'd be hard
pressed to see much benefit from the larger data space.
> It's also been stated that some cameras use a "lossy" system when
> converting to RAW output, others a lossless system. Which type
> does Pentax use, and does it really matter anyway?
Canon, Nikon, Konica Minolta, Olympus, etc. all apply some
compression to the sensor data. I tend to presume that they use a
lossless compression algorithm, as it wouldn't make sense to apply
a lossy compression algorithm to sensor data. Pentax uses no
sensor data compression, to the best of my knowledge, just
bit-packing to strip the unneeded zero bits and make the file size
a little smaller.
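"Lossless" just means the decompressed bytes come back
bit-identical, so nothing is sacrificed. A quick Python
demonstration using zlib (which no camera firmware actually uses,
it's just a handy stand-in for any lossless codec):

    import zlib

    raw = bytes(range(256)) * 64           # stand-in for sensor data
    packed = zlib.compress(raw)
    assert zlib.decompress(packed) == raw  # exact, bit-for-bit recovery
    print(len(raw), "->", len(packed), "bytes")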
It doesn't really matter much, anyway, since in practical terms a)
there's nothing you can do with the pre-RAW format data, and b) the
sensor data is going to be put through many many transformations
before it becomes an RGB image. Losses in such transformations
will ultimately outweigh small losses in compression. Of more
concern is whether sharpening is applied before writing to the RAW
format file, as sharpening can discard information that cannot be
retrieved.
Godfrey