Yes, of course you don't want to go beyond the sensor's saturation
point in areas where you want detail. But "protecting" an exposure by
underexposing is the wrong methodology. It's difficult to explain
without some graphs to make it more intelligible, which is why I
recommend reading Bruce's first two chapters. He did a great job. But
I'll try.

---
A digital sensor is a photon counter. It simply counts up the amount  
of light falling on a photosite's unit area in the time of exposure  
and reports that number. As such, it is a linear gamma device ...  
unlike the human eye or film, intensity values in a scene are simply  
represented by that linear count of photons falling on the photosite  
array.

The saturation limit happens when the sensor runs out of numbers ...
in the case of the sensor used in the Pentax DSLRs, it can count from
zero to 4095, 12 bits of quantization. How quickly you reach that
limit depends upon how much light energy is falling on the sensor,
how much loss is embedded in the photosite's design, and how long the
photosite is exposed. This all conspires to place saturation on a
hard edge ... the number 4090 is not saturated, the number 4095 is,
because you cannot record the number 4096.
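
As a rough illustration only (a hypothetical sketch, not the actual
sensor electronics), that hard edge looks like this: any count above
the 12-bit maximum simply cannot be represented and collapses to the
same saturated value.

# Hypothetical sketch of 12-bit quantization: counts above 4095 cannot
# be represented, so they all collapse to the same saturated value.
MAX_12BIT = 4095  # 2**12 - 1

def record_photosite(photon_count):
    """Clip a linear photon count to the sensor's 12-bit range."""
    return min(int(photon_count), MAX_12BIT)

print(record_photosite(4090))   # 4090 -- not saturated, detail preserved
print(record_photosite(4096))   # 4095 -- saturated, detail lost
print(record_photosite(20000))  # 4095 -- indistinguishable from the line above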

With that we've established what "saturation" means. The next thing
to understand is how RAW data relates to a tonal scale that makes
sense to our eyes.

One of the basic operations performed by RAW Conversion is to do a  
gamma correction on the captured data ... By this is meant that the  
linear capture of the digital sensor is transformed to re-place the  
light values in the characteristic curve of human vision's  
sensitivity and perception. The human eye perceives fewer differences  
between bright values and greater differences between dark values  
than the sensor by comparison. So the high values recorded by the  
sensor are compressed together ... values that are insignificantly  
different in perception are thrown out ... where the low values are  
expanded ... values that are crowded too close together are  
interpolated/stretched to fit the range required.
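
A minimal sketch of that operation, assuming a bare 2.2 power-law
curve stands in for whatever tone curve a real converter actually
uses (real converters are more elaborate than this):

# Minimal sketch of gamma correction on linear 12-bit data, assuming a
# simple 2.2 power-law curve stands in for a real converter's tone curve.
MAX_12BIT = 4095
MAX_8BIT = 255
GAMMA = 2.2

def gamma_correct(linear_value):
    """Map a linear 12-bit value to an 8-bit gamma-encoded value."""
    normalized = linear_value / MAX_12BIT     # 0.0 .. 1.0, linear
    encoded = normalized ** (1.0 / GAMMA)     # lift shadows, compress highlights
    return round(encoded * MAX_8BIT)

# Dark values get stretched apart, bright values get squeezed together:
for linear in (64, 128, 2048, 4095):
    print(linear, "->", gamma_correct(linear))
# 64 -> 39, 128 -> 53, 2048 -> 186, 4095 -> 255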

Now, if you consider the linear scale of numbers recorded by the
sensor as a binary representation, you'll see that half the total
amount of exposure falling on the sensor is stored in the topmost bit
of representable numbers. Half again is stored in the next bit down.
Half again is stored in the next bit down after that. What this means
is that, in the range of the sensor's linear number scale, the
midpoint of exposure to our eye (call it Zone V) is NOT in the middle
of the scale; it's actually down around the 1/8 point in the linear
scale. So all the tonal values that make up the important range from
Zone II to Zone V are smashed together in the bottom end of the
sensor curve, and most of the data values, which take up more than
3/4 of the scale, are insignificant to human perception. If you
underexpose the scene, more interpolation and expansion of that small
part of the scale must be performed to fit the data to the proper
perceivable range, which produces noise and ambiguity in the Zone II
to V range as a byproduct of round-off error.
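
The arithmetic behind that, spelled out (illustrative only):

# Illustrative arithmetic: in a linear 12-bit capture, each stop down
# from saturation occupies half as many code values as the stop above it.
LEVELS = 4096  # number of representable levels (0..4095)

top = LEVELS
for stop in range(1, 7):
    lower = top // 2
    print(f"Stop {stop} below saturation: values {lower}..{top - 1} "
          f"({top - lower} levels)")
    top = lower

# Stop 1 below saturation: values 2048..4095 (2048 levels)
# Stop 2 below saturation: values 1024..2047 (1024 levels)
# Stop 3 below saturation: values 512..1023 (512 levels)  <- roughly the 1/8 point
# ... and so on down, with fewer and fewer levels per stop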

What this means to a photographer making an exposure evaluation is
that the photographer should consider the linear capture qualities of
the sensor. If you pick the brightest points in a scene, the points
of specular reflection for instance, you want to place your maximum
exposure so that they are just AT the 4095 data value threshold ...
this is hard because you can't see when you've gone over. So you look
at the Zone IX values, the brightest parts of the exposure where you
want to retain detail, and try to keep the captured values of those
points somewhere around 3686 (roughly 210-220 in 8-bit data), and let
the other values fall where they may. If you look at the scene after
capture with the histogram display on the camera, this kind of
exposure will "crowd the right" ... The goal is to capture as much
distinct data as possible without saturating the important detail
areas.
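
As a hypothetical illustration of that placement (the 3686 target and
the sample raw values below are stand-ins of mine, not something the
camera reports directly), the evaluation amounts to:

# Hypothetical illustration of placing the brightest detail areas near,
# but below, saturation. The target value and sample data are stand-ins.
MAX_12BIT = 4095
ZONE_IX_TARGET = 3686  # roughly where detailed highlights should land

def evaluate_exposure(highlight_samples):
    """Report whether the brightest detail areas are placed well."""
    brightest = max(highlight_samples)
    if brightest >= MAX_12BIT:
        return "saturated: important highlight detail is gone"
    if brightest < ZONE_IX_TARGET:
        return "room to spare: consider adding exposure (crowd the right)"
    return "well placed: detail held, maximum data captured"

print(evaluate_exposure([3100, 3400, 3650]))  # room to spare
print(evaluate_exposure([3500, 3686, 3900]))  # well placed
print(evaluate_exposure([3800, 4095, 4095]))  # saturated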

In processing, you place the gamma correction curve to handle a given
scene's exposure by adjusting the white clipping point (exposure),
the brightness and contrast (essentially, these move the nodal point
of the gamma correction and the angular relationship of the resulting
curve inflection), and then the black clipping point (the point at
which you decide that the differences between low values are purely
ambiguous and insignificant). Exposing as much as possible without
saturation means you have more data at the low end to expand through
interpolation, with the least round-off error and noise.
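
A rough sketch of why underexposure costs shadow precision (the band
boundaries here are arbitrary, chosen just to show the effect): shift
the capture down two stops and count how many distinct raw levels are
left to describe the same subject range before it is stretched back up.

# Rough sketch: underexposing by N stops scales every recorded value by
# 2**-N, leaving fewer distinct codes to cover the same subject range.
def levels_in_range(low_fraction, high_fraction, stops_under=0):
    """Count distinct 12-bit codes covering a subject-luminance band."""
    scale = 2 ** -stops_under   # each stop of underexposure halves the counts
    low = int(4095 * low_fraction * scale)
    high = int(4095 * high_fraction * scale)
    return high - low + 1

# Levels available for a shadow-to-midtone band (1/32 to 1/8 of full scale):
print(levels_in_range(1/32, 1/8, stops_under=0))  # 385 distinct codes
print(levels_in_range(1/32, 1/8, stops_under=2))  # 97 codes -- about 4x coarser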

Regarding the Pentax *ist D series cameras currently available, I'll
use the *ist DS as my specific example, but I believe the same is
true for all of them:

The *ist DS is set by default for Auto Picture exposure automation
with JPEG *** fine quality, using a Bright color tone and intended to
produce a snappy, pleasing image for a 4x6 inch print. What this
means is that the in-camera RAW conversion algorithm is tuned to that
output, and the meter is calibrated to produce results compatible
with that algorithm. The *ist DS does NOT change the meter
calibration curve when the user takes control of the rendering engine
and requests a RAW file as output. The difference in output
requirements is critical: the default JPEG *** rendering and Bright
color tone mean that the meter calibration has to be set
optimistically to suppress highlight saturation with the embedded RAW
conversion. Switching to RAW capture while using the meter's default
calibration results in underexposure, because RAW format data has
more stops of overhead before saturation values are reached at the
sensor. With a customized RAW calibration curve, you can obtain
better data with more exposure on the 12-bit capture. So I find that
my average exposure compensation when capturing RAW format runs +0.3
to +0.7 EV, without saturating highlights, and this allows much
cleaner, lower-noise data in the critical Zone II to Zone V range.
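
For the sake of the arithmetic, each EV is a factor of two in light
reaching the sensor, so that compensation works out to:

# Simple arithmetic relating exposure compensation in EV to the change
# in light reaching the sensor: each EV is a factor of two.
for ev in (0.3, 0.5, 0.7):
    print(f"+{ev} EV -> {2 ** ev:.2f}x the exposure")
# +0.3 EV -> 1.23x, +0.5 EV -> 1.41x, +0.7 EV -> 1.62x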
----

I hope that helps. I'm not as good at explaining this stuff as Bruce.

Godfrey


On Jul 4, 2006, at 8:23 AM, Bob Sullivan wrote:

> Godfrey,
> You've got to explain this.
> Digital sensors can't give any detail in overexposed highlights.
> You can recover details in underexposed areas with post processing.
> So don't you want to avoid blown highlights at all costs?
> Regards,  Bob S.
>
> On 7/4/06, Godfrey DiGiorgi <[EMAIL PROTECTED]> wrote:
>> Sensors respond to light differently compared to film. Chapters one
>> and two of Bruce Fraser's "Real World Camera Raw with Photoshop CS2"
>> explain why there is a difference. As a result, exposure evaluation
>> requires a different mindset and different settings. JPEG and slide
>> film, although they are different, generally end up taking about the
>> same exposure.
>>
>> However, underexposing in RAW by 0.3-0.5 EV is exactly the wrong way
>> to go. In general, with the *ist DS, I find my average exposure for
>> RAW capture requires +0.3-0.7 EV additional exposure compared to JPEG
>> or slide film.


-- 
PDML Pentax-Discuss Mail List
[email protected]
http://pdml.net/mailman/listinfo/pdml_pdml.net