william.curwen wrote:

>>> but adding more bits doesn't change the threshold of the
>>> device.

>> Correct, but adding more bits does influence the number of subject
>> brightness levels in f-stops we can record.

> Wrong!
Why, then, do manufacturers make 16-bit capture devices?

>> Here's what I mean:
>> Take a fully saturated pixel of any bit depth.
>> Let's say it has a full-well capacity of 90,000 electrons.
>> Now, divide this number by a factor of 2 and keep halving the
>> remainder. Each division represents one bit, or one half the light,
>> until you reach the noise floor. (It's here that good data is polluted
>> by random "noise" from the electronic circuits and/or sources of heat
>> within the device.)

> So if your imaginary chip pixel has a headroom of 180,000 electrons, it
> will give you just one extra stop. Correct?

Yes, that's correct: only one f-stop more would be gained, because light doubles proportionately with each stop of exposure. However, I'm not sure whether there are CCDs that store that many electrons.

>> Surely it's obvious a 16-bit device can record more f-stops of subject
>> information than an 8-bit device?

> No, it will give 1024 levels of grey instead of 256!

Actually, there are 65,536 levels per channel in a 16-bit device. Imagine a piano with that many keys! <g> 1024 levels is 10-bit.

> There are limitations as to how much light a sensor can absorb as a
> meaningful signal,

Yes - the number of electrons it can store, the quality of the A/D converter, its quantum efficiency, and so on...

> very much in the same way a piece of film can absorb light - before
> reaching maximum base density. What I am describing is dependent on the
> signal to noise ratio of a chip - any chip, and is what governs its
> dynamic range.

Yes, I agree. From what I understand, characteristics such as dark current and dark signal influence a device's usable range. The noise floor on a CCD, even if it's actively cooled, is raised by high ambient temperature. I have seen this happen on a 40-degree day in an egg car studio. Fortunately, adding some passive cooling by way of a large pedestal fan pointed at the back solved the problem. Anyone got a long power lead?
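Incidentally, the halving-to-the-noise-floor argument above boils down to a logarithm: dynamic range in f-stops is log2(full-well capacity / noise floor). A minimal Python sketch, using the 90,000-electron pixel from this thread and an assumed noise floor of 20 electrons (my illustrative figure, not anything measured):

```python
import math

def usable_stops(full_well_electrons: float, noise_floor_electrons: float) -> float:
    """Dynamic range in f-stops: how many times the full-well signal
    can be halved before it disappears into the noise floor."""
    return math.log2(full_well_electrons / noise_floor_electrons)

# The 90,000-electron pixel from the thread, with a hypothetical
# noise floor of 20 electrons:
print(round(usable_stops(90_000, 20), 1))   # ~12.1 stops

# Doubling the well to 180,000 electrons buys exactly one more stop:
print(round(usable_stops(180_000, 20) - usable_stops(90_000, 20), 1))  # 1.0
```

Note that doubling the headroom adds exactly one stop regardless of the noise floor, which is the point about the 180,000-electron pixel made above.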
> This is why the Hubble telescope is parked in a geo-stationary orbit in
> the shadow of the Earth, where the ambient temperature is a few degrees
> above absolute zero. As Hubble records extremely low levels of light, it
> needs a low operating temperature to give its sensor the extremely high
> signal to noise ratio it needs to do its work. BTW, most earthlings
> don't know this, but Hubble's sensor is only 2meg in size......the same
> size as my terrestrial Canon PowerShot A40:)

And the active cooling / Peltier-element principle had its roots in astronomy too.

>> This is the point I have been attempting to make!

> I am sorry, but your logic is flawed.

I think the key here is whether you agree with the principle that one f-stop less or more exposure results in halving or doubling the electrons stored. Unless you agree with this, we must agree to disagree.

>> Shooting outdoor scenes in high-contrast Aussie light with a 16-bit
>> device records everything with detail, as long as I expose for the
>> highlights. Actually, I need to add image contrast to make the result
>> look more photographic.

> I think what is confusing you is the linear response to light with
> digital chips.....that is a straight ramp at an angle of 45 degrees.
> This is why digicams can see into shadows with relative ease, using a
> dynamic range of about 12 useable stops of light.

I'm not confused by this at all; it's exactly this linear response, by a factor of 2 or 1/2, that I'm referring to. Once captured, the extent of tonal manipulation possible depends on the number of levels available within each stop. A one f-stop decrease from saturation in a 16-bit device halves the number of levels, from 65,536 to 32,768. In an 8-bit device it's 256 to 128. Further down the 8-bit scale it gets much worse. So if severe tonal manipulations are performed on the 8-bit file, tonal degradation through rounding errors is likely.
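To put numbers on that halving, here's a short Python sketch (my own illustration, assuming a simple linear encoding) counting the code values left after each one-stop decrease from saturation:

```python
def levels_after_stops(bit_depth: int, stops_down: int) -> list[int]:
    """Code values remaining after each one-stop decrease from
    saturation, assuming a linear encoding: every stop halves them."""
    total = 2 ** bit_depth
    return [total // (2 ** s) for s in range(stops_down + 1)]

print(levels_after_stops(16, 3))  # [65536, 32768, 16384, 8192]
print(levels_after_stops(8, 3))   # [256, 128, 64, 32]
```

Three stops down, the 8-bit file is already working with only 32 levels, which is where heavy curves start to show the rounding errors mentioned above.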
This is very unlikely with the generous number of levels available in the 16-bit file.

> When you say that you add contrast to an image in post-production to
> make it look more photographic, it means that you are tailoring the file
> with a curve (toe/slope/shoulder) to make it appear like it did to your
> mind's eye at the time of capture.

Yes, that's the version I liked. Fortunately the data was in range, and I could have saved another version with shadows that were more open. This could be the European, or softer-light, version? <g> It's subjective.

> Hence Chris G's brief but worthy mention of in-camera profiling -
> something you may wish to consider.

Not wanting to open another can of worms, I think I'll leave this one alone.

> I swallowed my commonsense pill this morning, so I hope this helps. <G>

Now where do you get those? Can you send me one too? <g>

> PS: as an aside, I am currently working with Ilford FP4+ processed in
> Rodinal diluted 1:100 as a compensating developer, which puts a very
> round-shouldered curve to highlight detail - without any blowout.

What ISO setting do you use with this combo? Is there any dumping of shadow values with this combination, something common with compensating development?

> As I routinely expose for the shadows while shooting into the light, I
> am getting around about 16 stops of dynamic range!

How far up the scale do you place your darkest shadow with detail?

> My Nikon Coolscan has a D-Max of 3.0, and is more than able to strip out
> anything and everything thrown at it. The result is quite incredible
> luminosity. Of course, this is dependent upon the agitation technique
> used during processing to reduce fogging and therefore give a high
> signal to noise ratio. This is the same as for digital capture, as both
> scanned film and digital chips are dependent on the quality of the
> analog to digital converter being used.
> If I want 16 stops from the PowerShot, I can quite easily shoot for
> shadows/midtones/highlights and composite together afterwards.
> Basically, there is no difference.

Yes, I have used this technique myself. It's a bugger when things move though!

> My own point in this debate is that there will be a day soon when
> processor clock and bus speeds will be fast enough for digital sensors
> to record separate shadow/midtone/highlight exposures sequentially and
> composite on the fly in real time to give whatever dynamic range is
> required.

Now that would be nice! This kind of thing is happening already in colour terms with the Foveon technology. One of the biggest limitations of CCD sensors is the time it takes to read the data off the chip. When I tried the Sigma ? Foveon there was a shooting-rate penalty I thought was caused by this read-out bottleneck (3 channels instead of 1). Adding multiple read-out channels on the sensor, plus buffering to RAM, gives us faster shooting rates.

Energising the CCD three times for separate shadow/mid-tone/highlight exposures reminds me of the challenge Dicomed faced with their proposed colour LCD shutter. I understand they were spinning an RGB LCD shutter at high speed in front of the lens, attempting to sample data during the burst of one flash exposure. I don't think it ever worked, due to the slow readout times.

Please send that commonsense pill urgently! <G>

Best regards,
David Kay

===============================================================
GO TO http://www.prodig.org for ~ GUIDELINES ~ un/SUBSCRIBING ~ ITEMS for SALE
