> Because at the ADC, optical brightness ratios (as analogue voltages) are
> mapped to bits in a linear fashion.
But that mapping might not be correct if the sensor is insensitive to low
levels of light. That's what I got out of Julian's post. So you map what the
sensor delivers as voltage to brightness, and you still might not get all the
shadow detail that's in the original slide; instead, the rest of the image is
spread out unrealistically. I'm still not convinced that there's a necessary
mapping between actual density and ADC resolution. Also, the eye doesn't
respond linearly to brightness. Where along the path from sensor to scanned
image is the mapping performed that actually corresponds to the psychological
way we perceive brightness levels?
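To make the question concrete, here's a minimal sketch (plain Python, my own
illustration - not what any particular scanner actually runs) of the kind of
power-law (gamma) mapping that is typically applied in software after the ADC,
precisely because the eye doesn't respond linearly:

```python
def gamma_encode(linear, gamma=2.2, bits=8):
    """Map a linear sensor value in [0.0, 1.0] to an integer code
    using a power-law (gamma) transfer curve. With gamma=1.0 this
    is the plain linear mapping the ADC itself performs."""
    return round((linear ** (1.0 / gamma)) * (2 ** bits - 1))

# A shadow detail at 2% of full brightness:
# linear 8-bit coding gives code 5 out of 255,
# while gamma-2.2 coding gives code 43 out of 255,
# so the nonlinear mapping reserves far more codes for shadows.
```

The point is that if the gamma step happens after a linear ADC with too few
bits, the shadow codes it needs may simply not be there to redistribute.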
Frank Paris
[EMAIL PROTECTED]
http://albums.photopoint.com/j/AlbumList?u=62684
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]] On Behalf Of Tony Sleep
> Sent: Wednesday, January 10, 2001 5:36 AM
> To: [EMAIL PROTECTED]
> Subject: Re: So it's the bits? (Was: filmscanners: Sprintscan 120 now
>
>
> On Wed, 10 Jan 2001 19:41:35 +1100 Julian Robinson
> ([EMAIL PROTECTED])
> wrote:
>
> > I am having a fundamental problem comprehending why the number of bits
> > is even vaguely related to any supposed density range. I understand the
> > maths quoted here and in many other posts, but fail to understand why
> > the fact that the ratio of smallest bit size to largest number
> > represented should be related to density range.
>
> Because at the ADC, optical brightness ratios (as analogue voltages) are
> mapped to bits in a linear fashion.
>
> There was a long and involved discussion about this a while back. You
> should be able to locate the thread at the archive at
> http://phi.res.cse.dmu.ac.uk/Filmscan/
> - I think it was entitled 'Bit depth vs. OD'.
>
> Regards
>
> Tony Sleep
> http://www.halftone.co.uk - Online portfolio & exhibit; + film scanner
> info & comparisons
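P.S. Tony's linear-mapping point can be put in numbers: if N bits are assigned
linearly to the brightness ratio, the largest code is 2^N - 1 times the
smallest nonzero code, and optical density is the base-10 log of that ratio.
A small Python sketch (my own illustration, not from any scanner spec):

```python
import math

def max_density(bits):
    """Optical density range a linear N-bit ADC can in principle span:
    the ratio of the largest code to the smallest nonzero code is
    2**bits - 1, and density is the base-10 log of that ratio."""
    return math.log10(2 ** bits - 1)

# e.g. 8 bits -> ~2.4D, 12 bits -> ~3.6D, 14 bits -> ~4.2D
```

That's why quoted Dmax figures track bit depth - though in practice sensor
noise, not the ADC, usually sets the real limit.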