> 
> >The last several decades of technology improvement say that
> >you're wrong.  We're not at the level of counting individual
> >photons yet, so sensors can get smaller without quality loss.
> >  
> Actually we are.   A cooled CCD often has noise levels in the low single 
> digit photon counts.

That's sensors being used in astrophotography, at an equivalent
ISO rating far higher than the 200 (or even 3200) of today's DSLRs.
We'll get there eventually, but we've got quite a way to go.
 
> Actually sensor site size does correspond to noise floor, and sensors 
> aren't a certain number of bits (at least in any SLR camera sensor or 
> not-toy-P&S sensor).   The quantization takes place off sensor.

No, but there's no point in quantizing to more bits than you have signal.
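A back-of-the-envelope sketch of that point: the number of ADC bits worth having is roughly log2 of the sensor's dynamic range, since any bits beyond that just digitize noise. The full-well and read-noise numbers below are illustrative assumptions, not figures for any particular sensor.

```python
import math

def useful_bits(full_well_electrons, read_noise_electrons):
    # Dynamic range = full-well capacity / noise floor.  Each ADC bit
    # doubles the number of distinguishable levels, so bits beyond
    # log2(dynamic range) only resolve noise.
    dynamic_range = full_well_electrons / read_noise_electrons
    return math.log2(dynamic_range)

# ~40k electron full well with ~10 electrons of read noise
print(useful_bits(40000, 10))  # about 12 bits of real signal
```

With those assumed numbers, a 12-bit ADC already captures everything the sensor can deliver; a 16-bit converter would buy nothing.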

> >The one indisputable argument in favour of larger sensors
> >
> Make that two:  noise.

Unstated in my post quoted above is the assumption that you don't need all
that high a signal-to-noise ratio.  Most people look at digital images
in one of two ways: on a computer monitor or on a paper print.  In each
case a final precision of more than eight bits per component is overkill.
It's nice to work with more bits during intermediate processing steps,
but anything more than twelve bits of precision is of marginal value.
(That's better than negative film stock, but not as good as Velvia).
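The value of extra intermediate precision can be demonstrated with a toy experiment: round-trip a gamma adjustment and its inverse, rounding to the working bit depth after each step the way an image editor would. The gamma value and ramp are illustrative choices, not a claim about any particular workflow.

```python
import numpy as np

# A full 8-bit tonal ramp, as linear values in [0, 1].
x = np.arange(256, dtype=np.float64) / 255.0

def roundtrip(levels):
    # Apply gamma 0.45, quantize to the working precision, invert the
    # gamma, quantize again, then count how many of the original 256
    # output levels survive.
    y = np.round(x ** 0.45 * levels) / levels
    z = np.round(y ** (1 / 0.45) * levels) / levels
    return len(np.unique(np.round(z * 255)))

print(roundtrip(255))    # 8-bit intermediate: levels are lost (banding)
print(roundtrip(4095))   # 12-bit intermediate: all 256 levels survive
```

Rounding to 8 bits between steps merges adjacent tones where the curve is shallow; a 12-bit intermediate leaves enough headroom that the final 8-bit result is unharmed.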
 
> But we're still not any good at making large chips, so big sensors are 
> still gonna be mega-$.

This comes down to the laws of probability.  Flaws are distributed
probabilistically across the area of the wafer.  The chance of getting a
chip without any flaws decreases drastically as the size of the chip
increases.  So not only do you get fewer chips on a wafer to start with;
the chance of any chip being usable is greatly reduced, so the yield of
good circuits per wafer drops off precipitously.  Doubling the area of a
chip can increase the cost by a factor of ten or more.
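Under the standard Poisson defect model (yield ≈ e^(−D·A)), this is easy to sketch. The defect density and wafer size below are illustrative assumptions, not real fab numbers.

```python
import math

WAFER_CM2 = math.pi * 15.0 ** 2   # 300 mm wafer, edge losses ignored

def poisson_yield(defects_per_cm2, die_cm2):
    # Probability that a die lands with zero defects, assuming defects
    # are Poisson-distributed over the wafer.
    return math.exp(-defects_per_cm2 * die_cm2)

def relative_cost(defects_per_cm2, die_cm2):
    # Cost per *good* die is inversely proportional to
    # (dice per wafer) x (fraction of dice that work).
    dice_per_wafer = WAFER_CM2 / die_cm2
    return 1.0 / (dice_per_wafer * poisson_yield(defects_per_cm2, die_cm2))

D = 0.5  # defects per cm^2 -- an illustrative assumption
for area in (2.0, 4.0, 8.0):
    print(f"{area:4.1f} cm^2  yield {poisson_yield(D, area):6.1%}  "
          f"cost {relative_cost(D, area) / relative_cost(D, 2.0):5.1f}x")
```

With these assumed numbers, doubling the die from 4 cm² to 8 cm² multiplies the cost per good die by 2·e² ≈ 15: twice as few candidate dice, and each one e² times less likely to be flawless. That's the "factor of ten or more" in action.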
