On Jun 15, 2006, at 2:37 PM, Kenneth Waller wrote:

> LOST as compared to some non-lossy capture modes. JPEG compresses
> file size by selectively discarding data. The file is compressed
> relative to other possible file formats.

"Data loss" in the context of image processing jargon means a  
combinaton of two things:

- Some of the tonal resolution acquired by the capture device is  
changed in a non-reversible manner by processing operations.

- Some of the spatial resolution acquired by the capture device is  
changed in a non-reversible manner by processing operations.

(Following is a more complete yet still simplistic picture of what's  
going on when you save exposures in a digital camera with regards to  
the data representation ...)

The sensor is a linear gamma device. Each photosite simply counts how
many photons hit it during the exposure time and reports that number.
If there were no RGB filter mosaic in front of the sensor, you would
have an intensity map in [x,y] coordinates with discrete integer
values from 0 to 4095 (4096 levels, for a 12-bit sensor).
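To make that concrete, here's a toy sketch in Python. The dimensions
and counts are made up, and the 0-4095 range assumes a 12-bit
analog-to-digital converter, not any particular camera:

```python
import random

# A toy "raw sensor readout": a 2D map of photon counts, one integer
# per photosite, in linear gamma. A 12-bit ADC is assumed, so counts
# fall in the range 0..4095.
random.seed(42)
WIDTH, HEIGHT = 4, 3
raw = [[random.randint(0, 4095) for _ in range(WIDTH)]
       for _ in range(HEIGHT)]

# Every value is a direct, linear count: doubling the light hitting a
# photosite (short of clipping) would double the stored number.
assert all(0 <= v <= 4095 for row in raw for v in row)
```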

The human eye does not see light in linear terms like this. To
correct the data in that map to correspond to what the human eye
sees, you need to compress the high values together and spread the
low values apart along a curve described by a gamma function.
Applying this "gamma correction" is tonally lossy: it throws away
some of the tonal values in the highs and stretches some of the
values in the lows to achieve its goal. The result, however, looks
"normal" to the human eye. Spatial resolution is not lost, but the
map of intensities is irreversibly altered.

Now consider the RGB mosaic. Chroma interpolation (demosaicing)
looks at how the R, G and B photosites are laid out and interpolates
a value for each pixel position in all three channels from the
relationships of the values in an [NxN] unit cell. Again, the
process is tonally lossy because the values are not exactly what was
captured, and what was captured cannot be retrieved exactly. This
process can be spatially lossy as well, but generally the spatial
losses with current algorithms are very small.
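Here's a minimal sketch of one such interpolation step, assuming a
simple bilinear method over an RGGB mosaic. Real demosaic algorithms
are far more sophisticated, and the counts are made up:

```python
# Raw 3x3 patch of linear counts centred on a red photosite
# (RGGB Bayer mosaic):
#   B G B
#   G R G
#   B G B
patch = [
    [ 812, 1400,  790],
    [1350, 2100, 1420],
    [ 805, 1380,  798],
]

def green_at_center(p):
    # Green was never measured at the red site, so estimate it by
    # averaging the four green neighbours (up, down, left, right).
    # Integer division alone already makes the step non-reversible.
    return (p[0][1] + p[2][1] + p[1][0] + p[1][2]) // 4

g = green_at_center(patch)
assert 0 <= g <= 4095  # an estimate, not a measurement
```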

The result of this process, without downsampling to 8 bits, would be
pixel positions in [x,y] space with three numbers, each 0-4095, as
values. But TIFF files produced in camera are 8 bits per channel, so
the values in 0-4095 space are remapped to best fit the scale from
0-255. This process is again tonally lossy and spatially neutral.
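A quick sketch of that requantization step. The straight linear
mapping here is a simplification (in practice the gamma curve is
applied along the way), but the counting argument is the same:

```python
# Toy 12-bit -> 8-bit requantization: 4096 possible input codes must
# share 256 output codes, so on average 16 distinct input values
# collapse onto each output value.
def to_8bit(v12):
    return v12 * 255 // 4095

# The first 64 distinct 12-bit codes survive as only a few 8-bit codes.
collapsed = {to_8bit(v) for v in range(64)}
assert len(collapsed) < 64
```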

That's what happens (simplistically) going from RAW to TIFF. Going  
from TIFF to JPEG applies the scalable JPEG compression algorithm,  
which transforms and re-encodes the values to reduce data size. JPEG
compression, depending upon the quality setting, the implementation
and the data itself, can be lossy both tonally and spatially,
anywhere from virtually insignificant to quite severe.
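For a feel of where JPEG discards data, here's a toy one-dimensional
version of its core step in Python: transform, quantize the
coefficients, transform back. Real JPEG works on 8x8 blocks with
per-frequency quantization tables; the single step size q here is a
simplification:

```python
import math

N = 8

def dct(samples):
    # Unnormalized DCT-II of 8 samples.
    return [sum(x * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n, x in enumerate(samples))
            for k in range(N)]

def idct(coeffs):
    # Matching inverse (scaled DCT-III).
    return [(2 / N) * (coeffs[0] / 2 +
            sum(c * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for k, c in enumerate(coeffs) if k > 0))
            for n in range(N)]

pixels = [52, 55, 61, 66, 70, 61, 64, 73]
q = 20  # quantization step: larger -> more loss, smaller file

# Quantizing the coefficients is the lossy step; the transform
# itself is reversible.
quantised = [round(c / q) * q for c in dct(pixels)]
restored = [round(v) for v in idct(quantised)]

assert restored != pixels  # data was discarded...
assert max(abs(a - b) for a, b in zip(restored, pixels)) < 10  # ...but it's close
```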

hope that helps... ;-)

Godfrey



-- 
PDML Pentax-Discuss Mail List
[email protected]
http://pdml.net/mailman/listinfo/pdml_pdml.net
