Michael P. Gerlek wrote:
Bruce-
It is not clear to me what sort of "study" would be needed to convince you, as the ISO standard for encoding data into the JPEG-2000 file format is, by construction, a mathematically and numerically lossless process. (Indeed, "compression", i.e., throwing away bits so as to further reduce storage requirements, is actually not defined within the scope of the standard.)
Or to put that just a little more precisely:

- lossless compression involves throwing away redundant bits to reduce storage and/or bandwidth requirements

- lossy compression involves throwing away non-redundant bits - accepting some irreversible loss of quality (e.g., reduced resolution, compression artifacts) in order to reduce storage and/or bandwidth requirements
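The distinction is easy to demonstrate in a few lines, using Python's zlib here as a stand-in for any general-purpose lossless codec (the payload below is made up for illustration):

```python
import zlib

# Lossless round trip: the compressed stream is smaller only because
# redundant bits were removed; decompression restores every bit exactly.
data = b"elevation elevation elevation tile row " * 100  # highly redundant
compressed = zlib.compress(data)
restored = zlib.decompress(compressed)

assert restored == data             # bit-for-bit identical to the input
assert len(compressed) < len(data)  # storage/bandwidth reduced
```

A lossy codec, by contrast, would make the second assertion cheaper to satisfy at the cost of the first one failing.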

As someone else noted, you need to vet the math, and its implementation in hardware/software, to determine whether a process is truly lossless.
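To make "vet the math" concrete: JPEG-2000's lossless mode rests on the reversible Le Gall 5/3 lifting transform, where integer arithmetic with floor division guarantees exact inversion. The following is my own one-level, 1-D sketch of the idea (not code from the standard, and boundary handling is simplified to whole-sample symmetric extension):

```python
def mirror(i, n):
    # Whole-sample symmetric extension at the signal boundaries.
    if i < 0:
        return -i
    if i >= n:
        return 2 * (n - 1) - i
    return i

def fwd_53(x):
    """One level of the reversible 5/3 lifting transform (1-D).

    Integer arithmetic with floor division (//) makes every step
    exactly invertible: no rounding error can accumulate.
    """
    n = len(x)
    assert n % 2 == 0 and n >= 4
    m = n // 2
    # Predict step: detail (high-pass) coefficients.
    d = [x[2*i + 1] - (x[2*i] + x[mirror(2*i + 2, n)]) // 2 for i in range(m)]
    # Update step: approximation (low-pass) coefficients.
    s = [x[2*i] + (d[mirror(i - 1, m)] + d[i] + 2) // 4 for i in range(m)]
    return s, d

def inv_53(s, d):
    """Invert fwd_53 by undoing the lifting steps in reverse order."""
    m = len(s)
    x = [0] * (2 * m)
    for i in range(m):                 # undo the update step
        x[2*i] = s[i] - (d[mirror(i - 1, m)] + d[i] + 2) // 4
    for i in range(m):                 # undo the predict step
        x[2*i + 1] = d[i] + (x[2*i] + x[mirror(2*i + 2, 2 * m)]) // 2
    return x

x = [12, -7, 300, 5, 0, 41, -2, 99]
s, d = fwd_53(x)
assert inv_53(s, d) == x   # exact reconstruction, no rounding loss
```

Because each lifting step adds or subtracts the same floored quantity on both sides, the round trip is exact for any integer input - which is the property one would want a study (or a code review) to confirm survives a given hardware/software implementation.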

Also worth noting: error recovery is a different, but related, issue. A lossless compression algorithm may be reversible (decompression returns the original data, exactly), but it may also be brittle. Under practical conditions, one has to look not just at the level of compression achieved, and the costs of that compression (how much quality is lost, if any; computational cycles required), but also at how resistant the coding scheme is to disk and/or transmission errors. Which raises the question: does anybody have any idea how the JPEG-2000 file format, and particularly the wavelet transforms, behave in the face of single- and multi-bit errors? I.e., how much error recovery, if any, is built into the coding scheme?
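A small illustration of the brittleness concern, again with zlib as a stand-in (JPEG-2000 codestreams have their own resynchronization markers, which is exactly the property in question):

```python
import zlib

original = b"geospatial raster tile " * 200
stream = bytearray(zlib.compress(original))

# Simulate a single-bit disk/transmission error in mid-stream.
stream[len(stream) // 2] ^= 0x01

try:
    recovered = zlib.decompress(bytes(stream))
    survived = (recovered == original)
except zlib.error:
    survived = False   # the decoder rejects the entire stream

# One flipped bit in the *raw* data would damage a single byte;
# in the compressed stream it typically costs everything after it.
assert not survived
```

Whether a codec degrades gracefully under this kind of fault, or fails wholesale, is a property of the coding scheme, not of its losslessness.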


--
Miles R. Fidelman, Director of Government Programs
Traverse Technologies 145 Tremont Street, 3rd Floor
Boston, MA  02111
[EMAIL PROTECTED]
617-395-8254
www.traversetechnologies.com

_______________________________________________
Discuss mailing list
Discuss@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/discuss