I won't pretend I really know what I'm talking about (I'm just
guessing here), but don't you think the requirement for "independent
and identically-distributed random variable data" in Shannon's source
coding theorem may not apply to the pictures, sounds, or frame
sequences normally handled by compression algorithms? I mean, many
compression techniques rely on domain knowledge about the things to be
compressed. For instance, a complex picture or video sequence may
consist of a well-known background with a few characters from a
well-known inventory in well-known positions. If you know those facts,
you can increase the compression dramatically. A practical example may
be Xtranormal stories, where you get a cute 3-D animated dialogue from
a small script.
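
To make the i.i.d. caveat concrete, here's a quick Python sketch (the
data and names are my own invention for illustration): two byte strings
with the exact same byte histogram, so the order-0 Shannon bound is an
identical 8 bits/byte for both, yet a real compressor does far better on
the structured one, because the per-symbol bound only binds when the
symbols really are i.i.d.

```python
import random
import zlib
from collections import Counter
from math import log2

def order0_entropy(data: bytes) -> float:
    """Per-symbol (order-0) entropy in bits/byte: the source coding
    bound that would apply IF the bytes were i.i.d. draws."""
    n = len(data)
    return -sum(c / n * log2(c / n) for c in Counter(data).values())

# Two sources with identical marginal distributions: one with strong
# sequential structure, one shuffled into something close to i.i.d.
structured = bytes(range(256)) * 256
shuffled = bytearray(structured)
random.seed(0)
random.shuffle(shuffled)
shuffled = bytes(shuffled)

for name, data in (("structured", structured), ("shuffled", shuffled)):
    # Both print an entropy of exactly 8.00 bits/byte, but zlib squeezes
    # the structured stream to a tiny fraction while the shuffled one
    # stays essentially incompressible.
    print(f"{name}: order-0 entropy {order0_entropy(data):.2f} bits/byte, "
          f"zlib ratio {len(zlib.compress(data, 9)) / len(data):.3f}")
```

That "domain knowledge" about the structure (here, just repetition) is
exactly what lets the compressor beat the naive per-symbol bound.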

Best,

-Martin

On Sun, Mar 11, 2012 at 7:53 PM, BGB <cr88...@gmail.com> wrote:
> On 3/11/2012 5:28 AM, Jakub Piotr Cłapa wrote:
>>
>> On 28.02.12 06:42, BGB wrote:
>>>
>>> but, anyways, here is a link to another article:
>>> http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem
>>
>>
>> Shannon's theory applies to lossless transmission. I doubt anybody here
>> wants to reproduce everything down to the timings and bugs of the original
>> software. Information theory is not thermodynamics.
>>
>
> Shannon's framework also applies, in part, to lossy transmission:
> rate-distortion theory sets a lower bound on the size of the data when
> it is expressed with a given degree of loss.
>
> this is why, for example, with JPEGs or MP3s, a smaller size tends to
> mean reduced quality: the higher quality can't be expressed in the
> smaller size.
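
BGB's point is rate-distortion theory in miniature. A stdlib-only
Python sketch (toy sine-wave "signal" and uniform quantizer, both
stand-ins of my own choosing, not anything from JPEG or MP3) shows the
tradeoff directly: a coarser quantization step costs fewer bits per
sample but incurs more error, and vice versa.

```python
from collections import Counter
from math import log2, sin

# Toy "signal": samples of a smooth waveform (a stand-in for audio or
# image data).
signal = [sin(i / 10.0) for i in range(2000)]

def rate_and_distortion(step: float) -> tuple[float, float]:
    """Uniformly quantize with the given step size; return the entropy
    of the quantized symbols (bits/sample) and the mean squared error."""
    symbols = [round(x / step) for x in signal]
    n = len(symbols)
    rate = -sum(c / n * log2(c / n) for c in Counter(symbols).values())
    mse = sum((x - s * step) ** 2 for x, s in zip(signal, symbols)) / n
    return rate, mse

for step in (0.01, 0.05, 0.2, 0.5):
    rate, mse = rate_and_distortion(step)
    # Larger steps give a lower rate but a higher MSE.
    print(f"step {step}: {rate:.2f} bits/sample, MSE {mse:.6f}")
```

You can trade rate for distortion along this curve, but the
rate-distortion bound says no encoder can beat both at once.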
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
