Hello ajb,

Thursday, July 12, 2007, 12:16:22 PM, you wrote:
>> well, ppm differs from simple order-0 coder in that it uses previous
>> symbols to calculate probability. lzw differs from order-0 coder with
>> flat static probabilities in that it encodes a whole word each time.

oh, i just recalled that a flat distribution is conventionally called
an order -1 model. so, starting from the non-compressible order -1
model, lzw and ppm go in opposite directions:

* ppm uses several previous chars to predict the next character
* lz77/78 predicts (within the order -1 model) the next several chars at once

if you're interested, there are many algorithms combining both
approaches: lzrw5/rolz, lzp, lzcb, lzma
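to make the order -1 vs order 0 distinction concrete, here is a
minimal haskell sketch (function names are mine, not from any
library): the order -1 cost is just log2 of the alphabet size, while
an order-0 model pays the shannon entropy of the empirical symbol
frequencies, which is never more:

```haskell
import Data.List (group, sort)

-- bits per symbol under the flat order -1 model:
-- log2 of the number of distinct symbols actually used
orderMinus1 :: String -> Double
orderMinus1 s = logBase 2 (fromIntegral (length (group (sort s))))

-- bits per symbol under an order-0 model:
-- shannon entropy of the empirical symbol frequencies
order0 :: String -> Double
order0 s = negate (sum [ p * logBase 2 p | p <- probs ])
  where
    n     = fromIntegral (length s)
    probs = [ fromIntegral (length g) / n | g <- group (sort s) ]

main :: IO ()
main = do
  let s = "abracadabra"
  putStrLn ("order -1: " ++ show (orderMinus1 s) ++ " bits/char")
  putStrLn ("order 0 : " ++ show (order0 s) ++ " bits/char")
```

for "abracadabra" the order -1 model costs log2 5 ≈ 2.32 bits/char,
while order 0 drops to about 2.04 — and ppm's order-n contexts, or
lz's multi-char phrases, push the cost down further from there.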


-- 
Best regards,
 Bulat                            mailto:[EMAIL PROTECTED]

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe