> An example: presume we take a simple first-order statistical model. If
> our input is an 8-bit sample value from a noise source, we will build a
> 256-bin histogram. When we see an input value x, we look its probability
> p(x) up in the model, and discard every 1/(p(x)-1/256)'th sample with
> value x. When this happens, the sample is just eaten and nothing appears
> in the output; otherwise we copy.
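
For concreteness, here is a minimal sketch of the standard
rejection-sampling reading of that scheme (Python; the names build_model
and flatten are mine, not anything from the original mail): keep a sample
of value x with probability (1/256)/p(x), which is the same as discarding
the fraction (p(x)-1/256)/p(x) of the samples with value x.

    import random
    from collections import Counter

    def build_model(training):
        """Estimate p(x) for each 8-bit value from a training stream."""
        counts = Counter(training)
        n = len(training)
        return {x: counts.get(x, 0) / n for x in range(256)}

    def flatten(samples, model):
        """Rejection-sample an 8-bit stream toward a uniform output.

        A sample of value x is kept with probability (1/256)/p(x), so
        values that are over-represented relative to uniform are
        discarded just often enough that every value is equally likely
        in what remains.
        """
        target = 1.0 / 256
        out = bytearray()
        for x in samples:
            p = model[x]
            # Values at or below the uniform rate are always kept; for
            # the rest, flip a biased coin.  (A real conditioner would
            # need an independent unbiased source for this coin; the
            # sketch just uses random.random() as a stand-in.)
            if p <= target or random.random() < target / p:
                out.append(x)
        return bytes(out)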
I understand what you're trying to say, but this will not give a
general-purpose function that "doesn't waste entropy" regardless of the
input distribution. It only works when the input stream consists of
independent, memoryless samples from some fixed distribution on 8-bit
values.
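
A toy illustration of that objection (again Python, purely hypothetical):
a source that emits every random byte twice has a statistically flat
256-bin histogram, so a first-order filter of the kind sketched above
passes essentially everything, even though half of its bits are redundant.

    import os
    from collections import Counter

    # A correlated source: genuinely random bytes, each emitted twice.
    raw = os.urandom(1 << 16)
    stream = bytes(b for b in raw for _ in range(2))

    # Its first-order histogram is statistically flat: every p(x) is
    # close to 1/256, so the filter above keeps essentially everything.
    counts = Counter(stream)
    assert max(counts.values()) / len(stream) < 2 / 256

    # Yet adjacent bytes are completely dependent: the second byte of
    # each pair is determined by the first, so the stream holds only
    # half the entropy its length suggests, and no memoryless,
    # per-sample filter can detect that.
    assert all(stream[i] == stream[i + 1] for i in range(0, len(stream), 2))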
