On Wed, Feb 07, 2007 at 05:42:49AM -0800, Sandy Harris wrote:
> He starts from information theory and an assumption that
> there needs to be some constant upper bound on the
> receiver's per-symbol processing time. From there, with
> nothing else, he gets to a proof that the optimal frequency
> distribution of symbols is always some member of a
> parameterized set of curves.
Do you remember how he got from the "upper bound on processing time"
to anything other than a completely uniform distribution of symbols?
Seems to me a flat distribution has the minimal upper bound on
information content per symbol for a given amount of information!
-- 
Good code works.  Great code can't fail.  -><-
<URL:http://www.subspacefield.org/~travis/>
For a good time on my UBE blacklist, email [EMAIL PROTECTED]
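[Not part of the original thread: a small Python sketch of the point above,
under the assumption that "information content per symbol" means the
surprisal -log2(p). With a flat distribution over N symbols every symbol
costs exactly log2(N) bits, while any skewed distribution has some rare
symbol whose surprisal exceeds log2(N), so the worst case per symbol grows.
The distributions below are made up for illustration.]

    #!/usr/bin/env python3
    # Sketch: compare the worst-case per-symbol information content
    # (surprisal, -log2 p) of a flat distribution with a skewed one
    # over the same four-symbol alphabet.
    import math

    def max_surprisal(probs):
        """Largest per-symbol information content, in bits."""
        return max(-math.log2(p) for p in probs if p > 0)

    N = 4
    flat = [1.0 / N] * N            # uniform: every symbol is log2(N) = 2 bits
    skewed = [0.7, 0.1, 0.1, 0.1]   # hypothetical skewed distribution

    print("flat   max surprisal:", max_surprisal(flat))    # 2.0 bits
    print("skewed max surprisal:", max_surprisal(skewed))  # ~3.32 bits

If per-symbol processing time scales with per-symbol information content,
the flat distribution keeps that worst case as small as it can be for the
alphabet size, which is the puzzle raised above.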