If I know you are against X, where X is not one of the s_i but only some
general description of it, how can you use the formula?

If the "knowledge" in a data compressor is all at the level of letter
strings, how can it use knowledge about the theme of a paper to
compress it better?

Pei

> > For example, "to predict what will happen in the environment" and "to
> > predict the next input from the environment as a TM" are two very
> > different problems, at least to me. For the former, the environment
> > can be described by a hierarchy of concepts with different
> > granularity, while for the latter, the environment is always described
> > at the same level.
> >
> > For some people in this list, I can predict their opinions on certain
> > topics with high accuracy, though I have little idea of what
> > will be the first letter of their next post. If I have to predict
> > that, I'll have to depend on the occurrence distribution of
> > English letters, and my knowledge about their opinions plays no role.
> >
> > Pei
>
> There is no difference.  The chain rule says that P(s) = PROD_i
> P(s_i|s_1..i-1), i.e. any probability distribution over a string s can be
> expressed as a product of conditional predictions of consecutive symbols in s.
>  If you know whether I am for or against X then you have one bit of knowledge.
> A data compressor knowing this can compress a message from me about X one bit
> smaller than a compressor without this knowledge.
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
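
Matt's chain-rule argument can be sketched numerically. This is a minimal
illustration, not anything from the thread itself: the two-letter alphabet,
the toy joint distribution, and the 50/50 prior over "for"/"against" are all
assumed numbers chosen for the example.

```python
import math

# Chain rule: P(s) = PROD_i P(s_i | s_1..i-1).
# Assumed toy joint distribution over length-2 strings on alphabet {a, b}.
joint = {"aa": 0.4, "ab": 0.2, "ba": 0.3, "bb": 0.1}

def p_first(c):
    # Marginal probability of the first symbol.
    return sum(p for s, p in joint.items() if s[0] == c)

def p_second_given_first(c2, c1):
    # Conditional probability of the second symbol given the first.
    return joint[c1 + c2] / p_first(c1)

# The joint probability of "ab" factors into consecutive conditionals,
# so modeling a string and predicting its symbols one by one are equivalent.
assert abs(joint["ab"] - p_first("a") * p_second_given_first("b", "a")) < 1e-12

# One bit of knowledge: an ideal code assigns -log2 P(message) bits.
# Under an assumed 50/50 prior over "for"/"against", a compressor that does
# not know the stance spends 1 bit encoding it; one that knows it spends 0
# bits, since the stance is then certain.
saved = -math.log2(0.5) - (-math.log2(1.0))
print(saved)  # 1.0 bit saved
```

The -log2 P quantity is the ideal (arithmetic-coding) code length, which is
why one bit of prior knowledge translates directly into one bit of
compression.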

-----
This list is sponsored by AGIRI: http://www.agiri.org/email