Yongmian --
The statements in this thread are a little confusing and seem to suggest
there is some mysterious non-monotonic relationship between acquiring
additional evidence, the uncertainty (entropy) of the network's
distribution, and the expected utility of decisions that use the new
information. In fact there is no mystery, and everything is monotonic once
the proper perspective is taken.
Let N be the set of variables represented by the nodes of the network, and
suppose N partitions into sets X and Y, where Y is the set of evidence
variables. We are interested in the entropy of the network's joint
distribution before and after the information about Y is acquired. The
answer is given by the chain rule of entropy [Cover and Thomas, p. 16]:

H(N) = H(X, Y) = H(X | Y) + H(Y) >= H(X | Y) = H(X, Y | Y) = H(N | Y),

since entropy is nonnegative (the last equality holds because H(Y | Y) = 0).
Hence the entropy of the network's joint distribution, averaged over the
possible outcomes of Y, IS monotonically non-increasing as additional
evidence is acquired.
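To make this concrete, here is a minimal numeric sketch in Python (the
numbers and names are my own, chosen to mirror Robert Dodier's example
quoted below: a skewed prior on a binary X and a likelihood pulling the
other way). It checks the chain rule and shows that one particular
posterior can have MORE entropy than the prior, even though the average
posterior entropy H(X | Y) can never exceed H(X):

import numpy as np

# Prior P(X=1) = 0.9; the likelihood P(Y | X) pulls the other way.
# All numbers are made up purely for illustration.
p_x = np.array([0.1, 0.9])                        # P(X)
p_y_given_x = np.array([[0.1, 0.9],               # P(Y | X=0)
                        [0.9, 0.1]])              # P(Y | X=1)
joint = p_x[:, None] * p_y_given_x                # P(X, Y)
p_y = joint.sum(axis=0)                           # P(Y)

def entropy(p):
    """Shannon entropy in bits, skipping zero cells."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

h_prior = entropy(p_x)                                              # H(X)
h_post = np.array([entropy(joint[:, y] / p_y[y]) for y in (0, 1)])  # H(X | Y=y)
h_cond = np.dot(p_y, h_post)                                        # H(X | Y)

print("H(X)       = %.3f bits" % h_prior)    # 0.469
print("H(X | Y=1) = %.3f bits" % h_post[1])  # 1.000 -- one posterior CAN gain entropy
print("H(X | Y)   = %.3f bits" % h_cond)     # 0.258 -- but never on average
assert h_cond <= h_prior
assert abs(entropy(joint.ravel()) - (h_cond + entropy(p_y))) < 1e-12  # chain rule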
In a similar vein, a decision made after acquiring additional evidence has
a higher (or equal) expected utility when viewed from the prior
perspective: if D is the decision made prior to acquiring the evidence and
Dy is the decision made after the evidence is introduced, then
EU(D) <= E{ E[U(Dy) | Y] }. The simple explanation is that the decision
rule D can still be followed after the new evidence is acquired, whereas Dy
is not available a priori.
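Here is the same kind of sketch for the utility statement, again with
made-up numbers: D is the action that is best under the prior, Dy the
action that is best under each posterior, and the prior expectation of
following Dy is never worse than following D.

import numpy as np

# Utility U[a, x] of action a in state x; payoffs are arbitrary.
U = np.array([[10.0, 0.0],     # action 0 pays off when X=0
              [ 0.0, 4.0]])    # action 1 pays off when X=1

p_x = np.array([0.3, 0.7])                       # prior P(X)
p_y_given_x = np.array([[0.8, 0.2],              # P(Y | X=0)
                        [0.2, 0.8]])             # P(Y | X=1)
joint = p_x[:, None] * p_y_given_x               # P(X, Y)
p_y = joint.sum(axis=0)                          # P(Y)

# D: the single best action committed to a priori.
eu_prior = U @ p_x                               # EU of each action under P(X)
D = int(np.argmax(eu_prior))

# Dy: the best action under each posterior P(X | Y=y),
# averaged over Y from the prior perspective.
value_with_evidence = 0.0
for y in (0, 1):
    post = joint[:, y] / p_y[y]
    Dy = int(np.argmax(U @ post))
    value_with_evidence += p_y[y] * (U[Dy] @ post)

print("EU(D)             = %.3f" % eu_prior[D])          # 3.000
print("E{ E[U(Dy) | Y] } = %.3f" % value_with_evidence)  # 4.640
assert eu_prior[D] <= value_with_evidence

The gap between the two numbers is the (nonnegative) expected value of the
information about Y.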
Neither of these statements implies that the entropy of the marginal
distribution of a single node, or the posterior expected utility
E[U(Dy) | Y=y] for a particular outcome y, is monotonic in additional
evidence.
Hope this helps.
Bob Welch
----- Original Message -----
From: "Yongmian Zhang" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, October 03, 2003 7:30 AM
Subject: Re: [UAI] Entropy of a Node in a Bayes Net?
> Dear AI colleagues,
>
> Does anyone know which of Eric Horvitz's papers talks about
> "non-monotonic entropy change", as described by Robert Dodier below.
>
> Thanks for any information.
>
> Yongmian Zhang
>
> On Mon, 29 Sep 2003, Robert Dodier wrote:
> >
> > Just a footnote on this topic -- it's not the case that additional
> > evidence (nodes with assigned values) always decreases the entropy
> > of the joint distribution modeled by the belief network. For example,
> > suppose the prior for a binary node is heavily weighted toward 0 or 1,
> > yet the likelihood function is the other way around; then the posterior
> > will be more or less balanced across 0 and 1, and the posterior will
> > have greater entropy than the prior.
> >
> > There was a discussion of this non-monotonic entropy change on this
> > list a few years ago. Eric Horvitz pointed out that while entropy
> > change is not necessarily monotonic, expected gain of decisions made
> > w.r.t. the joint distribution is not decreasing (if I've remembered
> > this correctly). Eric referred to a paper he wrote in which that was
> > proved. Sorry, I don't have a reference.
> >
> > For what it's worth,
> > Robert Dodier
> >
>