At 6/10/99 06:40 PM, David Poole wrote:
>
>Here is an alternate semantics for belief and plausibility for belief
>functions that doesn't rely on the probability of provability. Hopefully
>it is understandable by Bayesians. [I have no idea if it is standard or
>not, but I suppose I'll find out soon enough.]
>
>It makes no sense to many of us (maybe just the Bayesians) to be unsure
>about our own beliefs. It does make sense to be unsure about someone
>else's beliefs. I will cast the semantics in terms of multiple agents.
>As Joe Halpern keeps reminding us, for multi-agent problems we have to
>be careful about the protocols of the various agents. I will be explicit
>about protocols to get the definitions of belief and plausibility.
>
>Let's consider an agent A that gets to observe Q and decides whether to
>set R or not (in an influence diagram, think of R as a decision node
>with Q as a parent).
>
>There are four possible policies or strategies for agent A
>s1: Q --> R,  ~Q --> R
>s2: Q --> R,  ~Q --> ~R
>s3: Q --> ~R, ~Q --> R
>s4: Q --> ~R, ~Q --> ~R
>A is going to choose a mixed strategy with Pr(s1)+Pr(s2) = 0.9; this
>will correspond to P(R|Q)=0.9. When Q is true, the agent will set R
>true 0.9 of the time.
>Similarly Pr(s1)+Pr(s3)=0.8 (which corresponds to P(R|~Q)=0.8).
>Let's assume (for some reason, unknown to me) that agent A chooses the
>components of the strategies independently, so that Pr(s1)=0.9*0.8=0.72,
>Pr(s2)=0.9*0.2=0.18, Pr(s3)=0.1*0.8=0.08, and Pr(s4)=0.1*0.2=0.02.
>
>Obviously (as has been pointed out by various writers) no matter whether
>Q is true or false, R is true at least 0.8 of the time.
So far, everything sounds fine.
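The numbers are easy to check mechanically, too. Here is a quick Python
sketch (my own illustration; nothing like it appears in David's message)
that reproduces the mixed-strategy probabilities under the independence
assumption:

  # A's four pure policies, written as (R when Q, R when ~Q).
  p_R_given_Q, p_R_given_notQ = 0.9, 0.8

  # Independence assumption: A picks each component separately.
  p = {
      's1': p_R_given_Q       * p_R_given_notQ,        # (R,  R)  = 0.72
      's2': p_R_given_Q       * (1 - p_R_given_notQ),  # (R,  ~R) = 0.18
      's3': (1 - p_R_given_Q) * p_R_given_notQ,        # (~R, R)  = 0.08
      's4': (1 - p_R_given_Q) * (1 - p_R_given_notQ),  # (~R, ~R) = 0.02
  }
  assert abs(p['s1'] + p['s2'] - 0.9) < 1e-12  # P(R|Q)
  assert abs(p['s1'] + p['s3'] - 0.8) < 1e-12  # P(R|~Q)

  # Whichever value Q takes, R holds with probability at least 0.8.
  print(round(min(p['s1'] + p['s2'], p['s1'] + p['s3']), 2))  # 0.8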
>
>Suppose that Q is chosen by another agent B. We have to be careful about
>what information is available when B gets to decide whether Q is true or
>not. Suppose that agent B gets to observe what policy agent A has chosen
>before deciding whether to make Q true.
>
>It turns out that, based on the constraints, R must be true at least
>0.72 of the time (in particular, B chooses ~Q if A chooses s2 and B
>chooses Q if A chooses s3), and R can be true at most 0.98 of the time.
>That is, minimising over all strategies of B, the probability of R must
>be at least 0.72; maximising over all strategies of B, the probability
>of R can be at most 0.98.
Since we've already established that R >= 0.8, this carries no new
information. Likewise, since R <= 0.9, it goes without saying that R
can be at most 0.98.
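Incidentally, both numbers are easy to recover by brute force. A sketch
(again mine, not David's): enumerate B's 16 pure strategies -- a choice
of Q or ~Q for each of A's four policies -- and see how far Pr(R) can be
pushed.

  from itertools import product

  p         = {'s1': 0.72, 's2': 0.18, 's3': 0.08, 's4': 0.02}
  r_if_Q    = {'s1': 1, 's2': 1, 's3': 0, 's4': 0}  # R when B picks Q
  r_if_notQ = {'s1': 1, 's2': 0, 's3': 1, 's4': 0}  # R when B picks ~Q

  probs_R = []
  for choice in product([1, 0], repeat=4):  # 1 = B picks Q in that case
      b = dict(zip(p, choice))
      probs_R.append(sum(p[s] * (r_if_Q[s] if b[s] else r_if_notQ[s])
                         for s in p))
  print(round(min(probs_R), 2), round(max(probs_R), 2))  # 0.72 0.98

Note that this enumeration does not impose the constraint that Pr(R|Q)
and Pr(R|~Q) still come out to 0.9 and 0.8 -- which is exactly where the
trouble lies, as discussed below.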
>
>This seems to explain where some of the "leaked" probability goes: the
>unknown probability can be set to trick you. But some of the
>probability can't be used against you; this is the plausibility.
>
>Note that I did make independence assumptions to get exactly the
>belief and plausibility of D-S. However, without independence
>assumptions, if A and B both choose strategies to minimize R, then R
>will still be true in at least 0.7 of the cases. (The strategy s1 must
>be chosen at least 0.7 of the time.)
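[The parenthetical claim checks out as a small linear program -- my
sketch, and scipy is just one convenient way to solve it: minimize
Pr(s1) subject to the two conditional-probability constraints and
normalization.

  from scipy.optimize import linprog

  # Variables: [Pr(s1), Pr(s2), Pr(s3), Pr(s4)], each in [0, 1].
  res = linprog(
      c=[1, 0, 0, 0],           # objective: minimize Pr(s1)
      A_eq=[[1, 1, 0, 0],       # Pr(s1) + Pr(s2) = 0.9
            [1, 0, 1, 0],       # Pr(s1) + Pr(s3) = 0.8
            [1, 1, 1, 1]],      # probabilities sum to 1
      b_eq=[0.9, 0.8, 1.0],
      bounds=[(0, 1)] * 4,
  )
  print(round(res.fun, 6))  # 0.7: s1 carries at least 0.7 of the mass

The same conclusion follows by hand: Pr(s1) = 0.9 + 0.8 - 1 + Pr(s4) =
0.7 + Pr(s4) >= 0.7.]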
>
>Does this make sense?
Unfortunately, the answer is still no. You specified what B will do in
case A chooses s2 or s3. But these choices in turn constrain what B can
do in cases s1 and s4, which you have not specified. As it turns out,
the choices you specified are inconsistent, because we are still
constrained by Pr(R|Q) = 0.9.

Now, we already have Pr(Q,~R) at least 0.08 (from case s3), so Pr(Q,R)
must be at least 9 times that, or 0.72. Given that B chooses ~Q in case
s2, the only case that can yield (Q,R) is s1, and Pr(s1) is exactly
0.72; so we must conclude that B always chooses Q whenever A chooses s1.

We are also constrained by Pr(R|~Q) = 0.8. From case s2 we have
Pr(~Q,~R) at least 0.18, so Pr(~Q,R) must be at least 4 times that, or
0.72. But the only cases that can yield (~Q,R) are s1 and s3, and B is
committed to Q in both of them -- a contradiction. There is no strategy
for B that is consistent with the statement of the problem, except
those that abandon the actions stipulated for cases s2 and s3. The only
consistent strategies for B are the ones that yield Pr(R) in the range
[0.8, 0.9].
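To make the inconsistency concrete, one more sketch (mine, for
illustration): fix B's stipulated actions (~Q in case s2, Q in case s3),
try all four pure completions for cases s1 and s4, and check the two
conditionals the problem statement requires.

  from itertools import product

  p        = {'s1': 0.72, 's2': 0.18, 's3': 0.08, 's4': 0.02}
  r_if_Q   = {'s1': 1, 's2': 1, 's3': 0, 's4': 0}
  r_if_not = {'s1': 1, 's2': 0, 's3': 1, 's4': 0}

  for q1, q4 in product([1, 0], repeat=2):  # 1 = B picks Q
      b = {'s1': q1, 's2': 0, 's3': 1, 's4': q4}
      pQ    = sum(p[s] * b[s] for s in p)
      pQ_R  = sum(p[s] * b[s] * r_if_Q[s] for s in p)
      pnQ   = 1 - pQ
      pnQ_R = sum(p[s] * (1 - b[s]) * r_if_not[s] for s in p)
      print(q1, q4, round(pQ_R / pQ, 3), round(pnQ_R / pnQ, 3))
  # No line prints the required pair (0.9, 0.8), so no completion of
  # B's strategy is consistent with both conditionals.

Letting B randomize in cases s1 and s4 doesn't help either: forcing
Pr(R|Q) = 0.9 pins B to Q in case s1 and ~Q in case s4, which leaves
Pr(~Q,R) = 0.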
>
>David
>
>p.s. I do like these discussions. I always learn a lot!
>
Me too. Your formulation presented a very convincing-looking but paradoxical
situation, and it took some thinking before I realized exactly where the
"swindle" was.