>The problem is that people are often very reluctant to admit the
>modeling assumptions that they have made. In the tooth fairy example,
>one might be tempted to claim that the original model assumed only that
>the existence of the tooth fairy would predispose people to wonder about the
>tooth fairy (more so than they would otherwise wonder) and then argue,
>upon the arrival of data showing wondering, that the tooth fairy is more
>likely to exist. What's wrong with this? The model does not mention
>other causes of wondering and, thus, implicitly encodes that *any* cause
>of wondering must be the tooth fairy.
>
>We have a situation where the tooth fairy believer is claiming an
>inference from a property of a hypothetical tooth fairy + data to
>an increased posterior on the tooth fairy. However, when we add in
>the implicit modeling assumptions, we see that it is really an inference
>from the assumption that the tooth fairy is the only cause of a
>phenomenon + data to the existence of the tooth fairy. While the first
>version purports to be making a scientific evaluation about the existence
>of the tooth fairy, the second demonstrates that the modeler has simply
>codified his prejudices in the model and is making a rather
>uninteresting claim that is primarily about his own prejudices.
But within the context of Bayes' rule, the modeler must have made an
assessment of P(B | A), where A = "tooth fairy exists" and B = "I am thinking
about the tooth fairy." This is a 2x2 matrix of values. That assessment
requires the modeler to consider not only the probability that he would be
thinking about the tooth fairy when the tooth fairy exists, P(B=T|A=T), but
also the probability of not thinking about the tooth fairy when the tooth
fairy exists, P(B=F|A=T), i.e., why P(B=T|A=T) is less than one.
In addition, the assessment requires the probability of thinking about the
tooth fairy when the tooth fairy doesn't exist, P(B=T|A=F). And that requires
the modeler to consider all the other possible causes. He doesn't have to
make the other causes an explicit part of his model. They are in there
implicitly, and the use of Bayes' rule requires that they be part of the
calculation. The values of the terms in the likelihood matrix P(B | A) are
not implicit assumptions. The modeler is required to make an assessment.
This is not a closed-world modeling environment.
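To make that concrete, here is a minimal sketch (in Python, with made-up
numbers chosen purely for illustration, not anyone's actual assessment)
showing how the posterior on A depends on the value assessed for P(B=T|A=F),
i.e., on how much of the wondering is attributed to causes other than the
tooth fairy:

    # A minimal sketch with hypothetical numbers -- not anyone's actual model.
    # A = "tooth fairy exists", B = "I am thinking about the tooth fairy".

    def posterior_A_given_B(prior_A, p_B_given_A, p_B_given_notA):
        # Bayes' rule: P(A=T | B=T) for binary A and B.
        joint_true = prior_A * p_B_given_A              # P(A=T, B=T)
        joint_false = (1.0 - prior_A) * p_B_given_notA  # P(A=F, B=T)
        return joint_true / (joint_true + joint_false)

    prior_A = 0.01  # a skeptic's prior on the tooth fairy

    # If the modeler honestly credits other causes of wondering, P(B=T|A=F)
    # is large and observing B=T barely moves the posterior.
    print(posterior_A_given_B(prior_A, 0.9, 0.8))    # about 0.011

    # If the model implicitly makes the tooth fairy the only cause, P(B=T|A=F)
    # is tiny and the same observation inflates the posterior.
    print(posterior_A_given_B(prior_A, 0.9, 0.001))  # about 0.90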
Now it may be that in order to make that assessment, he needs to make the
other factors an explicit part of the model, expanding the original Bayesian
network. That is his choice, based on his comfort with the assessments he
has made. But in any case he must give values for all the terms in the
conditional probability table of B.
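As a sketch of what that expansion might look like (again with invented
numbers, and a hypothetical node C standing for "some other cause of
wondering is present"), summing C back out has to reproduce the entries he
would otherwise have put directly into the conditional probability table
of B:

    # A sketch of the expanded network, with invented numbers for illustration.
    # C = "some other cause of wondering is present".

    p_C = 0.8  # assumed P(C=T)

    # P(B=T | A, C), a small noisy-OR-style table.
    p_B_given_A_C = {
        (True,  True):  0.95,
        (True,  False): 0.50,
        (False, True):  0.95,
        (False, False): 0.00,
    }

    def p_B_given_A(a):
        # P(B=T | A=a), obtained by summing out the explicit cause C.
        return (p_C * p_B_given_A_C[(a, True)]
                + (1.0 - p_C) * p_B_given_A_C[(a, False)])

    print(p_B_given_A(True))   # 0.86: wondering when the tooth fairy exists
    print(p_B_given_A(False))  # 0.76: wondering from other causes alone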
It is this rigor of assessment that is often viewed as the "pain" of using
Bayesian methods and Bayesian networks. As a consequence, many of the
competing representations of uncertainty short-circuit this thought process,
to their detriment -- for the very reasons you state.
Bob Welch
p.s. Just for the hell of it, I did a search for the "tooth fairy" on the
internet. Seems like there are a lot of people out there who think about
the tooth fairy. I only hope that they continue to do so for I would be
very embarrassed if the UAI list pops up near the top of such internet
searches. Gonna be real hard for me to sell that Air Force general on
Bayesian networks when that happens.
-----Original Message-----
From: Ronald E. Parr <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: Saturday, July 03, 1999 6:36 AM
Subject: Re: circular reasoning and closed worlds in probability models
>>1. What is wrong with circular reasoning?
>>In a deterministic logic model, where propositions are either true or
>>false, circular reasoning can lead to flip/flops in the truth of a
>>proposition: each time through the cycle, the proposition changes from
>>true to false or false to true. The model is inconsistent and,
>>consequently, anything can be proven.
>
>First off, circular reasoning does not cause some kind of oscillation as
>you have suggested. If I assume A as a premise and conclude A, I have
>used circular reasoning, but I have not caused any of the difficulties
>you have described. [Perhaps you were concerned about maintaining
>consistency when you assume A to prove A when ~A is already known.
>However, assuming A when ~A is known is a false premise, which is a
>different problem.]
>
>The problem with circular reasoning is that it does nothing other than
>restate the assumptions of the reasoner. A purely deductive argument
>is, in some sense, circular since the conclusion is within the deductive
>closure of the reasoner's premises. If you accept the premises,
>then the criticism of circularity is, in some sense, a matter of taste.
>
>I'm quite surprised by the number of emails I have received that seem to
>be interpreting my comments to be suggesting that there is some
>unacceptable circularity in Bayesian reasoning. The difficulty is not
>with Bayesian reasoning. The difficulty is with failure to acknowledge
>the modeling assumptions that one has made.
>
>If you are interested only in defending the validity of the inference
>tools you are using, then this matter may be of little consequence to
>you. However if you are interested in actually applying these tools to
>make claims such as, "The tooth fairy does (not) exist," (and hopefully,
>things much more interesting than this) then things are more
>complicated.
>
>If you and I disagree about the tooth fairy, we would, hopefully,
>examine the modeling assumptions that we have made and if we are unable
>to converge, we might be able to trace the disagreement to certain
>modeling assumptions that neither of us is able to justify and we might
>simply agree to disagree until we can find a better justification for
>our modeling assumptions.
>
>Some people seem to think that this is the *only* way things can turn
>out. The problem is that people are often very reluctant to admit the
>modeling assumptions that they have made. In the tooth fairy example,
>one might be tempted to claim that the original model assumed only that
>the existence of the tooth fairy would predispose people to wonder about the
>tooth fairy (more so than they would otherwise wonder) and then argue,
>upon the arrival of data showing wondering, that the tooth fairy is more
>likely to exist. What's wrong with this? The model does not mention
>other causes of wondering and, thus, implicitly encodes that *any* cause
>of wondering must be the tooth fairy.
>
>We have a situation where the tooth fairy believer is claiming an
>inference from a property of a hypothetical tooth fairy + data to
>an increased posterior on the tooth fairy. However, when we add in
>the implicit modeling assumptions, we see that it is really an inference
>from the assumption that the tooth fairy is the only cause of a
>phenomenon + data to the existence of the tooth fairy. While the first
>version purports to be making a scientific evaluation about the existence
>of the tooth fairy, the second demonstrates that the modeler has simply
>codified his prejudices in the model and is making a rather
>uninteresting claim that is primarily about his own prejudices.
>
>Now, I suppose we can argue about whether this deception on the part of
>the tooth fairy believer is circular reasoning in a strict sense. It
>seems pretty clear to me that the claim, "The thing that I have defined
>to be the only cause of X is more likely to exist because I have
>observed X." is at best a vacuous statement. I prefer to think of this
>as circular since the decision to add a parentless non-evidence node to
>a Bayes net corresponding to the existence of an object is a purely
>ontological decision. However, if some people would prefer to use the
>word vacuous, this is fine with me. In any case, it would seem that the
>tooth fairy believer is either being deceptive about his assumptions, or
>has assumed so much that his conclusions are vacuous and not
>scientifically interesting.
>
>BTW, I'm taking a break from these arguments until Tuesday, so I won't
>have anything further to say until the long weekend is over.
>
>
>--
>Ron Parr email: [EMAIL PROTECTED]
>--------------------------------------------------------------------------
> Home Page: http://robotics.stanford.edu/~parr