I think we need to cut off (move to another venue) some of these
tangents:
Kevin S. Van Horn said:
>Ronald Parr recently made some comments about self-reinforcing hypotheses.
>We can prove mathematically that there is no such thing as a
>self-reinforcing hypothesis, if by this we mean a hypothesis whose
>posterior probability can only increase, and never decrease, regardless
>of the evidence. To see this, suppose that finding that D is true
>increases the probability that H is true:
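[Kevin's proof, elided. Presumably it goes through the law of total
probability: since P(H) = P(H|D)P(D) + P(H|~D)P(~D), the prior is a
weighted average of the two posteriors, so if P(H|D) > P(H), then
P(H|~D) < P(H).] A quick numerical check in Python, with made-up
numbers:

    # If learning D raises P(H), learning not-D must lower it.
    p_h, p_d = 0.3, 0.4        # prior on H; probability of D
    p_h_given_d = 0.6          # suppose D raises P(H): 0.6 > 0.3
    # Solve P(H) = P(H|D)P(D) + P(H|~D)P(~D) for P(H|~D):
    p_h_given_not_d = (p_h - p_h_given_d * p_d) / (1 - p_d)
    print(p_h_given_not_d)     # 0.1 -- below the prior of 0.3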
Unfortunately, I think that one can create paradoxes in the style of
Bertrand Russell's. Suppose H is the hypothesis that the posterior
probability of H changes upon conditioning...
I agree with Kevin that if we play nice, then everything is
well-behaved. If we start making nasty, self-referential hypotheses,
however, then things get somewhat muddled. I admit, though, that I
haven't thought too much about these sorts of paradoxes.
[Kevin's argument that the tooth fairy hypothesis, as originally stated,
is not self-reinforcing.]
>questions then increase the probability of T? That depends on what
>other hypotheses are considered. If we factor in the obsession
>hypothesis O (that Joe has a psychological disorder causing him to
>obsess about the tooth fairy), then his questions no longer give much if
If we factor in other hypotheses, as I emphatically believe we should,
then I agree completely. The problem with the tooth fairy example is
the failure to include other hypotheses.
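To make this concrete, here is a quick sketch in Python (all numbers
invented) of how adding the obsession hypothesis O changes what Joe's
questions Q say about the tooth fairy hypothesis T:

    def posterior(priors, likelihoods):
        # Bayes' rule over a finite set of hypotheses.
        z = sum(priors[h] * likelihoods[h] for h in priors)
        return {h: priors[h] * likelihoods[h] / z for h in priors}

    # Without O in the model, Q discriminates T from "neither" (N),
    # so T gets a large boost (prior 0.01 -> posterior ~0.08).
    print(posterior({'T': 0.01, 'N': 0.99},
                    {'T': 0.90, 'N': 0.10}))

    # With O, which explains Q just as well as T does, most of the
    # credit goes to O, and T's boost all but disappears
    # (prior 0.01 -> posterior ~0.02).
    print(posterior({'T': 0.01, 'O': 0.50, 'N': 0.49},
                    {'T': 0.90, 'O': 0.90, 'N': 0.10}))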
Me, then Kevin:
>> However you interpret Bayes rule, you make the assumption that the
>> evidence upon which you are conditioning is germane to the proposition
>> in question.
>
>This is a demonstrably false statement, as I demonstrated in a previous post.
>Bayes' Rule handles cases where the conditioning information is irrelevant
>with just as much ease as it handles cases where the conditioning information
>is relevant. Any assumptions of relevance or germane-ness are contained in
>the prior information X of P(H | D, X) (H being the hypothesis, D the data).
>So rather than attacking Bayes' Rule, you should be looking with a critical
>eye at the prior information X used in any proposed induction scenario.
I have never attacked, and expect I never shall attack, Bayes' rule.
When I say that you cannot condition on the color of my shirt to predict
IBM's closing stock price tomorrow, I am saying that these two things
are independent and that you cannot condition on one with the hope of
gaining information about the other. This is, I think, a standard use
of these terms. Strictly speaking, however, Kevin is correct that one
can go through the motions of conditioning on irrelevant data. So, as I
had hoped would be clear, when I have been saying that one cannot
condition on irrelevant data, I have meant the more colloquial sense in
which one cannot condition with any hope of changing the posterior.
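Here is a toy illustration (invented numbers) of such a vacuous
conditioning: shirt color and the stock's direction are independent by
construction, so the conditioning goes through the motions but simply
returns the prior.

    # Joint distribution over (shirt color, stock direction), built
    # as a product because the two are independent.
    p_shirt = {'red': 0.3, 'blue': 0.7}
    p_stock = {'up': 0.55, 'down': 0.45}
    joint = {(c, s): p_shirt[c] * p_stock[s]
             for c in p_shirt for s in p_stock}

    # P(stock up | shirt is red), by the definition of conditioning:
    p_red = sum(p for (c, _), p in joint.items() if c == 'red')
    p_up_given_red = joint[('red', 'up')] / p_red
    print(p_up_given_red, p_stock['up'])   # both 0.55: nothing gained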
None of this changes any part of the induction argument.
>> The topic of discussion here is scientific induction. Mathematical
>> induction, which is formally grounded, is a separate topic.
>
>The proof technique that is called "mathematical induction" is not what
>I was talking about. The making of mathematical conjectures is most
>certainly *not* formally grounded. It is a highly intuitive process
>based on a mathematician's experience solving mathematical problems and
>his/her knowledge of known theorems. Somehow, some mathematicians can
>look at a problem and guess the answer years, decades, or even centuries
>before anybody can prove that it is the right answer. This is an
>example of induction in precisely the sense we have been talking about
>-- something you can't *prove* with certainty, but have good reason for
>believing in based on some evidence.
I believe that this is a very interesting topic, but I disagree about
its relationship to the topic at hand. A conjecture about mathematics
is a conjecture about an a priori truth, while a conjecture about
science or the validity of induction is a conjecture about a
hypothesis, the truth of which is determined a posteriori.
Getting back to issues more central to the never-ending induction
argument:
Me, then Kevin:
>>Inferences made about the past are not what is typically referred to as
>>scientific induction and they do not have the same difficulty since we
>>typically view the past as static.
>
>You can just as easily view the future as static but unknown.
>Mathematically, there is no difference between making inferences about
>the past and making inferences about the future. Your distinction is an
>artificial one.
There's nothing artificial about requiring that we state any assumptions
we are making about the connection between that which we have seen and
that which we have not seen. There is a difference between assuming
that a data set is related to itself and assuming that a data set is
related to another data set that we have not touched yet.
[Kevin's induction by watching a machine argument.]
>Secondly, there is no circularity because I have no *general* assumption
>that the machine that generates the symbol at the next time step is
>related to the one I have been observing. *One* of the two hypotheses
>makes this assumption; the other does not. I allow for both hypotheses
>and then try to use the data to compare them.
One cannot simultaneously argue that one is making no assumptions about
the relationship between the past and the future, and that one may
condition non-vacuously on past data to make a prediction about the
future.
When I ask you about the next symbol (or about the validity of
induction), you have to make a choice. You can 1) Make an uninformed
guess or 2) Look at your past data. If you look at past data and use it
in some way that tells you more than your uninformed guess, then
something in your probability model tells you that the past and the
future are not independent. In other words, once you do (2), you have
made a commitment to a hypothesis about the regularity of the world.
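As a toy illustration (invented numbers): suppose you have seen eight
1s in ten binary trials and must predict the next symbol.

    ones, n = 8, 10

    # (1) Uninformed guess: past and future independent by
    # assumption, so the data are ignored.
    p_next_independent = 0.5

    # (2) Exchangeable model (Bernoulli trials with a uniform prior
    # on the bias): Laplace's rule of succession. Using the data at
    # all *is* the commitment to a regularity linking past to future.
    p_next_exchangeable = (ones + 1) / (n + 2)   # = 0.75

    print(p_next_independent, p_next_exchangeable)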
--
Ron Parr email: [EMAIL PROTECTED]
--------------------------------------------------------------------------
Home Page: http://robotics.stanford.edu/~parr