On Sep 11, 1:23 am, James Annan <[EMAIL PROTECTED]> wrote:
> Michael Tobis wrote:
> > Yes, that's the one, thanks.
>
> > Since this isn't a public talk I won't identify the frequentist in
> > question, but he was uncomfortable with the very idea of assigning a
> > probability to an event that "either happened or didn't". Something
> > about babies and bathwater comes to mind.
>
> I would be interested to know if he listens to (and acts upon) the
> weather forecast :-) Tomorrow's weather is not a random repeatable
> sample, merely an unknown deterministic event. Of course people
> (including me) do talk about frequentist notions such as reliable
> probabilities ("reliable" meaning that eg an event has historically
> happened on p% of the occasions that it was forecast to happen with p%
> probability), but I would hope that most if not all researchers would
> agree if they thought about it carefully that in fact the probabilities
> can only be Bayesian in nature.

I disagree to some degree.  The weather prediction probabilities can
be (and are) model-based frequentist probabilities.

Now there is a hidden assumption: "The model fits reality".   The
weatherman is basically acting as if he has a 100% degree of belief in
the model.   The degree of belief in the model is perhaps Bayesian in
nature.
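To illustrate what I mean by a model-based frequentist probability, here is a sketch (a toy ensemble, not any real forecasting system): run the deterministic model many times from perturbed initial conditions, and report the fraction of runs in which rain occurs. The humidity numbers and the rain threshold below are made up for illustration.

```python
import random

def ensemble_rain_probability(n_members=1000, seed=42):
    """Toy ensemble forecast: each member is one deterministic model run
    from perturbed initial conditions; the 'frequentist' forecast
    probability is just the fraction of members in which it rains."""
    rng = random.Random(seed)
    rain_count = 0
    for _ in range(n_members):
        # Hypothetical stand-in for a deterministic model run:
        # perturb the initial humidity, then apply a fixed rule.
        humidity = 0.7 + rng.gauss(0, 0.1)
        if humidity > 0.75:  # deterministic rule: rain iff humidity > 0.75
            rain_count += 1
    return rain_count / n_members

print(ensemble_rain_probability())
```

Note that no degree of belief enters the calculation itself; it is pure counting over model runs. Belief only enters in deciding whether the model (and the perturbation distribution) fits reality.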

Sometimes I have heard local weathermen make a prediction different
from the national prediction.  They have some understanding that
makes them doubt the local applicability of the general prediction.
Perhaps that's a lower degree of belief in the model.

I think this is the way it often works.  The weather prediction is
obviously not purely Bayesian.  It's not as if the weatherman (or some
committee) measures their psyche to determine a degree of belief.
They just commit to a model.

If this is not the way it's done, then it should be done this way: use
a mixed Bayesian/frequentist method.  One thing you should demand
is that the link between the model and the probability be cut and dried:
nothing but pure math and (if real or simulated sampling is needed)
sound sampling procedures.   All the fuzzy "degree of belief" stuff
should be confined to the "Does the model fit reality?" issue.
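The cut-and-dried part can also be checked empirically, in the sense James mentions: a forecast system is "reliable" if events forecast with probability p occurred on about p% of those occasions. A sketch of that check, assuming hypothetical paired forecast/outcome records (the sample data is invented):

```python
from collections import defaultdict

def reliability_table(forecasts, outcomes, n_bins=10):
    """Bin forecast probabilities and compare each bin's mean forecast
    probability with the observed frequency of the event in that bin."""
    bins = defaultdict(lambda: [0, 0, 0.0])  # bin -> [count, hits, sum of p]
    for p, hit in zip(forecasts, outcomes):
        b = min(int(p * n_bins), n_bins - 1)
        bins[b][0] += 1
        bins[b][1] += hit
        bins[b][2] += p
    # Each bin maps to (mean forecast probability, observed frequency, count);
    # a reliable system has the first two roughly equal in every bin.
    return {b: (s / n, hits / n, n) for b, (n, hits, s) in sorted(bins.items())}

# Hypothetical records: ten 30% forecasts, of which the event happened 3 times
forecasts = [0.3] * 10
outcomes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
print(reliability_table(forecasts, outcomes))
```

This is pure counting again: whether the forecasts came from a model, an ensemble, or a committee, the reliability check itself involves no degree of belief.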

If the weatherman is allowing fuzziness to infect the model-
probability connection, then he is not being a Bayesian; he is
just making a blunder.   I have no doubt that this happens in various
applications, but it's just a mistake, not a valid use of Bayesian
probability.

>
> > That said, he also described a very long and involved set of
> > calculations that went into the figure, and pointed out that no effort
> > was made to assign confidence bounds to any of it.
>
> > I don't know of any claims about this paper in the press.
>
> http://www.sciencedaily.com/releases/2007/09/070906135629.htm
>
> "the team found a 90 percent probability that the object that formed the
> Chicxulub crater was a refugee from the Baptistina family" is a rather
> typical example. But it was unfair of me to criticise the press as the
> claim appears in the paper itself. Of course my comments don't mean that
> the 90% figure is unreasonable, only that it is not directly supported
> by the research.
>
> James


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
Global Change ("globalchange") newsgroup. Global Change is a public, moderated 
venue for discussion of science, technology, economics and policy dimensions of 
global environmental change. 

Posts will be admitted to the list if and only if any moderator finds the 
submission to be constructive and/or interesting, on topic, and not 
gratuitously rude. 

To post to this group, send email to [email protected]

To unsubscribe from this group, send email to [EMAIL PROTECTED]

For more options, visit this group at 
http://groups.google.com/group/globalchange
-~----------~----~----~----~------~----~------~--~---
