On 5/12/2015 12:33 AM, Telmo Menezes wrote:


    I disagree. I think this criticism comes from a misinterpretation of what the p-value means. The p-value estimates the probability of seeing results at least as helpful to the hypothesis as the ones found, assuming the null hypothesis. A high p-value is informative because it tells us that the null is a likely explanation when compared to the hypothesis. A low p-value tells us that the hypothesis merits further investigation.

    First, you've got high and low mixed up.  A low p-value, e.g. 0.05, is considered significant in medical tests, while 1e-6 is considered significant in particle physics.


No, you misread me. Notice that I was arguing that a result in favor of the null (high p-value) is perhaps more informative than a result in favor of the hypothesis (low p-value), because the method is quite vulnerable to false positives -- when the null is true, you can expect false positives at the same rate as the significance threshold you are using. Hence so many "cures for cancer", as you say.
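
That rate is easy to check by simulation: run many experiments in which the null is true by construction and count how often an ordinary test dips below 0.05. (A minimal Python sketch; the two-group setup and the sample size are just illustrative assumptions.)

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    alpha = 0.05
    n_experiments = 10_000
    false_positives = 0
    for _ in range(n_experiments):
        # Both groups are drawn from the same distribution, so the null is true by construction.
        a = rng.normal(loc=0.0, scale=1.0, size=30)
        b = rng.normal(loc=0.0, scale=1.0, size=30)
        _, p = ttest_ind(a, b)
        if p < alpha:
            false_positives += 1

    # Roughly 5% of the true-null experiments come out "significant" at alpha = 0.05.
    print(false_positives / n_experiments)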



    The p-value tells us nothing about the probability of any of the hypotheses being true. It's a filter for noise, given the available data.

    But the trouble is it generates noise.  The high value, 0.05, used in medicine with understandably small sample sizes, is the reason the "New Scientist" can tout a new discovery for curing cancer every 6 months.


Yes.

    And on the other end, when you have really big samples, as in the PEAR experiments, you're virtually certain to reject the null hypothesis at 0.001 simply because you're testing a point hypothesis against an undefined alternative, i.e. "anything else".


Also true.
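
The large-sample point can be made concrete: give a coin a bias far too small to matter for anything, run enough trials, and the point null of exact fairness gets rejected at whatever threshold you like. (A minimal sketch with an invented bias and trial count, using the normal approximation to the binomial.)

    import math
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000_000_000       # an enormous number of trials (illustrative)
    true_rate = 0.5001      # a bias of 1 part in 10,000 -- practically meaningless
    heads = rng.binomial(n, true_rate)

    # Test the point null hypothesis rate = 0.5 (normal approximation, two-sided).
    z = (heads - 0.5 * n) / math.sqrt(0.25 * n)
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))

    # z comes out around 6 and p far below 0.001, even though the deviation from
    # a fair coin is negligible for any practical purpose.
    print(heads, z, p_two_sided)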

        Any useful analysis would have to be Bayesian and start with some prior alternative hypotheses, one of which would be Prob(temperature goes up|lots of CO2 is added to the atmosphere).  That already has a high prior probability based on the analysis of Svante Arrhenius in the 1890s.


    If you did Bayesian analysis in this fashion, you would be assuming at the start what you want to test for.

    Yeah, just as if you did a Bayesian analysis of whether gravity made things fall down: Yep, that one fell.  OK, that one fell. Yep, the third one fell... Statistics isn't the best decision process for everything.


It's the worst, and should only be used when we don't have anything better. The trouble is that this "anything better" must take the form of a model capable of making reliable predictions. With gravity you don't need statistics, because the laws of motion can predict the outcome perfectly every single time. It would be silly to use statistics there, as you say.

With climate change and cures for cancer you need statistics, because there are no such laws in these fields. There is no equation where you can plug in a CO2 concentration and get a correct prediction of global temperature change.

There's a law where you can plug in atmospheric composition and solar radiance and get a correct prediction of the equilibrium temperature. That's what Arrhenius did in the 1890s. It's precisely because we do have equations for the energy balance of the Earth and how CO2 affects it that anthropogenic global warming is as solid a fact as evolution and nuclear fission. If it were *just* observations, there might be room for doubt as to why temperature has gone up. But the mechanism is well known and has been for a century.
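
To give the flavour of such an equation: a zero-dimensional energy balance gives the planet's no-greenhouse temperature from solar irradiance and albedo alone, and the standard logarithmic approximation converts a CO2 increase into a radiative forcing. (A minimal sketch; the forcing coefficient and the assumed sensitivity are rough modern textbook values, not Arrhenius's original numbers.)

    import math

    # Effective (no-greenhouse) temperature from solar irradiance and albedo alone.
    S = 1361.0         # solar constant, W/m^2
    albedo = 0.30      # Earth's planetary albedo (approximate)
    sigma = 5.67e-8    # Stefan-Boltzmann constant, W/m^2/K^4
    T_eff = (S * (1 - albedo) / (4 * sigma)) ** 0.25
    print(round(T_eff))   # ~255 K; the ~33 K gap to the observed ~288 K is the greenhouse effect

    # Simplified CO2 radiative forcing, dF = 5.35 * ln(C/C0) W/m^2, turned into an
    # equilibrium warming with an assumed sensitivity of ~0.8 K per W/m^2.
    def warming_from_co2(c_ppm, c0_ppm=280.0, sensitivity=0.8):
        return sensitivity * 5.35 * math.log(c_ppm / c0_ppm)

    print(round(warming_from_co2(560.0), 1))   # doubling CO2 -> roughly 3 K at equilibrium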


        But if you'd like to actually formulate the alternative hypothesis, I might do the analysis.


    Ok. My alternative hypothesis is that there is no trend of global temperature increase in the period from 1998 to 2010 (as per Liz's chart's timeframe), when compared to temperature fluctuations in the 20th century (as defined by the metric in the chart).

    OK.  Here's one way to do it. The ten warmest years in the century from 1910 to 2010 all occurred in the interval 1998 to 2010, the last 13 years of the century.  Under the null hypothesis, where the hottest year falls is uniform random, so the hottest year had probability 13/100 of falling in that interval.  The next hottest year then had probability 12/99 of also falling in that interval, given that the hottest had already fallen in it. The third hottest year had probability 11/98 of falling in that interval, given the first two had fallen in it, and so on.  So the probability of the 10 hottest years falling in that 13-year period is

        P = (13*12*...*5*4)/(100*99*...*92*91) = 1.65e-11

    To this we must add the probability of the more extreme events, e.g. the probability that the ten hottest years were in the last 12

        P = (12*11*...*5*4*3)/(100*99*...*92*91) = 3.81e-12

    and that they were in the last 11

        P = (11*10*...*5*4*3*2)/(100*99*...*92*91) = 6.35e-13

    and that they were in the last 10

        P = (10*9*...*3*2*1)/(100*99*...*92*91) = 5.77e-14

    Summing we get P = 2.10e-11
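
For reference, these numbers can be reproduced with a few lines of Python (a minimal sketch of the same uniform-placement arithmetic):

    from math import prod

    def p_hottest_in_last(k, century=100, hottest=10):
        """Probability that the `hottest` warmest years all fall in the last k years,
        when their positions are uniform random without replacement."""
        return prod((k - i) / (century - i) for i in range(hottest))

    print(p_hottest_in_last(13))                              # ~1.65e-11
    print(sum(p_hottest_in_last(k) for k in range(10, 14)))   # ~2.10e-11, the summed value above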

    A p-value good enough for CERN.  But this isn't a very good analysis, for two reasons.  First, it's not directly measuring trend; it's the same probability you'd get for any 10 of the observed temperatures falling in any defined 13 years.  So you have to infer that it means a trend from the fact that these are the hottest years and they occur in the 13 at the end. Second, it implicitly assumes that yearly temperatures are independent, which they aren't.  If temperatures always occurred in blocks of ten, for example, the observed p-value would be more like 0.1.  But this shows why you need to consider well-defined, realistic alternatives.  Your alternative was "no trend", but no trend can mean a lot of things, including random independent yearly temperatures.
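
The blocks-of-ten caveat can also be checked by simulation: if the century behaved as ten decade-long blocks that warm or cool together, the ten hottest years fall inside the last thirteen essentially only when the hottest block happens to be the final decade. (A minimal sketch of that extreme-correlation assumption.)

    import random

    random.seed(0)
    trials = 100_000
    hits = 0
    for _ in range(trials):
        # Ten decade-blocks whose relative warmth is a random permutation;
        # the years within a block all move together.
        block_ranks = list(range(10))
        random.shuffle(block_ranks)
        hottest_block = block_ranks.index(9)   # position of the warmest decade
        first_year = hottest_block * 10        # years are indexed 0..99
        if first_year >= 87:                   # all ten of its years lie in the last 13
            hits += 1

    print(hits / trials)   # ~0.1, nowhere near 1e-11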

    A better analysis is to select two different years at random and count how many instances there are in which the later year is hotter.  Under the null hypothesis, only about half of the pairs should come out that way. This directly measures trend. And it is independent of whether successive years are correlated.  There are nearly 10,000 possible pairs in a century, which is large enough that we can just sample it. I got the NOAA data from 1880 through 2013, so I used a little more than a century.
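
In code, the pair-sampling test is just a few lines. (A sketch; `temps` stands for the NOAA annual series, which isn't reproduced here.)

    import random

    def count_warmer_later(temps, n_pairs=100, seed=0):
        """Sample pairs of distinct years and count how often the later year is warmer.
        Ties count as 0.5, as in the analysis above."""
        rng = random.Random(seed)
        years = range(len(temps))
        score = 0.0
        for _ in range(n_pairs):
            a, b = rng.sample(years, 2)
            early, late = (a, b) if a < b else (b, a)
            if temps[late] > temps[early]:
                score += 1.0
            elif temps[late] == temps[early]:
                score += 0.5
        return score

    # Usage, once the annual values are loaded into a list called temps:
    # print(count_warmer_later(temps))   # the run described below found 86 out of 100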

    For example, taking a sample of 100 pairs gives 86 in which the later year was warmer (I counted ties as 0.5).  The null hypothesis says this is like getting 86 heads in 100 tosses of a fair coin, which obeys a binomial distribution.  The probability of getting 86 or more heads in 100 tosses is 4.14e-14.
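
That binomial tail can be checked exactly (the same arithmetic, just spelled out):

    from math import comb

    def binomial_tail(k, n=100):
        """Probability of k or more heads in n tosses of a fair coin."""
        return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

    print(binomial_tail(86))   # ~4.1e-14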


Brent, I tip my hat to you.
I was preparing to write some objections after reading your first analysis, but your pair-sampling analysis already addresses them. You have convinced me that there is, in fact, a global temperature increase trend over the last century.

So are you also convinced that increased CO2 is causing it?

Brent
