I recently read something similar in the book "Data
Analysis and Graphics Using R" by Maindonald & Braun. I
will reproduce it here. It is itself a quotation from
Tukey, J. W. (1991). The philosophy of multiple
comparisons. Statistical Science 6:100-116.

"Statisticians classically asked the wrong question -
and were willing to answer with a lie, one that was
often a downright lie. They asked 'Are the effects of
A and B different?' and they were willing to say 'no'.

All we know about the world teaches us that the
effects of A and B are always different - in some
decimal place - for every A and B. Thus, asking 'Are
the effects different?' is foolish. What we should be
answering first is 'Can we tell the direction in which
the effects of A differ from the effects of B?' In
other words, can we be confident about the direction
from A to B? Is it 'up', 'down', or 'uncertain'?

Later, in the words of the book's authors:

"Tukey argues that we should never conclude that we
'accept the null hypothesis'."
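Tukey's "up, down, or uncertain" framing can be made concrete with a confidence interval for the difference in means: instead of accepting or rejecting a null hypothesis, we ask whether the interval lies entirely above or below zero. Here is a minimal sketch in Python, using a normal approximation; the function name and the examples are my own illustration, not from the book.

```python
import statistics
from math import sqrt

def direction_of_effect(a, b, z=1.96):
    """Classify the direction of mean(b) - mean(a) as 'up', 'down',
    or 'uncertain', depending on whether an approximate 95%
    confidence interval for the difference excludes zero.
    (Normal approximation; purely illustrative.)"""
    diff = statistics.mean(b) - statistics.mean(a)
    # Standard error of the difference of two independent sample means
    se = sqrt(statistics.variance(a) / len(a) +
              statistics.variance(b) / len(b))
    lo, hi = diff - z * se, diff + z * se
    if lo > 0:
        return "up"
    if hi < 0:
        return "down"
    return "uncertain"

print(direction_of_effect([1, 2, 3], [10, 11, 12]))  # -> up
print(direction_of_effect([1, 2, 3], [1, 2, 3]))     # -> uncertain
```

The point is exactly Tukey's: the answer "uncertain" is an honest statement about the data, whereas "no difference" is the lie he objects to.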


--- Wirt Atmar <[EMAIL PROTECTED]> wrote:

> I just purchased David Anderson's new book, "Model
> Based Inference in the Life
> Sciences: a primer on evidence," and although I've
> only had the opportunity to
> read just the first two chapters, I wanted to write
> and express my enthusiasm
> for both the book and especially its first chapter.
> 
> David and Ken Burnham once bought me lunch, and
> because my loyalties are easily
> purchased, I may be somewhat biased in my approach
> towards the book, but David
> writes something very important in the first chapter
> that I have been mildly
> railing against for sometime now too: the uncritical
> overuse of null hypotheses
> in ecology. Indeed, I believe this to be such an
> important topic that I wish he
> had extended the section for several more pages.
> 
> What he does write is this, in part:
> 
> "It is important to realize that null hypothesis
> testing was *not* what
> Chamberlin wanted or advocated. We so often
> conclude, essentially, 'We rejected
> the null hypothesis that was uninteresting or
> implausible in the first place, P
> < 0.05.' Chamberlin wanted an *array* of *plausible*
> hypotheses derived and
> subjected to careful evaluation. We often fail to
> fault the trivial null
> hypotheses so often published in scientific
> journals. In most cases, the null
> hypothesis is hardly plausible and this makes the
> study vacuous from the
> outset...
> 
> "C.R. Rao (2004), the famous Indian statistician,
> recently said it well, '...in
> current practice of testing a null hypothesis, we
> are asking the wrong question
> and getting a confusing answer'" (2008, pp. 11-12).
> 
> This is so completely different than the
> extraordinarily successful approach
> that has been adopted by physics.
> 
> In ecology, an experiment is most normally designed
> so its results may be
> statistically tested against a null hypothesis. In
> this procedure, data analysis
> is primarily an a posteriori process, but this is an
> intrinsically weak test
> philosophically. In the end, you rarely understand
> more about the processes in
> force than you did before you began. But the
> analyses characteristic of physics
> don't work that way.
> 
> In 1964, Richard Feynman, in a lecture to students
> at Cornell that's available
> on YouTube, explained the standard procedure that
> has been adopted by
> experimental physics in this manner:
> 
> "How would we look for a new law? In general we look
> for a new law by the
> following process. First, we guess it. (laughter)
> Then we... Don't laugh. That's
> the damned truth. Then we compute the consequences
> of the guess... to see if
> this is right, to see if this law we guessed is
> right, to see what it would
> imply. And then we compare those computation results
> to nature. Or we say to
> compare it to experiment, or to experience. Compare
> it directly with
> observations to see if it works.
> 
> "If it disagrees with experiment, it's wrong. In
> that simple statement is the
> key to science. It doesn't make a difference how
> beautiful your guess is. It
> doesn't make a difference how smart you are, who
> made the guess or what his name
> is... (laughter) If it disagrees with experiment,
> it's wrong. That's all there
> is to it."
> 
>     -- http://www.youtube.com/watch?v=ozF5Cwbt6RY
> 
> In physics, the model comes first, not afterwards,
> and that small difference
> underlies the whole of the success that physics has
> had in explaining the
> mechanics of the world that surrounds us.
> 
> The entire array of plausible hypotheses advocated
> by Chamberlin doesn't have to be present during the
> first experimental attempt at verification of the
> first hypothesis; the hypotheses can accumulate over a
> period of years.
> 
> As David continues, "We must encourage and reward
> hard thinking. There must be a
> premium on thinking, innovation, synthesis and
> creativity" (p. 12), and this
> hard thinking must be done in advance of the
> experiment. Science is a predictive
> enterprise, not some form of mindless after-the-fact
> exercise in number
> crunching.
> 
> Although expressed in a different format, David
> Anderson is saying the same
> thing as Richard Feynman, and I very much
> congratulate him for it.
> 
> Wirt Atmar
> 


Matheus C. Carvalho
PhD student
Kitasato University - School of Fishery Sciences
Japan


