Honorable Forum:

Well, Feynman was one of the great geniuses of all time, I suspect (but cannot prove, only assert). He had one helluva fine sense of humor, and he applied its fires to the frontiers of physics.

There's one distinction that might need to be made, maybe not. Feynman said, "If it disagrees with [the?] experiment, it's wrong. In that simple statement is the key to science. It doesn't make a difference how beautiful your guess is. It doesn't make a difference how smart you are, who made the guess or what his name is... (laughter) If it disagrees with [the?] experiment, it's wrong. That's all there is to it." I hope everyone who reads this list understands that Feynman means it is the guess that is wrong if the experiment demonstrates otherwise (not the experiment itself), or that if I am mistaken in this presumption I will be corrected. I suspect that a transcript of Feynman's lecture, especially a fragment thereof, could be misinterpreted in the absence of the context of the actual lecture, even of Feynman's way of expressing himself.

However, "experiments" in ecology may not be so "simple." If the experiment is faulty, the guess might not be wrong. Even "replication," repeating the experiment, might not prove that the guess is wrong.

Guesses can be WAGs (wild-assed guesses) or SWAGs (scientific wild-assed guesses). In the latter case, a reasonable hypothesis can be constructed with something more substantial than chewing-gum, upon theoretical foundations consisting (largely or substantially) of suppositions that are more reasonable than unreasonable, more true than untrue, or even of established laws--of, say, physics. Such guesses can be place-holders until something more solid comes along. And, bedrock CERTAINTY just may not be possible in ecology the way it seems that it can be in physics. I wonder what Feynman would have to say about string theory?

In my own work (in "applied" ecology, ecosystem "restoration") I could not wait for the "Savior" to come in the form of various formulae. I had to guess, hunch, and apply, even without the luxury of formal "field trials," not to mention some elegant experiment that tested all the (infinite?) relevant variables. "Worse" yet, I had to guess at degrees of relevance, not absolutes. But gravity and thermodynamics, for example, do underlie such guesses--or should.

The most fundamental "law" I could come up with: "Organisms are indicators of their environments in all four dimensions." It's a guess. It might be wrong. As Einstein once said, "I don't know--it might be five." Or the various applications that did repeatedly behave more or less predictably, and more so than a number of failed applications, might be "wrong." Ewel has asserted that "Ecosystem restoration is the ultimate test of ecological theory," but I suspect that "ultimate" is perhaps too strong a word--and that ecological theory may not be substantial enough to match up perfectly with the test. Call it "experiment," but until more disciplined testing is done on ecosystem restoration applications, the foundations, while perhaps substantial in themselves, may not be able to hold up the squirming mass of twigs, mycelia, roots, and wiggling, growling organisms of which ecosystems, in part, consist. "Purity" may fall short. Perhaps resilience will do (in "restoration," we don't actually do anything except juggle environments and organisms) as a space-holder until the Savior, the ultimate ecological theory comes. Maybe it has, and I am simply ignorant of it. There may be, out there somewhere, an equation of infinite length, or one of ultimate elegance in its simplicity.

I wish I knew what it was.  Tell me, please.  I don't know.

WT

At 12:46 PM 2/20/2008, Jeff Houlahan wrote:
Hi Wirt, I completely agree with almost all of what you (and David) wrote. Feynman is talking about a real hypothesis that arose from a great deal of thought and creativity...not one that has been attached with baling wire, duct tape and a little leftover Juicy Fruit to a pile of data that happened to be sitting around.
That said, science is many things - 'a predictive
enterprise, not some form of mindless after-the-fact exercise in number crunching.' - fits under the umbrella but I don't think captures the whole enterprise. Sequencing the human genome was, in my opinion, a version of mindless number crunching (although perhaps somebody can put that effort in a hypothesis-testing context that I haven't thought of). I think most people would be hard pressed to say it wasn't science. In fact, there is an emerging field of statistics (data mining) that seems to be useful in developing scientific hypotheses and is all about the 'mindless after-the-fact exercise in number crunching'. My feeling is that data can provide hypotheses or test them. When it does the former, it is a very useful part of science, but it is not predictive and it does not test hypotheses (null, competing or otherwise). When it does the latter, it falls into the category that Feynman was describing. I think the reason we often get these trivial tests of hypotheses is that there is this sense that science is only about testing hypotheses - therefore to do science I must test a hypothesis...whether there is a meaningful one or not. In my opinion, science can also just be about looking for patterns that we can use to suggest hypotheses. Hypotheses have to be tested to be useful, but the patterns we see in nature (and those patterns are often less distinct without number crunching) are almost always the birthplace of hypotheses. Best.

Jeff H
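[Editor's aside: Jeff's distinction between data that suggest hypotheses and data that test them can be sketched numerically. This is a minimal illustration of my own, with invented pure-noise data; it is not from either email. The point it demonstrates: mining one data set for its strongest pattern routinely turns up something striking, but the same data cannot then test the hypothesis they suggested.]

```python
# A hedged sketch: screen many noise variables for the strongest
# correlation with a response, then check that "discovery" against
# independent data. All data here are simulated noise by construction.
import numpy as np

rng = np.random.default_rng(1)
n_vars, n_obs = 200, 30
response = rng.normal(size=n_obs)
predictors = rng.normal(size=(n_vars, n_obs))  # pure noise, no real signal

# "Number crunching": correlate every predictor with the response.
r = np.array([np.corrcoef(p, response)[0, 1] for p in predictors])
best = int(np.argmax(np.abs(r)))
print(f"strongest mined correlation: r = {r[best]:+.2f}")

# With 200 noise variables screened, the best |r| is routinely sizable --
# a pattern worth turning into a hypothesis, but not yet a tested one.
# Testing it requires an independent "experiment":
fresh_response = rng.normal(size=n_obs)
r_fresh = np.corrcoef(predictors[best], fresh_response)[0, 1]
print(f"same predictor vs fresh data:  r = {r_fresh:+.2f}")
```

The mined correlation is typically far larger than anything the same predictor shows against fresh data, which is exactly why pattern-finding belongs to hypothesis generation rather than hypothesis testing.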

-----Original Message-----
From: Wirt Atmar <[EMAIL PROTECTED]>
To: [email protected]
Date: Wed, 20 Feb 2008 12:03:54 -0700
Subject: [ECOLOG-L] Anderson's new book, "Model Based Inference in the Life Sciences"

I just purchased David Anderson's new book, "Model Based Inference in the Life
Sciences: a primer on evidence," and although I've only had the opportunity to
read just the first two chapters, I wanted to write and express my enthusiasm
for both the book and especially its first chapter.

David and Ken Burnham once bought me lunch, and because my loyalties are easily
purchased, I may be somewhat biased in my approach towards the book, but David
writes something very important in the first chapter that I have been mildly
railing against for some time now too: the uncritical overuse of null hypotheses in ecology. Indeed, I believe this to be such an important topic that I wish he
had extended the section for several more pages.

What he does write is this, in part:

"It is important to realize that null hypothesis testing was *not* what
Chamberlin wanted or advocated. We so often conclude, essentially, 'We rejected the null hypothesis that was uninteresting or implausible in the first place, P
< 0.05.' Chamberlin wanted an *array* of *plausible* hypotheses derived and
subjected to careful evaluation. We often fail to fault the trivial null
hypotheses so often published in scientific journals. In most cases, the null
hypothesis is hardly plausible and this makes the study vacuous from the
outset...

"C.R. Rao (2004), the famous Indian statistician, recently said it well, '...in current practice of testing a null hypothesis, we are asking the wrong question
and getting a confusing answer'" (2008, pp. 11-12).

This is so completely different from the extraordinarily successful approach
that has been adopted by physics.

In ecology, an experiment is most normally designed so its results may be
statistically tested against a null hypothesis. In this procedure, data analysis
is primarily an a posteriori process, but this is an intrinsically weak test
philosophically. In the end, you rarely understand more about the processes in
force than you did before you began. But the analyses characteristic of physics
don’t work that way.
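
[Editor's aside: the alternative Anderson advocates -- evaluating an array of plausible models specified in advance, rather than rejecting a lone null -- can be sketched with information criteria. This is a hedged illustration of my own, using simulated data and AIC; it is not an example from the book.]

```python
# A minimal sketch of Chamberlin/Anderson-style multi-model comparison:
# specify several plausible models a priori, fit each, and rank them by
# AIC instead of testing a single (often implausible) null hypothesis.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, x.size)  # simulated "field data"

def aic_least_squares(y, yhat, n_params):
    """AIC for a least-squares fit: n * ln(RSS / n) + 2k."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * n_params

# The array of plausible hypotheses, each expressed as a model:
models = {
    "constant (null-like)": np.polyval(np.polyfit(x, y, 0), x),
    "linear":               np.polyval(np.polyfit(x, y, 1), x),
    "quadratic":            np.polyval(np.polyfit(x, y, 2), x),
}
scores = {name: aic_least_squares(y, yhat, deg + 1)
          for deg, (name, yhat) in enumerate(models.items())}
best = min(scores, key=scores.get)
print(f"best model by AIC: {best}")
```

Nothing here is rejected against a trivial null; instead the evidence for each member of the model set is weighed directly, which is the spirit of the passage quoted below.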

In 1964, Richard Feynman, in a lecture to students at Cornell that's available
on YouTube, explained the standard procedure that has been adopted by
experimental physics in this manner:

"How would we look for a new law? In general we look for a new law by the
following process. First, we guess it. (laughter) Then we... Don't laugh. That's
the damned truth. Then we compute the consequences of the guess... to see if
this is right, to see if this law we guessed is right, to see what it would
imply. And then we compare those computation results to nature. Or we say to
compare it to experiment, or to experience. Compare it directly with
observations to see if it works.

"If it disagrees with experiment, it's wrong. In that simple statement is the
key to science. It doesn't make a difference how beautiful your guess is. It
doesn't make a difference how smart you are, who made the guess or what his name
is... (laughter) If it disagrees with experiment, it's wrong. That's all there
is to it."

    -- http://www.youtube.com/watch?v=ozF5Cwbt6RY

In physics, the model comes first, not afterwards, and that small difference
underlies the whole of the success that physics has had in explaining the
mechanics of the world that surrounds us.

The entire array of plausible hypotheses advocated by Chamberlin doesn't have to be present during the first experimental attempt at verification of the
first hypothesis; the hypotheses can be tested sequentially over a period of years.

As David continues, "We must encourage and reward hard thinking. There must be a
premium on thinking, innovation, synthesis and creativity" (p. 12), and this
hard thinking must be done in advance of the experiment. Science is a predictive
enterprise, not some form of mindless after-the-fact exercise in number
crunching.

Although expressed in a different format, David Anderson is saying the same
thing as Richard Feynman, and I very much congratulate him for it.

Wirt Atmar
