Hello,
 
I had another thought about which model to select when reading
Bob O'Hara's and Michele Scardi's messages, which echo Wirt's
ideas, with which I fully agree.
 
I think it is important to consider whether you have a process-based
model, which actually tries to get at the fundamental cause-and-effect
relationships of the system, or something like a neural network,
which is essentially a better classifier.
 
In the first case, I would argue, you can achieve some understanding
and may dare to venture with your modelling a little beyond the
range of the data you used to calibrate the model, provided you are
reasonably confident that your model captures the essential
fundamental relationships.
I would not recommend that with a neural network, which is in
principle more limited to the range of data used for calibration.
Clearly, these are somewhat broad-brush generalisations, but if I had
two models, one trying to capture the essence of the system under
consideration and the other a black-box/neural network, I would accept
some penalty in the fit of the process-based model and opt for
that model for the understanding and the predictions it allows.
Clearly this is even more subjective than any formal criterion, but I
believe it is something we should not forget.
 
Btw, I would even be a little more critical than Bob, who used the
word "explanation" in the statistical sense perfectly correctly.
I would argue that even though we say "factor X explains...",
statistical models actually explain very little at best, in the
broader sense of the term.
I do not mean to imply that Bob wanted to say otherwise - I
am sure he didn't - just a thought about the usage of "explain".
 
Cheers,
Joerg

________________________________

From: Ecological Society of America: grants, jobs, news on behalf of Michele 
Scardi
Sent: Mon 3/6/2006 10:38
To: [email protected]
Subject: Re: AIC



Monday, March 6, 2006, 1:04:33 AM, Wirt Atmar wrote:
WA> ...Gareth's comments do allow me however an opportunity to expand
WA> a little bit on my previous posting. I personally hold David
WA> Anderson and Ken Burnham in very high regard, but I worry that the
WA> AIC is being oversold to the ecological community -- for two
WA> different reasons. ...

I really enjoyed reading Wirt Atmar's insightful opinion about the
role AIC is supposed to play in (ecological) modeling, and I entirely
agree with him about the arbitrariness of its formulation. Of course,
this doesn't mean that AIC is useless. In fact, it's actually as good
as Akaike's personal opinion and therefore it can be used as one of
the (many) possible criteria for selecting models.
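
To make the criterion concrete, here is a minimal, hypothetical sketch
(Python/numpy; the data, function name, and models are all made up for
illustration, not taken from anyone's post) of how AIC trades goodness
of fit against the number of parameters, using the least-squares form
AIC = n*ln(RSS/n) + 2k:

```python
import numpy as np

# Hypothetical toy data: a truly linear relationship plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, x.size)

def aic_least_squares(y, y_hat, k):
    """AIC for a Gaussian least-squares fit with k parameters:
    n*ln(RSS/n) + 2k (constants dropped); smaller is 'better'."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Model 1: straight line (2 parameters).
c1 = np.polyfit(x, y, 1)
aic1 = aic_least_squares(y, np.polyval(c1, x), k=2)

# Model 2: cubic (4 parameters) -- it always fits at least as well,
# but pays the 2k complexity penalty.
c3 = np.polyfit(x, y, 3)
aic3 = aic_least_squares(y, np.polyval(c3, x), k=4)

print(f"AIC linear: {aic1:.1f}, AIC cubic: {aic3:.1f}")
```

The "arbitrariness" under discussion sits in that 2k term: it is one
defensible way of penalising complexity, not the only one.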

However, Wirt Atmar's post leads to a more general consideration about
indices and the way they are used. For instance, let's think about
biotic indices and the practical consequences of their uncritical
application in terms of environmental management. As ecologists, we
should be used to dealing with complexity, but many of us just can't
refrain from turning that complexity into a single value, especially
if a pre-compiled scale is available for interpreting that value as
excellent, good, average, etc.

Like in the AIC case, of course, some biotic indices are probably very
smart, but they are still inherently subjective. So, are we doing good
science when we base our conclusions on them? I don't think so, but
I'm afraid that some ecologists don't even ask themselves this
question.

WA> ... Nevertheless, let me also say at this point that this
WA> scattershot method has also received a measure of high acceptance
WA> in the scientific community of late. The most exquisite example of
WA> the simultaneous engineering utility and scientific meaningless of
WA> the procedure exists in the training of neural networks. ...

As for the "simultaneous engineering utility and scientific
meaningless" Wirt Atmar mentioned, I agree with him: neural networks
can be regarded as a dumb (although practically useful) tool. However,
if properly trained, they're able to capture relevant relationships in
very complex, non-linear systems. Of course, this is possible because
some "knowledge" gets implicitly embedded into a neural network during
its training. So, our problem is to extract (i.e. to understand) at
least some of that knowledge.

Basically, a properly trained neural network can be regarded as a
simplified (but still very complex) model of a real system. However,
we can "play" with it more easily than with the real thing. For
instance, we can do sensitivity analyses and try to figure out which
stimuli (i.e. independent variables, using a regression-based analogy)
are relevant with respect to each response (e.g. dependent variables).

In other words, we can do experiments with the neural network model
and make inferences about the properties of the real system, and then
plan further research on the real system on the basis of those
inferences. And this can be definitely meaningful from a scientific
point of view.
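
A minimal numpy sketch of this kind of sensitivity "experiment"
(everything here - the toy data, the tiny network, the perturbation
size - is a made-up illustration, not anyone's actual procedure):
train a one-hidden-layer network on data where the response depends
only on the first input, then perturb each input in turn and see which
stimulus the trained model responds to.

```python
import numpy as np

# Toy data: the response depends only on input 0, not input 1.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
y = 2.0 * X[:, 0]

# A tiny one-hidden-layer tanh network.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

# Plain full-batch gradient descent on mean squared error.
lr = 0.1
for _ in range(3000):
    out, h = forward(X)
    err = out - y
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Sensitivity analysis: mean absolute change in the output when each
# input is perturbed by a small fixed amount.
eps = 0.1
base, _ = forward(X)
sens = []
for i in range(X.shape[1]):
    Xp = X.copy(); Xp[:, i] += eps
    pert, _ = forward(Xp)
    sens.append(np.abs(pert - base).mean())
    print(f"input {i}: mean |d output| = {sens[i]:.3f}")
```

After training, the perturbation of input 0 should move the output far
more than the perturbation of input 1 - the "knowledge" the network
embedded during training, recovered by experimenting on the model.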

Cheers,

Michele

--------------------------------
Michele Scardi
Associate Professor of Ecology

Department of Biology
University of Rome "Tor Vergata"
Via della Ricerca Scientifica
00133 Roma
Italy

http://www.mare-net.com/mscardi
--------------------------------
