Rich Ulrich <[EMAIL PROTECTED]> wrote on 6/13/03 4:14:55 PM:
>
>> The F-test can answer the question "Are these two models
>> significantly different at the X% level?".
>
>This bothers me - that's not the way that I would describe
>the question.  Down below:  Clearly the two models are *different*,
>anyway, by a factor of 1/T.  Does one have a lower residual?
>
>
They will almost certainly have different residuals for a particular
set of data. The F-test quantifies the confidence that the two
residual variances are indeed different.
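
Concretely, with residual variances s1^2 = RSS1/df1 and s2^2 =
RSS2/df2 from the two fits, one forms F = s1^2/s2^2 (larger variance
on top) and compares it against the F distribution with (df1, df2)
degrees of freedom.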
 
>You can solve by least-squares, comparing predicted to
>observed, if the errors of prediction are of similar size
>across the range.  But that's the non-assumption, for
>non-linear regression, isn't it?
 
I purposely avoided the issue of what to do if the errors vary across
the range of variables.
 
>>
>> One model can be constant; i.e. assume that data scatter is
>> entirely random about the dependent variable mean & not dependent
>> on the independent variable(s) at all.
>>
>> Or one can see if adding another parameter to a fitting function
>> makes a significant difference.
>>
>
>Nested models; assessed by reduction of residuals
>of least squares, say, or by increase of Likelihood.
>
>> Or one can find out which of two arbitrary models best fits the
>> data; this is very useful for comparing two theories,

nb. even if the two models are related such that one has one extra
parameter.
>
>AIC and BIC are keywords for looking up comparisons
>of non-nested models.
>
I will follow up on AIC and BIC. Thanks.
 
A simple decrease in residuals can occur purely by chance.

It is not necessary to use nested-model logic for the F-test to be
applicable.

>>
>> As an example, alternate theories for chemical diffusivity D are:
>>
>> (1) D = K * exp(-Q/T)
>> (2) D = (K/T) * exp(-Q/T)
>> where K and Q are experimentally determined constants and T is
>> temperature.
>>
>> Given a set of (D,T) data the F-test can be used to see if there
>> is a significant difference between these two models.
>>
>
>The F-test is used, by theory and by custom, to test
>models that are *nested*, using the difference in d.f. as
>the numerator degrees of freedom.  Here, that d.f. seems
>to be zero....
>I'll try to find something in this library book I have on the topic.
>
It is true that the F-test is often used to tell if the change in
residual due to an additional parameter is significant.
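
(In that nested usage the statistic is
F = ((RSS_reduced - RSS_full) / (df_reduced - df_full)) / (RSS_full / df_full),
with the d.f. difference as the numerator degrees of freedom, as Rich
notes above. That is a different construction from the plain variance
ratio I describe next.)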

However, by its definition, the F-test can be applied to test the
ratio of residual variances with the same or different degrees of
freedom. There is no reason these residuals must come from
functionally nested or even related models.
 
For example, one could use the F-test with some velocity/distance
measurements to see if velocity varies exponentially or linearly with
distance for a falling body.
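
Here is a rough Python sketch of the computation I have in mind for
the diffusivity example; the data, starting guesses, and the
two-sided convention are my own illustrative choices, not anything
canonical:

import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

# Made-up (T, D) data for illustration only.
rng = np.random.default_rng(0)
T = np.linspace(500.0, 1500.0, 30)
D = 1e-4 * np.exp(-8000.0 / T) * (1.0 + 0.05 * rng.standard_normal(T.size))

def model1(T, K, Q):            # (1) D = K * exp(-Q/T)
    return K * np.exp(-Q / T)

def model2(T, K, Q):            # (2) D = (K/T) * exp(-Q/T)
    return (K / T) * np.exp(-Q / T)

# Least-squares fits; p0 values are arbitrary starting guesses.
p1, _ = curve_fit(model1, T, D, p0=(1e-4, 8000.0))
p2, _ = curve_fit(model2, T, D, p0=(1e-1, 8000.0))

rss1 = np.sum((D - model1(T, *p1)) ** 2)
rss2 = np.sum((D - model2(T, *p2)) ** 2)

# Each model estimates two parameters (K, Q), so both have N-2 d.f.
df = T.size - 2
F = max(rss1, rss2) / min(rss1, rss2)  # larger residual variance on top
p = 2.0 * stats.f.sf(F, df, df)        # two-sided tail probability
print("F = %.3f, p = %.3f" % (F, p))

The two-sided tail is used because neither model is privileged a
priori.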
 
>This does seem to be a curious example.
>I think I would show folks the F-test, assuming one
>d.f., as a 'demonstration' of the size of the difference.
>
>But these models are surely *different* in a way
>that seems pretty strong.  I guess, I am accustomed
>to worrying more about whether a variable is *in* a
>model at all, instead of worrying about what form it takes.
 
This is a real example drawn from materials science/physical
chemistry. The two models have the same degrees of freedom (N-2,
since each estimates just the two parameters K and Q); that's why I
chose to use it as the example.
 
I've come across cases in my research where one really couldn't
distinguish between these two models with a particular set of
experimental data.
 
>Biomedical data with subjective reports is usually not
>so definitive, not so well-measured as to select between
>models like that.
 
That's why it is important to apply such a test. Many times the
impression is given that an experiment is consistent with model "A",
to the actual or implied exclusion of model "B", without an
appropriate confidence being stated.
 
I'd say that one should use an F-test to be sure the models differ.
 
 
PS. The models I presented can be linearized by taking logs. Whether
one should do so depends on how the measurement errors depend on
variable magnitude. If errors vary in proportion to magnitude, one is
best off fitting data that's been linearized by taking logs. However,
numeric experiments I've done imply this isn't as big a factor as one
might think.
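
For what it's worth, taking logs makes both models linear in 1/T, so
an ordinary linear fit recovers Q and K. A minimal sketch, again with
made-up data:

import numpy as np

# Made-up (T, D) data, as in the earlier sketch.
rng = np.random.default_rng(0)
T = np.linspace(500.0, 1500.0, 30)
D = 1e-4 * np.exp(-8000.0 / T) * (1.0 + 0.05 * rng.standard_normal(T.size))

# (1) ln(D)   = ln(K) - Q/T
# (2) ln(D*T) = ln(K) - Q/T
x = 1.0 / T
slope1, icpt1 = np.polyfit(x, np.log(D), 1)      # slope = -Q, intercept = ln(K)
slope2, icpt2 = np.polyfit(x, np.log(D * T), 1)
print("model 1: Q = %.0f, K = %.2e" % (-slope1, np.exp(icpt1)))
print("model 2: Q = %.0f, K = %.2e" % (-slope2, np.exp(icpt2)))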