On Wed, 17 May 2000 01:57:41 GMT, [EMAIL PROTECTED] wrote:
< snip, stuff from previous response. About F-max >
> ... And finally, could one say that there
> is a "significant" difference in heteroscedasticity between the "A"
> samples and the "B" samples based solely on the difference between the
> F-max statistics? Of course, if one or the other is significant at the
> 95% level then it's a "no brainer," but even in that case is it possible
> to compare the F-max statistics assuming the means are not equal a
> priori?
Among the gross misconceptions of statistics, this is a fairly common
one: it is a mistake to say that two results differ, statistically or
meaningfully, because ONE of them is "significant."
Well, there is a "meaning" to the extent that some journals insist on
the magic of 5%. But it takes a direct statistical test to show
that two samplings are "different."
Again, if result A is "significant" and result B is not, it is NOT
proper to conclude that A differs from B. Speaking in terms of the
test's p-level rather than the Confidence Interval: the difference
between 4% and 6% is almost nonexistent for small N. In fact, it is
really hard to say what you have shown if the difference owes entirely
to the fact that one test uses a slightly larger N, while the other
test may have a slightly bigger "effect" at the same time.
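To make that concrete, here is a small sketch with made-up numbers
(the effects and standard errors are hypothetical, not from your
data): two effects with the same standard error, one "significant"
and one not, whose direct comparison is nowhere near significant.

```python
import math

def norm_sf(z):
    """Upper-tail probability of the standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2))

se = 1.0
effect_a, effect_b = 2.0, 1.5          # z = 2.0 and z = 1.5 (hypothetical)

p_a = 2 * norm_sf(effect_a / se)       # ~0.046 -> "significant"
p_b = 2 * norm_sf(effect_b / se)       # ~0.134 -> "not significant"

# The proper question: does A differ from B?  Test the difference directly.
z_diff = (effect_a - effect_b) / math.sqrt(se**2 + se**2)
p_diff = 2 * norm_sf(z_diff)           # ~0.72 -> no evidence of a difference

print(p_a, p_b, p_diff)
```

One result lands just under .05 and the other just over, yet the
direct test of A versus B gives a p-level above .70.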
So, here is a misunderstanding of basic inference, on top of the
choice of a "variance test" (F-max) that has generally been abandoned
for that purpose because of its over-sensitivity to non-normality. I
think the person with the question needs some close-up consulting and
hand-holding. Despite that, I offer prospects for answering the
question.
Here are some choices that have been used to measure "variability
around the <local> means" -- absolute differences (from mean, or from
median); squared differences; log of the squared differences. What
method looks useful? In an exploratory mode, you should look at them
all.
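Here is a sketch of those rescorings on a made-up sample (the numbers
are hypothetical); everything comes from the standard library.

```python
import math
import statistics

x = [4.1, 5.0, 6.2, 5.5, 4.8, 7.1]     # hypothetical sample
m = statistics.mean(x)
md = statistics.median(x)

abs_from_mean   = [abs(v - m) for v in x]
abs_from_median = [abs(v - md) for v in x]   # the Brown-Forsythe variant
squared         = [(v - m) ** 2 for v in x]
log_squared     = [math.log((v - m) ** 2) for v in x]

print(abs_from_mean, abs_from_median, squared, log_squared)
```

(Watch the log version: it blows up if any value equals the mean
exactly, so it usually needs a small offset in practice.)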
With your data rescored as (positive) Difference, you can compute
tests within data sets, and between data sets. It sounds as if the
ultimate comparison Between data sets might properly use the error
term of the "Mean Square between Groups, Within data sets."
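A minimal sketch of the between-groups comparison on rescored data,
again with made-up numbers: rescore each group to |x - group mean|,
then run a one-way ANOVA F on the rescored values. This is Levene's
statistic, which is far less sensitive to non-normality than F-max.

```python
# Two hypothetical groups of raw scores.
groups = [[4.1, 5.0, 6.2, 5.5], [3.9, 5.8, 4.4, 6.6]]

def levene_f(groups):
    # Rescore each group as absolute deviations from its own mean.
    rescored = []
    for g in groups:
        m = sum(g) / len(g)
        rescored.append([abs(v - m) for v in g])
    k = len(rescored)
    n_total = sum(len(g) for g in rescored)
    grand = sum(sum(g) for g in rescored) / n_total
    # One-way ANOVA on the rescored values.
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in rescored)
    ssw = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in rescored)
    return (ssb / (k - 1)) / (ssw / (n_total - k))

F = levene_f(groups)
print(F)   # refer F to an F distribution with (k-1, N-k) df
```

The same machinery extends to the "within data sets" error term: fit
the groups within each data set, and use that Mean Square as the
denominator for the between-data-sets comparison.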
--
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html