opaqueice;179972 Wrote: 
> In any case, for blind testing, what is being tested is whether or not
> the subject can actually hear a difference.  A "positive" result
> provides evidence that s/he can, a "negative" result that s/he can't. 
> That's it; the "negative" result is just as meaningful and just as
> useful.

Nope. You're back at the basic error. How do you distinguish the
results obtained in a negative DBT from those obtained in a
hearing-impaired sample?

Let's take a basic perceptual test.  We test for a difference between
two stimuli, and try to figure out whether or not we heard a
difference. We then do statistics, and figure out whether the
significance, a statistic called alpha, is less than 0.05.  This number
is a probability.  It means that if we say the difference is real, the
chance that we're wrong is less than 5% (odds of 19 to 1).  Note that
if the odds are something like 3 to 1 in favor of the difference being
real, we will still declare the difference not significant.  That is
because the probability of committing a Type I error (saying a
difference is real when in fact it is not) is higher than we will
accept.  This is where the problem comes in: the failure to obtain a
significant alpha says nothing about a negative result. In science,
the failure to obtain statistically significant differences can still
mask real differences, and in fact sometimes the odds favor the
existence of such differences...but not by enough for us to accept
them as "real". The normal scenario is to run the test, fail to obtain
alpha less than 0.05, and then jump to the conclusion that since we
didn't get a significant difference, there isn't one. Wrong.

The converse of a Type I error in statistics is a Type II error: saying
that there is no difference when in fact there is a real difference. In
order to make statements about negative results, we need to compute a
statistic called beta, the probability of committing a Type II error,
and it needs to be below 0.05 before we can attribute any meaning to a
negative result. Computing beta is complicated, and cannot be done
without some sort of a priori power analysis (which determines just how
big an N is needed to make sense of a failure to achieve a significant
alpha).  If you have not computed beta, a negative result has no
meaning in a statistical sense. To "prove the negative", you need to be
able to calculate the odds that your conclusion is wrong, just as for a
positive result.  It's a lot harder for a negative, however.
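
Here is the same kind of sketch for beta and the a priori power
analysis.  The assumed "true" detection rate of 0.7 is purely an
illustrative guess; choosing that effect size up front is precisely
what the power analysis forces you to do:

    from scipy.stats import binom

    chance = 0.5       # performance if the listener hears nothing
    p_true = 0.7       # assumed real detection rate we want to catch
    alpha_level = 0.05
    beta_level = 0.05  # what a meaningful negative result requires

    def k_critical(n_trials):
        # Smallest score that would be significant at alpha = 0.05:
        # P(at least k correct | guessing) <= 0.05.
        for k in range(n_trials + 1):
            if binom.sf(k - 1, n_trials, chance) <= alpha_level:
                return k
        return n_trials + 1   # no score is significant at this n

    def beta_for(n_trials):
        k = k_critical(n_trials)
        # Beta: probability of falling short of the critical score
        # even though the listener really detects at rate p_true.
        return binom.cdf(k - 1, n_trials, p_true)

    # How many trials before a negative result means something?
    n = 5
    while beta_for(n) > beta_level:
        n += 1
    print(n, beta_for(n))   # far more trials than the 16 used above

The point is that the N needed to make a negative result meaningful is
usually much larger than the N people actually bother to run.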

Note that blinding is not even mentioned in the above.  It's simply a
way of removing a confounding variable so that a significant alpha
becomes more interpretable.  That's it.  If you think what I'm saying
is in any way false, I strongly recommend reading a book on
statistics/experimental design.


-- 
hirsch
