hirsch;180007 Wrote: 
> Nope. You're back at the basic error. How do you distinguish the results
> obtained in a negative DBT from those obtained in a hearing-impaired
> sample?
> 
> Let's take a basic perceptual test.  We test for a difference between
> two stimuli, and try to figure out whether or not we heard a
> difference. We then do statistics, and figure out whether or not the
> significance, a statistic called alpha, is less than 0.05.  This number
> is a probability.  It means that if we say the difference is real, the
> odds that we're wrong are less than 5% (or 19 to 1 odds).  Note that if
> the odds that a difference is real are something like 3 to 1 in favor
> of the difference being real, we will still determine that the
> difference is not significant.  That is because the probability of
> committing a Type I error (saying a difference is real when in fact it
> is not) is higher than we will accept.  This is where the problem comes
> in, as the failure to obtain a significant alpha says nothing about a
> negative result. In science, the failure to obtain statistically
> significant differences can still mask real differences, and in fact
> sometimes the odds favor the existence of such differences...but not by
> enough for us to accept them as "real". The normal scenario is to run
> the test, fail to obtain alpha less than 0.05, and then jump to the
> conclusion that since we didn't get a significant difference, there
> isn't one. Wrong.
> 
> The converse of a Type I error in statistics is a Type II error (saying
> that there is no difference when in fact there is a real difference). In
> order to make statements about negative results, we need to compute a
> statistic called beta (probability of committing a Type II error, or
> saying that there is no difference when in fact a difference is real)
> which needs to be below 0.05 before we can attribute any meaning to a
> negative result. Figuring beta is complicated, and cannot be done
> without some sort of a priori power analysis (which determines just how
> big an N is needed to make sense of failure to achieve a significant
> alpha).  If you have not computed beta, a negative result has no
> meaning in a statistical sense. To "prove the negative", you need to be
> able to calculate the odds that your conclusion is wrong, the same as
> for a positive result.  It's a lot harder for a negative, however.
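[Again interjecting with a sketch of the beta computation described above, under assumed numbers: 16 trials, alpha = 0.05, and a listener who would genuinely answer correctly 70% of the time. Both figures are hypothetical and only for illustration.]

```python
# Sketch of a beta (Type II error) calculation for a 16-trial
# forced-choice DBT.  Assumed numbers: alpha = 0.05 and a "true" hit
# rate of 0.7 for a listener who genuinely hears the difference.
from math import comb

def tail_prob(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, alpha, true_rate = 16, 0.05, 0.7

# Smallest number of correct answers that reaches significance.
k_crit = next(k for k in range(n + 1) if tail_prob(k, n, 0.5) <= alpha)

# Probability of missing the real difference (failing to reach k_crit).
beta = 1 - tail_prob(k_crit, n, true_rate)
print(k_crit, round(beta, 2))  # k_crit = 12, beta roughly 0.55
```

With only 16 trials, beta exceeds 0.5: a listener with a real but imperfect ability would fail to reach significance more often than not, which is exactly why a negative result without a power analysis means little.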
> 
> Note that blinding is not even mentioned in the above.  It's simply a
> way of removing a confounding variable so that a significant alpha
> becomes more interpretable.  That's it.  If you think what I'm saying
> is in any way false, I strongly recommend reading a book on
> statistics/experimental design.

You are correct regarding statistical significance.

However, if you take one specific individual who claims to hear a
difference, and that individual cannot identify the difference in a
DBT, then there is an extremely high probability that the individual
does not really hear a difference.

Conversely, if a different specific individual could differentiate in a
DBT, there is an extremely high probability that the difference is
audible, even though the first specific individual could not hear it.

Statistical significance is all well and good, but I would suggest that
the former individual should refrain from spending the money to make the
change that he cannot hear.

Further, I would suggest that finding at least one individual who can
differentiate in a properly conducted DBT should satisfy most
reasonable objectivists, even though a single positive result carries a
higher risk of being a false positive than a result from a larger
sample would.
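That false-positive risk can be quantified: if each of m listeners is tested independently at a per-test alpha of 0.05, the chance that at least one passes by pure luck grows quickly with m. (A minimal sketch; the values of m are just assumed examples.)

```python
# Chance of at least one false positive when m independent listeners
# are each tested at a per-test significance level of 0.05.
alpha = 0.05
for m in (1, 5, 20):
    p_any = 1 - (1 - alpha) ** m
    print(m, round(p_any, 2))  # rises from 0.05 to about 0.64 at m = 20
```

So a lone positive result is suggestive, but a replication (or a stricter per-test criterion) guards against this kind of lucky pass.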


-- 
jeffmeh
------------------------------------------------------------------------
jeffmeh's Profile: http://forums.slimdevices.com/member.php?userid=3986
View this thread: http://forums.slimdevices.com/showthread.php?t=32352

_______________________________________________
audiophiles mailing list
[email protected]
http://lists.slimdevices.com/lists/listinfo/audiophiles
