On Sat, 15 May 2004 19:42:42 GMT, David Winsemius
<[EMAIL PROTECTED]> wrote:

>Richard Hoenes wrote in news:[EMAIL PROTECTED]:
>
>> 
>>>> If it was the ICC he was pushing I wouldn't mind so much, but he has
>>>> insisted we include Bland & Altman's limits of agreement (which is
>>>> simply the mean difference +/- [1.96*stddev] which has no significance
>>>> test), and he is now systematically having us remove every other
>>>> statistical test we've included in the paper.  The only other test
>>>> left in the paper is the paired t-test and now he wants a reference to
>>>> show it is valid to use.  I'm hoping to find a reference that will
>>>> allow us to keep the paired t-test and bring back the Pearson's r.
>>>> 
>>>> The question regarding Pearson's r and ICC below just popped into my
>>>> head while I was working on all this for this paper.
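
A minimal sketch of the limits-of-agreement calculation quoted above, with
made-up numbers (assumes numpy is available):

    # Bland & Altman limits of agreement: mean difference +/- 1.96 * SD of
    # the paired differences.  The data below are hypothetical.
    import numpy as np

    method_a = np.array([10.2, 11.5, 9.8, 12.0, 10.9, 11.1])
    method_b = np.array([10.5, 11.2, 10.1, 12.4, 10.7, 11.6])

    diff = method_a - method_b
    bias = diff.mean()            # mean difference ("bias")
    sd = diff.std(ddof=1)         # sample SD of the differences
    lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
    print(f"bias = {bias:.3f}, limits of agreement = ({lower:.3f}, {upper:.3f})")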
>>>> 
>>>You didn't mention the discipline of the journal. Is it perhaps a 
>>>medical or physiology journal? Is it in the UK or Europe? I've seen a 
>>>trend for those journals to reject articles that report significance 
>>>testing. they have been insisting on statistics such as confidence 
>>>intervals rather than p-values. If this is the case you may need to 
>>>revisit your data with a slightly different methodological approach.
>>>
>>>This is just a suspicion I have based on the editor's insistence on 
>>>using Bland and Altman (both of whom I greatly admire, BTW). Altman 
>>>has an excellent book on calculating and applying various confidence 
>>>interval approaches. I know that Altman is/was involved with BMJ's move 
>>>away from sig testing towards CI. I don't know your circumstances so 
>>>everything I've written may be bunk. Anyway, I hope I've helped.
>> 
>> It is a behavioral optometry journal in the US.  We've published
>> similar articles in this journal before, but this new statistical
>> reviewer they hired must be one of those who doesn't like significance
>> testing.
>> 
>You may be making this needlessly difficult. And you are further wrong in 
>saying there is no inference possible with confidence intervals. 
>
>The paired t-test is just a one-sample t-test of the hypothesis that the 
>mean of the differences is "truly" zero. If you want to turn this result 
>into a confidence interval, just report the mean difference plus or minus 
>the appropriate multiple of its standard error. The paired t-test will be 
>significant at the 0.05 level in exactly those situations where the mean 
>difference is more than t(0.975,n-1) standard errors away from zero in 
>either direction. That critical value will generally be greater than 1.96, 
>but not by much. Reporting the confidence interval is more informative 
>than merely reporting that the test was "significant", because it defines 
>a range of plausible values for the difference.
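
A minimal sketch of the equivalence described above, with made-up pre/post
data (assumes numpy and scipy are available):

    # Paired t-test = one-sample t-test on the differences; it is significant
    # at the 0.05 level exactly when the 95% CI for the mean difference
    # excludes zero.  The data below are hypothetical.
    import numpy as np
    from scipy import stats

    pre  = np.array([12.1, 11.4, 13.0, 12.7, 11.9, 12.5, 13.2, 11.8])
    post = np.array([12.6, 11.9, 13.1, 13.4, 12.2, 13.0, 13.5, 12.3])
    d = post - pre
    n = len(d)

    t_stat, p_value = stats.ttest_rel(post, pre)   # paired t-test

    se = d.std(ddof=1) / np.sqrt(n)                # SE of the mean difference
    t_crit = stats.t.ppf(0.975, df=n - 1)          # t(0.975, n-1), a bit > 1.96
    ci = (d.mean() - t_crit * se, d.mean() + t_crit * se)

    print(f"p = {p_value:.4f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")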

We did this when we were told to put in the Bland Altman, since we
understood it better and, at least to us, it was more widely used.  We
were then told to take it out of the paper since we'll have the Bland
Altman intervals.

>You could use Rosner's "Fundamentals of Biostatistics" or most basic 
>stats books for this assertion. The accept-reject formalism and the 
>confidence interval formalism have been shown to be equivalent, oh, about 
>a half century ago.
>
>If you want to "bring back the Pearson's correlation", why don't you 
>instead create a scatterplot of subject pre-post measures? That way your 
>audience will be able to see the actual data, and it would be natural to 
>report an R^2 if the data didn't deviate too much from bivariate 
>normal. You may need to do some dancing around the independence 
>assumptions, however.
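
A minimal sketch of the scatterplot-plus-R^2 suggestion, reusing the same
made-up pre/post data (assumes numpy, scipy, and matplotlib are available):

    # Scatterplot of pre vs. post measures, with Pearson's r reported as R^2.
    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    pre  = np.array([12.1, 11.4, 13.0, 12.7, 11.9, 12.5, 13.2, 11.8])
    post = np.array([12.6, 11.9, 13.1, 13.4, 12.2, 13.0, 13.5, 12.3])

    r, p = stats.pearsonr(pre, post)
    plt.scatter(pre, post)
    plt.xlabel("Pre measure")
    plt.ylabel("Post measure")
    plt.title(f"Pre vs. post (R^2 = {r**2:.2f})")
    plt.show()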

The test we are reporting on is actually a series of vision tests with
different scales, so this would entail a large number of plots.  We
were told not to include Bland Altman plots because of the large
number, so scatterplots wouldn't work.
