On Sat, 16 Mar 2002 19:54:37 +0000 (UTC), [EMAIL PROTECTED]
wrote:

> I am exploring power/sample size and confidence intervals for 
> differences in proportions (paired samples) using the software that comes 
> with Altman et al (2000) CIA and DSTPlan.

 - a useful exercise -  but you should read the documentation 
so that you can see (and say) what each model assumes.
> 
> Changing the proportions in CIA with sample sizes of 30 and 70 suggests 
> that the confidence interval approach to determining statistical significance 
> is very sensitive, i.e., the CI range not including zero. Even with the smaller 
> sample size a change in proportions of .1 is considered statistically 
> significant at the 0.1 level. I was in fact surprised at how sensitive 
> this was, and played around with several proportions and effect sizes just 
> to confirm these findings.

You don't mention what you assumed for the correlation of the 
paired data.  That should make a great deal of difference.

A 10% difference?  For unpaired data, the 2x2 table (30,0)/(27,3)  
yields an uncorrected chi-squared of 3.16, for p = 0.076.  I think your
paired test has about that size, too.  So you would have to assume a
one-tailed test to count it as significant at the usual 5% level.  You 
need a difference of about 20% to have 80% power.
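
As a quick check on that figure, here is a minimal Python sketch
(assuming numpy and scipy are available; they are not part of CIA or
DSTPlan) of the uncorrected chi-squared for that unpaired 2x2 table:

    # Uncorrected chi-squared for the unpaired 2x2 table (30,0)/(27,3).
    # Yates' correction is turned off to match the uncorrected figure above.
    import numpy as np
    from scipy.stats import chi2_contingency

    table = np.array([[30, 0],
                      [27, 3]])

    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi-squared = {chi2:.2f}, p = {p:.3f}")   # about 3.16 and 0.076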

Another possibility is that CIA  follows what J. Cohen did for
proportions in his textbook on power analysis, and does its 
mathematics based on an unusual t-test: it starts with the 
transformation arcsin(sqrt(p)).  That has the advantage of being
mathematically simple.  It has the disadvantage, for your purposes,
of being a lousy t-test, one that rejects far too often when 
the proportions are small.  
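
To make the arc-sine point concrete, here is a small sketch (my own
illustration, not the routine CIA actually uses) of the transformation
2*arcsin(sqrt(p)) and Cohen's effect size h.  Note how the same raw
difference of 0.10 corresponds to a larger h when the proportions are
small, which is consistent with the over-rejection mentioned above.

    # Cohen's arc-sine transform phi(p) = 2*arcsin(sqrt(p)) and the
    # effect size h = |phi(p1) - phi(p2)|.  Illustration only.
    from math import asin, sqrt

    def phi(p):
        """Variance-stabilizing transform of a proportion."""
        return 2.0 * asin(sqrt(p))

    def cohen_h(p1, p2):
        """Cohen's effect size h for two proportions."""
        return abs(phi(p1) - phi(p2))

    # The same raw difference of 0.10 at two places on the [0,1] scale:
    print(cohen_h(0.10, 0.20))   # about 0.28
    print(cohen_h(0.40, 0.50))   # about 0.20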

That chapter is one of my few complaints against Cohen's book.

> 
> However, using DSTPlan with the routine for detecting the difference 
> between two proportions with permanent sampling units suggests that my 
> sample size of 70 has a power of only 55% to detect a 10% change at the 
> 0.1 significance level and the smaller sample has a power of only 17%. 
> 
> The minimum sample size to detect my 10% change at a significance level of 
> 0.1 is 134.
> 
> These results seem somewhat contradictory, and obviously there is 
> something I am not grasping in what I am doing.

This time you make no mention of whether the proportions are near zero.  
Again, you make no mention of the correlation.

If you were using McNemar's test for changes, you would be
comparing the difference in changes (the two discordant cells).  
The test can be computed as a z-score or as a chi-squared.  For 
small samples, I think you should compute the discrete (exact) 
tests and use the noncentrality in a direct way.  -- Your power is 
50% for any table whose two-tailed chi-squared (which is what you
assume) works out to 3.84, 'just significant'; and it is about 
80% for tables where the chi-squared is about 8.00.
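
Those two figures follow from the noncentral chi-squared with 1 df.
A minimal sketch, assuming scipy, that treats the chi-squared value a
table would yield as the noncentrality parameter:

    # Power of a 1-df chi-squared test at the two-tailed 0.05 level,
    # using the table's chi-squared value as the noncentrality parameter.
    from scipy.stats import chi2, ncx2

    alpha = 0.05
    crit = chi2.ppf(1 - alpha, df=1)           # 3.84

    for nc in (3.84, 8.00):
        power = ncx2.sf(crit, df=1, nc=nc)     # P(noncentral chi2 > crit)
        print(f"noncentrality {nc:.2f}: power ~ {power:.2f}")
    # roughly 0.50 at 3.84 and 0.81 at 8.00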

> 
> Can anyone make any suggestions as to what I am doing wrong and possibly 
> point me towards some reading or web sites.

Cohen has more discussion than most books.  
For more detail or for references, you might check the 
web pages of people who post answers to the stats groups.
My own stats-FAQ  lists several folks with good pages.

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html