I have been doing some cross-checking between sources on the confidence interval for the mean, where a sample of size n comes from a normal population with unknown mean and unknown variance.
Several textbooks give the confidence interval for the mean as xbar +/- t*s/sqrt(n), where n is the sample size, t is the critical value of the t distribution with n-1 degrees of freedom, and s is the unbiased sample standard deviation. Excel also calculates the confidence interval for the mean from this equation. Fisher (Statistical Methods, page 122) says that (xbar - mu)/(s/sqrt(n)) follows a t distribution and calculates the interval as xbar +/- t*s/sqrt(n).

Hogg and Craig (4th edition), on page 214 (and in earlier material, page 144), give the interval as xbar +/- t*s/sqrt(n-1), and on page 145 say that the parameter of the t distribution is the degrees of freedom, n-1.

I know this is "small potatoes", but can anybody explain the difference between sources in the size of the interval?

DAHeiser
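
P.S. One possible reconciliation, in case it helps: if a text defines s with divisor n (the maximum-likelihood form) rather than the divisor n-1 of the unbiased form, then t*s/sqrt(n-1) is algebraically identical to t*s/sqrt(n) computed with the unbiased s. A quick numeric check in Python (hypothetical data; assumes numpy and scipy are available):

    import numpy as np
    from scipy import stats

    x = np.array([2.1, 3.4, 1.9, 4.0, 2.8])  # hypothetical sample
    n = len(x)
    t = stats.t.ppf(0.975, df=n - 1)          # two-sided 95% critical value, n-1 df

    s_unbiased = x.std(ddof=1)                # divisor n-1 (most texts, Excel)
    s_mle = x.std(ddof=0)                     # divisor n

    print(t * s_unbiased / np.sqrt(n))        # half-width, textbook/Excel form
    print(t * s_mle / np.sqrt(n - 1))         # half-width, Hogg-and-Craig form
    # Both print the same value: s_mle/sqrt(n-1) == s_unbiased/sqrt(n).

If Hogg and Craig define s that way on page 144, the two intervals are the same size and the difference between the sources is purely notational.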
