I realize I'm a little late to this discussion, but I haven't seen anyone mention the "extra sums of squares" (or "additional sums of squares") principle, which can be used to compare slopes and/or intercepts of different regression models. I don't have a good reference handy for the procedure, and it requires some care in how the data are set up to test different hypotheses about how the models differ, but it is another possible approach to this problem.
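To make the extra-sums-of-squares idea concrete, here is a minimal sketch (the function name `extra_ss_f_test` and the data layout are my own, not from any reference): pool both groups into one design matrix with a group dummy and a dummy-by-x interaction, fit the full and the reduced model, and form an F statistic from the drop in residual sum of squares. The resulting F is compared against F with (2, n - 4) degrees of freedom.

```python
import numpy as np

def extra_ss_f_test(x1, y1, x2, y2):
    """Extra-sums-of-squares F-test for whether two simple linear
    regressions share a common intercept AND slope.

    Full model:    y = b0 + b1*x + b2*g + b3*(g*x)   (g = group dummy)
    Reduced model: y = b0 + b1*x
    F = [(SS_reduced - SS_full) / 2] / [SS_full / (n - 4)]
    """
    x = np.concatenate([np.asarray(x1, float), np.asarray(x2, float)])
    y = np.concatenate([np.asarray(y1, float), np.asarray(y2, float)])
    g = np.concatenate([np.zeros(len(x1)), np.ones(len(x2))])

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return float(np.sum((y - X @ beta) ** 2))

    n = len(y)
    ss_reduced = rss(np.column_stack([np.ones(n), x]))
    ss_full = rss(np.column_stack([np.ones(n), x, g, g * x]))
    df_extra, df_full = 2, n - 4
    f = ((ss_reduced - ss_full) / df_extra) / (ss_full / df_full)
    return f, df_extra, df_full
```

By coding the design matrix differently (e.g., only the interaction term as the "extra" column), the same machinery can test slopes alone or intercepts alone, which is where the care in setup comes in.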
Jane F. wrote:

> Your approach is valid ONLY IF you are willing to ignore the fact that the
> slope to which you are comparing your slope is itself an estimate. That is
> - you can use your CI to compare to a particular hypothesized value -
> basically testing the hypothesis Ho: beta = beta_0, where beta_0 is some
> hypothesized value, possibly from the literature. However, if you really
> want to see if two slopes are equal, say Ho: beta_1 = beta_2, you are better
> off using the test on p. 360 of Zar. This essentially looks at the CI of
> the difference in slopes (b_1 - b_2) to see if it includes 0.
>
> On 8/16/06, David Whitacre <[EMAIL PROTECTED]> wrote:
>>
>> While we're on regression--I know this is a really dumb question and I
>> should know the answer. But here goes, my ignorance on display:
>>
>> In comparing some regressions to published ones, how do I test for a
>> significant difference in slope? I have calculated the 95% C.I. of my
>> slope by applying the t distribution to the SE of the slope, as
>> described on p. 331 of Zar (1996, 3rd edition).
>>
>> If somebody else's slope is outside of this C.I., are the two slopes
>> significantly different at p = 0.05? That is, I don't have to consider
>> the C.I. on their slope?
>>
>> Thanks much for any enlightenment on this very basic issue.
>>
>> Dave W.
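For anyone without the book at hand, the two-slope test Jane describes (Zar 1996, p. 360) amounts to a t-test on b_1 - b_2, with the standard error built from a pooled residual mean square on n_1 + n_2 - 4 degrees of freedom. A minimal sketch (the function name and return convention are mine; check the details against Zar before relying on it):

```python
import numpy as np
from scipy import stats

def compare_slopes(x1, y1, x2, y2):
    """Test Ho: beta_1 = beta_2 for two independent simple linear
    regressions, in the spirit of Zar (1996, p. 360):
    t = (b1 - b2) / SE(b1 - b2), df = n1 + n2 - 4,
    with SE built from the pooled residual mean square."""
    def fit(x, y):
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxx = np.sum((x - x.mean()) ** 2)
        b = np.sum((x - x.mean()) * (y - y.mean())) / sxx
        resid = y - (y.mean() + b * (x - x.mean()))
        return b, sxx, np.sum(resid ** 2), len(x)

    b1, sxx1, ss1, n1 = fit(x1, y1)
    b2, sxx2, ss2, n2 = fit(x2, y2)
    df = n1 + n2 - 4
    s2_pooled = (ss1 + ss2) / df              # pooled residual mean square
    se_diff = np.sqrt(s2_pooled / sxx1 + s2_pooled / sxx2)
    t = (b1 - b2) / se_diff
    p = 2 * stats.t.sf(abs(t), df)            # two-tailed p-value
    return t, df, p
```

Equivalently, the 95% CI on the difference is (b_1 - b_2) ± t_{0.05(2),df} × SE(b_1 - b_2); the test rejects exactly when that interval excludes 0, which is the point Jane makes about why comparing one slope to the other's CI is not enough.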
