+++
THIS IS A GREAT TWO-DAY CONFERENCE ...
IF YOU HAVEN'T ALREADY MADE PLANS TO ATTEND, IT'S NOT TOO LATE!
Fifth Annual BEYOND THE FORMULA - Introductory Statistics For A New
Century: Integrating New Curriculum Ideas and Modern Techniques into Our
Beg
Great explanation! Thank you!
Just need to clarify a few points below.
On 14 May 2001 06:08:14 -0700, [EMAIL PROTECTED] (Robert J.
MacG. Dawson) wrote:
>RD wrote:
>> [...]
In sci.stat.edu Herman Rubin <[EMAIL PROTECTED]> wrote:
> In article <9deiug$l0h$[EMAIL PROTECTED]>,
> Ronald Bloom <[EMAIL PROTECTED]> wrote:
>>2.) The two row (column) marginals are treated as independent; and the
>>observed table under the null hypothesis is regarded as
>>being the result of two
In sci.stat.math Juha Puranen <[EMAIL PROTECTED]> wrote:
: When N is small this is not true. Here is a small example by Survo
The example is irrelevant; there are different tests of the same
hypothesis, e.g. do a t test with only the first 10 observations. Both tests
are valid; the large-n test is
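Cramer's point -- that several distinct, valid tests can address the same null hypothesis -- can be sketched as follows. The data, the hypothesized mean mu0, and the choice of a one-sample t statistic are all illustrative assumptions, not anything taken from the thread:

```python
from math import sqrt
from statistics import mean, stdev

# Made-up sample standing in for a full data set (illustrative only)
data = [5.1, 4.8, 5.6, 4.9, 5.3, 5.0, 4.7, 5.4, 5.2, 4.6,
        5.5, 4.9, 5.1, 5.0, 5.3, 4.8, 5.2, 5.1, 4.9, 5.4]
mu0 = 5.0  # hypothesized mean under H0

def t_stat(xs, mu0):
    # One-sample t statistic: (xbar - mu0) / (s / sqrt(n))
    n = len(xs)
    return (mean(xs) - mu0) / (stdev(xs) / sqrt(n))

t_all = t_stat(data, mu0)           # test using every observation
t_first10 = t_stat(data[:10], mu0)  # a valid (less powerful) test of the same H0
print(t_all, t_first10)
```

Both statistics test the same H0; discarding observations costs power but does not invalidate the test, which is the sense in which both tests are "valid".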
Dianne Worth <[EMAIL PROTECTED]> wrote:
: I have a multiple regression y = a + b1*x1 + b2*x2 + b3*x3 + b4*x4 + b5*x5. My Adj. R-sq is .403.
You can't decompose adjusted R-sq. The only additive decomposition (and
the only decomposition that makes sense) is the stepwise decomposition of
R-sq,
adding additional variables
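The stepwise decomposition Cramer describes can be illustrated with a small least-squares sketch. The two-predictor model and the made-up data are assumptions for illustration only (Worth's actual model has five predictors):

```python
# Stepwise (hierarchical) decomposition of R^2: fit x1 alone, then x1 and x2;
# the increase in R^2 is x2's contribution *given* x1.

def r_squared(y, yhat):
    ybar = sum(y) / len(y)
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, yhat))
    sst = sum((yi - ybar) ** 2 for yi in y)
    return 1 - sse / sst

def fit_one(y, x):
    # Simple regression of y on x
    n = len(y)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    return [ybar + b * (xi - xbar) for xi in x]

def fit_two(y, x1, x2):
    # Two-predictor regression via the 2x2 normal equations
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((b - m2) ** 2 for b in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (c - my) for a, c in zip(x1, y))
    s2y = sum((b - m2) * (c - my) for b, c in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    b1 = (s1y * s22 - s2y * s12) / det
    b2 = (s2y * s11 - s1y * s12) / det
    return [my + b1 * (a - m1) + b2 * (b - m2) for a, b in zip(x1, x2)]

# Made-up data (illustrative only)
x1 = [1, 2, 3, 4, 5, 6, 7, 8]
x2 = [2, 1, 4, 3, 6, 5, 8, 7]
y  = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8]

r2_step1 = r_squared(y, fit_one(y, x1))      # x1 alone
r2_full  = r_squared(y, fit_two(y, x1, x2))  # x1 and x2 together
increment = r2_full - r2_step1               # x2's addition, given x1
```

Note the increments depend on the entry order of the predictors, which is why this is the only additive decomposition and why adjusted R-sq (a penalized quantity) cannot be split up this way.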
RD wrote:
>
> Hi,
> I am puzzled with the following question:
> In z test for continuous variables we just use the sum of estimated
> variances to calculate the variance of a difference of two means ie
> s^2=s1^2/n1+s2^2/n2.
> For percentages we proceed as follows:
> s^2=p(1-p)(1/n1+1/n2)
> whe
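The contrast RD asks about -- summed separate variances for means versus a single pooled variance for proportions -- can be sketched numerically. The counts below are made up for illustration; the pooled form applies because, under H0: p1 = p2, both samples estimate one common p:

```python
from math import sqrt

# Illustrative counts of "successes" in two independent samples
x1, n1 = 45, 100
x2, n2 = 30, 100
p1, p2 = x1 / n1, x2 / n2

# Under H0: p1 = p2 there is one common p, so pool both samples to estimate it
p_pool = (x1 + x2) / (n1 + n2)
se_pooled = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

# The direct analogue of s1^2/n1 + s2^2/n2 (used for confidence intervals)
se_unpooled = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

z = (p1 - p2) / se_pooled
print(z, se_pooled, se_unpooled)
```

For proportions, the variance p(1-p) is determined by the mean itself, which is why the null hypothesis pins down a single pooled variance estimate; for continuous variables the two variances are free parameters and must be estimated separately.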
Elliot Cramer wrote:
>
> In sci.stat.consult Juha Puranen <[EMAIL PROTECTED]> wrote:
> :>
> :> Please clarify what is meant by "the distribution does not
> :> involve [the fixed marginals]". I am not clear on this:
> :> the Fisher test statistic (hypergeometric upper tail probability)
> :> cert
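The hypergeometric upper-tail probability Puranen refers to (the one-sided Fisher exact test statistic for a 2x2 table with both margins fixed) can be sketched in a few lines; the table used at the end is an arbitrary example, not data from the thread:

```python
from math import comb

def hypergeom_pmf(k, row1, row2, col1):
    """P(top-left cell = k) when both margins of a 2x2 table are fixed."""
    return comb(row1, k) * comb(row2, col1 - k) / comb(row1 + row2, col1)

def fisher_upper_tail(a, b, c, d):
    """One-sided Fisher exact p-value: P(top-left cell >= observed a)."""
    row1, row2, col1 = a + b, c + d, a + c
    return sum(hypergeom_pmf(k, row1, row2, col1)
               for k in range(a, min(row1, col1) + 1))

# Arbitrary example table:  8 2 / 1 5
p = fisher_upper_tail(8, 2, 1, 5)
print(p)
```

Because the distribution conditions on both margins, it depends only on the table counts and not on any nuisance parameter, which is the sense in which "the distribution does not involve the fixed marginals".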