>>>>> "Renaud" == Renaud Lancelot <[EMAIL PROTECTED]>
>>>>>     on Tue, 6 Dec 2005 08:09:35 +0100 writes:

    Renaud> For example:

....

    >> vc <- vcov(m1, useScale = FALSE)
    >> b <- fixef(m1)
    >> se <- sqrt(diag(vc))
    >> z <- b / sqrt(diag(vc))
    >> P <- 2 * (1 - pnorm(abs(z)))
    >> 
    >> cbind(b, se, z, P)
    Renaud>                   b            se         z  P
    Renaud> (Intercept)  0.3596720 0.007023556  51.20939 0
    Renaud> x1           0.2941068 0.002371353 124.02487 0
    Renaud> x2          -0.9272545 0.010087717 -91.91917 0

I still see far too many uses of  "1 - p<dist>(...)",
which in cases like the one above leads to a complete loss of
accuracy (1 - 1 = 0) -- admittedly, the above case is too
extreme for it to make any practical difference; but let me explain
the general principle: though the loss is usually no problem for
decision making based on P-values, it is unnecessary:

One of the (extra) features of R is the pair of arguments 'lower.tail'
and 'log.p' of all the  p<dist>() functions -- which (in not quite
all cases yet) allow one to avoid this precision loss.

E.g.,

  > 1 - pnorm(c( 6,8,10,20))
  [1] 9.865877e-10 6.661338e-16 0.000000e+00 0.000000e+00
  > pnorm(c(6,8, 10,20), lower.tail=FALSE)
  [1] 9.865876e-10 6.220961e-16 7.619853e-24 2.753624e-89
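
Applied to the Wald-test P-values computed above, this is a one-line
change (reusing the 'z' from that code):

  > ## instead of  P <- 2 * (1 - pnorm(abs(z))) :
  > P <- 2 * pnorm(abs(z), lower.tail = FALSE)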

BTW,   example(pnorm)  ends with two plots which show the
advantage of using  'log.p' for an additional precision gain,
e.g., in log-likelihood computations.
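
For instance (a small sketch of my own): deep in the lower tail,
pnorm() underflows to 0, so taking log() of it is useless, whereas
'log.p = TRUE' keeps the whole computation on the log scale:

  > log(pnorm(-40))          ## pnorm(-40) underflows to 0
  [1] -Inf
  > pnorm(-40, log.p = TRUE) ## computed on the log scale throughout
  [1] -804.6084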

Martin Maechler, ETH Zurich

