Incidentally, I suspect this may well be a reason for the irritating habit
some DLLs have of setting the floating-point precision to 53 bits (well, the
irritating part is not changing it back): it makes the results of
computations more nearly independent of where the numbers end up being
stored.
Does anyone know precisely what is different about the arithmetic
and/or storage of double-precision floating point that produces the
following differences between the Sun and Windows versions? (Splus 6
on the same Windows 2000 machine gives the same results as Solaris.)
R 1.6.1, Sun Solaris, gcc +
Might have something to do with .Machine$double.eps on the respective
machines.
From help(.Machine):

  double.eps: the smallest positive floating-point number `x' such that
       `1 + x != 1'. It equals `base^ulp.digits' if either `base'
       is 2 or `rounding' is 0; otherwise, it is `(base^ulp.digits)/2'.
It's a difference in the `libc'. Asking for more precision than the
arithmetic has is asking for fairly random results. The differences are as
likely to be in the *printing* as in the computations.
On Fri, 31 Jan 2003, Bob Gray wrote:

> Does anyone know precisely what is different about the [...]