On Thu, 1 Jan 2015, Duncan Murdoch wrote:

> On 01/01/2015 1:21 PM, Mike Miller wrote:
>
>> I understand that it's all about the problem of representing decimal
>> numbers in binary, but I still find some of the results a little
>> surprising, like that list of numbers from the table() output. For
>> another example:
>>
>> 1000+3 - 1000*(1+3/1000)
>> [1] 1.136868e-13
>>
>> 3 - 1000*(0+3/1000)
>> [1] 0
>>
>> 2000+3 - 1000*(2+3/1000)
>> [1] 0
>>
>> See what I mean? So there is something special about the numbers
>> around 1000.
>
> I think it's really that there is something special about the numbers
> near 1, and you're multiplying that by 1000.
>
> Numbers from 1 to just below 2 are stored as their fractional part,
> with 52-bit precision. Some intermediate calculations will store them
> with 64-bit precision. 52 bits gives about 15 or 16 significant
> decimal digits.
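To make that concrete, here is a small sketch (assuming IEEE-754 doubles, which is what R uses; the helper gap_above() is something I made up for illustration):

```r
# The gap between adjacent representable doubles ("ulp") doubles at
# each power of 2, so the absolute rounding error grows with magnitude.
eps <- .Machine$double.eps            # 2^-52, the gap just above 1
gap_above <- function(x) 2^floor(log2(x)) * eps

gap_above(1)     # numbers in [1, 2) are spaced eps apart
gap_above(1000)  # numbers in [512, 1024) are spaced 512*eps apart
```

Note that gap_above(1000) is 512*eps = 2^-43, which is exactly the 1.136868e-13 seen in the first example above.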


This is how big those errors are:

512*.Machine$double.eps
[1] 1.136868e-13

Under other conditions you were also seeing errors of twice that, or 1024*.Machine$double.eps. It might not be a coincidence that the largest number giving me an error was 1023.
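Here is one way to sketch that pattern (again assuming IEEE doubles): the identity 1000*x + 3 == 1000*(x + 3/1000) holds in real arithmetic, and checking its error for a few values of x shows it changing size near powers of 2.

```r
# Error of an identity that would be exact in real arithmetic;
# its size depends on where x falls between powers of 2.
err <- function(x) (1000*x + 3) - 1000*(x + 3/1000)

xs <- c(1, 2, 512, 1023, 1024, 2000)
cbind(x = xs, error = err(xs))
```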

2^-43
[1] 1.136868e-13

.Machine$double.eps
[1] 2.220446e-16

2^-52
[1] 2.220446e-16

I guess the 52 comes from the IEEE floating-point spec...

http://en.wikipedia.org/wiki/Double-precision_floating-point_format#IEEE_754_double-precision_binary_floating-point_format:_binary64

...but why are we seeing errors so much bigger than the machine precision? Why does it change at 2?

It doesn't really matter to my work, but it is a curious thing, so I would be interested to learn about it.

Mike

______________________________________________
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
