I realize this is stating the obvious, but the loss of precision is the result of 64-bit integer support. Previously, "upgrading" a number from integer to float was exact: a double has a 53-bit significand, so every 32-bit integer converts without rounding, but 64-bit integers above 2^53 do not. Though the residue problem for very large numbers still existed, at least it didn't involve loss of precision.

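To make that concrete, here is a sketch of a J session (output reconstructed from memory, so check it on a current J; 9007199254740993 is 2^53+1, the first integer a double cannot hold):

   9007199254740992 = 9007199254740993   NB. distinct as 64-bit integers
0
   (0.0 + 9007199254740992) = 0.0 + 9007199254740993   NB. upgraded to float
1
   9007199254740993 - <. 0.0 + 9007199254740993   NB. one unit lost in the round trip
1
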
It's my personal opinion that one should always be careful when working around the limits of a system. But what should be done when things go a little crazy around those limits? It is unfortunate that IEEE only defined indeterminate (_.) when it could have used the other unused bit configurations to flag conditions like underflowed-but-not-zero or overflowed-but-not-infinity. But they didn't.

A while back J had an option for upgrade to go to rational instead of float. It was useful in labs for showing interesting properties of numbers. Is that option still around? If so, it could be offered as an option in mod. But it cannot always be known in advance that a number will eventually be used in mod. And many transcendental verbs must go to float.

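Extended and rational types at least are still there via the x suffix and x:, and residue on them is exact, so a sketch like this should work today (again from memory):

   2 ^ 100x                 NB. extended: every digit exact
1267650600228229401496703205376
   1000000 | 2 ^ 100x       NB. so the residue is exact too
205376
   datatype 1000000 | 2 ^ 100x
extended
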
Some current hardware now supports quad-precision floats. If quad float were used, the loss of precision when converting 64-bit integers to float would go away, since quad's 113-bit significand holds any 64-bit integer exactly. But that doubles the size of a float, and even though memory is getting huge it's still a concern for big problems. Not to mention that quad float is probably slower than double float. And it may not be supported on all hardware, similar to the AVX problem.

IBM's PL/I has an interesting approach to precision. You told it (in decimal digits) the largest numbers you would deal with and the number of digits after the decimal point. Then it picked the best way to store the numbers given the available hardware. In J we have 64-bit integers, and floats with maybe 16 significant decimal digits and a tremendous range for exponents. Most problems we deal with don't need such big numbers; an argument many use against J is that it spends so much memory on small numbers. Perhaps a global setting reachable through a foreign conjunction could give a similar choice in J.

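The memory point is easy to see with 3!:0 (datatype code) and 7!:5 (bytes used by a name); the exact byte count below is a guess from memory and will vary with version and header size, but roughly 8 bytes per small atom is the point:

   a =: i. 1000      NB. a thousand small integers
   3!:0 a            NB. 4 = 64-bit integer type
4
   7!:5 <'a'         NB. about 8 bytes per atom plus the array header
8192
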
I would argue against having such a setting name things like single/double/quad float or 16/32/64-bit integers; instead, specify what range and significance are needed and let J choose how to handle it, including ignoring the hint entirely in some implementations. Supporting this could make the J engine larger, but nobody seems too concerned with the monstrous size of Qt.

Whatever happened to the idea that was bouncing around of defining a floating-point type of arbitrary size and precision, like we have with extended integers and rationals?

And now IEEE has a decimal floating-point standard. Right now it seems that only IBM has implemented it in hardware. But think of all the confusion we see when decimal numbers like 1.1 cannot be represented exactly in J.

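A quick session sketch of that confusion (output from memory; note that J's default comparison tolerance hides the error until you turn it off with the fit conjunction !.0):

   0j17 ": 1.1            NB. the stored double, printed to 17 places
1.10000000000000009
   (0.1 + 0.2) = 0.3      NB. 1 only because of comparison tolerance
1
   0.3 (=!.0) 0.1 + 0.2   NB. exact comparison: they differ
0
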
Maybe I rambled a bit. But all of this involves problems when, for one reason or another, the hardware can't handle the needed precision.