On Sat, Oct 04, 2008 at 09:37:29PM -0700, Mark Biggar wrote:

> trivial and vice versa. But promotion (or demotion) between IEEE floats 
> and rationals is really hard and I don't know of a language that even 
> tries.  The major problem is that the demotion from rational to IEEE 
> float is very lossy.  In general, there are many possible distinct 
> rationals that convert into the same IEEE value and converting  IEEE 
> float to the simplest rational out of that set is a very expensive 
> operation.  Once you're in rationals you probably never want to demote 

And I can't see why it's anything other than a heuristic. For example,
0.2 is an infinite binary fraction. So store that as an IEEE float (heck,
in any form of float with a binary mantissa of fixed precision) and 1/5 isn't
the *only* rational that could have got you there. Something like

0b1100110011001100110011001100110011001100110011001101 /
0b1000000000000000000000000000000000000000000000000000000

(3602879701896397 / 18014398509481984, the exact value of the nearest
double) would also result in the same bit pattern in the mantissa, as
would any other rational close enough to round to that same double, such
as 2000000000000000001 / 10^19.

How does the implementation tell which rational the floating point value
actually came from?
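To make the many-to-one mapping concrete, here's a sketch in Python
(standing in for whatever language we'd actually use); fractions.Fraction
gives exact rationals, and limit_denominator is one heuristic for picking
a "simple" rational out of the set -- a guess, not a recovery of where the
value came from:

```python
from fractions import Fraction

# Three distinct rationals...
a = Fraction(1, 5)
b = Fraction(3602879701896397, 18014398509481984)  # exact value of the double
c = Fraction(2000000000000000001, 10**19)          # close enough to round to it

# ...that all demote to the same IEEE double, i.e. the same bit pattern:
assert float(a) == float(b) == float(c) == 0.2

# Promoting the float back gives the exact double value, not 1/5:
assert Fraction(0.2) == b

# Guessing a "simple" preimage is a search, and only a heuristic:
assert Fraction(0.2).limit_denominator(1000) == Fraction(1, 5)
```

So the float-to-rational direction is necessarily a choice of policy, not
a lookup.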

> back to IEEE (except possibly at the very end).  Every language that 
> supports rationals, that I know of, leaves it up to the programmer to 
> decide whether they will be doing computations in IEEE float or 
> rationals, and does not try to automatically convert back and forth.  I 
> looked at those and basically gave up when I was writing the Perl 5 
> Bigint and Bigrat packages.
> Before you discuss implementations, you should define exactly what rules 
> you are going to use for promotion and demotion between the various types.

Studiously ignoring that request to nail down promotion and demotion, I'm
going to jump straight to implementation, and ask:

If one has floating point in the mix [and however much one uses rationals,
and has the parser store all decimal string constants as rationals, floating
point enters the mix as soon as someone wants irrational-valued functions
such as sin(), exp() or sqrt()], I can't see how any implementation that wants
to preserve "infinite" precision for as long as possible is going to work,
apart from

    storing every value as a thunk that holds the sequence of operations
    used to compute it, and deferring calculation for as long as possible.
    (And possibly, as a sop to efficiency, caching the floating point
    outcome of evaluating the thunk if it gets called.)

Nicholas Clark
