Larry Wall wrote:
> Another note, it's likely that numeric literals such as 1.23 will turn
> into Rats rather than Nums, at least up to some precision that is
> pragmatically determined.

Doing these as Rat would avoid a lot of the precision issues that
floating-point arithmetic has all the time. It will actually work
perfectly well for addition: the denominator of each literal is a small
power of 10, so that is true for the sum as well. Multiplying might be
an issue, because the denominator becomes a large power of 10, but I
think that can be handled pretty well, unless the multiplication is
really performed to an extent that the result uses significant amounts
of memory.

But as soon as division occurs, these rational numbers tend to acquire
denominators that are not powers of 10 any more. Combined with some
multiplications and additions, this may result in huge numerators and
denominators that are somewhat expensive to handle.
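The effect of each operation on the denominator can be sketched with
Python's fractions.Fraction standing in for Rat (the helper
pow10_denominator is my own illustration, not part of any proposal):

```python
from fractions import Fraction

def pow10_denominator(fr):
    """True if the reduced denominator contains only the primes 2 and 5,
    i.e. divides some power of 10 (illustrative helper)."""
    d = fr.denominator
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

a = Fraction("1.23")          # exactly 123/100
b = Fraction("4.5")           # exactly 45/10, reduced to 9/2

print(a + b, pow10_denominator(a + b))   # 573/100 True  -- sums stay decimal
print(a * b, pow10_denominator(a * b))   # 1107/200 True -- so do products
print(a / 7, pow10_denominator(a / 7))   # 123/700 False -- division breaks it
```

Addition and multiplication of decimal literals only ever mix the primes
2 and 5 into the denominator; the first division by anything else (here 7)
leaves the power-of-10 world for good.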
So what would happen after such a long calculation?
- Would the Rats somehow know that they are all derived from Rats that
were just used instead of floats because of being within a pragmatically
determined precision? Then the result of * or / could just as
pragmatically become a floating-point number.
- Would the Rats grow really huge numerators and denominators, making it
expensive to work with them?
- Would the first division have to deal with the conversion from Rat to
Num?
- Or should there be a new numeric type similar to Rat whose denominator
is always a power of 10 (like BigDecimal in Java, LongDecimal for Ruby,
or decimal in C#)?
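To make the middle two questions concrete, here is a Python sketch
(Fraction standing in for Rat, float for Num). A chain of divisions makes
the exact denominator grow without bound, while the float coercion stays
cheap:

```python
from fractions import Fraction

# Iterate x -> x/3 + 1 starting from a decimal literal.  Each division by 3
# multiplies the reduced denominator by 3 (it stays 10 * 3**k throughout),
# so the exact representation grows linearly in bits while the value itself
# quietly converges to the fixed point 1.5.
x = Fraction("1.1")
for _ in range(40):
    x = x / 3 + 1

print(x.denominator == 10 * 3**40)   # True: the exact denominator is huge
print(x.denominator.bit_length())    # 67 bits just for the denominator
print(float(x))                      # the Rat -> Num coercion: cheap, ~1.5
```

After only 40 steps the exact answer costs far more to store and to reduce
than the float it approximates, which is the heart of the trade-off above.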
Even in this last case the division is not really easy to define,
because the exact result cannot generally be expressed with a
denominator that is a power of 10.
This can be resolved by:
- requiring additional rounding information (so writing something like
a.divide(b, 10, ROUND_UP) or so instead of a/b);
- implicitly finding the number of significant digits by using partial
derivatives of f(x, y) = x/y;
- expressing the result as some kind of rational number;
- expressing the result as some kind of floating-point number.
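The first option is essentially what the fixed-point decimal libraries
mentioned above do. A sketch with Python's decimal module, where quantize
plays the role of the hypothetical a.divide(b, 10, ROUND_UP):

```python
from decimal import Decimal, ROUND_UP

a, b = Decimal("1"), Decimal("3")

# 1/3 has no finite base-10 expansion, so the division itself already
# rounds: the default context carries 28 significant digits.
q = a / b
print(q)                                   # 0.3333333333333333333333333333

# Explicit per-call rounding to 10 fractional digits, in the spirit of
# a.divide(b, 10, ROUND_UP):
print(q.quantize(Decimal("1e-10"), rounding=ROUND_UP))   # 0.3333333334
```

Either way, some precision and rounding-mode information has to come from
somewhere -- a global context, as in Python and Java, or per call, as in
the a.divide(b, 10, ROUND_UP) style.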