Prompted by David's message, I'd like to let you know that I also
have a simple implementation of rational arithmetic. I am still
testing it, but it should be available soon, along with a few
other modules.
It is possibly not as clever as David's implementation of rationals,
but it has its merits:
1. It still fits into the current framework of types and classes.
I simply did not dare to play with the current class hierarchy,
leaving that to more experienced people. Frustrated by the compiler
error messages, which forbid instantiation of type synonyms (in
this case Rational), I declared a new data type instead: Fraction.
Fraction is very similar to Rational, but it is also an instance of
a class Transcendental, which handles logarithms, exponentials and
trigonometry. I plan to add a few more functions that are
frequently used in engineering: Bessel, Gamma, etc.
Fraction ignores the current Floating hierarchy and introduces a set
of replacements for the existing transcendentals, such as
log' eps x,
sqrt' eps x,
exp' eps x, etc.
where eps is the requested accuracy of the computation. The names
are chosen so that one can run the primed functions side by side
with their floating point counterparts, to compare speeds and
accuracies.
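To make the naming convention concrete, here is a minimal sketch of
what such a primed function could look like. This is my illustration
over Haskell's standard Rational, not Jan's actual code: an exp' that
sums the Taylor series until the next term drops below eps.

```haskell
-- Hypothetical sketch of an accuracy-driven transcendental:
-- exp' eps x sums the Taylor series of exp over Rational until
-- the next term is smaller than the requested accuracy eps.
exp' :: Rational -> Rational -> Rational
exp' eps x = go 1 1 1
  where
    -- term = x^(k-1)/(k-1)!, acc = partial sum of the first k terms
    go term k acc
      | abs next < eps = acc + next
      | otherwise      = go next (k + 1) (acc + next)
      where next = term * x / fromInteger k
```

One can then compare fromRational (exp' eps 1) against exp 1 from
the Floating instance, exactly as described above.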
2. I decided to explore infinity as a means of error reporting,
without causing run-time errors. The Fraction 1//0 stands for
"infinity" and is legal, while the Rational 1%0 would raise an
error. Simple arithmetic examples demonstrate the concept:
1//3 * 1//0 = 1//0
1//3 + 1//0 = 1//0
The above operations need no check for infinities, because the
addition and multiplication rules produce the right results anyway.
Other functions have to test their arguments for infinity in order
to force the "infinity" again, as in:
1//3 / 1//0 = 1//0 -- this is just a made-up rule
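To illustrate why multiplication and addition need no special cases,
here is a tiny stand-in for such a Fraction type. This is my own
sketch; the constructor and the (//) smart constructor are
assumptions, not Jan's actual module.

```haskell
-- A toy Fraction whose smart constructor (//) tolerates a zero
-- denominator: n // 0 is kept as the "infinity" (signum n :// 0)
-- instead of raising an error the way Rational's (%) does.
data Fraction = Integer :// Integer deriving (Eq, Show)

(//) :: Integer -> Integer -> Fraction
n // 0 = signum n :// 0
n // d = (n `div` g) :// (d `div` g)  where g = gcd n d

-- Plain textbook rules; an infinite operand falls through correctly
-- because the zero denominator propagates into the result.
fmul, fadd :: Fraction -> Fraction -> Fraction
(a :// b) `fmul` (c :// d) = (a * c) // (b * d)
(a :// b) `fadd` (c :// d) = (a * d + c * b) // (b * d)
```

With these definitions both (1//3) `fmul` (1//0) and
(1//3) `fadd` (1//0) reduce to 1 :// 0, matching the examples above.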
4. The implementation is based on continued fractions. To improve
efficiency, certain identity relations have been explored and
tested. For example, a naive implementation of atan' is good for
small values of x only, so when |x| > 1 I invert the argument and
then use:
atan' eps x = pi' eps / 2 - atan' eps (1/x)
This is just to demonstrate that testing is very time consuming. It
is easier to declare that something is doable than to actually
implement the damn thing with a good balance between speed and
accuracy.
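The range reduction above is easy to sketch. The following is my
reconstruction of the idea, not Jan's code: the alternating Gregory
series for |x| <= 1, the fold-back identity for |x| > 1, and a
Machin-style pi' so that pi itself is computed to the requested
accuracy as well.

```haskell
-- atan' eps x: Gregory series x - x^3/3 + x^5/5 - ... for |x| <= 1;
-- larger arguments are folded back with atan x = +-pi/2 - atan (1/x).
atan' :: Rational -> Rational -> Rational
atan' eps x
  | x > 1     =   pi' eps / 2 - atan' eps (1 / x)
  | x < -1    = - pi' eps / 2 - atan' eps (1 / x)
  | otherwise = go x 1 x
  where
    -- p is the previous signed power of x (odd exponent k),
    -- acc is the partial sum including the term p / k
    go p k acc
      | abs t < eps = acc + t
      | otherwise   = go q (k + 2) (acc + t)
      where q = negate (p * x * x)
            t = q / fromInteger (k + 2)

-- Machin's formula; both arguments are below 1, so this never
-- recurses back through the |x| > 1 branches.
pi' :: Rational -> Rational
pi' eps = 16 * atan' eps (1 / 5) - 4 * atan' eps (1 / 239)
```

The per-term cutoff makes eps an approximate accuracy target rather
than a rigorous error bound, which is part of the speed/accuracy
balancing act described above.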
5. But this is just the beginning. The module Cofra.hs defines all
the basic operations on complex fractions. Cofra is also an
instance of Transcendental, so all the complex logarithms, etc.
are named the same way as for Fraction.
6. Finally, the module Numerus defines the data type Numerus, a
generalization of Integer, Fraction and Cofra:
data Numerus = Integer Integer | Fraction Fraction | Cofra Cofra
A funny name, but I tried to avoid adding more confusion to the
existing names: Num, Numeric, Number... Yet I wanted something
suggesting generality, so here comes the Latin word for a number.
No harm to anyone.
The fun begins right here, where Numerus undertakes a variety of
rescue missions for the cases which would lead to errors or
"infinities", as in Fraction. Numerus is also an instance of
Transcendental, so, for example, for a negative fraction x the
result of sqrt is still valid:
sqrt' eps (Fraction x) = Cofra (0 :+ sqrt' eps (-x))
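The rescue can be sketched in a few lines. Everything here is a
stand-in of my own: standard Rational plays the role of Fraction, a
pair of Rationals plays the role of Cofra, and the constructor names
are assumptions, not the actual modules.

```haskell
-- NI, NF, NC stand in for the Integer, Fraction and Cofra cases;
-- NC (re, im) is a poor man's complex fraction.
data Numerus = NI Integer | NF Rational | NC (Rational, Rational)
  deriving (Eq, Show)

-- Newton's method for the square root of a non-negative Rational,
-- stopping once the step size falls below eps.
sqrtQ :: Rational -> Rational -> Rational
sqrtQ eps x = go (max 1 x)
  where
    go g | abs (g' - g) < eps = g'
         | otherwise          = go g'
      where g' = (g + x / g) / 2

-- The rescue: sqrt of a negative fraction is promoted to a complex
-- result, 0 :+ sqrt (-x) in the notation used above.
sqrtN :: Rational -> Numerus -> Numerus
sqrtN eps (NI n) = sqrtN eps (NF (fromInteger n))
sqrtN eps (NF x)
  | x < 0     = NC (0, sqrtQ eps (negate x))
  | otherwise = NF (sqrtQ eps x)
sqrtN _   (NC z) = NC z  -- complex sqrt is omitted in this sketch
```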
Just give me a few more days to clean it up before I post the URL.
The main reason for doing all this work was to demonstrate that
everything I mentioned in my original post is doable and does not
have to be overly complex. I am not trying to push any alternative
solution. On the contrary, I would be just as happy to use David's,
Sergey's or somebody else's working code instead.
One thing that has been overlooked in all this discussion is a
paper by Peter J. Potts, where he demonstrates that Moebius
transformations are generalizations of continued fractions and
lead to a matrix implementation of approximations of the reals.
I mentioned his work in my original post, but I did not post its
URL, and no one took notice of it. So here it is now:
http://theory.doc.ic.ac.uk/~pjp/ieee/ieee.html
I am not yet in a position to evaluate the practicality of Peter's
work. Is it more efficient than continued fractions? But it is
certainly interesting.
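For a taste of the connection without reading the paper: the
continued-fraction step x -> a + 1/x is a Moebius transformation
whose matrix is [[a,1],[1,0]], so a finite continued fraction can be
evaluated by multiplying 2x2 integer matrices. This is standard
material sketched by me in Haskell, not code from Potts's paper.

```haskell
-- 2x2 integer matrices, row major: (a, b, c, d) = [[a,b],[c,d]].
type M2 = (Integer, Integer, Integer, Integer)

mmul :: M2 -> M2 -> M2
mmul (a,b,c,d) (e,f,g,h) = (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

-- The continued fraction [a0; a1, a2, ...] corresponds to the product
-- M(a0) M(a1) ... with M(a) = [[a,1],[1,0]]; the first column of the
-- product holds the numerator and denominator of the result.
fromCF :: [Integer] -> (Integer, Integer)
fromCF as = (p, q)
  where (p, _, q, _) = foldl mmul (1,0,0,1) [ (a,1,1,0) | a <- as ]
```

For example, fromCF [3,7,15,1] gives (355,113), the familiar
approximation 355/113 of pi.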
To finish up, here is something for fun:
David Lester gives one example in his post that is supposed to
convince us why reals are sometimes better than floats.
I came across a much more striking example in a paper by Valerie
Menissier-Morain: "Arbitrary precision real arithmetic: design
and algorithms". You will find it somewhere at ftp://ftp.inria.fr
(sorry, I forgot the exact location). There she quotes an example
from Jean-Michel Muller's work:
Imagine the sequence:
a[0] = 11/2;  a[1] = 61/11;  a[n+1] = 111 - (1130 - 3000/a[n-1]) / a[n]
The theoretical limit of this sequence is 6. Rational arithmetic
leads to 6 in 7 steps. Single precision floats get confused at
about step 8 and persist around the value 100, starting with
step 12.
Double precision needs about 18 steps to get screwed up as
well - wrong value 100 again!
Maple and other packages with a user-defined number of floating
point bits have exactly the same problem: only the number of
steps becomes bigger, but the end result is the same - 100!
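The experiment is easy to reproduce in a few lines. This is a sketch
of mine using Haskell's standard Rational and Double, not code from
the paper:

```haskell
-- a[0] = 11/2, a[1] = 61/11, a[n+1] = 111 - (1130 - 3000/a[n-1]) / a[n].
-- The same definition is run at two types: exact Rational and Double.
muller :: Fractional a => Int -> [a]
muller n = take n xs
  where
    xs = 11/2 : 61/11 : zipWith step (tail xs) xs
    step an anm1 = 111 - (1130 - 3000 / anm1) / an
```

At sixty terms, last (muller 60 :: [Rational]) is very close to 6,
while last (muller 60 :: [Double]) has long since settled at the
spurious fixed point 100.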
I enjoyed reading this and the other examples. The paper is
valuable on its own merits too.
Jan