On 5/1/07, Erick Tryzelaar <[EMAIL PROTECTED]> wrote:
> I was reading through the lexer and I noticed that we don't support
> imaginary and complex literals like "1.0d + 0.5di". Would this be
> something useful to have? We'd possibly have to write our own
> imaginary and complex types if there isn't a C++ version of them
> available, unless you guys think it's standard enough to depend on. This
> could be another handy thing for 1.1.4.

If we do decide on complex numbers in 1.1.4, please DON'T do what
C/C++/Python/O'Caml/etc. do and represent them as simple Cartesian
coordinates.  This leaves the user set up for all sorts of ridiculous
surprises.  For example, in O'Caml:

# open Complex;;
# pow zero one;;
- : Complex.t = {re = nan; im = nan}

Why is this?  Let's look at the implementation of pow:

let pow x y = exp (mul y (log x))

log x is -inf+0i, okay, but when we multiply this by one, we get:

# mul one {re = neg_infinity; im = 0.};;
- : Complex.t = {re = neg_infinity; im = nan}

Certainly not correct!  This is because mul cross-multiplies 0 by -inf in the
imaginary part, and 0 * inf is nan under IEEE 754 arithmetic.
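
For reference, the Cartesian mul is essentially the textbook formula (this is
roughly what the stdlib does; the record type below just mirrors Complex.t):

type t = { re : float; im : float }

let mul x y =
  { re = x.re *. y.re -. x.im *. y.im;
    im = x.re *. y.im +. x.im *. y.re }

(* With x = 1 + 0i and y = -inf + 0i, the imaginary part is
   1. *. 0. +. 0. *. neg_infinity = 0. +. nan = nan *)
let () =
  let r = mul { re = 1.; im = 0. } { re = neg_infinity; im = 0. } in
  Printf.printf "%f %f\n" r.re r.im    (* -inf nan *)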

The correct solution is to represent infinite values in polar coordinates,
so e.g. -inf+0i becomes infinity at angle pi, and to extend the arithmetic
logic to handle these "directed infinity" cases properly.  (Mathematica is
one of the few systems that handles this properly; search for "directed
infinity" on MathWorld.)

Sometimes, however, the angle of infinity is unknown, for example when
dividing 1 by 0 (because the angle of zero is unknown).  In this case the
representation is infinity with an unknown (nan) angle, and it continues to
be handled gracefully by the complex arithmetic operators (for example, norm
correctly returns inf, which is not the case for Cartesian implementations).
This case is called "complex infinity".

Of course this can be extended to deal with quantities such as "directed
zero" and "directed NaN" (known angle, unknown magnitude), but these cases
are less practically useful (though they are trivial and certainly worth
implementing).

Since implementing this requires representing some complex numbers in polar
form, it makes sense to allow *all* complex numbers to be represented in
polar form, and lazily convert them to/from Cartesian only when necessary
(e.g. addition requires Cartesian, exponentiation requires polar).  This
gives a speed boost when working mostly with multiplication, and the only
overhead is boxing/unboxing the values (and branching on their type).  Of
course even this small slowdown could be unacceptable to some people, so it
would make sense to provide dedicated Cartesian / polar modules without this
functionality.
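
A very rough sketch of what the generic, lazily converted representation might
look like (a variant boxing both forms; every name here is invented):

type complex =
  | Cart of float * float      (* re, im *)
  | Polar of float * float     (* magnitude, angle *)

let to_cart = function
  | Cart (re, im) -> (re, im)
  | Polar (m, a) -> (m *. cos a, m *. sin a)

let to_polar = function
  | Polar (m, a) -> (m, a)
  | Cart (re, im) -> (sqrt (re *. re +. im *. im), atan2 im re)

(* addition wants Cartesian coordinates... *)
let add x y =
  let (xr, xi) = to_cart x and (yr, yi) = to_cart y in
  Cart (xr +. yr, xi +. yi)

(* ...while multiplication is cheapest in polar form: multiply the magnitudes
   and add the angles.  A real implementation would branch on whatever
   representations it already has instead of always converting. *)
let mul x y =
  let (xm, xa) = to_polar x and (ym, ya) = to_polar y in
  Polar (xm *. ym, xa +. ya)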

But here's an idea: do everything in the type system.  Ad-hoc polymorphism
combined with Felix's reduction rules should be enough to allow the compiler
to alternate between using the "Cartesian" module and the "polar" module for
different operations, only falling back to the "generic" module when, say,
a function needs to return a value.

If you don't mind, I'd like to get my hands dirty implementing this as my
first Felix project... school's finally out so I've got some free time and
would love to give Felix this feature that so few other languages have :)

- Chris
