On 5/1/07, Erick Tryzelaar <[EMAIL PROTECTED]> wrote:
> Does this problem occur if we do proper conversions to and from polar
> coordinates?

Although that would work in some cases (such as exponentiation), there
would still be information loss in certain edge cases, since Cartesian
coordinates cannot represent values such as directed infinity.
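For instance (a toy Python illustration, not how Felix would implement it): two directed infinities at different angles collapse to the same Cartesian value.

```python
import math

inf = float('inf')

# Two distinct directed infinities in polar form (r, theta)
p1 = (inf, math.pi / 4)
p2 = (inf, math.pi / 3)

# Converting either to Cartesian coordinates destroys the direction:
# inf * cos(theta) and inf * sin(theta) are both inf for any theta
# strictly inside the first quadrant.
c1 = (p1[0] * math.cos(p1[1]), p1[0] * math.sin(p1[1]))
c2 = (p2[0] * math.cos(p2[1]), p2[0] * math.sin(p2[1]))

print(c1 == c2)  # True -- the angle is gone
```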

> Are polar coordinates uniformly better than cartesian? If
> so, could we just fix this by making the underlying system implemented
> with them, and convenience functions to get the cartesian form?

Yes, in that all operations with polar coordinates are consistent,
whereas they aren't with Cartesian coordinates.  The problem though is
that you can't add polar coordinates directly---you must convert to
Cartesian first.  Since converting between the two takes *much* longer
than the addition itself, forcing polar coordinates to be used
everywhere entails a huge performance hit.  But by working with both
representations and converting only when necessary, the only overhead
comes from the logic which determines when conversion is needed.
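To make the round-trip concrete, here's a Python sketch using cmath (the name polar_add is made up for illustration): the two trig-based conversions dominate the cost of the single cheap addition in the middle.

```python
import cmath
import math

def polar_add(p, q):
    """Add two numbers given in polar form (r, theta).

    There is no closed-form addition in polar coordinates, so we
    round-trip through Cartesian form; the rect/polar conversions
    (trig calls) dwarf the cost of the one complex addition.
    """
    z = cmath.rect(*p) + cmath.rect(*q)
    return cmath.polar(z)

# 1 at angle 0 plus 1 at angle pi/2: approximately sqrt(2) at angle pi/4
r, theta = polar_add((1.0, 0.0), (1.0, math.pi / 2))
```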

Of course on modern machines, these tests would probably take as many
cycles as the addition itself, since modern FPUs can do addition (and
even multiplication and sometimes division!) in a single clock cycle.
But here we could probably exploit pipelining, and perform the
"default" operation on the FPU in parallel with the representation
tests on the main CPU, after which the FPU results are scrapped and a
conversion is performed if the checks determine that one is needed.

> Another idea is to have two distinct types with their own functions, a
> pcomplex for polar, and a ccomplex for cartesian, and join the two via a
> Complex typeclass to capture all the shared functions? It's a little
> ugly though since they really are just two views of the same data.

This is more or less what I'm thinking... operations such as
exponentiation that require polar coords will always return a
pcomplex, whereas operations such as addition will always return a
ccomplex, thus moving a lot of the burden into the type system.  A
problem arises though when considering operations such as
multiplication and division which are efficient in either
representation, but *sometimes* require that the result be represented
in polar coords (as in one of my examples).  Aside from complex (and
probably undecidable) data-flow analysis, the only immediate solution
that comes to mind is to fall back to returning a union type in this
case, so we wouldn't be able to totally eliminate the runtime checks.
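A rough sketch of the two-type scheme (Python standing in for Felix; PComplex/CComplex and their methods are illustrative names, not a proposed API):

```python
import cmath

class CComplex:
    """Cartesian representation; addition stays in this form."""
    def __init__(self, re, im):
        self.re, self.im = re, im
    def __add__(self, other):
        return CComplex(self.re + other.re, self.im + other.im)
    def to_polar(self):
        r, theta = cmath.polar(complex(self.re, self.im))
        return PComplex(r, theta)

class PComplex:
    """Polar representation; multiplication stays in this form."""
    def __init__(self, r, theta):
        self.r, self.theta = r, theta
    def __mul__(self, other):
        # (r1, t1) * (r2, t2) = (r1*r2, t1 + t2): no trig needed
        return PComplex(self.r * other.r, self.theta + other.theta)
    def to_cartesian(self):
        z = cmath.rect(self.r, self.theta)
        return CComplex(z.real, z.imag)

# Polar-natural ops return PComplex, Cartesian-natural ops return
# CComplex; mixed expressions pay one conversion at the boundary:
z = (PComplex(2.0, 0.0) * PComplex(3.0, 0.0)).to_cartesian() + CComplex(1.0, 0.0)
```

The troublesome cases (multiplication/division whose result sometimes *must* stay polar) are exactly what this static split can't express, hence the union-type fallback.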

BTW I've been looking at C99 and D... both languages seem to recognize
the problem and provide remedies for it; C99 in particular does a good
job of laying out the arithmetic and trigonometry rules required to
eliminate the class of errors which pop up in naive complex
implementations such as O'Caml's.  (Running through the spec we find
that cexp(cmul(1, clog(0))) is in fact 0 and not NaN.)  So maybe a
Cartesian complex module which follows these rules will be "good
enough" for most users who don't need things such as directed
infinities and just want predictable results... but a full-fledged
hybrid complex library remains interesting nonetheless.
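To see where the NaN comes from in a naive Cartesian implementation (Python floats standing in for C doubles):

```python
import math

ninf = float('-inf')

# clog(0) in Cartesian form is (-inf, 0)
a, b = 1.0, 0.0     # the multiplier 1
c, d = ninf, 0.0    # clog(0)

# textbook complex multiplication: (a+bi)(c+di) = (ac-bd) + (ad+bc)i
re = a * c - b * d  # -inf
im = a * d + b * c  # 0 + 0 * (-inf) = nan!

# A naive cexp then yields NaN everywhere, since cos(nan)/sin(nan) are
# nan.  C99's Annex G special-case rules for infinities are what recover
# cexp(-inf + 0i) = 0 instead.
print(re, im)
```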

- Chris

_______________________________________________
Felix-language mailing list
Felix-language@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/felix-language
