The numerical robustness of Python is very poor - this is not its fault, but that of IEEE 754 and (even more) C99. In particular, erroneous numerical operations often create apparently valid numbers, and the NaN state can be lost without an exception being raised. For example, try int(float("nan")).
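As a concrete illustration of how the NaN state leaks away silently (a minimal sketch; the behaviour of int() on a NaN in particular varies between Python versions and platforms, so take the comments as indicative rather than definitive):

    nan = float("nan")

    # Ordered comparisons involving a NaN quietly answer False, so the
    # "invalid" state vanishes into an ordinary boolean - no exception,
    # no warning.
    print(nan == nan)     # False
    print(nan < 1.0)      # False
    print(nan >= 1.0)     # False

    # One consequence: min() gives position-dependent answers and can
    # silently discard the NaN altogether.
    print(min(nan, 1.0))  # nan
    print(min(1.0, nan))  # 1.0 - the NaN has disappeared

    # int(nan) may raise or may return an ordinary integer, depending
    # on the Python version and the underlying C library.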
Don't even ASK about complex, unless you know FAR more about numerical programming than 99.99% of programmers :-(

Now, I should like to improve this, but there are two problems.

The first is political: would it be acceptable in Python to restore the semantics that were standard in numerical programming up until about 1980? That is, a model in which anything that is numerically undefined, or that sits at a singularity which can deliver more than one value, is an error state (e.g. raises an exception or returns a NaN). This is heresy in the C99 and Java camps, and is none too acceptable in the IEEE 754R one. My question here is whether such an attempt would be opposed tooth and nail in the Python context, the way it was in C99.

The second is technical. I can trivially provide options to select between a restricted range of behaviours, but the question is how. Adding a method to a built-in class doesn't look easy, from my investigations of floatobject.c, and it is very doubtful that it is the best way anyway - one of the great problems with "object orientation" is how it handles issues that arise at class conversions. A run-time option or reading an environment variable has considerable merit from a sanity point of view, but calling a global function is also possible (a rough sketch of that option is in the P.S. below).

Any ideas?

Regards, Nick Maclaren.
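P.S. For concreteness, the "global function" option might look something like the hypothetical sketch below. Everything here - the name float_errors, the mode strings, the checked() wrapper - is invented purely for illustration; it is not an existing or proposed interface, and a real implementation would live inside floatobject.c rather than in Python.

    import math

    _MODE = "ieee"   # hypothetical module-level setting: "ieee" or "raise"

    def float_errors(mode):
        # Hypothetical global switch between IEEE-style silent NaNs and
        # the older "an undefined operation is an error" semantics.
        global _MODE
        if mode not in ("ieee", "raise"):
            raise ValueError("unknown mode: %r" % (mode,))
        _MODE = mode

    def checked(result):
        # Pass the result through under "ieee"; under "raise", treat a
        # NaN or infinity as an error state and raise instead.
        if _MODE == "raise" and (math.isnan(result) or math.isinf(result)):
            raise ArithmeticError("invalid or overflowed floating-point result")
        return result

    float_errors("raise")
    checked(1e300 * 1e-300)               # fine: returns an ordinary float
    checked(float("inf") - float("inf"))  # raises ArithmeticError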