From: Kitching Simon [mailto:[EMAIL PROTECTED]]
>> Hmm .. rather than talking about doing "textual comparisons on numbers",
>> why not say "BigInt" (BigNum)?
>> There are lots of packages out there that already deal in infinite-precision
>> numbers, together with methods for converting them to and from strings, and
>> for doing arithmetic on them too, as well as doing comparisons.

All that schema validation needs is comparison, which should be a trivial thing to implement compared to doing infinite-precision mathematics. I would think that the issues of code size and performance would inhibit us from using any standard BigInt package, and I'm not sure there are any appropriate BigReal packages. The platform issues that I was discussing were primarily on C++, since Java compels you to have a consistent IEEE double and float. However, if there is any existing code that we can integrate that does the job efficiently, that is fine by me.

I'd have to benchmark it, but I think that a textual comparison would have to be much faster than the C RTL conversion routines. Since this also seems to be the only place that floating point is used in the parser, it would seem desirable to avoid it, so that micro-platforms that might not have a floating point RTL could still use the parser.

The Dec 17th draft went to great pains to tie the floating point types to specific implementations (though thankfully the most widely supported ones), while the integer type is blissfully abstract. However, I'd much prefer the abstract real datatype to come back, leaving the details of double and real types and how to handle overflow, underflow, and loss of precision to a type-specific DOM. Part of the drive to get specific may have been an attempt to get consistent behavior from the min/max constraints.
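To make the "comparison is trivial" point concrete, here is a minimal sketch (my own illustration, not proposed parser code) of comparing two non-negative decimal integer strings of unlimited length without ever converting them to a numeric type: skip leading zeros, then a longer digit string is the larger value, and equal-length strings compare lexicographically.

```cpp
#include <string>

// Hypothetical helper: compare two unsigned decimal integer strings
// textually.  Returns -1, 0, or 1 as a < b, a == b, or a > b.
// Handles leading zeros and treats "", "0", "000" all as zero.
int compareDecimal(const std::string& a, const std::string& b)
{
    // Skip leading zeros; npos means the string is all zeros (or empty).
    std::string::size_type i = a.find_first_not_of('0');
    std::string::size_type j = b.find_first_not_of('0');
    std::string::size_type lenA = (i == std::string::npos) ? 0 : a.size() - i;
    std::string::size_type lenB = (j == std::string::npos) ? 0 : b.size() - j;

    // More significant digits means a larger number.
    if (lenA != lenB)
        return lenA < lenB ? -1 : 1;
    if (lenA == 0)
        return 0;   // both values are zero

    // Same digit count: plain lexicographic comparison decides.
    int cmp = a.compare(i, lenA, b, j, lenB);
    return cmp < 0 ? -1 : (cmp > 0 ? 1 : 0);
}
```

Signs, decimal points, and exponents would add a few more cases, but nothing that requires arithmetic, which is why this should beat round-tripping through the C RTL conversion routines.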
However, I'd hate to be the guy who has to write the code for a FORTRAN program, running on a platform with greater precision than IEEE, that makes sure that nothing overflows or underflows or loses precision when interpreted as IEEE. I'd much rather deal with those things at the receiving end, where I know whether I am going to try to do math on the value, than purposefully throw away precision at the sending end. If I'm right on the relative speed and complexity, we could provide implementation feedback to the schema committee that avoiding implementation specifics and working with unlimited-precision comparisons is both faster and more portable.