Dylan Thurston writes:

> Do you ever use floating point addition?
>
> I rarely use floating point, but it is sometimes more useful than the
> alternatives, as long as you bear in mind its limitations.
Yep, floating point is by necessity a bit of a mess. On the other hand, I don't think we ought to be claiming that addition of Nums is associative, because it just isn't true. (Indeed, if we trap overflow then even Int addition is non-associative; I'm agin overflow trapping on Int for this among other reasons.)

I see four "solutions" to the problem of non-associativity of floating point, in approximate order of flexibility:

1) Declare that Num is never associative. Compilers may re-associate (+) only if the types involved can be shown to be associative. Programmers may of course re-associate (+) in other cases as well, if they know what they are doing.

2) Provide a compiler flag which indicates that the compiler can assume associativity of Num and optimize accordingly. Similar flags exist for other compilers (gcc's -ffast-math comes to mind).

3) Provide an associative floating-point hierarchy, to be used with the knowledge that "associativity" of such a type is only an approximate notion.

4) Provide a way of annotating Num instances to indicate the associativity property. I have no idea what form such an annotation would take.

I therefore suspect (1) or (2) is good enough. (3) is way more trouble than it's worth, I bet; (4) is a nifty research problem with (so I hear) a certain amount of related research already out there.

The real problem, of course, is things like this: "Solve y = x + b for x given y and b". We can't technically even *solve* the equation if addition is non-associative. But this is, of course, a problem in *every* language with limited-precision floating point. We probably need to be forthright about this when educating new programmers. I'm less clear how to couch this when presenting Num to the experienced programmer.

Note that non-associativity is only one difficulty in working with floating point. Instability caused by differing register and memory precision on x86 has proven to be a very noticeable problem for me during compiler development.
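To make this concrete, here is a small self-contained example (mine, not from the original post; standard Haskell with no extensions) showing both the re-association problem and the "solve y = x + b" problem for Double:

```haskell
module Main where

main :: IO ()
main = do
  -- Non-associativity: (a + b) + c /= a + (b + c) for Double.
  let a = 1.0e100 :: Double
      b = -1.0e100
      c = 1.0
  print ((a + b) + c)  -- a and b cancel exactly, then c survives: 1.0
  print (a + (b + c))  -- c is absorbed into b (too small to matter
                       -- at that magnitude), then a cancels b: 0.0

  -- "Solve y = x + b for x": the algebraic answer x = y - b
  -- need not satisfy x + b == y in limited precision.
  let y  = 1.0 :: Double
      b' = 1.0e100
      x  = y - b'     -- rounds to -1.0e100; the 1.0 is lost
  print (x + b' == y) -- False
```

Any compiler free to re-associate (+) could silently turn the 1.0 result into 0.0, which is exactly why option (1) forbids re-association unless associativity is proven.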
The -ffloat-store option on gcc addresses this issue at the cost of performance; my (limited) understanding of the problem indicates that this flag sacrifices (false) extra precision for predictability, and thus for better accuracy (if you know what you are doing). This has nothing to do with declared semantics and everything to do with implementation hackery.

> This sounds very interesting! Is your dissertation available?

I'm making the last few fixes; it will be signed at the end of the week, and I'll gladly send a link to the Haskell mailing list when it's done.

-Jan-Willem Maessen

_______________________________________________
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell