On Mon, 2006-11-13 at 17:13 -0800, Erick Tryzelaar wrote:
> Ben Smith wrote:
> > On 13/11/06, Rhythmic Fistman <[EMAIL PROTECTED]> wrote:
> >
> >>> Well .. of course the distributive law doesn't hold for
> >>> floats. The question is .. what to do about it?
> >>>
> >>> Options:
> >>>
> >>> 1. Change 'eq' for floats
> >>>
> >>> 2. Change the axiom for floats to be
> >>>
> >>>    axiom distrib (x:t, y:t, z:t): close (x * (y + z), x * y + x * z);
> >>>
> >> 3. not expect that axiom to hold for floats.
> >>
> > I like 3). 2) is asking for trouble, but so is using 'eq' with floats
> > anyway, so I vote for doing both 1) and 3). :)
> >
> > ben
> >
> I'm with 3. floats are just tricky.
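(For concreteness: the failure is easy to reproduce. The check below is
OCaml rather than Felix, purely for illustration, and the 'close' predicate
with its tolerance is invented for the example, not anything in the
library.)

    (* Show the distributive law failing bit-for-bit on IEEE doubles,
       and the kind of relative-tolerance test option 2 would rely on. *)
    let close ?(tol = 1e-12) a b =
      abs_float (a -. b) <= tol *. (abs_float a +. abs_float b +. 1.0)

    let () =
      let x = 0.1 and y = 0.2 and z = 0.3 in
      let lhs = x *. (y +. z) and rhs = x *. y +. x *. z in
      Printf.printf "x*(y+z)   = %.17g\nx*y + x*z = %.17g\neq: %b  close: %b\n"
        lhs rhs (lhs = rhs) (close lhs rhs)

With these operands the two sides come out unequal on doubles, so an exact
'eq' rejects the axiom while a tolerance test accepts it.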
Clearly distrib and many other related axioms don't hold, but simply
giving up isn't an option either. In order to test, for example, trig
functions, we *need* to have axioms 'like'

    sin x * sin x + cos x * cos x = 1

> The fact that a large number x plus
> a small number y can result in x just makes it hard to apply normal
> mathematical axioms.

Axioms now support preconditions, so you can write, for example:

    axiom silly (x:float, y:float where close (x, y)): x + y > x;

not that I'm suggesting that particular axiom.

One should note that MANY of the axioms do not even hold for integers!
After all, 'int' etc. are not the mathematical set Z, only a local
approximation to it. Of course ints and unsigned ints, like floats, do
have precise laws. Unsigned int is actually harder, since it is both a
local approximation to N_0 and also modular.

> I did a quick google, and here's how Intel used HOL
> Light to verify their floating point math. Perhaps there are some simple
> axioms or other mathematical structures we could use from it:
>
> http://www.cl.cam.ac.uk/~jrh13/papers/iday.html
>
> One thing that would be really *really* interesting for engineers is a
> mechanism to automatically calculate the error of floating point
> mathematical operations.

The data type is easy, but the calculations will rapidly become unbounded
without some serious math support .. I mean, without some theorem proving
the error range would soon become absurd, especially using 'high power'
functions like tan, exp, pow etc, and for bounded functions like sin, cos
the phase angle would soon become arbitrary. What I mean is: a typical
iterative or recursive calculation would just blow the naively calculated
errors out of the water, without both knowledge of the input data and
some numerical analysis.

> One of the tricky things with analyzing things
> like this is that you have to know what the language does to optimize
> your expressions, as each of these may modify your precision. If we had
> some way of the felix optimizer to generate the expression as
> calculating the error according to the IEEE spec, it could be very
> interesting.
>
> Not sure exactly how it could work though. Perhaps something like this:
>
> val result, error = witherror (x * (y * z));

I think you would use a record:

    struct {
      approx: float;
      lobound: float;
      hibound: float;
      ...
    }

and just define + - * / etc etc on it using a typeclass; then you could
write algorithms which work for real numbers, including this version of
them .. I guess this is monadic programming .. Felix does have
typeclasses now :)

Thx for the references!

--
John Skaller <skaller at users dot sf dot net>
Felix, successor to C++: http://felix.sf.net
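P.S. For anyone who wants to play with the approx/lobound/hibound idea,
here is a minimal sketch, in OCaml rather than Felix, with the names
invented for the example. A real version would also have to widen the
bounds by an ulp or so per operation to cover rounding, which is exactly
where the serious numerical analysis mentioned above comes in.

    (* An approximate value carried together with hard lower/upper bounds. *)
    type interval = { approx : float; lobound : float; hibound : float }

    (* Inject an exactly-known value. *)
    let exact x = { approx = x; lobound = x; hibound = x }

    (* Bounds of a sum are the sums of the bounds. *)
    let add a b =
      { approx  = a.approx  +. b.approx;
        lobound = a.lobound +. b.lobound;
        hibound = a.hibound +. b.hibound }

    (* For a product the extremes can come from any pairing of the bounds. *)
    let mul a b =
      let ps = [ a.lobound *. b.lobound; a.lobound *. b.hibound;
                 a.hibound *. b.lobound; a.hibound *. b.hibound ] in
      { approx  = a.approx *. b.approx;
        lobound = List.fold_left min infinity ps;
        hibound = List.fold_left max neg_infinity ps }

Defining these operations as typeclass instances instead, as suggested
above, is what would let the same algorithm run over plain floats or over
intervals.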