Hi,
32-bit Intel: all tests give exactly the same (supposedly correct)
results as on 64-bit Intel when using SSE instead of the 387 floating
point unit (-mfpmath=sse -msse2). Using the 387 unit, which is often
more precise, makes the simulation diverge quickly from the expected
path.
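For illustration, a minimal sketch of why the 387 unit can change
results even though it is "more precise": x87 keeps intermediates in
80-bit registers with an extended exponent range, while SSE2 rounds
every intermediate to a 64-bit double. The value below is chosen only
to make the effect visible; whether the intermediate actually stays in
a register depends on the compiler and optimization level:

    #include <stdio.h>

    int main(void)
    {
        volatile double a = 1e-200;  /* volatile: defeat constant folding */
        /* a*a underflows to 0 in 64-bit double (below ~4.9e-324), so with
         * -mfpmath=sse -msse2 the result r is 0. With -mfpmath=387 the
         * intermediate a*a is held in an 80-bit register, does not
         * underflow, and r comes back as ~1e-200. */
        double r = a * a / a;
        printf("%g\n", r);
        return 0;
    }

Compiling this twice on 32-bit x86, once with each -mfpmath setting,
should show the two different answers.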
64-bit PowerPC: at one point in the middle of the simulation, the
return value of pow() from glibc on my system (Fedora 16) differs in
the least significant bit of the mantissa, compared to Intel. The
differences grow from that point on and lead to a failed test.
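To compare pow() across machines bit-for-bit, something like the
sketch below can be used (compile with -lm; the inputs here are
placeholders, not the actual values from the failing test):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <math.h>

    /* Print the exact bit pattern of a pow() result so outputs from
     * different machines can be diffed directly. */
    int main(void)
    {
        double x = 1.4142135623730951, y = 3.5;  /* example inputs */
        double r = pow(x, y);
        uint64_t bits;
        memcpy(&bits, &r, sizeof bits);
        printf("pow(%.17g, %.17g) = %.17g (0x%016llx)\n",
               x, y, r, (unsigned long long)bits);
        return 0;
    }

A one-ULP difference shows up as the last hex digit changing by one.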
I believe that the design of test_extreme_problems() is flawed. Such
an unstable combination of equation, solver and parameters tests only
one thing - whether a machine behaves exactly as 64-bit Intel does
(and I am not sure even IEEE 754 compliance guarantees that). What
should it actually test? Is there some
documentation/explanation/rationale/guideline? I cannot fix the test
if I do not understand what it is supposed to do.
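To illustrate why such a test can only ever check bit-exact
reproduction, here is a minimal sketch of how a single-ULP
perturbation grows in an unstable iteration. The logistic map is just
a stand-in for the chaotic behavior; the actual test uses a different
equation and solver (compile with -lm):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double a = 0.5;
        double b = nextafter(a, 1.0);  /* a plus one ULP */
        for (int i = 0; i < 100; i++) {
            a = 3.9 * a * (1.0 - a);   /* chaotic logistic map */
            b = 3.9 * b * (1.0 - b);
        }
        /* after enough steps the relative difference is O(1) */
        printf("relative divergence: %g\n", fabs(a - b) / fabs(a));
        return 0;
    }

Any platform whose libm or FPU differs from the reference machine by
even one bit anywhere along the trajectory will end up on a completely
different path, so the test cannot distinguish "wrong" from merely
"differently rounded".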
Regarding the msbdf decrease-order-by-2 problem, I was unable to find
the bug in a reasonable amount of time and energy. I cannot reproduce
it consistently. Sorry.
Frantisek Kluknavsky