----- Original Message -----

[snip]

I'll start with a few general comments. I've been working on engineering 
software for most of my career, and with very few exceptions, double precision 
is perfectly adequate (and the fastest).

The real world (at least, the Intel/AMD part of it) is somewhat complicated by 
there being two floating-point units, the venerable x87 and the newer SSE. By 
default, 32-bit compiles use x87 and 64-bit compiles use SSE. SSE is fairly 
straightforward: if you use doubles, all calculations are done at 64-bit 
precision. x87 uses 80 bits for its internal calculations and converts the 
final result to 64 bits. GCC does have an option, -ffloat-store, that forces 
values back out to 64-bit memory slots and so inhibits this behaviour (and 
decent performance!). But there is more. You can compile mixed x87/SSE code 
(though I've found this somewhat error prone if you also play with the 
floating-point control and status flags). Lastly, if you do compile with SSE 
on 32-bit, by default you will still link against the x87-based libm in the 
standard C library. Again, GCC has an option to link with an SSE version of 
libm, -msselibm.
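
To make that concrete, here is a minimal sketch (I'll call it fp_demo.c; the 
file name and the values are just mine, nothing from Valgrind or GCC) where 
the extra x87 precision shows up in the final answer:

#include <stdio.h>

int main(void)
{
    /* volatile stops the compiler folding the expression at compile
       time, so the FPU really does the arithmetic at run time */
    volatile double a = 1e16;
    volatile double b = 1.0;
    volatile double c = -1e16;

    /* With SSE doubles, a + b is rounded to 64 bits and the 1.0 is
       lost, so r is 0.  With x87, the intermediate is held in an
       80-bit register, the 1.0 survives, and r is 1. */
    double r = (a + b) + c;

    printf("r = %g\n", r);
    return 0;
}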

What is the upshot of all this?

If you compile on both 32-bit and 64-bit platforms without changing the 
default options, you will get different results.
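
For example, with the fp_demo.c sketch above on a Linux/x86 box (illustrative 
only; the exact behaviour depends on your GCC version and configuration):

gcc -m32 fp_demo.c -o demo32                           # 32-bit default: x87
gcc -m64 fp_demo.c -o demo64                           # 64-bit default: SSE
gcc -m32 -msse2 -mfpmath=sse fp_demo.c -o demo32sse    # 32-bit forced onto SSE

./demo32 should print r = 1, while ./demo64 and ./demo32sse should print r = 0.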

Now, let Valgrind enter the picture. Valgrind does all floating-point 
calculations at 64-bit precision, so there should be little or no change on 
64-bit platforms using SSE. However, 32-bit programs using x87 will probably 
change, giving the same results you would have obtained with 64-bit and SSE.
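
If that is right, then running the x87 build above under Valgrind, e.g.

valgrind ./demo32

should print r = 0 rather than r = 1, i.e. the same answer as the SSE builds.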

> Uh. The impact here is DATA CORRUPTION, caused by calculations that are
> meant to use 80bit datatypes being done with 64bit datatypes and the
> remaining bits filled with garbage.
> 
> If this can't be solved or is not going to be solved then valgrind
> should ABORT instead of causing a silent corruption of data.

Is this really true? I would expect that Valgrind does all calculations at 
64-bit double precision and converts when values are stored, so the low bits 
are rounded away rather than filled with garbage. This can result in 
truncation and/or under/overflow, but not corruption.
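
To illustrate the overflow case (again a contrived sketch of my own, nothing 
specific to Valgrind's internals): an intermediate that fits in the x87 
80-bit range can overflow once every step is rounded to 64 bits.

#include <stdio.h>

int main(void)
{
    volatile double big  = 1e300;
    volatile double tiny = 1e-300;

    /* big * big is about 1e600: representable in the 80-bit x87 format
       (15-bit exponent) but infinity in a 64-bit double (11-bit
       exponent).  With x87 the final result is about 1e300; with every
       intermediate rounded to 64 bits it stays infinity. */
    double r = (big * big) * tiny;

    printf("r = %g\n", r);
    return 0;
}

So you can lose range as well as precision, but nothing is filled with garbage.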

My feeling is that unless the (usually small) numerical differences change 
your control flow, you should just ignore them. The aim of testing with 
Valgrind isn't to validate numerical results; it is to validate memory use, 
performance or threading.

A+
Paul
