Hi,

Is there a sensible way to make GSL use floats instead of doubles internally at compile time? This may sound like a strange idea, but I am asking for performance reasons.
1) SSE1 doesn't have support for vectorized doubles, and my dev box only has SSE1, not SSE2.
2) Floats are faster than doubles even with later SSE instruction sets, because twice as many get processed in parallel.
3) I don't need the extra precision for my application.

The obvious bodge of passing something like -Ddouble=float breaks most things, unsurprisingly. I don't suppose there is an option in there somewhere to achieve this? How much am I likely to have to change if I want to modify the multi-parameter fitting engine to use floats instead of doubles?

On a separate note, has there been any effort to test or improve automatic vectorization of the GSL libraries when compiling with ICC? I noticed that some of the tests fail when compiling with ICC, but that could just be down to the fact that I use optimizations that cut the odd corner on floating-point rounding for extra speed.

Thanks.

Gordan
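P.S. For what it's worth, I know GSL already ships single-precision variants of its containers and the level-1/2/3 BLAS wrappers (gsl_vector_float, gsl_matrix_float, gsl_blas_sgemv and friends), so something along the lines of the sketch below already stays entirely in float; my question is really about the higher-level routines such as the multifit engine, which as far as I can tell only accept the double-precision types. (This is just an illustrative sketch of the float containers, not a suggestion that it covers my fitting use case.)

/* Sketch: GSL's single-precision containers and BLAS, which avoid
 * doubles without redefining anything at compile time. */
#include <stdio.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_blas.h>

int main(void)
{
    gsl_matrix_float *A = gsl_matrix_float_alloc(2, 2);
    gsl_vector_float *x = gsl_vector_float_alloc(2);
    gsl_vector_float *y = gsl_vector_float_calloc(2);   /* y initialized to 0 */

    /* Fill A and x with some float data. */
    gsl_matrix_float_set(A, 0, 0, 1.0f); gsl_matrix_float_set(A, 0, 1, 2.0f);
    gsl_matrix_float_set(A, 1, 0, 3.0f); gsl_matrix_float_set(A, 1, 1, 4.0f);
    gsl_vector_float_set(x, 0, 1.0f);
    gsl_vector_float_set(x, 1, -1.0f);

    /* y = 1.0 * A * x + 0.0 * y, all in single precision. */
    gsl_blas_sgemv(CblasNoTrans, 1.0f, A, x, 0.0f, y);

    printf("y = (%g, %g)\n",
           gsl_vector_float_get(y, 0), gsl_vector_float_get(y, 1));

    gsl_matrix_float_free(A);
    gsl_vector_float_free(x);
    gsl_vector_float_free(y);
    return 0;
}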
