The calculation algorithms are the same either way, which explains the identical speed given identical input.
As for single vs. double float: the only difference is where the resulting number is truncated, so the results should agree to roughly seven significant digits (the precision of a single float) unless the library is internally using single-precision floats.





On Mar 15, 2013 11:59 AM, Keith Lofstrom <[email protected]> wrote:

I'm doing some very big phased array calculations on an oldish
Core2 Duo, preparing to migrate the inner loops to an nVidia
GPU. These calculations do a lot of differencing when
computing nulls in the interference patterns (as does nature!)
and I presume that single precision will do them relatively
inaccurately.

I'm running 32-bit SL6.2 on the test machine, with gcc and libm.

I ran two calculations side by side, one with floats and one
with doubles, and they appear to have done exactly the same thing,
even with the same runtime, which is interesting given that 90% of
the calculation is sin() and cos() in a tight loop. One would
expect the double-precision calculation to take more iterations
and run slower, and of course to place the nulls slightly
differently.

What am I missing? Are double and float synonyms for the same
double precision representation? If so, how do I emulate the
single-precision behavior of the GPU? Note that the outer
outer outer loop of the calculation takes 6 days, though once
I locate some differences in a very large simulation field,
I can restrict the field and work faster.

Keith

(What? Using scientific linux for Science? Oh, the horror...)

--
Keith Lofstrom [email protected] Voice (503)-520-1993
