Here is the body of a post I made on Stack Overflow, but it seems to be a
non-obvious issue, so I was hoping someone here might be able to shed
light on it...
On my 32-bit Windows Vista machine I notice a significant (5x) slowdown
when taking the absolute values of a fairly large `numpy.complex64`
array when compared to a `numpy.complex128` array.
>>> import numpy
>>> a = numpy.random.randn(256, 2048) + 1j*numpy.random.randn(256, 2048)
>>> b = numpy.complex64(a)
>>> timeit c = numpy.float32(numpy.abs(a))
10 loops, best of 3: 27.5 ms per loop
>>> timeit c = numpy.abs(b)
1 loops, best of 3: 143 ms per loop
Obviously, the outputs in both cases are the same (to within working precision).
I do not notice the same effect on my Ubuntu 64-bit machine (indeed, as
one might expect, the double precision array operation is a bit slower).
Is there a rational explanation for this?
Is this something that is common to all Windows machines?
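
In case it helps anyone reproduce this outside IPython, a rough
standalone equivalent of the session above using the stdlib timeit
module (the repeat/loop counts just mirror what %timeit picked for me)
would be:

import timeit

# Build the same arrays as in the IPython session above.
setup = """
import numpy
a = numpy.random.randn(256, 2048) + 1j*numpy.random.randn(256, 2048)
b = numpy.complex64(a)
"""

# Report the best of 3 repeats of 10 loops each, as %timeit does.
for stmt in ("numpy.float32(numpy.abs(a))", "numpy.abs(b)"):
    best = min(timeit.repeat(stmt, setup=setup, repeat=3, number=10)) / 10
    print("%s: %.1f ms per loop" % (stmt, best * 1e3))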
On a related note of confusion, the times above are notably (and
consistently) different from (shorter than) those I get doing a naive `st =
time.time(); numpy.abs(a); print time.time()-st`. Is this to be expected?
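
My guess (happy to be corrected) is that this is down to how the two are
measured: timeit disables garbage collection while timing, and %timeit
reports only the best of 3 repeats, whereas a single time.time()
measurement is one pass with GC enabled and any warm-up costs included.
A small sketch of the two styles side by side:

import time
import timeit

import numpy

a = numpy.random.randn(256, 2048) + 1j*numpy.random.randn(256, 2048)

# timeit style: GC disabled during the runs, best (minimum) of 3 repeats.
best = min(timeit.repeat("numpy.abs(a)",
                         setup="from __main__ import numpy, a",
                         repeat=3, number=10)) / 10
print("timeit best-of-3: %.1f ms per loop" % (best * 1e3))

# Naive style: a single wall-clock measurement, GC enabled.
st = time.time()
numpy.abs(a)
print("single time.time() measurement: %.1f ms" % ((time.time() - st) * 1e3))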
Cheers,
Henry