On Fri, Jun 5, 2009 at 5:19 PM, Christopher Barker <chris.bar...@noaa.gov> wrote:
> Robert Kern wrote:
>>>>> x = np.array([1,2,3])
>>>>> timeit x.sum()
>>> 100000 loops, best of 3: 3.01 µs per loop
>>>>> from numpy import sum
>>>>> timeit sum(x)
>>> 100000 loops, best of 3: 4.84 µs per loop
>
> that is a VERY short array, so one extra function call overhead could
> make the difference. Is it really your use case to have such tiny sums
> inside a big loop, and is there no way to vectorize that?
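(For anyone outside IPython: the measurement above can be reproduced with the stdlib timeit module. A minimal sketch; absolute numbers vary by machine and NumPy version, and the lambda adds a small constant overhead to both sides, so only the relative gap is meaningful.)

```python
import timeit
import numpy as np

x = np.array([1, 2, 3])

n = 100000
# Note: the lambda wrapper adds the same small overhead to both timings.
method_time = timeit.timeit(lambda: x.sum(), number=n)
function_time = timeit.timeit(lambda: np.sum(x), number=n)

print("x.sum():   %.2f µs per loop" % (method_time / n * 1e6))
print("np.sum(x): %.2f µs per loop" % (function_time / n * 1e6))
```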
I was trying to make the timeit difference large. It is the overhead that
I was interested in. But it is still noticeable when x is a "typical" size:

>> x = np.arange(1000)
>> timeit x.sum()
100000 loops, best of 3: 5.46 µs per loop
>> from numpy import sum
>> timeit sum(x)
100000 loops, best of 3: 7.31 µs per loop

>> x = np.random.rand(1000)
>> timeit x.sum()
100000 loops, best of 3: 6.81 µs per loop
>> timeit sum(x)
100000 loops, best of 3: 8.36 µs per loop

That reminds me of a big difference between arrays and matrices. Matrices
have the overhead of going through Python code (the matrix class) to get
to the core C array code. I converted an iterative optimization function
from matrices to arrays and got a speedup of a factor of 3.5. Array size
was around (500, 10) and (500,).

_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
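(A minimal sketch of the matrix-vs-array point above: the same matrix-vector product done both ways, with shapes matching the (500, 10) and (500,) sizes mentioned. The 3.5x figure is from the original post's optimization code, not this toy; the sketch only shows that np.matrix routes the operation through the Python-level matrix class while the ndarray version calls np.dot directly.)

```python
import numpy as np

A = np.random.rand(500, 10)
b = np.random.rand(500)

# ndarray version: one call straight into compiled code
r_arr = np.dot(A.T, b)            # shape (10,)

# np.matrix version: same math, wrapped in the Python-level matrix class,
# so each operation pays extra Python-side dispatch overhead
Am = np.matrix(A)
bm = np.matrix(b).T               # column matrix, shape (500, 1)
r_mat = Am.T * bm                 # shape (10, 1)

# results agree; only the per-call overhead differs
assert np.allclose(r_arr, np.asarray(r_mat).ravel())
```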