OK. I get the idea, but I still can't see it: in both cases, as the print statements show, offspr is already created before the timing starts. I need light :S
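
In case I am still measuring the wrong thing, here is a self-contained sketch of the comparison as I now understand it (shapes as in my script; the in-place variant and the timeit usage follow Dag's suggestion quoted below, so treat this as my guess rather than the canonical way):

import numpy as np
from timeit import timeit

lambd = 80000
a = np.random.rand(lambd)        # shape (80000,)
r = np.random.rand(lambd, 26)    # shape (80000, 26)
offspr = np.empty_like(r)        # pre-allocated, as in my main program

def loop_version():
    # writes row by row into the already-created offspr
    for i in range(lambd):
        offspr[i] = r[i] + a[i]

def broadcast_new_array():
    # allocates a brand-new array on every call, even though
    # a variable named offspr already exists
    return r + a[:, None]

def broadcast_in_place():
    # Dag's "even fairer" version: reuse the pre-allocated offspr
    offspr[...] = r
    offspr += a[:, None]

for f in (loop_version, broadcast_new_array, broadcast_in_place):
    print("%s: %.6f s per call" % (f.__name__, timeit(f, number=10) / 10))

If I am reading Dag right, the "Pythonic" line I timed pays for allocating a fresh 80000x26 array on every call, while the for loop reuses the array I created beforehand.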
On Wed, Sep 9, 2009 at 8:17 PM, Dag Sverre Seljebotn <[email protected]> wrote:

> Ruben Salvador wrote:
> > Your results are what I expected...but. This code is called from my main
> > program, and what I have in there (output array already created for both
> > cases) is:
> >
> > print "lambd", lambd
> > print "np.shape(a)", np.shape(a)
> > print "np.shape(r)", np.shape(r)
> > print "np.shape(offspr)", np.shape(offspr)
> > t = clock()
> > for i in range(lambd):
> >     offspr[i] = r[i] + a[i]
> > t1 = clock() - t
> > print "For loop time ==> %.8f seconds" % t1
> > t2 = clock()
> > offspr = r + a[:,None]
> > t3 = clock() - t2
> > print "Pythonic time ==> %.8f seconds" % t3
> >
> > The results I obtain are:
> >
> > lambd 80000
> > np.shape(a) (80000,)
> > np.shape(r) (80000, 26)
> > np.shape(offspr) (80000, 26)
> > For loop time ==> 0.34528804 seconds
> > Pythonic time ==> 0.35956192 seconds
> >
> > Maybe I'm not measuring properly, so, how should I do it?
>
> Like Luca said, you are not including the creation time of offspr in the
> for-loop version. A fairer comparison would be
>
> offspr[...] = r + a[:, None]
>
> Even fairer (one less temporary copy):
>
> offspr[...] = r
> offspr += a[:, None]
>
> Of course, see how the trend is for larger N as well.
>
> Also your timings are a bit crude (though this depends on how many times
> you ran your script to check :-)). To get better measurements, use the
> timeit module, or (easier) IPython and the %timeit command.
>
> > On Wed, Sep 9, 2009 at 1:20 PM, Citi, Luca <[email protected]> wrote:
> >
> > I am sorry but it doesn't make much sense.
> > How do you measure the performance?
> > Are you sure you include the creation of the "c" output array in the
> > time spent (which is outside the for loop but should be considered
> > anyway)?
> >
> > Here are my results...
> >
> > In [84]: a = np.random.rand(8,26)
> >
> > In [85]: b = np.random.rand(8)
> >
> > In [86]: def o(a,b):
> >    ....:     c = np.empty_like(a)
> >    ....:     for i in range(len(a)):
> >    ....:         c[i] = a[i] + b[i]
> >    ....:     return c
> >    ....:
> >
> > In [87]: d = a + b[:,None]
> >
> > In [88]: (d == o(a,b)).all()
> > Out[88]: True
> >
> > In [89]: %timeit o(a,b)
> > 10000 loops, best of 3: 36.8 µs per loop
> >
> > In [90]: %timeit d = a + b[:,None]
> > 100000 loops, best of 3: 5.17 µs per loop
> >
> > In [91]: a = np.random.rand(80000,26)
> >
> > In [92]: b = np.random.rand(80000)
> >
> > In [93]: %timeit o(a,b)
> > 10 loops, best of 3: 287 ms per loop
> >
> > In [94]: %timeit d = a + b[:,None]
> > 100 loops, best of 3: 15.4 ms per loop
>
> --
> Dag Sverre
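
P.S. One extra check I intend to run (my own sketch, untested, so it may be off): confirm whether the broadcasting assignment really hands me back a different array from the one I pre-allocated, as opposed to filling it in place:

import numpy as np

lambd, n = 80000, 26
a = np.random.rand(lambd)
r = np.random.rand(lambd, n)

offspr = np.empty_like(r)
before = id(offspr)
offspr = r + a[:, None]          # rebinds the name to a freshly allocated array
print(id(offspr) == before)      # expected: False -- the old array was not reused

offspr = np.empty_like(r)
before = id(offspr)
offspr[...] = r + a[:, None]     # fills the existing array in place
print(id(offspr) == before)      # expected: True -- same array object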
