Hi, I started with numpy a few days ago. I was timing some array operations
and found that numpy takes 3 or 4 times longer than Matlab on a simple
array-minus-scalar operation.
This looks as if there is a lack of vectorization, even though this is just
a guess. I hope this is not a repost.
This is probably a silly question; I've seen this in one of the tutorials:
from tkinter import *
import tkinter.messagebox
given that * implies importing the whole module, why would anyone bother
with importing a specific command on top of it?
___
Tue, 19 Jul 2011 11:05:18 +0200, Carlos Becker wrote:
Hi, I started with numpy a few days ago. I was timing some array
operations and found that numpy takes 3 or 4 times longer than Matlab on
a simple array-minus-scalar operation.
This looks as if there is a lack of vectorization, even though
Hi Pauli, thanks for the quick answer.
Is there a way to check the optimization flags of numpy after
installation?
I am away from a Matlab installation now, but I remember I saw a single
processor active with Matlab. I will check it again soon.
Thanks!
On 19/07/2011, at 13:10, Pauli
Dear all,
I would like to avoid the use of a boolean array (mask) in the following
statement:
mask = (A != 0.)
B = A[mask]
in order to be able to move this bit of code into a Cython script (boolean
arrays are not yet implemented there, and they slow down execution a lot as
they can't be
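One possible workaround (a sketch, not necessarily what the poster ended up using): replace the boolean mask with an integer index array from np.flatnonzero, which Cython can treat as a plain C integer buffer:

```python
import numpy as np

A = np.array([0.0, 1.5, 0.0, -2.0, 3.0])

# Boolean-mask version (what the post wants to avoid):
B_mask = A[A != 0.0]

# Integer-index version: np.flatnonzero returns the indices of the
# nonzero elements as an ordinary integer array.
idx = np.flatnonzero(A)   # array([1, 3, 4])
B_idx = A[idx]

assert np.array_equal(B_mask, B_idx)
```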
I made more tests with the same operation, restricting Matlab to use a
single processing unit. I got:
- Matlab: 0.0063 sec avg
- Numpy: 0.026 sec avg
- Numpy with weave.blitz: 0.0041 sec avg
Note that weave.blitz is even faster than Matlab (slightly).
I tried on an older computer, and I got similar
On Tue, Jul 19, 2011 at 07:38, Andrea Cimatoribus
g.plantagen...@gmail.com wrote:
Dear all,
I would like to avoid the use of a boolean array (mask) in the following
statement:
mask = (A != 0.)
B = A[mask]
in order to be able to move this bit of code in a cython script (boolean
Yes, you're right. The problem is, when you use the first one, you may
pollute the current namespace ('namespace pollution').
read this: http://bytebaker.com/2008/07/30/python-namespaces/
cheers,
Chao
2011/7/19 Alex Ter-Sarkissov ater1...@gmail.com
this is probably a silly question, I've seen in
On Tue, Jul 19, 2011 at 9:49 AM, Carlos Becker carlosbec...@gmail.com wrote:
I made more tests with the same operation, restricting Matlab to use a
single processing unit. I got:
- Matlab: 0.0063 sec avg
- Numpy: 0.026 sec avg
- Numpy with weave.blitz: 0.0041 sec avg
Note that weave.blitz is even
On Tue, Jul 19, 2011 at 11:19 AM, Charles R Harris
charlesr.har...@gmail.com wrote:
On Tue, Jul 19, 2011 at 9:49 AM, Carlos Becker carlosbec...@gmail.com wrote:
I made more tests with the same operation, restricting Matlab to use a
single processing unit. I got:
- Matlab: 0.0063 sec avg
Hi, everything was run on linux.
I am using numpy 2.0.0.dev-64fce7c, but I tried an older version (cannot
remember which one now) and saw similar results.
Matlab is R2011a, and I used taskset to assign its process to a single core.
Linux is 32-bit, on Intel Core i7-2630QM.
Besides the
On Tue, Jul 19, 2011 at 4:05 AM, Carlos Becker carlosbec...@gmail.com wrote:
Hi, I started with numpy a few days ago. I was timing some array operations
and found that numpy takes 3 or 4 times longer than Matlab on a simple
array-minus-scalar operation.
Doing these kinds of timings correctly
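A minimal sketch of one correct way to time this with timeit, so that array allocation happens in setup and is excluded from the measured statement (array size chosen for illustration):

```python
# timeit.repeat runs the statement several times per trial and repeats
# the trial; taking the best trial filters out scheduler noise, and
# dividing by 'number' gives the per-operation time.
import timeit

setup = "import numpy as np; m = np.ones((2000, 2000))"
times = timeit.repeat("k = m - 0.5", setup=setup, repeat=5, number=10)
per_call = min(times) / 10   # best trial, per single operation
print(f"array-minus-scalar: {per_call:.6f} s")
```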
On Tue, Jul 19, 2011 at 2:27 PM, Carlos Becker carlosbec...@gmail.com wrote:
Hi, everything was run on linux.
Placing parentheses around the scalar multipliers shows that it seems to
have to do with how expressions are handled. Is there something that can be
done about this so that numpy can
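The effect of the parentheses can be sketched directly: each binary op in a numpy expression is a separate pass over the whole array, producing a temporary, while grouping the scalar constants folds them at the Python level and saves one full-array multiply:

```python
import numpy as np

m = np.ones((2000, 2000))

# Left-to-right evaluation: two full-array multiplies, two temporaries.
k1 = (m - 0.5) * 0.3 * 0.2

# Scalars grouped: 0.3 * 0.2 is folded to one Python float first,
# leaving a single full-array multiply.
k2 = (m - 0.5) * (0.3 * 0.2)

assert np.allclose(k1, k2)
```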
On Sun, Jul 17, 2011 at 11:48 PM, Darren Dale dsdal...@gmail.com wrote:
In numpy.distutils.system_info:
default_x11_lib_dirs = libpaths(['/usr/X11R6/lib','/usr/X11/lib',
'/usr/lib'], platform_bits)
default_x11_include_dirs =
On Sun, Jul 17, 2011 at 11:55 PM, Chris Barker chris.bar...@noaa.gov wrote:
On 7/14/2011 8:04 PM, Christoph Gohlke wrote:
A patch for the build issues is attached. Remove the build directory
before rebuilding.
Christoph,
I had other issues (I think in one case, a *.c file was not getting
On Tue, 19 Jul 2011 17:49:14 +0200, Carlos Becker wrote:
I made more tests with the same operation, restricting Matlab to use a
single processing unit. I got:
- Matlab: 0.0063 sec avg
- Numpy: 0.026 sec avg
- Numpy with weave.blitz: 0.0041 sec avg
To check if it's an issue with building without
For such expressions you should try the numexpr package: it allows the same
type of optimisation as Matlab does: a single loop over the matrix elements
instead of repeated loops and intermediate object creation.
Nadav
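A minimal sketch of the numexpr approach (assuming the third-party numexpr package is installed):

```python
# numexpr compiles the whole expression string and evaluates it in a
# single pass over the data, avoiding the intermediate temporaries
# that plain numpy creates for (m - 0.5) * 0.3.
import numexpr as ne
import numpy as np

m = np.random.rand(2000, 2000)
k = ne.evaluate("(m - 0.5) * 0.3")   # m is picked up from the local scope

assert np.allclose(k, (m - 0.5) * 0.3)
```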
Besides the matlab/numpy comparison, I think that there is an
Carlos Becker wrote:
Besides the matlab/numpy comparison, I think that there is an inherent
problem with how expressions are handled, in terms of efficiency.
For instance, k = (m - 0.5)*0.3 takes 52 msec average here (2000x2000
array), while k = (m - 0.5)*0.3*0.2 takes 0.079 sec, and k = (m -
Thanks Chad for the explanation on those details. I am new to Python and I
still have a lot to learn; this was very useful.
Now I get similar results between matlab and numpy when I re-use the memory
allocated for m with 'm -= 0.5'.
However, if I don't, I obtain this 4x penalty with numpy, even
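The in-place form can be verified to reuse the buffer rather than allocate a new one, which is where the saving comes from:

```python
import numpy as np

m = np.ones((1000, 1000))
# Address of the underlying data buffer before the operation:
buf_before = m.__array_interface__['data'][0]

m -= 0.5   # in-place: writes into the existing buffer

buf_after = m.__array_interface__['data'][0]
assert buf_before == buf_after   # same memory, no new allocation
assert m[0, 0] == 0.5
```

By contrast, `k = m - 0.5` must allocate a fresh result array on every evaluation.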
https://github.com/numpy/numpy/pull/116
This pull request deprecates direct access to PyArrayObject fields. This
direct access has been discouraged for a while through comments in the
header file and documentation, but up till now, there was no way to disable
it. I've created such a mechanism,
On Tue, Jul 19, 2011 at 3:35 PM, Carlos Becker carlosbec...@gmail.com wrote:
Thanks Chad for the explanation on those details. I am new to python and I
However, if I don't, I obtain this 4x penalty with numpy, even with the
8092x8092 array. Would it be possible to do k = m - 0.5 and
On Jul 19, 2011, at 3:15 PM, Chad Netzer wrote:
%python
import timeit
import numpy as np
t = timeit.Timer('k = m - 0.5', setup='import numpy as np; m = np.ones([8092, 8092], float); k = np.zeros(m.size, m.dtype)')
np.mean(t.repeat(repeat=10, number=1))
0.58557529449462886
t=timeit.Timer('k
On Tue, 19 Jul 2011 17:15:47 -0500, Chad Netzer wrote:
On Tue, Jul 19, 2011 at 3:35 PM, Carlos Becker carlosbec...@gmail.com
[clip]
However, if I don't, I obtain this 4x penalty with numpy, even with the
8092x8092 array. Would it be possible to do k = m - 0.5 and
pre-allocate k such that
On Tue, Jul 19, 2011 at 6:10 PM, Pauli Virtanen p...@iki.fi wrote:
k = m - 0.5
does here the same thing as
k = np.empty_like(m)
np.subtract(m, 0.5, out=k)
The memory allocation (empty_like and the subsequent deallocation)
costs essentially nothing, and there are no
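The equivalence Pauli describes, written out (the out= buffer can also be pre-allocated once and reused across calls):

```python
import numpy as np

m = np.ones((1000, 1000))

# What 'k = m - 0.5' does implicitly:
k = np.empty_like(m)          # allocate the output array
np.subtract(m, 0.5, out=k)    # write the result into it

assert np.array_equal(k, m - 0.5)
```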