When using vstack or hstack for large arrays, are there any performance
penalties, e.g. do they take longer time-wise or make a copy of an array
during the operation?
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
On Fri, 2014-01-24 at 06:13 -0800, Dinesh Vadhia wrote:
When using vstack or hstack for large arrays, are there any
performance penalties, e.g. do they take longer time-wise or make a copy
of an array during the operation?
No, they all use concatenate. There are only constant overheads on top
of the
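A minimal sketch of that equivalence (example arrays are mine, not from the thread): vstack and hstack are thin wrappers over concatenate, and both always allocate a fresh output array rather than referencing the inputs.

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(6, 12).reshape(2, 3)

v = np.vstack((a, b))   # same result as concatenate along axis 0
h = np.hstack((a, b))   # same result as concatenate along axis 1
assert np.array_equal(v, np.concatenate((a, b), axis=0))
assert np.array_equal(h, np.concatenate((a, b), axis=1))

# The output is a new array; the inputs are copied, not referenced:
v[0, 0] = 99
assert a[0, 0] == 0
```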
I want to write a general exception handler to warn if too much data is being
loaded for the RAM size of a machine for a successful NumPy array operation to
take place. For example, the program multiplies two floating-point arrays A
and B which are populated with loadtxt. While the data is
There is no reliable way to predict how much memory an arbitrary numpy
operation will need, no. However, in most cases the main memory cost will
be simply the need to store the input and output arrays; for large arrays,
all other allocations should be negligible.
The most effective way to avoid
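Since the dominant cost is usually just the input and output arrays, a rough pre-flight estimate can be computed from `.nbytes`. A minimal sketch (the helper name `rough_bytes_needed` is my own, not an existing API):

```python
import numpy as np

def rough_bytes_needed(*inputs, out_dtype=np.float64):
    """Hypothetical helper: bytes for the inputs plus one output
    of the broadcast result shape."""
    out_shape = np.broadcast_shapes(*(x.shape for x in inputs))
    out_nbytes = int(np.prod(out_shape)) * np.dtype(out_dtype).itemsize
    return sum(x.nbytes for x in inputs) + out_nbytes

A = np.ones((1000, 1000))
B = np.ones((1000, 1000))
# Two 8 MB inputs plus an 8 MB output for C = A * B:
print(rough_bytes_needed(A, B))
```

This deliberately ignores temporaries, which (as noted above) are usually negligible for large arrays but can add another N or 2N for compound expressions.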
Yeah, numexpr is pretty cool for avoiding temporaries in an easy way:
https://github.com/pydata/numexpr
Francesc
On 24/01/14 16:30, Nathaniel Smith wrote:
There is no reliable way to predict how much memory an arbitrary numpy
operation will need, no. However, in most cases the main
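A small sketch of what numexpr buys you here (numexpr is an optional third-party package, so the example falls back to plain NumPy if it is not installed):

```python
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# Plain NumPy materializes 2*b as a full-size temporary before the add:
c_np = a + 2 * b

try:
    import numexpr as ne
    # numexpr compiles the whole expression and evaluates it in one
    # blocked pass, without the 2*b temporary:
    c_ne = ne.evaluate("a + 2*b")
except ImportError:   # numexpr not installed: pip install numexpr
    c_ne = c_np
```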
c = a + b: 3N
c = a + 2*b: 4N
Does Python garbage collect mid-expression? I.e.:
c = (a + 2*b) + b
4 or 5 N?
Also note that when memory gets tight, fragmentation can be a problem. I.e.
if two size-n arrays were just freed, you still may not be able to
allocate a size-2n array. This seems to
If A is very large and B is very small then np.concatenate(A, B) will copy
B's data over to A which would take less time than the other way around - is
that so?
Does 'memory order' mean that it depends on sufficient contiguous
memory being available for B otherwise it will be fragmented or
On Fri, Jan 24, 2014 at 4:01 PM, Dinesh Vadhia dineshbvad...@hotmail.com
wrote:
If A is very large and B is very small then np.concatenate(A, B) will copy
B's data over to A which would take less time than the other way around -
is that so?
No, neither array is modified in-place. A new array
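This is easy to check directly (example arrays are mine): concatenate takes a sequence of arrays and always allocates a fresh result, regardless of which input is larger, so neither input is modified or reused in-place.

```python
import numpy as np

A = np.zeros(1_000_000)
B = np.ones(3)

# Note the call takes a sequence: np.concatenate((A, B)), not (A, B) as
# two positional arguments.
C = np.concatenate((A, B))
assert C.size == A.size + B.size

# The result shares memory with neither input -- both were copied:
assert not np.shares_memory(C, A)
assert not np.shares_memory(C, B)
```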
On 24 Jan 2014 15:57, Chris Barker - NOAA Federal chris.bar...@noaa.gov
wrote:
c = a + b: 3N
c = a + 2*b: 4N
Does Python garbage collect mid-expression? I.e.:
c = (a + 2*b) + b
4 or 5 N?
It should be collected as soon as the reference gets dropped, so 4N. (This
is the advantage of a
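The 4N figure can be made concrete by spelling out the steps CPython actually performs (a sketch; the explicit `tmp` mirrors the hidden temporary):

```python
import numpy as np

N = 1_000_000
a = np.ones(N)
b = np.ones(N)

# c = a + 2*b is evaluated in two steps:
tmp = 2 * b    # temporary of size N
c = a + tmp    # result of size N; peak here is a, b, tmp, c -> ~4N
del tmp        # CPython frees the temporary as soon as its refcount
               # drops to zero, i.e. right after the outer add returns

# Rewriting with an in-place second step keeps the peak at ~3N:
c2 = 2 * b
c2 += a
```

This immediate, refcount-driven release is the advantage Nathaniel alludes to; the temporary never survives past the expression that created it.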
Hi,
I just came across this unexpected behaviour when creating
a np.array() from two other np.arrays of different shape.
Have a look at this example:
import numpy as np
a = np.zeros(3)
b = np.zeros((2,3))
c = np.zeros((3,2))
ab = np.array([a, b])
print ab.shape, ab.dtype
ac = np.array([a,
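For reference, here is what happens with shapes that cannot form a rectangular result (behavior sketched for modern NumPy, which is stricter than the 2014 releases discussed in the thread: ragged inputs now require an explicit object dtype instead of silently producing one):

```python
import numpy as np

a = np.zeros(3)
b = np.zeros((2, 3))

# (3,) and (2, 3) don't stack into a regular array, so NumPy cannot
# build a rectangular result; modern versions raise unless you ask
# for an object array explicitly:
ab = np.array([a, b], dtype=object)
print(ab.shape, ab.dtype)   # a length-2 array holding the two arrays
```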
On Fri, Jan 24, 2014 at 11:30 AM, Emanuele Olivetti
emanu...@relativita.com wrote:
Hi,
I just came across this unexpected behaviour when creating
a np.array() from two other np.arrays of different shape.
Have a look at this example:
import numpy as np
a = np.zeros(3)
b =
So, with the example case, the approximate memory cost for an in-place
operation would be:
A *= B : 2N
But, if the original A or B is to remain unchanged then it will be:
C = A * B : 3N ?
Yes.
On 24 Jan 2014 17:19, Dinesh Vadhia dineshbvad...@hotmail.com wrote:
So, with the example case, the approximate memory cost for an in-place
operation would be:
A *= B : 2N
But, if the original A or B is to remain unchanged then it will be:
C = A * B : 3N ?
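The two cases side by side (example arrays are mine):

```python
import numpy as np

N = 1_000_000
A = np.ones(N)
B = np.full(N, 2.0)

A *= B       # in-place: only A and B are alive, ~2N floats peak,
             # but the original contents of A are overwritten
C = A * B    # out-of-place: A, B, and the new C, ~3N floats peak,
             # with both inputs left unchanged
```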
Francesc: Thanks. I looked at numexpr a few years back but it didn't support
array slicing/indexing. Has that changed?
On Thu, Jan 23, 2014 at 11:58 PM, jennifer stone
jenny.stone...@gmail.com wrote:
Scipy doesn't have a function for the Laplace transform, it has only a
Laplace distribution in scipy.stats and a Laplace filter in scipy.ndimage.
An inverse Laplace transform would be very welcome I'd think -
On Fri, Jan 24, 2014 at 8:25 AM, Nathaniel Smith n...@pobox.com wrote:
If your arrays are big enough that you're worried that making a stray copy
will ENOMEM, then you *shouldn't* have to worry about fragmentation -
malloc will give each array its own virtual mapping, which can be backed by
Oscar,
Cool stuff, thanks!
I'm wondering though what the use-case really is. The Py3 text model
(actually the Py2 one, too), is quite clear that you want users to think
of, and work with, text as text -- and not care how things are encoded in
the underlying implementation. You only want the
On Fri, Jan 24, 2014 at 10:29 PM, Chris Barker chris.bar...@noaa.gov wrote:
On Fri, Jan 24, 2014 at 8:25 AM, Nathaniel Smith n...@pobox.com wrote:
If your arrays are big enough that you're worried that making a stray copy
will ENOMEM, then you *shouldn't* have to worry about fragmentation -
Hi all,
in https://github.com/numpy/numpy/pull/3514 I proposed some changes to
the comparison operators. This includes:
1. Comparison with None will broadcast in the future, so that `arr ==
None` will actually compare all elements to None. (A FutureWarning for
now)
2. I added that == and !=
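The first proposed change has since landed, so in current NumPy the comparison is element-wise (a small illustration; under the old behavior `arr == None` was a single scalar `False`):

```python
import numpy as np

arr = np.array([1.0, 2.0, 3.0])

# Broadcasts like any other comparison: each element is compared
# against None, yielding a boolean array rather than a scalar.
result = arr == None
print(result)   # one boolean per element, all False here
```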
On 25 Jan 2014 00:05, Sebastian Berg sebast...@sipsolutions.net wrote:
Hi all,
in https://github.com/numpy/numpy/pull/3514 I proposed some changes to
the comparison operators. This includes:
1. Comparison with None will broadcast in the future, so that `arr ==
None` will actually compare
On Fri, 24 Jan 2014 17:30:33 +0100, Emanuele Olivetti wrote:
I just came across this unexpected behaviour when creating
a np.array() from two other np.arrays of different shape.
The tuple parsing for the construction of new numpy arrays is pretty
tricky/hairy, and doesn't always do exactly what
On Fri, Jan 24, 2014 at 5:43 PM, Chris Barker chris.bar...@noaa.gov wrote:
Oscar,
Cool stuff, thanks!
I'm wondering though what the use-case really is. The Py3 text model
(actually the Py2 one, too), is quite clear that you want users to think of,
and work with, text as text -- and not care