> In Python 2 it appears that multiprocessing uses pickle protocol 0, which
> must cause a big slowdown (a factor of 100) relative to protocol 2, and
> uses pickle instead of cPickle.
>
>
Even on Python 2.x, multiprocessing uses protocol 2, not protocol 0. The
default for the `pickle` module changed,
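As a rough illustration (my own sketch, not from the original thread; on
Python 2 you would use cPickle in place of pickle), the protocol difference
is easy to see by timing the pickling of a largish array:

import pickle
import time
import numpy as np

a = np.random.rand(1000, 1000)
for proto in (0, 2):
    t0 = time.time()
    s = pickle.dumps(a, protocol=proto)
    print("protocol %d: %d bytes, %.3f s" % (proto, len(s), time.time() - t0))

Protocol 0 writes a text-based representation of the array, which is why it
is both much larger and much slower than protocol 2's binary form.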
Slicing with None adds a new dimension. It's a common paradigm, though
usually you'd use A[np.newaxis] or A[np.newaxis, ...] instead for
readability. (np.newaxis is None, but it's a lot more readable.)
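For example (a quick sketch of the behaviour being described):

import numpy as np

A = np.arange(6).reshape(2, 3)
print(A[np.newaxis].shape)   # (1, 2, 3): new leading axis
print(A[:, None].shape)      # (2, 1, 3): new axis in the middle
print(np.newaxis is None)    # True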
There's a good argument to be made that slicing with a single None
shouldn't add a new axis,
On Tue, Sep 29, 2015 at 11:14 AM, Antoine Pitrou wrote:
>
> None for example? float('nan') may be a bit weird amongst e.g. an array
> of Decimals
The downside to `None` is that it's one more thing to check for and makes
object arrays an even weirder edge case.
On Tue, Aug 12, 2014 at 11:17 AM, Eelco Hoogendoorn hoogendoorn.ee...@gmail.com wrote:
Thanks. Prompted by that stackoverflow question, and similar problems I
had to deal with myself, I started working on a much more general extension
to numpy's functionality in this space. Like you noted,
On Sat, Mar 15, 2014 at 1:28 PM, Nathaniel Smith n...@pobox.com wrote:
On Sat, Mar 15, 2014 at 3:41 AM, Nathaniel Smith n...@pobox.com wrote:
Hi all,
Here's the main blocker for adding a matrix multiply operator '@' to Python:
we need to decide what we think its precedence and
You can use np.pad for this:
In [1]: import numpy as np
In [2]: x = np.ones((3, 3))
In [3]: np.pad(x, [(0, 0), (0, 1)], mode='constant')
Out[3]:
array([[ 1., 1., 1., 0.],
[ 1., 1., 1., 0.],
[ 1., 1., 1., 0.]])
Each item of the pad_width (second) argument is a tuple of (before, after) padding widths for that axis.
You just need to supply the offset kwarg to memmap.
for example:
with open(localfile, 'r') as fd:
    # read the offset from the first line of the file
    offset = int(next(fd).split()[-2])
    np.memmap(fd, dtype='float32', mode='r', offset=offset)
Also, there's no need to do things like offset =
To me, `unique_rows` sounds perfect. To go along columns: unique_rows(A.T)
Stéfan
Personally, I like this idea as well. A separate `unique_rows` function,
which potentially takes an `axis` argument. (Alternately,
`unique_sequences` wouldn't imply a particular axis.)
Of course, the
...snip
However, my first interpretation of an axis argument in unique would
be that it treats each column (or whatever along axis) separately.
Analogously to max, argmax and similar.
Good point!
That's certainly a potential source of confusion. However, I can't seem to
come up with a
Hi everyone,
I've recently put together a pull request that adds an `axis` kwarg to
`numpy.unique` so that `unique` can easily be used to find unique
rows/columns/sub-arrays/etc of a larger array.
https://github.com/numpy/numpy/pull/3584
Currently, this works as a wrapper around `unique`. If
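For illustration, here's roughly what this enables (the `axis` keyword shown
below is the one that eventually landed in NumPy 1.13; treat this as a sketch
rather than the exact behaviour of this particular pull request):

import numpy as np

a = np.array([[1, 2],
              [1, 2],
              [3, 4]])
print(np.unique(a, axis=0))
# [[1 2]
#  [3 4]]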
For anyone attending the AGU (American Geophysical Union) fall meeting this
year, there will be a session on python and big data in the earth
sciences. Abstract submission is still open until Aug. 6th. See below for
more info.
Cheers,
-Joe
-- Forwarded message --
From: IRIS
On Fri, Nov 2, 2012 at 9:18 AM, Neal Becker ndbeck...@gmail.com wrote:
I'm trying to convert some matlab code. I see this:
b(1)=[];
AFAICT, this removes the first element of the array, shifting the others.
What is the preferred numpy equivalent?
I'm not sure if
b[:] = b[1:]
Unless
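For what it's worth, a sketch of the usual NumPy equivalents (neither
modifies b in place the way the MATLAB statement does; both rebind b):

import numpy as np

b = np.array([10, 20, 30, 40])
b = b[1:]            # view without the first element
# or
b = np.delete(b, 0)  # returns a new, shorter array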
On Sat, Mar 3, 2012 at 10:06 AM, Olivier Delalleau sh...@keba.be wrote:
On 3 March 2012 at 11:03, Robert Kern robert.k...@gmail.com wrote:
On Sat, Mar 3, 2012 at 15:51, Olivier Delalleau sh...@keba.be wrote:
On 3 March 2012 at 10:27, Robert Kern robert.k...@gmail.com wrote:
On Sat, Mar 3,
wrote:
On Sat, Mar 3, 2012 at 3:05 PM, Robert Kern robert.k...@gmail.com wrote:
On Sat, Mar 3, 2012 at 13:59, Ralf Gommers ralf.gomm...@googlemail.com wrote:
On Thu, Mar 1, 2012 at 11:44 PM, Joe Kington jking...@wisc.edu wrote:
Is there a numpy function for testing
On Sat, Mar 3, 2012 at 12:50 PM, Olivier Delalleau sh...@keba.be wrote:
Would it be helpful if I went ahead and submitted a pull request with the
function in my original question called isclose (along with a complete
docstring and a few tests)?
One note:
At the moment, it deliberately
Is there a numpy function for testing floating point equality that returns
a boolean array?
I'm aware of np.allclose, but I need a boolean array. Properly handling
NaN's and Inf's (as allclose does) would be a nice bonus.
I wrote the function below to do this, but I suspect there's a method in
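The function referred to above isn't reproduced here, but a rough sketch of
the idea (np.isclose, added later in NumPy 1.7, does this properly; the
tolerance defaults below just mirror allclose's):

import numpy as np

def isclose_sketch(a, b, rtol=1e-5, atol=1e-8):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    with np.errstate(invalid='ignore'):
        result = np.abs(a - b) <= (atol + rtol * np.abs(b))
    # Infinities only compare "close" to an identical infinity
    result = np.where(np.isinf(a) | np.isinf(b), a == b, result)
    # NaNs are never close to anything
    result[np.isnan(a) | np.isnan(b)] = False
    return result

print(isclose_sketch([1.0, np.nan, np.inf], [1.0 + 1e-9, np.nan, np.inf]))
# [ True False  True]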
On Thu, Dec 1, 2011 at 2:47 PM, kneil magnetotellur...@gmail.com wrote:
Hi Pierre,
I was thinking about uploading some examples but strangely, when I store
the
array using for example: np.save('Y',Y)
and then reload it in a new workspace, I find that the problem does not
reproduce. It
On Fri, Nov 4, 2011 at 5:26 AM, Pierre GM pgmdevl...@gmail.com wrote:
On Nov 03, 2011, at 23:07 , Joe Kington wrote:
I'm not sure if this is exactly a bug, per se, but it's a very confusing
consequence of the current design of masked arrays…
I would just add an "I think" between
Forgive me if this is already a well-know oddity of masked arrays. I hadn't
seen it before, though.
I'm not sure if this is exactly a bug, per se, but it's a very confusing
consequence of the current design of masked arrays...
Consider the following example:
import numpy as np
x =
Similar to what Matthew said, I often find that it's cleaner to make a
separate class with a data (or somesuch) property that lazily loads the
numpy array.
For example, something like:
class DataFormat(object):
    def __init__(self, filename):
        self.filename = filename
        for key,
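A fuller (hypothetical) sketch of that pattern, where the file is only
touched the first time .data is accessed and the loaded array is cached
afterwards (np.loadtxt is just a stand-in for whatever reader the format
needs):

import numpy as np

class DataFormat(object):
    def __init__(self, filename):
        self.filename = filename
        self._data = None

    @property
    def data(self):
        # Lazily load on first access, then reuse the cached array
        if self._data is None:
            self._data = np.loadtxt(self.filename)
        return self._data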
Hi Andre,
Assuming that you want the exact point (date and value) where each crossing
occurs, you'll need to interpolate where they cross.
There are a number of different ways to do so, but assuming you're okay with
linear interpolation, and everything's sampled on the same dates, you can
simply
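A rough sketch of that approach (made-up data; assumes two series y1 and y2
sampled at the same x positions, with linear interpolation between samples):

import numpy as np

x = np.arange(6, dtype=float)
y1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y2 = np.full_like(y1, 2.5)

diff = y1 - y2
# Intervals where the sign of the difference changes between samples
idx = np.nonzero(np.diff(np.sign(diff)) != 0)[0]

# Linearly interpolate the exact crossing within each flagged interval
frac = diff[idx] / (diff[idx] - diff[idx + 1])
x_cross = x[idx] + frac * (x[idx + 1] - x[idx])
y_cross = y1[idx] + frac * (y1[idx + 1] - y1[idx])
print(x_cross, y_cross)   # [ 2.5] [ 2.5]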
Do you expect to have very large integer values, or only values over a
limited range?
If your integer values will fit into a 16-bit range (or even 32-bit; if
you're on a 64-bit machine, the default dtype is float64...) you can
potentially halve your memory usage.
I.e. Something like:
data =
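As a quick illustration of the memory difference (the sizes here are just an
example):

import numpy as np

big = np.zeros(1000000)                    # float64: 8 bytes per element
small = np.zeros(1000000, dtype=np.int16)  # int16: 2 bytes per element
print(big.nbytes, small.nbytes)            # 8000000 2000000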
Hi all,
I just wanted to check if this would be considered a bug.
numpy.histogram does not appear to preserve subclasses of ndarrays (e.g.
masked arrays). This leads to considerable problems when working with
masked arrays. (As per this Stack Overflow
On Thu, Sep 2, 2010 at 5:31 PM, josef.p...@gmail.com wrote:
On Thu, Sep 2, 2010 at 3:50 PM, Joe Kington jking...@wisc.edu wrote:
Hi all,
I just wanted to check if this would be considered a bug.
numpy.histogram does not appear to preserve subclasses of ndarrays (e.g.
masked arrays
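As a concrete illustration of the problem and the usual workaround (a sketch;
histogramming the compressed data keeps the masked values out of the counts):

import numpy as np
import numpy.ma as ma

x = ma.masked_greater(np.arange(10, dtype=float), 6)

# The mask has historically been ignored here, so masked values get binned
counts_bad, _ = np.histogram(x, bins=5)

# Workaround: only histogram the unmasked data
counts_ok, _ = np.histogram(x.compressed(), bins=5)
print(counts_bad, counts_ok)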
Is it possible to downcast an array in-place?
For example:
x = np.random.random(10) # Placeholder for real data
x -= x.min()
x /= x.ptp() / 255
x = x.astype(np.uint8)  # returns a copy
First off, a bit of background to the question... At the moment, I'm trying
to downcast a large (10GB) array
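One way to sidestep the extra copy from astype, sketched out (for a genuinely
10 GB array you would more likely pair this with np.memmap; the scaling just
mirrors the snippet above):

import numpy as np

x = np.random.random(10)   # placeholder for the real float data
x -= x.min()

# Write the scaled values straight into a preallocated uint8 array
out = np.empty(x.shape, dtype=np.uint8)
np.multiply(x, 255.0 / x.ptp(), out=out, casting='unsafe')
print(out)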
I know you're looking for something with much more fine-grained control,
(which I can't help much with) but I often find it useful to just plot the
overall memory of the program over time.
There may be a slicker way to do it, but here's the script I use, anyway...
(saved as ~/bin/quick_profile,
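The script itself isn't reproduced here, but a rough stand-in for the idea
(psutil is my assumption; the original may well have parsed /proc directly):

import sys
import time
import psutil

def watch(pid, interval=0.5):
    # Print the resident memory of a process over time, one line per sample
    proc = psutil.Process(pid)
    while proc.is_running():
        print("%.1f\t%.1f MB" % (time.time(), proc.memory_info().rss / 1e6))
        time.sleep(interval)

if __name__ == '__main__':
    watch(int(sys.argv[1]))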
See itertools.permutations (python standard library)
e.g.
In [3]: list(itertools.permutations([1,1,0,0]))
Out[3]:
[(1, 1, 0, 0),
(1, 1, 0, 0),
(1, 0, 1, 0),
(1, 0, 0, 1),
(1, 0, 1, 0),
(1, 0, 0, 1),
(1, 1, 0, 0),
(1, 1, 0, 0),
(1, 0, 1, 0),
(1, 0, 0, 1),
(1, 0, 1, 0),
(1, 0, 0, 1),
(1, 0, 0, 1),
(1, 0, 1, 0),
(1, 1, 0, 0)])
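Since the repeated 1s and 0s make many of those orderings identical, a common
follow-up (not part of the original reply) is to deduplicate them:

import itertools

unique_perms = sorted(set(itertools.permutations([1, 1, 0, 0])))
print(unique_perms)
# [(0, 0, 1, 1), (0, 1, 0, 1), (0, 1, 1, 0),
#  (1, 0, 0, 1), (1, 0, 1, 0), (1, 1, 0, 0)]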
On Fri, Mar 19, 2010 at 10:17 AM, Joe Kington jking...@wisc.edu wrote:
See itertools.permutations (python standard library)
e.g.
In [3]: list(itertools.permutations([1,1,0,0]))
Out[3]:
[(1, 1, 0, 0),
(1, 1, 0, 0),
(1, 0, 1, 0),
(1, 0
I'm just guessing here, but have you tried completely destroying the figure
each time, as Michael suggested?
That should avoid the problem you're having, I think...
At any rate, if you don't do a fig.clf(), I'm fairly sure matplotlib keeps a
reference to the data around.
Hope that helps,
-Joe
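A minimal sketch of the full-teardown approach being suggested (the plotting
loop here is hypothetical):

import matplotlib.pyplot as plt

for i in range(100):
    fig, ax = plt.subplots()
    ax.plot(range(10))
    fig.savefig('frame_%03d.png' % i)
    fig.clf()        # drop the artists (and their references to the data)
    plt.close(fig)   # release the figure itself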
There may be a more elegant way, but:
In [2]: a = np.arange(10)
In [3]: a[(a > 5) & (a < 8)]
Out[3]: array([6, 7])
On Wed, Sep 30, 2009 at 1:27 PM, Gökhan Sever gokhanse...@gmail.com wrote:
Hello,
How to conditionally index an array as shown below :
a = arange(10)
a[5 < a < 8]
to get
array([6,7])
Well, this is messy, and nearly unreadable, but it should work and is pure
Python (and I think it may even be endian-independent).
struct.unpack('b', struct.pack('>d', X)[0])[0] >= 0
(where X is the variable you want to test)
In [54]: struct.unpack('b', struct.pack('>d', 0.0)[0])[0] >= 0
Out[54]: True
In
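For reference (not part of the original reply), NumPy itself can test the
sign bit directly, including for negative zero:

import numpy as np

print(np.signbit(-0.0), np.signbit(0.0))   # True False
print(np.copysign(1.0, -0.0))              # -1.0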
helping that much,
-Joe
On Tue, Sep 29, 2009 at 12:19 PM, Joe Kington jking...@wisc.edu wrote:
Well, this is messy, and nearly unreadable, but it should work and is pure
Python (and I think it may even be endian-independent).
struct.unpack('b', struct.pack('>d', X)[0])[0] >= 0
(where X is the variable you
I know it's a bit pointless profiling these, but just so I can avoid doing
real work for a bit...
In [1]: import sys, struct, math
In [2]: def comp_struct(x):
...: # Get the first or last byte, depending on endianness
...: # (using 'f' or 'f' loses the signbit for -0.0 in older
a bug that was fixed somewhere in between?
On Tue, Sep 29, 2009 at 4:39 PM, Robert Kern robert.k...@gmail.com wrote:
On Tue, Sep 29, 2009 at 16:37, Joe Kington jking...@wisc.edu wrote:
I know it's a bit pointless profiling these, but just so I can avoid
doing
real work for a bit
scipy.ndimage.zoom is exactly what you're looking for, as Zach Pincus
already said.
As far as I know, numpy doesn't have any 3D interpolation routines, so
you'll have to install scipy. Interp2d will only interpolate slices of your
data, not the whole volume.
-Joe
On Thu, Jul 9, 2009 at 8:42 AM,
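A hedged sketch of the suggestion (the data and zoom factor are made up;
order sets the spline degree used for the interpolation):

import numpy as np
from scipy import ndimage

data = np.random.random((10, 20, 30))

# Resample the whole volume to twice the resolution along each axis
finer = ndimage.zoom(data, zoom=2, order=3)
print(finer.shape)   # (20, 40, 60)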
Hi folks,
This is probably a very simple question, but it has me stumped...
I have an integer 2D array containing 3rd dimesion indicies that I'd like to
use to index values in a 3D array.
Basically, I want the equivalent of:
output = np.zeros((ny,nx))
for i in xrange(ny):
for j in
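One vectorized way to do this kind of lookup (a sketch with assumed shapes:
data is (ny, nx, nz) and idx holds the third-axis index to pick for each
(i, j)):

import numpy as np

ny, nx, nz = 4, 5, 6
data = np.random.random((ny, nx, nz))
idx = np.random.randint(0, nz, size=(ny, nx))

# Broadcasted integer indexing picks data[i, j, idx[i, j]] for every i, j
output = data[np.arange(ny)[:, None], np.arange(nx)[None, :], idx]
print(output.shape)   # (4, 5)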
Thank you! That answered things quite nicely. My apologies for not finding
the earlier discussion before sending out the question...
Thanks again,
-Joe
On Sat, Jun 13, 2009 at 7:17 PM, Robert Kern robert.k...@gmail.com wrote:
On Sat, Jun 13, 2009 at 19:11, Joe Kington jking...@wisc.edu wrote: