Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-11 Thread Joe Kington
In python2 it appears that multiprocessing uses pickle protocol 0 which
> must cause a big slowdown (a factor of 100) relative to protocol 2, and
> uses pickle instead of cPickle.
>
>
Even on Python 2.x, multiprocessing uses protocol 2, not protocol 0.  The
default for the `pickle` module changed, but multiprocessing has always
used a binary pickle protocol to communicate between processes.  Have a
look at multiprocessing's forking.py in Python 2.7.

As some context here for folks who may not be aware, Sturla is referring to
the shared memory implementation he wrote earlier, which avoids actually
pickling the data and instead essentially pickles a pointer to an array in
shared memory.  As Sturla very nicely summed up, it saves memory usage, but
doesn't help the deeper issues.  You're far better off just communicating
between processes as opposed to using shared memory.
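As a rough illustration of that last point (a sketch only, not anything from
Sturla's implementation): letting multiprocessing pickle and send chunks of
an array to worker processes is usually simple and fast enough.

import numpy as np
import multiprocessing

def work(chunk):
    # placeholder for a real per-chunk computation
    return chunk.sum()

if __name__ == '__main__':
    data = np.random.rand(1000000)
    pool = multiprocessing.Pool()
    # each chunk is pickled (binary protocol) and sent to a worker process
    results = pool.map(work, np.array_split(data, 8))
    pool.close()
    pool.join()
    print(sum(results))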


Re: [Numpy-discussion] what would you expect A[none] to do?

2015-12-31 Thread Joe Kington
Slicing with None adds a new dimension.  It's a common paradigm, though
usually you'd use A[np.newaxis] or A[np.newaxis, ...] instead for
readability.  (np.newaxis is None, but it's a lot more readable.)

There's a good argument to be made that slicing with a single None
shouldn't add a new axis, and only the more readable forms like A[None, :],
A[..., None], etc should.

However, that would rather seriously break backwards compatibility. There's
a fair amount of existing code that assumes "A[None]" prepends a new axis.
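For example, a quick sketch of the behavior in question:

import numpy as np

A = np.zeros(5760)
print(A[None].shape)        # (1, 5760) -- a new leading axis
print(A[None, :].shape)     # (1, 5760) -- same thing, more explicit
print(A[..., None].shape)   # (5760, 1) -- a new trailing axis
print(A[np.newaxis].shape)  # (1, 5760) -- np.newaxis is just None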

On Thu, Dec 31, 2015 at 10:36 AM, Neal Becker  wrote:

> Neal Becker wrote:
>
> > In my case, what it does is:
> >
> > A.shape = (5760,)
> > A[none] -> (1, 5760)
> >
> > In my case, use of none here is just a mistake.  But why would you want
> > this to be accepted at all, and how should it be interpreted?
>
> Actually, in my particular case, if it just acted as a noop, returning the
> original array, that would have been perfect.  No idea if that's a good
> result in general.
>


Re: [Numpy-discussion] Sign of NaN

2015-09-29 Thread Joe Kington
On Tue, Sep 29, 2015 at 11:14 AM, Antoine Pitrou 
wrote:

>
> None for example? float('nan') may be a bit weird amongst e.g. an array
> of Decimals


The downside to `None` is that it's one more thing to check for and makes
object arrays an even weirder edge case.  (Incidentally, Decimal does have
its own non-float NaN which throws a whole different wrench in the works. `
np.sign(Decimal('NaN'))` is going to raise an error no matter what.)

A float (or numpy) NaN makes more sense to return for mixed datatypes than
None does, in my opinion. At least then one can use `isfinite`, etc., to
check the result, whereas `np.isfinite(None)` will raise an error.
Furthermore, if there's at least one floating point NaN in the object
array, getting a float NaN out makes sense.
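(A quick sketch of that distinction -- the None case simply can't be checked
the same way:)

import numpy as np

print(np.isfinite(np.nan))   # False -- a float NaN is easy to test for
try:
    np.isfinite(None)        # None can't be tested this way
except TypeError as err:
    print("raises:", err)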

Just my $0.02, anyway.


Re: [Numpy-discussion] New function `count_unique` to generate contingency tables.

2014-08-12 Thread Joe Kington
On Tue, Aug 12, 2014 at 11:17 AM, Eelco Hoogendoorn 
hoogendoorn.ee...@gmail.com wrote:

 Thanks. Prompted by that stackoverflow question, and similar problems I
 had to deal with myself, I started working on a much more general extension
 to numpy's functionality in this space. Like you noted, things get a little
 panda-y, but I think there is a lot of panda's functionality that could or
 should be part of the numpy core, a robust set of grouping operations in
 particular.

 see pastebin here:
 http://pastebin.com/c5WLWPbp


On a side note, this is related to a pull request of mine from a while back:
https://github.com/numpy/numpy/pull/3584

There was a lot of disagreement on the mailing list about what to call a
"unique slices along a given axis" function, so I wound up closing the pull
request pending more discussion.

At any rate, I think it's a useful thing to have in base numpy.
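As a rough sketch of the kind of thing being discussed (plain numpy, with
purely illustrative data), a contingency table of two label arrays can be
built from np.unique plus np.add.at:

import numpy as np

x = np.array(['a', 'b', 'a', 'b', 'b'])
y = np.array([1, 1, 2, 2, 2])

x_vals, x_idx = np.unique(x, return_inverse=True)
y_vals, y_idx = np.unique(y, return_inverse=True)

table = np.zeros((x_vals.size, y_vals.size), dtype=int)
np.add.at(table, (x_idx, y_idx), 1)
print(table)
# [[1 1]
#  [1 2]]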


Re: [Numpy-discussion] [help needed] associativity and precedence of '@'

2014-03-15 Thread Joe Kington
On Sat, Mar 15, 2014 at 1:28 PM, Nathaniel Smith n...@pobox.com wrote:

 On Sat, Mar 15, 2014 at 3:41 AM, Nathaniel Smith n...@pobox.com wrote:
  Hi all,
 
  Here's the main blocker for adding a matrix multiply operator '@' to
 Python:
  we need to decide what we think its precedence and associativity should
 be.

 Another data point that might be useful:

 Matlab: same-left


 R: tight-left



I was going to ask this earlier, but I was worried I was missing something
major.

Why was tight-left not an option?

This means that if you don't use parentheses, you get:
   a @ b @ c  ->  (a @ b) @ c
   a * b @ c  ->  a * (b @ c)
   a @ b * c  ->  (a @ b) * c
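(A quick sketch, using np.dot since '@' doesn't exist yet, of why the
grouping matters in practice -- the results match, but the intermediate
shapes and flop counts don't:)

import numpy as np

a = np.random.rand(2, 3)
b = np.random.rand(3, 4)
c = np.random.rand(4, 5)

left_assoc = np.dot(np.dot(a, b), c)   # (a @ b) @ c -- intermediate is (2, 4)
right_assoc = np.dot(a, np.dot(b, c))  # a @ (b @ c) -- intermediate is (3, 5)

print(np.allclose(left_assoc, right_assoc))  # True: same answer, different work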

In my (very inexperienced) opinion, it seems like the most intuitive
option.

Cheers,
-Joe


 IDL: same-left

 GAUSS: same-left (IIUC -- any GAUSS experts please correct me if I
 misunderstood the fine manual)

 Mathematica: instead of having an associativity, a @ b @ c gets
 converted into mdot([a, b, c])

 --
 Nathaniel J. Smith
 Postdoctoral researcher - Informatics - University of Edinburgh
 http://vorpus.org


Re: [Numpy-discussion] Padding An Array Along A Single Axis

2014-01-03 Thread Joe Kington
You can use np.pad for this:

In [1]: import numpy as np

In [2]: x = np.ones((3, 3))

In [3]: np.pad(x, [(0, 0), (0, 1)], mode='constant')
Out[3]:
array([[ 1.,  1.,  1.,  0.],
   [ 1.,  1.,  1.,  0.],
   [ 1.,  1.,  1.,  0.]])

Each item of the pad_width (second) argument is a (before, after) tuple for
the corresponding axis.  I've only padded the end of the last axis, but if
you wanted to pad both sides of it:

In [4]: np.pad(x, [(0, 0), (1, 1)], mode='constant')
Out[4]:
array([[ 0.,  1.,  1.,  1.,  0.],
   [ 0.,  1.,  1.,  1.,  0.],
   [ 0.,  1.,  1.,  1.,  0.]])

Hope that helps,
-Joe





On Fri, Jan 3, 2014 at 6:58 AM, Freddie Witherden fred...@witherden.org wrote:

 Hi all,

 This should be an easy one but I can not come up with a good solution.
 Given an ndarray with a shape of (..., X) I wish to zero-pad it to have
 a shape of (..., X + K), presumably obtaining a new array in the process.

 My best solution this far is to use

np.zeros(curr.shape[:-1] + (curr.shape[-1] + K,))

 followed by an assignment.  However, this seems needlessly cumbersome.
 I looked at np.pad but it does not seem to provide a means of just
 padding a single axis easily.

 Regards, Freddie.




Re: [Numpy-discussion] Reading from binary file with memmap, with offset

2013-10-10 Thread Joe Kington
You just need to supply the offset kwarg to memmap.

for example:

with open(localfile, 'r') as fd:
  # read offset from first line of file
  offset = int(next(fd).split()[-2])
  np.memmap(fd, dtype='float32', mode='r', offset=offset)

Also, there's no need to do things like
`offset = int(fd.readlines()[0].split()[-2])`.

Just do `offset = int(next(fd).split()[-2])` instead.  readlines() reads the
entire file into memory, and you only want the first line.

Hope that helps!
-Joe


On Thu, Oct 10, 2013 at 7:43 AM, Andreas Hilboll li...@hilboll.de wrote:

 Hi,

 I have a problem using memmap correctly. I need to read a data file
 which consists of an ASCII header and appended binary single precision
 floating point values. memmap complains that the Size of available data
 is not a multiple of the data-type size. But as far as I can tell, the
 size *does* match the data-type size.

 The problem is illustrated by the following code:

 ---8---

 url = "http://www.iup.uni-bremen.de/~hilboll/download/20120204.XD4_N2"
 localfile = "np_memmap.dat"

 import os
 import urllib

 import numpy as np


 # download data file
 if not os.access(localfile, os.R_OK):
   urllib.urlretrieve(url, localfile)

 with open(localfile, "r") as fd:
   # read offset from first line of file
   offset = int(fd.readlines()[0].split()[-2])
   # jump to begin of data block
   fd.seek(offset)
   # read until EOF
   blob = fd.read()
   print("Size of data blob [bytes]: {}".format(len(blob)))
   print("This can actually be divided by 4: {} / 4.0 = {}".format(
       len(blob), len(blob) / 4.0))
   # go back to begin of data block
   fd.seek(offset)
   print("But it cannot be loaded as np.memmap with dtype float32:")
   np.memmap(fd, dtype="float32", mode="r")

 ---8---

 Any help is greatly appreciated :)

 -- Andreas.


Re: [Numpy-discussion] Adding an axis argument to numpy.unique

2013-08-20 Thread Joe Kington

 To me, `unique_rows` sounds perfect.  To go along columns: unique_rows(A.T)


Stéfan


Personally, I like this idea as well.  A separate `unique_rows` function,
which potentially takes an `axis` argument.  (Alternately,
`unique_sequences` wouldn't imply a particular axis.)

Of course, the obvious downside to this is namespace pollution.  The upside
would be discoverability.


Re: [Numpy-discussion] Adding an axis argument to numpy.unique

2013-08-19 Thread Joe Kington
...snip


 However, my first interpretation of an axis argument in unique would
 be that it treats each column (or whatever along axis) separately.
 Analogously to max, argmax and similar.


Good point!

That's certainly a potential source of confusion.  However, I can't seem to
come up with a better name for the kwarg.  Matlab's unique function has a
'rows' option, which is probably a more intuitive name, but doesn't imply
the expansion to N dimensions.

'axis' is still fairly idiomatic, despite the confusion over unique
rows/columns/etc vs unique items within each row/column/etc.

Any thoughts on a better name for the argument?


 On second thought:
 unique with axis working on each column separately wouldn't create a
 nice return array, because it won't be rectangular (in general)

Josef


Yeah, and unique items within each row/column/etc would be best
implemented as a one-line list comprehension for that reason, rather than
an addition to unique, i.m.o.
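(For example, a minimal sketch of that per-row version:)

import numpy as np

a = np.array([[1, 1, 2],
              [3, 3, 3]])
per_row = [np.unique(row) for row in a]  # ragged in general, so a list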

Thanks for the feedback!
-Joe


[Numpy-discussion] Adding an axis argument to numpy.unique

2013-08-18 Thread Joe Kington
Hi everyone,

I've recently put together a pull request that adds an `axis` kwarg to
`numpy.unique` so that `unique` can easily be used to find unique
rows/columns/sub-arrays/etc of a larger array.

https://github.com/numpy/numpy/pull/3584

Currently, this works as a wrapper around `unique`. If `axis` is specified,
it reshapes the input to a 2D contiguous array, views each row as a single
item, then passes it on to `unique`.  For int and string dtypes, each row
is viewed as a void dtype, and therefore bitwise equality is used for
comparisons.  For all other dtypes, each row is viewed as a structured
array.
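(As a rough sketch of that void-dtype trick -- not the PR code itself -- for
a contiguous 2D int array:)

import numpy as np

a = np.ascontiguousarray([[1, 2], [3, 4], [1, 2]])

# View each row as a single opaque (void) item so np.unique compares raw bytes
rows = a.view(np.dtype((np.void, a.dtype.itemsize * a.shape[1])))
_, idx = np.unique(rows, return_index=True)
print(a[np.sort(idx)])
# [[1 2]
#  [3 4]]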

The current implementation has two main drawbacks:

   1. For anything other than ints and strings, it's relatively slow.
   2. It doesn't work with object arrays of any sort.

I'd appreciate any thoughts/feedback folks might have on both the general
idea and this specific implementation.  I think it's a worthwhile
addition, but I'm biased.

Thanks!
-Joe


[Numpy-discussion] Fwd: Python Session at AGU 2013

2013-08-01 Thread Joe Kington
For anyone attending the AGU (American Geophysical Union) fall meeting this
year, there will be a session on python and big data in the earth
sciences. Abstract submission is still open until Aug. 6th. See below for
more info.

Cheers,
-Joe

-- Forwarded message --
From: IRIS Webmaster webmas...@iris.washington.edu
Date: Thu, Aug 1, 2013 at 11:18 AM
Subject: [iris-bulk] Python Session at AGU 2013
To: bulkm...@iris.washington.edu


Forwarded on behalf of:
Lion Krischer
LMU Munich
krisc...@geophysik.uni-muenchen.de


Dear members of the IRIS community,

with the deadline for abstract submission to the AGU Fall Meeting 2013
approaching fast, I wanted to point out a session revolving around the
Python programming language.

If you will be attending the meeting and are using Python for your research
or workflows, please consider submitting an abstract to the IN-034 session
until *next week Tuesday, August 6th*.

https://fallmeeting.agu.org/2013/scientific-program/session-search/sessions/in034-ultra-scale-earth-systems-analyses-using-python/

It aims to promote the use of Python in the earth-science community.

All the best,

Lion Krischer and Thomas Lecocq


Re: [Numpy-discussion] yet another trivial question

2012-11-02 Thread Joe Kington
On Fri, Nov 2, 2012 at 9:18 AM, Neal Becker ndbeck...@gmail.com wrote:

 I'm trying to convert some matlab code.  I see this:

 b(1)=[];

 AFAICT, this removes the first element of the array, shifting the others.

 What is the preferred numpy equivalent?

 I'm not sure if

 b[:] = b[1:]


Unless I'm missing something, don't you just want:

b = b[1:]



 is safe or not



It's not exactly the same as the matlab equivalent, as matlab will always
make a copy, and this will be a view of the same array.  For example, if
you do something like this:

import numpy as np

a = np.arange(10)

b = a[1:]

b[3] = 1000

print a
print b

You'll see that modifying b in-place will modify a as well, as b is
just a view into a.  This wouldn't be the case in matlab (if I remember
correctly, anyway...).
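(If you do want Matlab's copy semantics, a couple of equivalent options, as
a quick sketch:)

import numpy as np

b = np.arange(10)
b = b[1:].copy()     # a copy, not a view -- closer to Matlab's b(1)=[]
# or, equivalently:
# b = np.delete(b, 0)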



Re: [Numpy-discussion] Floating point close function?

2012-03-03 Thread Joe Kington
On Sat, Mar 3, 2012 at 10:06 AM, Olivier Delalleau sh...@keba.be wrote:

 Le 3 mars 2012 11:03, Robert Kern robert.k...@gmail.com a écrit :

 On Sat, Mar 3, 2012 at 15:51, Olivier Delalleau sh...@keba.be wrote:
  Le 3 mars 2012 10:27, Robert Kern robert.k...@gmail.com a écrit :
 
  On Sat, Mar 3, 2012 at 14:34, Robert Kern robert.k...@gmail.com
 wrote:
   On Sat, Mar 3, 2012 at 14:31, Ralf Gommers 
 ralf.gomm...@googlemail.com
   wrote:
 
   Because this is also bad:
   np.<TAB>
   Display all 561 possibilities? (y or n)
  
   Not as bad as overloading np.allclose(x,y,return_array=True). Or
   deprecating np.allclose() in favor of np.close().all().
 
  I screwed up this paragraph. I meant that as Another alternative
  would be to deprecate 
 
 
  np.close().all() would probably be a lot less efficient in terms of CPU
 /
  memory though, wouldn't it?

 No. np.allclose() is essentially doing exactly this already.


 Ok. What about then, np.allclose() could theoretically be a lot more
 efficient in terms of CPU / memory? ;)


allclose() does short-circuit in a few cases where the pattern of Inf's
doesn't match.  E.g.

if not all(xinf == isinf(y)):
    return False
if not all(x[xinf] == y[xinf]):
    return False

At least for the function I wrote, allclose() would be a bit faster than
isclose().all() in those specific cases.  It's not likely to be terribly
significant, though.
-Joe



 -=- Olivier



Re: [Numpy-discussion] Floating point close function?

2012-03-03 Thread Joe Kington
On Sat, Mar 3, 2012 at 9:26 AM, Robert Kern robert.k...@gmail.com wrote:

 On Sat, Mar 3, 2012 at 15:22, Benjamin Root ben.r...@ou.edu wrote:
 
 
  On Saturday, March 3, 2012, Robert Kern robert.k...@gmail.com wrote:
  On Sat, Mar 3, 2012 at 14:31, Ralf Gommers ralf.gomm...@googlemail.com
 
  wrote:
 
 
  On Sat, Mar 3, 2012 at 3:05 PM, Robert Kern robert.k...@gmail.com
  wrote:
 
  On Sat, Mar 3, 2012 at 13:59, Ralf Gommers 
 ralf.gomm...@googlemail.com
  wrote:
  
  
   On Thu, Mar 1, 2012 at 11:44 PM, Joe Kington jking...@wisc.edu
   wrote:
  
   Is there a numpy function for testing floating point equality that
   returns
   a boolean array?
  
   I'm aware of np.allclose, but I need a boolean array.  Properly
   handling
   NaN's and Inf's (as allclose does) would be a nice bonus.
  
   I wrote the function below to do this, but I suspect there's a
 method
   in
   numpy that I missed.
  
  
   I don't think such a function exists, would be nice to have. How
 about
   just
   adding a keyword return_array to allclose to do so?
 
  As a general design principle, adding a boolean flag that changes the
  return type is worse than making a new function.
 
 
  That's certainly true as a general principle. Do you have a concrete
  suggestion in this case though?
 
  np.close()
 
 
  When I read that, I mentally think of close as in closing a file.  I
 think
  we need a synonym.

 np.isclose()


Would it be helpful if I went ahead and submitted a pull request with the
function in my original question called isclose (along with a complete
docstring and a few tests)?

One note:
At the moment, it deliberately compares NaN's as equal. E.g.

isclose([np.nan, np.nan], [np.nan, np.nan])

will return:

[True, True]

This obviously runs counter to the standard way NaN's are handled (and
indeed the definition of NaN).

However, in the context of a floating point "close to" function, I think it
makes the most sense.

I've had this sitting around in a small project for a while now, and it's
been more useful to have it compare NaN's as approximately equal than not
for my purposes at least.

Nonetheless, it's something that needs additional consideration.

Thanks,
-Joe



 --
 Robert Kern


Re: [Numpy-discussion] Floating point close function?

2012-03-03 Thread Joe Kington
On Sat, Mar 3, 2012 at 12:50 PM, Olivier Delalleau sh...@keba.be wrote:


 Would it be helpful if I went ahead and submitted a pull request with the
 function in my original question called isclose (along with a complete
 docstring and a few tests)?

 One note:
 At the moment, it deliberately compares NaN's as equal. E.g.

 isclose([np.nan, np.nan], [np.nan, np.nan])

 will return:

 [True, True]

 This obviously runs counter to the standard way NaN's are handled (and
 indeed the definition of NaN).

 However, in the context of a floating point close to function, I think
 it makes the most sense.

 I've had this sitting around in a small project for awhile now, and it's
 been more useful to have it compare NaN's as approximately equal than not
 for my purposes at least.

 Nonetheless, it's something that needs additional consideration.

 Thanks,
 -Joe


 It would be confusing if numpy.isclose().all() was different from
 numpy.allclose(). That being said, I agree it's useful to have NaNs compare
 equal in some cases, maybe it could be a new argument to the function?


Good point. I went ahead and added an `equal_nan` kwarg and removed the
`check_invalid` kwarg I had in before.  I also made it mimic what
np.equal() does in the case of two scalars (returning a scalar instead of
an array).
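(For reference, a quick sketch of the behavior in question, written against
the isclose signature as it now exists in numpy:)

import numpy as np

x = np.array([0.1, np.nan, np.inf])
y = np.array([0.1000001, np.nan, np.inf])

print(np.isclose(x, y))                  # [ True False  True]
print(np.isclose(x, y, equal_nan=True))  # [ True  True  True]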

I went ahead and made a pull request:
https://github.com/numpy/numpy/pull/224   Hope that's alright.

Cheers,
-Joe



 -=- Olivier



[Numpy-discussion] Floating point close function?

2012-03-01 Thread Joe Kington
Is there a numpy function for testing floating point equality that returns
a boolean array?

I'm aware of np.allclose, but I need a boolean array.  Properly handling
NaN's and Inf's (as allclose does) would be a nice bonus.

I wrote the function below to do this, but I suspect there's a method in
numpy that I missed.

import numpy as np

def close(a, b, rtol=1.e-5, atol=1.e-8, check_invalid=True):
    """Similar to numpy.allclose, but returns a boolean array.
    See numpy.allclose for an explanation of *rtol* and *atol*."""
    def within_tol(x, y, atol, rtol):
        return np.less_equal(np.abs(x-y), atol + rtol * np.abs(y))
    x = np.array(a, copy=False)
    y = np.array(b, copy=False)
    if not check_invalid:
        return within_tol(x, y, atol, rtol)
    xfin = np.isfinite(x)
    yfin = np.isfinite(y)
    if np.all(xfin) and np.all(yfin):
        return within_tol(x, y, atol, rtol)
    else:
        # Avoid subtraction with infinite/nan values...
        cond = np.zeros(np.broadcast(x, y).shape, dtype=np.bool)
        mask = xfin & yfin
        cond[mask] = within_tol(x[mask], y[mask], atol, rtol)
        # Inf and -Inf equality...
        cond[~mask] = (x[~mask] == y[~mask])
        # NaN equality...
        cond[np.isnan(x) & np.isnan(y)] = True
        return cond

# A few quick tests...
assert np.any(close(0.31, np.array([0.1, 0.2, 0.3, 0.4])))

x = np.array([0.1, np.nan, np.inf, -np.inf])
y = np.array([0.101, np.nan, np.inf, -np.inf])
assert np.all(close(x, y))

x = np.array([0.1, 0.2, np.inf])
y = np.array([0.101, np.nan, 0.2])
assert not np.all(close(x, y))


Thanks,
-Joe


Re: [Numpy-discussion] Apparently non-deterministic behaviour of complex array multiplication

2011-12-01 Thread Joe Kington
On Thu, Dec 1, 2011 at 2:47 PM, kneil magnetotellur...@gmail.com wrote:


 Hi Pierre,
 I was thinking about uploading some examples but strangely, when I store
 the
 array using for example: np.save('Y',Y)
 and then reload it in a new workspace, I find that the problem does not
 reproduce.  It would seem somehow to be
 associated with the 'overhead' of the workspace I am in...


 The context here is that I read in 24 files, totaling about 7GB, and then
 forming data matrices of size 24 x N, where N varies.  I tried for example
 this morning to run the same code, but working with only 12 of the files -
 just to see if NaNs appeared.  No NaN appeared however when the machine was
 being less 'taxed'.


Are you using non-ECC RAM, by chance?  (Though if you have >4GB of RAM, I
can't imagine that you wouldn't be using ECC...)

Alternately, have you run memtest lately?  That sounds suspiciously like bad
RAM...



 Strangely enough, I also seterr(all='raise') in the workspace before
 executing this (in the case where I read all 24 files and do net NaN) and I
 do not get any messages about the NaN while the calculation is taking
 place.

 If you want to play with this I would be willing to put the data on a file
 sharing site (its around 7.1G of data) together with the code and you could
 play with it from there.  The code is not too many lines - under 100 lines,
 and I am sure I could trim it down from there.

 Let me know if you are interested.
 cheers,
 K


 Pierre Haessig-2 wrote:
 
  Le 01/12/2011 02:44, Karl Kappler a écrit :
  Also note that I have had a similar problem with much smaller arrays,
  say 24 x 3076
  Hi Karl,
  Could you post a self-contained code with such a small array (or even
  smaller. the smaller, the better...) so that we can run it and play with
  it ?
  --
  Pierre
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 

 --
 View this message in context:
 http://old.nabble.com/Apparently-non-deterministic-behaviour-of-complex-array-multiplication-tp32893004p32898383.html
 Sent from the Numpy-discussion mailing list archive at Nabble.com.



Re: [Numpy-discussion] Indexing a masked array with another masked array leads to unexpected results

2011-11-04 Thread Joe Kington
On Fri, Nov 4, 2011 at 5:26 AM, Pierre GM pgmdevl...@gmail.com wrote:


 On Nov 03, 2011, at 23:07 , Joe Kington wrote:

  I'm not sure if this is exactly a bug, per se, but it's a very confusing
 consequence of the current design of masked arrays…
 I would just add a I think between the but and it's before I could
 agree.

  Consider the following example:
 
  import numpy as np
 
  x = np.ma.masked_all(10, dtype=np.float32)
  print x
  x[x > 0] = 5
  print x
 
  The exact results will vary depending the contents of the empty memory
 the array was initialized from.

 Not a surprise. But isn't mentioned in the doc somewhere that using a
 masked array as index is a very bad idea ? And that you should always fill
 it before you use it as an array ? (Actually, using a MaskedArray as index
 used to raise an IndexError. But I thought it was a bit too harsh, so I
 dropped it).


Not that I can find in the docs (Perhaps I just missed it?). At any rate,
it's not mentioned in the numpy.ma section on indexing:
http://docs.scipy.org/doc/numpy/reference/maskedarray.generic.html#indexing-and-slicing

The only mention of it is a comment in MaskedArray.__setitem__ where the
IndexError is commented out.


 ma.masked_all is an empty array with all its elements masked. Ie, you have
 an uninitialized ndarray as data, and a bool array of the same size, full
 of True. The operative word is here uninitialized.

  This wreaks havoc when filtering the contents of masked arrays (and
 leads to hard-to-find bugs!).  The mask of the array in question is altered
 at random (or, rather, based on the masked values as well as the unmasked
 ones).

 Once again, you're working on an *uninitialized* array. What you should
 really do is to initialize it first, e.g. by 0, or whatever would make
 sense in your field, and then work from that.


Sure, I shouldn't have used that as the example.

My point was that it's counter-intuitive that something like x[x > 0] = 0
alters the mask of x based on the values of _masked_ elements.  How it's
initialized is irrelevant (though, of course, it wouldn't be semi-random if
it were initialized in another way).


  I can see the reasoning behind the way it works. It makes sense that x >
  0 returns a masked boolean array with potentially several elements
 masked, as well as the unmasked elements greater than 0.

 Well, x > 0 is also a masked array, with its mask full of True. Not very
 usable by itself, and especially *not* for indexing.


  However, wouldn't it make more sense to have MaskedArray.__setitem__
 only operate on the unmasked elements of the indx passed in (at least in
 the case where the assigned value isn't a masked array)?


 Normally, that should be the case. But you're not working in normal
 conditions, here. A bit like trying to boil water on a stove with a plastic
 pan.


x[x > threshold] = something is a very common idiom for ndarrays.

I think most people would find it surprising that this operation doesn't
ignore the masked values.

I noticed this because one of my coworkers was complaining that a piece of
my code was messing up their masked arrays.  I'd never tested it with
masked arrays, but it took me ages to find, just because I wasn't looking
in places where I was just using common idioms.  In this particular case,
they'd initialized it with masked_all, so it effectively altered the mask
of the array at random.  Regardless of how it was initialized, though, it
is surprising that the mask of x is changed based on masked values.

I just think it would be useful for it to be documented.

Cheers,

-Joe


[Numpy-discussion] Indexing a masked array with another masked array leads to unexpected results

2011-11-03 Thread Joe Kington
Forgive me if this is already a well-know oddity of masked arrays. I hadn't
seen it before, though.

I'm not sure if this is exactly a bug, per se, but it's a very confusing
consequence of the current design of masked arrays...

Consider the following example:

import numpy as np

x = np.ma.masked_all(10, dtype=np.float32)
print x
x[x > 0] = 5
print x

The exact results will vary depending the contents of the empty memory the
array was initialized from.

This wreaks havoc when filtering the contents of masked arrays (and leads
to hard-to-find bugs!).  The mask of the array in question is altered at
random (or, rather, based on the masked values as well as the unmasked ones).

Of course, once you're aware of this, there are a number of workarounds
(namely, filling the array or explicitly operating on x.data instead of
x).
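(For instance, a quick sketch of the filling workaround:)

import numpy as np

x = np.ma.masked_all(10, dtype=np.float32)
# Fill the masked entries of the boolean index with False, so only unmasked
# elements that are actually > 0 get assigned:
x[(x > 0).filled(False)] = 5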

I can see the reasoning behind the way it works. It makes sense that x >
0 returns a masked boolean array with potentially several elements masked,
as well as the unmasked elements greater than 0.

However, wouldn't it make more sense to have MaskedArray.__setitem__ only
operate on the unmasked elements of the indx passed in (at least in the
case where the assigned value isn't a masked array)?

Cheers,
-Joe


Re: [Numpy-discussion] lazy loading ndarrays

2011-07-26 Thread Joe Kington
Similar to what Matthew said, I often find that it's cleaner to make a
separate class with a data (or somesuch) property that lazily loads the
numpy array.

For example, something like:

class DataFormat(object):
    def __init__(self, filename):
        self.filename = filename
        for key, value in self._read_header().iteritems():
            setattr(self, key, value)

    @property
    def data(self):
        try:
            return self._data
        except AttributeError:
            self._data = self._read_data()
            return self._data
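Hypothetical usage, assuming _read_header() and _read_data() are written for
your particular format (the names here are just placeholders):

f = DataFormat('my_volume.dat')
print(f.nx, f.ny)  # header attributes: cheap, nothing big loaded yet
arr = f.data       # the file is actually read here, then cached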

Hope that helps,
-Joe

On Tue, Jul 26, 2011 at 4:15 PM, Matthew Brett matthew.br...@gmail.com wrote:

 Hi,

 On Tue, Jul 26, 2011 at 5:11 PM, Craig Yoshioka crai...@me.com wrote:
  I want to subclass ndarray to create a class for image and volume data,
 and when referencing a file I'd like to have it load the data only when
 accessed.  That way the class can be used to quickly set and manipulate
 header values, and won't load data unless necessary.  What is the best way
 to do this?  Are there any hooks I can use to load the data when an array's
 values are first accessed or manipulated?  I tried some trickery with
 __array_interface__ but couldn't get it to work very well.  Should I just
 use a memmapped array, and give up on a purely 'lazy' approach?

 What kind of images are you loading?   We do lazy loading in nibabel,
 for medical image type formats:

 http://nipy.sourceforge.net/nibabel/

 - but our images _have_ arrays and headers, rather than (appearing to
 be) arrays.  Thus something like:

 import nibabel as nib

 img = nib.load('my_image.img')
 # data not loaded at this point
 data = img.get_data()
 # data loaded now.  Maybe memmapped if the format allows

 If you think you might have similar needs, I'd be very happy to help
 you get going in nibabel...

 Best,

 Matthew


Re: [Numpy-discussion] How to get the prices of Moving Averages Crosses?

2011-03-01 Thread Joe Kington
Hi Andre,

Assuming that you want the exact point (date and value) where each crossing
occurs, you'll need to interpolate where they cross.

There are a number of different ways to do so, but assuming you're okay with
linear interpolation, and everything's sampled on the same dates, you can
simply do something like this:

import numpy as np
import matplotlib.pyplot as plt

def main():
    x = np.linspace(0, 2*np.pi, 20)
    y1 = np.sin(2*x)
    y2 = np.cos(x)
    crossings = find_crossings(x, y1, y2)
    cross_x, cross_y = crossings.T
    plt.plot(x, y1, 'bx-')
    plt.plot(x, y2, 'gx-')
    plt.plot(cross_x, cross_y, 'ro')
    plt.show()

def find_crossings(x, y1, y2):
    diff = np.diff(np.sign(y1 - y2))
    indicies, = np.nonzero(diff)
    crossings = [interpolate_crossing(i, x, y1, y2) for i in indicies]
    return np.array(crossings)

def interpolate_crossing(i, x, y1, y2):
    slope = (   (y1[i] - y2[i])
             / ((y2[i+1] - y2[i]) - (y1[i+1] - y1[i])))
    x = x[i] + slope * (x[i+1] - x[i])
    y = y1[i] + slope * (y1[i+1] - y1[i])
    return x, y

main()

[image: VXsqp.png]

On Tue, Mar 1, 2011 at 10:07 AM, Andre Lopes lopes80an...@gmail.com wrote:

 Hi,

 I'm new to Numpy. I'm doing some tests with some Stock Market Quotes

 My struggle right now is how to get the values of the moving averages
 crosses, I send an image in attach to illustrate what I'm trying to
 get.

 I'm using the this computation to get when the moving averages
 crosses, but when I look at the graph, the values doesn't seem ok.

 [quote]
 # Get when the ma20 cross ma50
 equal = np.round(ma20,2)==np.round(ma50,2)
 dates_cross  = (dates[equal])
 prices_cross = (prices[equal])
 [/quote]


 The full code is this:
 [quote]
 # Modules
 import datetime
 import numpy as np
 import matplotlib.finance as finance
 import matplotlib.mlab as mlab
 import matplotlib.pyplot as plot

 # Define quote
 startdate = datetime.date(2008,10,1)
 today = enddate = datetime.date.today()
 ticker = 'uso'

 # Catch CSV
 fh = finance.fetch_historical_yahoo(ticker, startdate, enddate)

 # From CSV to REACARRAY
 r = mlab.csv2rec(fh); fh.close()
 # Order by Desc
 r.sort()


 ### Methods Begin
 def moving_average(x, n, type='simple'):
     """
     compute an n period moving average.

     type is 'simple' | 'exponential'
     """
     x = np.asarray(x)
     if type == 'simple':
         weights = np.ones(n)
     else:
         weights = np.exp(np.linspace(-1., 0., n))

     weights /= weights.sum()

     a = np.convolve(x, weights, mode='full')[:len(x)]
     a[:n] = a[n]
     return a
 ### Methods End


 prices = r.adj_close
 dates = r.date
 ma20 = moving_average(prices, 20, type='simple')
 ma50 = moving_average(prices, 50, type='simple')

 # Get when the ma20 cross ma50
 equal = np.round(ma20,2)==np.round(ma50,2)
 dates_cross  = (dates[equal])
 prices_cross = (prices[equal])

 # Check whether ma20 > ma50
 # ma20_greater_than_ma50 = np.round(ma20,2) > np.round(ma50,2)
 # dates_ma20_greater_than_ma50  = (dates[ma20_greater_than_ma50])
 # prices_ma20_greater_than_ma50 = (prices[ma20_greater_than_ma50])

 print dates_cross
 print prices_cross
 #print dates_ma20_greater_than_ma50
 #print prices_ma20_greater_than_ma50


 plot.plot(prices)
 plot.plot(ma20)
 plot.plot(ma50)
 plot.show()
 [/quote]

 Someone can give me some clues?

 Best Regards,



Re: [Numpy-discussion] Memory error with numpy.loadtxt()

2011-02-25 Thread Joe Kington
Do you expect to have very large integer values, or only values over a
limited range?

If your integer values will fit into a 16-bit range (or even 32-bit -- keep
in mind that the default dtype is float64, even on a 64-bit machine), you
can potentially halve your memory usage.

I.e. Something like:
data = numpy.loadtxt(filename, dtype=numpy.int16)

Alternately, if you're already planning on using a (scipy) sparse array
anyway, it's easy to do something like this:

import numpy as np
import scipy.sparse

I, J, V = [], [], []
with open('infile.txt') as infile:
    for i, line in enumerate(infile):
        line = np.array(line.strip().split(), dtype=np.int)
        nonzeros, = line.nonzero()
        I.extend([i]*nonzeros.size)
        J.extend(nonzeros)
        V.extend(line[nonzeros])

data = scipy.sparse.coo_matrix((V, (I, J)), dtype=np.int,
                               shape=(i+1, line.size))

This will be much slower than numpy.loadtxt(...), but if you're just
converting the output of loadtxt to a sparse array, regardless, this would
avoid memory usage problems (assuming the array is mostly sparse, of
course).

Hope that helps,
-Joe



On Fri, Feb 25, 2011 at 9:37 AM, Jaidev Deshpande 
deshpande.jai...@gmail.com wrote:

 Hi

 Is it possible to load a text file 664 MB large with integer values and
 about 98% sparse? numpy.loadtxt() shows a memory error.

 If it's not possible, what alternatives could I have?

 The usable RAM on my machine running Windows 7 is 3.24 GB.

 Thanks.



[Numpy-discussion] Histogram does not preserve subclasses of ndarray (e.g. masked arrays)

2010-09-02 Thread Joe Kington
Hi all,

I just wanted to check if this would be considered a bug.

numpy.histogram does not appear to preserve subclasses of ndarrays (e.g.
masked arrays).  This leads to considerable problems when working with
masked arrays.  (As per this Stack Overflow question:
http://stackoverflow.com/questions/3610040/how-to-create-the-histogram-of-an-array-with-masked-values-in-numpy )

E.g.

import numpy as np
x = np.arange(100)
x = np.ma.masked_where(x > 30, x)

counts, bin_edges = np.histogram(x)

yields:
counts -- array([10, 10, 10, 10, 10, 10, 10, 10, 10, 10])
bin_edges -- array([  0. ,   9.9,  19.8,  29.7,  39.6,  49.5,  59.4,
69.3,  79.2, 89.1,  99. ])

I would have expected histogram to ignore the masked portion of the data.
Is this a bug, or expected behavior?  I'll open a bug report, if it's not
expected behavior...

This would appear to be easily fixed by using asanyarray rather than asarray
within histogram.  E.g. this diff for numpy/lib/function_base.py
Index: function_base.py
===
--- function_base.py(revision 8604)
+++ function_base.py(working copy)
@@ -132,9 +132,9 @@

 

-a = asarray(a)
+a = asanyarray(a)
 if weights is not None:
-weights = asarray(weights)
+weights = asanyarray(weights)
 if np.any(weights.shape != a.shape):
 raise ValueError(
 'weights should have the same shape as a.')
@@ -156,7 +156,7 @@
 mx += 0.5
 bins = linspace(mn, mx, bins+1, endpoint=True)
 else:
-bins = asarray(bins)
+bins = asanyarray(bins)
 if (np.diff(bins) < 0).any():
 raise AttributeError(
 'bins must increase monotonically.')

Thanks!
-Joe


Re: [Numpy-discussion] Histogram does not preserve subclasses of ndarray (e.g. masked arrays)

2010-09-02 Thread Joe Kington
On Thu, Sep 2, 2010 at 5:31 PM, josef.p...@gmail.com wrote:

 On Thu, Sep 2, 2010 at 3:50 PM, Joe Kington jking...@wisc.edu wrote:
  Hi all,
 
  I just wanted to check if this would be considered a bug.
 
  numpy.histogram does not appear to preserve subclasses of ndarrays (e.g.
  masked arrays).  This leads to considerable problems when working with
  masked arrays. (As per this Stack Overflow question)
 
  E.g.
 
  import numpy as np
  x = np.arange(100)
  x = np.ma.masked_where(x > 30, x)
 
  counts, bin_edges = np.histogram(x)
 
  yields:
  counts -- array([10, 10, 10, 10, 10, 10, 10, 10, 10, 10])
  bin_edges -- array([  0. ,   9.9,  19.8,  29.7,  39.6,  49.5,  59.4,
  69.3,  79.2, 89.1,  99. ])
 
  I would have expected histogram to ignore the masked portion of the data.
  Is this a bug, or expected behavior?  I'll open a bug report, if it's not
  expected behavior...

 If you want to ignore masked data it's just on extra function call

 histogram(m_arr.compressed())

 I don't think the fact that this makes an extra copy will be relevant,
 because I guess full masked array handling inside histogram will be a
 lot more expensive.

 Using asanyarray would also allow matrices in and other subtypes that
 might not be handled correctly by the histogram calculations.

 For anything else besides dropping masked observations, it would be
 necessary to figure out what the masked array definition of a
 histogram is, as Bruce pointed out.

 (Another interesting question would be if histogram handles nans
 correctly, searchsorted ???)

 Josef


Good points all around.  I'll skip the enhancement request.  Sorry for the
noise!
Thanks!
-Joe



 
  This would appear to be easily fixed by using asanyarray rather than
 asarray
  within histogram.  E.g. this diff for numpy/lib/function_base.py
  Index: function_base.py
  ===
  --- function_base.py(revision 8604)
  +++ function_base.py(working copy)
  @@ -132,9 +132,9 @@
 
   
 
  -a = asarray(a)
  +a = asanyarray(a)
   if weights is not None:
  -weights = asarray(weights)
  +weights = asanyarray(weights)
   if np.any(weights.shape != a.shape):
   raise ValueError(
   'weights should have the same shape as a.')
  @@ -156,7 +156,7 @@
   mx += 0.5
   bins = linspace(mn, mx, bins+1, endpoint=True)
   else:
  -bins = asarray(bins)
  +bins = asanyarray(bins)
    if (np.diff(bins) < 0).any():
   raise AttributeError(
   'bins must increase monotonically.')
 
  Thanks!
  -Joe
 
 
 


[Numpy-discussion] Downcasting an array in-place?

2010-05-11 Thread Joe Kington
Is it possible to downcast an array in-place?

For example:
x = np.random.random(10) # Placeholder for real data
x -= x.min()
x /= x.ptp() / 255
x = x.astype(np.uint8)  <-- returns a copy

First off, a bit of background to the question... At the moment, I'm trying
to downcast a large (10GB) array of uint16's to uint8's. I have enough RAM
to fit everything into memory, but I'd really prefer to use as few copies as
possible

In the particular case of a C-ordered uint16 array to uint8 on a
little-endian system, I can do this:
# x is the big 3D array of uint16's
x -= x.min()
x /= x.ptp() / 255
x = x.view(np.uint8)[:, :, ::2]

That works, but a)  produces a non-contiguous array, and b)  is awfully
case-specific.

Is there a way to do something similar to astype(), but have it
cannibalize the memory of the original array?  (e.g. the out argument in
a ufunc?)

Hopefully my question makes some sense to someone other than myself...

Thanks!
-Joe


Re: [Numpy-discussion] Memory profiling NumPy code?

2010-04-28 Thread Joe Kington
I know you're looking for something with much more fine-grained control,
(which I can't help much with) but I often find it useful to just plot the
overall memory of the program over time.

There may be a slicker way to do it, but here's the script I use, anyway...
(saved as ~/bin/quick_profile; usage: quick_profile <whatever>, e.g.
quick_profile python script.py)

#!/bin/sh

# Setup
datfile=$(mktemp)
echo ElapsedTime MemUsed > $datfile

starttime=$(date +%s.%N)

# Run the specified command in the background
$@ &

# While the last process is still going
while [ -n "`ps --no-headers $!`" ]
do
    bytes=$(ps -o rss -C $1 --no-headers | awk '{SUM += $1} END {print SUM}')
    elapsed=$(echo $(date +%s.%N) - $starttime | bc)
    echo $elapsed $bytes >> $datfile
    sleep 0.1
done

# Plot up the results with matplotlib
cat <<EOF | python
import pylab, sys, numpy
infile = file("$datfile")
infile.readline() # skip first line
data = [[float(dat) for dat in line.strip().split()] for line in infile]
data = numpy.array(data)
time,mem = data[:,0], data[:,1]/1024
pylab.plot(time,mem)
pylab.title("Profile of:  \"%s\"" % "$@")
pylab.xlabel('Elapsed Time (s): Total %0.5f s' % time.max())
pylab.ylabel('Memory Used (MB): Peak %0.2f MB' % mem.max())
pylab.show()
EOF

rm $datfile

Hope that helps a bit, anyway...
-Joe

On Mon, Apr 26, 2010 at 6:16 AM, David Cournapeau courn...@gmail.com wrote:

 On Mon, Apr 26, 2010 at 7:57 PM, Dag Sverre Seljebotn
 da...@student.matnat.uio.no wrote:
  I'd like to profile the memory usage of my application using tools like
  e.g. Heapy. However since NumPy arrays are allocated in C they are not
  tracked by Python memory profiling.
 
  Does anybody have a workaround to share? I really just want to track a
  few arrays in a friendly way from within Python (I am aware of the
  existance of C-level profilers).

 I think heapy has some hooks so that you can add support for
 extensions. Maybe we could provide a C API in numpy to make this easy,

 David


Re: [Numpy-discussion] lists of zeros and ones

2010-03-19 Thread Joe Kington
See itertools.permutations (python standard library)

e.g.
In [3]: list(itertools.permutations([1,1,0,0]))
Out[3]:
[(1, 1, 0, 0),
 (1, 1, 0, 0),
 (1, 0, 1, 0),
 (1, 0, 0, 1),
 (1, 0, 1, 0),
 (1, 0, 0, 1),
 (1, 1, 0, 0),
 (1, 1, 0, 0),
 (1, 0, 1, 0),
 (1, 0, 0, 1),
 (1, 0, 1, 0),
 (1, 0, 0, 1),
 (0, 1, 1, 0),
 (0, 1, 0, 1),
 (0, 1, 1, 0),
 (0, 1, 0, 1),
 (0, 0, 1, 1),
 (0, 0, 1, 1),
 (0, 1, 1, 0),
 (0, 1, 0, 1),
 (0, 1, 1, 0),
 (0, 1, 0, 1),



 (0, 0, 1, 1),
 (0, 0, 1, 1)]

Hope that helps,
-Joe

On Fri, Mar 19, 2010 at 9:53 AM, gerardob gberbeg...@gmail.com wrote:


 Hello, i would like to produce lists of lists 1's and 0's.

 For example, to produce the list composed of:

 L = [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]

 I just need to do the following:

 n=4
 numpy.eye(n,dtype=int).tolist()

 I would like to know a simple way to generate a list containing all the
 lists having two 1's at each element.

 Example, n = 4
 L2 = [[1,1,0,0],[1,0,1,0],[1,0,0,1],[0,1,1,0],[0,1,0,1],[0,0,1,1]]

 Any ideas?
 Thanks.







 --
 View this message in context:
 http://old.nabble.com/lists-of-zeros-and-ones-tp27950978p27950978.html
 Sent from the Numpy-discussion mailing list archive at Nabble.com.



Re: [Numpy-discussion] lists of zeros and ones

2010-03-19 Thread Joe Kington
I just realized that permutations isn't quite what you want, as swapping the
first 1 for the second 1 gives the same thing.  You can use set to get
the unique permutations.
e.g.
In [4]: set(itertools.permutations([1,1,0,0]))
Out[4]:
set([(0, 0, 1, 1),
 (0, 1, 0, 1),
 (0, 1, 1, 0),
 (1, 0, 0, 1),
 (1, 0, 1, 0),
 (1, 1, 0, 0)])


On Fri, Mar 19, 2010 at 10:17 AM, Joe Kington jking...@wisc.edu wrote:

 See itertools.permutations (python standard library)

 e.g.
 In [3]: list(itertools.permutations([1,1,0,0]))
 Out[3]:
 [(1, 1, 0, 0),
  (1, 1, 0, 0),
  (1, 0, 1, 0),
  (1, 0, 0, 1),
  (1, 0, 1, 0),
  (1, 0, 0, 1),
  (1, 1, 0, 0),
  (1, 1, 0, 0),
  (1, 0, 1, 0),
  (1, 0, 0, 1),
  (1, 0, 1, 0),
  (1, 0, 0, 1),
  (0, 1, 1, 0),
  (0, 1, 0, 1),
  (0, 1, 1, 0),
  (0, 1, 0, 1),
  (0, 0, 1, 1),
  (0, 0, 1, 1),
  (0, 1, 1, 0),
  (0, 1, 0, 1),
  (0, 1, 1, 0),
  (0, 1, 0, 1),



  (0, 0, 1, 1),
  (0, 0, 1, 1)]

 Hope that helps,
 -Joe

 On Fri, Mar 19, 2010 at 9:53 AM, gerardob gberbeg...@gmail.com wrote:


 Hello, i would like to produce lists of lists 1's and 0's.

 For example, to produce the list composed of:

 L = [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]]

 I just need to do the following:

 n=4
 numpy.eye(n,dtype=int).tolist()

 I would like to know a simple way to generate a list containing all the
 lists having two 1's at each element.

 Example, n = 4
 L2 = [[1,1,0,0],[1,0,1,0],[1,0,0,1],[0,1,1,0],[0,1,0,1],[0,0,1,1]]

 Any ideas?
 Thanks.







 --
 View this message in context:
 http://old.nabble.com/lists-of-zeros-and-ones-tp27950978p27950978.html
 Sent from the Numpy-discussion mailing list archive at Nabble.com.



Re: [Numpy-discussion] a simple examplr showing numpy and matplotlib failing

2009-12-02 Thread Joe Kington
I'm just guessing here, but have you tried completely destroying the figure
each time, as Michael suggested?

That should avoid the problem you're having, I think...

At any rate, if you don't do a fig.clf(), I'm fairly sure matplotlib keeps a
reference to the data around.

Hope that helps,
-Joe

On Wed, Dec 2, 2009 at 11:55 AM, Yeates, Mathew C (388D) 
mathew.c.yea...@jpl.nasa.gov wrote:

  Anybody have any ideas what is going on here. Although I found a
 workaround, I’m concerned about memory leaks


  --

 *From:* numpy-discussion-boun...@scipy.org [mailto:
 numpy-discussion-boun...@scipy.org] *On Behalf Of *Yeates, Mathew C (388D)
 *Sent:* Tuesday, December 01, 2009 6:53 PM

 *To:* Discussion of Numerical Python
 *Subject:* Re: [Numpy-discussion] a simple examplr showing numpy and
 matplotlib failing



 I found  a workaround. If I replace

  plot_data=data[0,0:,0]

 With

  plot_data=numpy.copy(data[0,0:,0])



 Everything is okay.



 I am on Windows XP 64 with 4 Gigs ram. (Note: the data array is greater
 than 4 Gigs since my datatype is float64. If I decrease the size so that the
 array is around 3 Gigs, all is good)





 Mathew








  --

 *From:* numpy-discussion-boun...@scipy.org [mailto:
 numpy-discussion-boun...@scipy.org] *On Behalf Of *Santanu Chatterjee
 *Sent:* Tuesday, December 01, 2009 12:15 PM
 *To:* Discussion of Numerical Python
 *Subject:* Re: [Numpy-discussion] a simple examplr showing numpy and
 matplotlib failing



 Hi Mathew,
 I saw your email and I was curious about it. I tried your code and it
 does work for me without any problem.

 Santanu

 On Tue, Dec 1, 2009 at 2:58 PM, Michael Droettboom md...@stsci.edu
 wrote:

 Hmm... works for me. What platform, with how much physical and virtual RAM?

 One thing you may want to try is to completely destroy the figure each
 time:

 if fig:
 fig.clf()
 fig=None

 Mike


 Yeates, Mathew C (388D) wrote:
 
  Click on “Hello World” twice and get a memory error. Comment out the
  ax.plot call and get no error.
 
  import numpy
  import sys
  import gtk
  from matplotlib.figure import Figure
  from matplotlib.backends.backend_gtkagg import FigureCanvasGTKAgg as FigureCanvas

  ax=None
  fig=None
  canvas=None

  def doplot(widget, box1):
      global ax, fig, canvas
      data=numpy.zeros(shape=(3508,125,129))
      plot_data=data[0,0:,0]
      if canvas:
          box1.remove(canvas)
          canvas=None
      if ax:
          ax.cla()
          ax=None
      if fig: fig=None
      fig = Figure(figsize=(5,5), dpi=100)
      ax = fig.add_subplot(111)
      mif=numpy.arange(plot_data.shape[0])
      #if the next line is commented out, all is good
      ax.plot(plot_data, mif)
      canvas = FigureCanvas(fig)
      box1.pack_start(canvas, True, True, 0)
      canvas.show()

  def delete_event(widget, event, data=None):
      return False

  window = gtk.Window(gtk.WINDOW_TOPLEVEL)
  window.connect("destroy", lambda x: gtk.main_quit())
  box1 = gtk.HBox(False, 0)
  window.add(box1)
  button = gtk.Button("Hello World")
  box1.pack_start(button, True, True, 0)
  #window.add(box1)
  button.show()
  button.connect("clicked", doplot, box1)
  box1.show()
  window.set_default_size(500,400)
  window.show()
  gtk.main()
 

  
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 

 --
 Michael Droettboom
 Science Software Branch
 Operations and Engineering Division
 Space Telescope Science Institute
 Operated by AURA for NASA



Re: [Numpy-discussion] Compound conditional indexing

2009-09-30 Thread Joe Kington
There may be a more elegant way, but:

In [2]: a = np.arange(10)

In [3]: a[(a > 5) & (a < 8)]
Out[3]: array([6, 7])
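(The parentheses and '&' are needed because chained comparisons don't work
elementwise on arrays -- a quick illustration:)

import numpy as np

a = np.arange(10)
a[(a > 5) & (a < 8)]  # array([6, 7])
# a[5 < a < 8]        # ValueError: truth value of an array is ambiguous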


On Wed, Sep 30, 2009 at 1:27 PM, Gökhan Sever gokhanse...@gmail.com wrote:

 Hello,

 How to conditionally index an array as shown below :

 a = arange(10)
 a[5 < a < 8]

 to get
 array([6,7])

 I can't do this with where either.

 What is the cure for this?

 Thanks.

 --
 Gökhan



Re: [Numpy-discussion] making the distinction between -0.0 and 0.0..

2009-09-29 Thread Joe Kington
Well, this is messy and nearly unreadable, but it should work, is pure
python, and should (I think) even be endian-independent.

struct.unpack('b', struct.pack('>d', X)[0])[0] >= 0
(where X is the variable you want to test)

In [54]: struct.unpack('b', struct.pack('>d', 0.0)[0])[0] >= 0
Out[54]: True

In [55]: struct.unpack('b', struct.pack('>d', -0.0)[0])[0] >= 0
Out[55]: False

In [56]: struct.unpack('b', struct.pack('>d', -0.001)[0])[0] >= 0
Out[56]: False

In [57]: struct.unpack('b', struct.pack('>d', 0.001)[0])[0] >= 0
Out[57]: True

In [58]: struct.unpack('b', struct.pack('>d', 3999564.8763)[0])[0] >= 0
Out[58]: True

In [59]: struct.unpack('b', struct.pack('>d', -3999564.8763)[0])[0] >= 0
Out[59]: False

Hope that helps, anyway
-Joe

On Tue, Sep 29, 2009 at 12:04 PM, Christopher Barker
chris.bar...@noaa.gov wrote:

 Pauli Virtanen wrote:
  Tue, 29 Sep 2009 09:53:40 -0700, Christopher Barker wrote:
  [clip]
  How can I identify -0.0?
 
  signbit
 

 perfect for numpy, but at this point I don't have a numpy dependency
 (very unusual for my code!). Anyone know a pure-python way to get it?

 It seems I should be able to do something like:

 struct.pack(d,-3.4)[0]  Something

 but I'm not sure what Something is, and it would be endian-dependent,
 wouldn't it?

 thanks,
 -Chris


 --
 Christopher Barker, Ph.D.
 Oceanographer

 Emergency Response Division
 NOAA/NOS/ORR(206) 526-6959   voice
 7600 Sand Point Way NE   (206) 526-6329   fax
 Seattle, WA  98115   (206) 526-6317   main reception

 chris.bar...@noaa.gov


Re: [Numpy-discussion] making the distinction between -0.0 and 0.0..

2009-09-29 Thread Joe Kington
I just realized that what I'm doing won't work on older versions of python,
anyway...

Things work fine on 2.6

Python 2.6.2 (r262:71600, Sep  3 2009, 09:36:43)
[GCC 3.4.6 20060404 (Red Hat 3.4.6-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import struct
>>> struct.pack('f', -0.0)
'\x00\x00\x00\x80'  <-- Has sign bit
>>> struct.pack('>f', -0.0)
'\x80\x00\x00\x00'  <-- Has sign bit
>>> struct.pack('<f', -0.0)
'\x00\x00\x00\x80'  <-- Has sign bit
>>> struct.pack('=f', -0.0)
'\x00\x00\x00\x80'


But on python 2.3 it only works for the native endian case

Python 2.3.4 (#1, Jan  9 2007, 16:40:09)
[GCC 3.4.6 20060404 (Red Hat 3.4.6-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import struct
>>> struct.pack('f', -0.0)
'\x00\x00\x00\x80'  <-- Correct, has sign bit and unpacks to -0.0
>>> struct.pack('>f', -0.0)
'\x00\x00\x00\x00'  <-- (big-endian) Lost the sign bit! Unpacks to 0.0
>>> struct.pack('<f', -0.0)
'\x00\x00\x00\x00'  <-- (little-endian) Lost the sign bit!
>>> struct.pack('=f', -0.0)
'\x00\x00\x00\x00'  <-- (whichever is native) Lost the sign bit!


So I guess prior to 2.6 (which, as xavier pointed out, has copysign),
trying to use struct won't be endian-independent.

Of course, you could always do x.__repr__().startswith('-'), but it's slow,
ugly, and you already said you wanted to avoid  it.  On the other hand, it
works everywhere with any version of python.

At any rate, I doubt I'm helping that much,
-Joe
On Tue, Sep 29, 2009 at 12:19 PM, Joe Kington jking...@wisc.edu wrote:

 Well, this is messy, and nearly unreadable, but it should work and is pure
 python (and I think it should even be endian-independent).

 struct.unpack('b',struct.pack('>d', X)[0])[0] >= 0
 (where X is the variable you want to test)

 In [54]: struct.unpack('b',struct.pack('>d',0.0)[0])[0] >= 0
 Out[54]: True

 In [55]: struct.unpack('b',struct.pack('>d',-0.0)[0])[0] >= 0
 Out[55]: False

 In [56]: struct.unpack('b',struct.pack('>d',-0.001)[0])[0] >= 0
 Out[56]: False

 In [57]: struct.unpack('b',struct.pack('>d',0.001)[0])[0] >= 0
 Out[57]: True

 In [58]: struct.unpack('b',struct.pack('>d',3999564.8763)[0])[0] >= 0
 Out[58]: True

 In [59]: struct.unpack('b',struct.pack('>d',-3999564.8763)[0])[0] >= 0
 Out[59]: False

 Hope that helps, anyway
 -Joe


 On Tue, Sep 29, 2009 at 12:04 PM, Christopher Barker 
 chris.bar...@noaa.gov wrote:

 Pauli Virtanen wrote:
  Tue, 29 Sep 2009 09:53:40 -0700, Christopher Barker wrote:
  [clip]
  How can I identify -0.0?
 
  signbit
 

 perfect for numpy, but at this point I don't have a numpy dependency
 (very unusual for my code!). Anyone know a pure-python way to get it?

 It seems I should be able to do something like:

  struct.pack("d",-3.4)[0] & Something

 but I'm not sure what Something is, and it would be endian-dependent,
 wouldn't it?

 thanks,
 -Chris


 --
 Christopher Barker, Ph.D.
 Oceanographer

 Emergency Response Division
 NOAA/NOS/ORR(206) 526-6959   voice
 7600 Sand Point Way NE   (206) 526-6329   fax
 Seattle, WA  98115   (206) 526-6317   main reception

 chris.bar...@noaa.gov


Re: [Numpy-discussion] making the distinction between -0.0 and 0.0..

2009-09-29 Thread Joe Kington
I know it's a bit pointless profiling these, but just so I can avoid doing
real work for a bit...

In [1]: import sys, struct, math

In [2]: def comp_struct(x):
   ...:     # Get the first or last byte, depending on endianness
   ...:     # (using '>f' or '<f' loses the signbit for -0.0 in older python's)
   ...:     i = {'little':-1, 'big':0}[sys.byteorder]
   ...:     return struct.unpack('b', struct.pack('f', x)[i])[0] < 0
   ...:

In [3]: comp_struct(0.00); comp_struct(-0.00)
Out[3]: False
Out[3]: True

In [4]: def comp_string(x):
   ...: return x.__repr__().startswith('-')
   ...:

In [5]: comp_string(0.00); comp_string(-0.00)
Out[5]: False
Out[5]: True

In [6]: def comp_atan(x):
   :     if x > 0: return False
   :     elif x < 0: return True
   :     elif math.atan2(x, -1.0) > 0:
   :         return False
   :     else:
   :         return True
   :

In [7]: comp_atan(0.00); comp_atan(-0.00)
Out[7]: False
Out[7]: True

In [8]: %timeit comp_struct(-0.0)
100 loops, best of 3: 971 ns per loop

In [9]: %timeit comp_string(-0.0)
100 loops, best of 3: 1.43 us per loop

In [10]: %timeit comp_atan(-0.0)
100 loops, best of 3: 502 ns per loop

And just to compare to what was just posted:
In [45]: %timeit signbit(-0.0)
100 loops, best of 3: 995 ns per loop

So, if speed matters, apparently checking things through atan2 is the
fastest.  (wouldn't have guessed that!)

On the other hand I'm being a bit ridiculous profiling these... Use
whichever is the most readable, I guess.
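
For completeness, since copysign came up earlier in the thread: on Python
2.6+ there's a one-liner that sidesteps struct entirely. A rough sketch
(the name comp_copysign is just for symmetry with the functions above,
and I haven't timed it here):

    import math

    def comp_copysign(x):
        # copysign transfers the sign bit of x onto 1.0, so this is
        # True for -0.0 and any negative number
        return math.copysign(1.0, x) < 0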

This is more fun than real work, though!
-Joe

On Tue, Sep 29, 2009 at 4:24 PM, Christopher Barker
chris.bar...@noaa.govwrote:
 chris.bar...@noaa.gov wrote:

 Christian Heimes wrote:
  How about using atan2()? :)

 unless atan2 shortcuts for the easy ones, that doesn't strike me as
 efficient (though with python function call overhead, maybe!).

 Anyway, of course, some googling that I should have done in the first
 place, revealed double.py, from Martin Jansche:

 http://symptotic.com/mj/code.html
 (MIT license).


 double.py provides a full(?) set of IEEE functions for doubles in
 Python. His solution to the problem at hand is:

 def signbit(value):
     """
     Test whether the sign bit of the given floating-point value is
     set.  If it is set, this generally means the given value is
     negative.  However, this is not the same as comparing the value
     to C{0.0}.  For example:

     >>> NEGATIVE_ZERO < 0.0
     False

     since negative zero is numerically equal to positive zero.  But
     the sign bit of negative zero is indeed set:

     >>> signbit(NEGATIVE_ZERO)
     True
     >>> signbit(0.0)
     False

     @type  value: float
     @param value: a Python (double-precision) float value

     @rtype:  bool
     @return: C{True} if the sign bit of C{value} is set;
              C{False} if it is not set.
     """
     return (doubleToRawLongBits(value) >> 63) == 1

 where:

 def doubleToRawLongBits(value):
     """
     @type  value: float
     @param value: a Python (double-precision) float value

     @rtype: long
     @return: the IEEE 754 bit representation (64 bits as a long integer)
              of the given double-precision floating-point value.
     """
     # pack double into 64 bits, then unpack as long int
     return _struct.unpack('Q', _struct.pack('d', value))[0]


 Which is pretty much what I was looking for, though I can't say I've
 profiled the various options at hand!

 -Chris



 --
 Christopher Barker, Ph.D.
 Oceanographer

 Emergency Response Division
 NOAA/NOS/ORR(206) 526-6959   voice
 7600 Sand Point Way NE   (206) 526-6329   fax
 Seattle, WA  98115   (206) 526-6317   main reception

 chris.bar...@noaa.gov


Re: [Numpy-discussion] making the distinction between -0.0 and 0.0..

2009-09-29 Thread Joe Kington
Using '>d' rather than '>f' doesn't fix the problem...

Python 2.3.4 (#1, Jan  9 2007, 16:40:09)
[GCC 3.4.6 20060404 (Red Hat 3.4.6-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import struct
>>> struct.pack('d', -0.0)
'\x00\x00\x00\x00\x00\x00\x00\x80'   <-- Correct signbit
>>> struct.pack('>d', -0.0)
'\x00\x00\x00\x00\x00\x00\x00\x00'   <-- No signbit. Unpacks to 0.0
>>> struct.pack('<d', -0.0)
'\x00\x00\x00\x00\x00\x00\x00\x00'   <-- No signbit.


In python 2.6 it works fine. -0.0 packs to
'\x00\x00\x00\x00\x00\x00\x00\x80' or '\x80\x00...' as it should.

I'm assuming it's a bug that was fixed somewhere in between?


On Tue, Sep 29, 2009 at 4:39 PM, Robert Kern robert.k...@gmail.com wrote:

 On Tue, Sep 29, 2009 at 16:37, Joe Kington jking...@wisc.edu wrote:
  I know it's a bit pointless profiling these, but just so I can avoid
 doing
  real work for a bit...
 
  In [1]: import sys, struct, math
 
  In [2]: def comp_struct(x):
 ...: # Get the first or last byte, depending on endianness
  ...: # (using '>f' or '<f' loses the signbit for -0.0 in older python's)

 Did you try '>d'? I wonder if the extra conversion step from a C double
 to a C float is causing this issue.

 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] interpolation in numpy

2009-07-09 Thread Joe Kington
scipy.ndimage.zoom is exactly what you're looking for, as Zach Pincus
already said.

As far as I know, numpy doesn't have any 3D interpolation routines, so
you'll have to install scipy. Interp2d will only interpolate slices of your
data, not the whole volume.
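
Something like this should do it once scipy is installed (a rough sketch --
the zoom factor of 10/3 just takes the 3x3x3 example up to 10x10x10, and
order=3 gives cubic spline interpolation):

    import numpy as np
    from scipy import ndimage

    a = np.random.rand(3, 3, 3)
    # output shape is round(input_shape * zoom) along each axis
    b = ndimage.zoom(a, zoom=10.0 / 3.0, order=3)
    # b.shape is now (10, 10, 10)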

-Joe

On Thu, Jul 9, 2009 at 8:42 AM, Thomas Hrabe thr...@googlemail.com wrote:

 Hi all,

 I am not a newbie to python and numpy, but somehow I cannot seem to
 find a proper solution for my interpolation problem without coding it
 explicitly myself.

 All I want to do is to increase the resolution of a three-dimensional
 array.

 I have a volume 'a'

 a = numpy.random.rand(3,3,3)

 now, I would like to expand a to a volume b of size say 10x10x10

 I want b to have interpolated values from a. One could think of this
 procedure as zooming into the volume, similar to zooming in on an image.

 numpy.interp does such a thing for 1d, but is there such function for 3d,
 too?

 Thank you in advance for your help,

 Thomas

 FYI: I do not have scipy installed


[Numpy-discussion] Indexing with integer ndarrays

2009-06-13 Thread Joe Kington
Hi folks,

This is probably a very simple question, but it has me stumped...

I have an integer 2D array containing 3rd-dimension indices that I'd like to
use to index values in a 3D array.

Basically, I want the equivalent of:

 output = np.zeros((ny,nx))

 for i in xrange(ny):
 for j in xrange(nx):
 z = grid[i,j]
 output[j,i] = bigVolume[j,i,z]

Where grid is my 2D array of indices and bigVolume is my 3D array.

I've read the numpy-user and numpybook sections on indexing with an integer
ndarray, but I'm still not quite able to wrap my head around how it should
work.  I'm sure I'm missing something obvious (I haven't been using numpy
particularly long).

If it helps anyone visualize it, I'm essentially trying to extract
attributes out of a seismic volume along the surface of a horizon.

Thanks!
-Joe


Re: [Numpy-discussion] Indexing with integer ndarrays

2009-06-13 Thread Joe Kington
Thank you! That answered things quite nicely.  My apologies for not finding
the earlier discussion before sending out the question...
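
For the archive, here's roughly what the fancy-indexing version ends up
looking like (a sketch with made-up shapes, indexing the volume as
bigVolume[i, j, z]):

    import numpy as np

    ny, nx, nz = 300, 200, 50
    bigVolume = np.random.rand(ny, nx, nz)     # the seismic volume
    grid = np.random.randint(0, nz, (ny, nx))  # horizon z-index at each trace

    # Broadcastable row/column indices; this picks
    # bigVolume[i, j, grid[i, j]] for every (i, j)
    i, j = np.ogrid[0:ny, 0:nx]
    output = bigVolume[i, j, grid]             # shape (ny, nx)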

Thanks again,
-Joe

On Sat, Jun 13, 2009 at 7:17 PM, Robert Kern robert.k...@gmail.com wrote:

 On Sat, Jun 13, 2009 at 19:11, Joe Kington jking...@wisc.edu wrote:
  Hi folks,
 
  This is probably a very simple question, but it has me stumped...
 
  I have an integer 2D array containing 3rd-dimension indices that I'd like
 to
  use to index values in a 3D array.
 
  Basically, I want the equivalent of:
 
  output = np.zeros((ny,nx))
 
  for i in xrange(ny):
  for j in xrange(nx):
  z = grid[i,j]
  output[j,i] = bigVolume[j,i,z]
 
  Where grid is my 2D array of indices and bigVolume is my 3D array.
 
  I've read the numpy-user and numpybook sections on indexing with an
 integer
  ndarray, but I'm still not quite able to wrap my head around how it
 should
  work.  I'm sure I'm missing something obvious (I haven't been using numpy
  particularly long).
 
  If it helps anyone visualize it, I'm essentially trying to extract
  attributes out of a seismic volume along the surface of a horizon.

 I discuss this particular use case (well, a little different; we are
 pulling out a thin slab around a horizon rather than a slice) here:

 http://mail.scipy.org/pipermail/numpy-discussion/2008-July/035776.html

 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
  -- Umberto Eco