### [Numpy-discussion] Problem with numpy.linalg.eig?

Hi all,

The following code calling numpy v1.0.4 fails to terminate on my machine, which was not the case with v1.0.3.1:

from numpy import arange, float64
from numpy.linalg import eig
a = arange(13*13, dtype=float64)
a.shape = (13, 13)
a = a % 17
eig(a)

Regards, Peter

### Re: [Numpy-discussion] Problem with numpy.linalg.eig?

Yes, with the MSI I can always reproduce the problem with numpy.test(). It always hangs. With the egg it does not hang. Pointer problems are usually random, but not random if we are using the same binaries in the EGG and MSI and variables are always initialized to a certain value. I can

### [Numpy-discussion] Correlate with small arrays

Hi, I'm trying to do a PDE-style calculation with numpy arrays:

y = a * x[:-2] + b * x[1:-1] + c * x[2:]

with a, b, c constants. I realise I could use correlate for this, i.e.

y = numpy.correlate(x, array((a, b, c)))

however the performance doesn't seem as good (I suspect correlate is optimised
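For comparison, a minimal sketch of the two approaches (the coefficients here are made-up values; note that np.correlate does not reverse the kernel, so the stencil maps onto it directly):

```python
import numpy as np

a, b, c = 0.5, -1.0, 0.5  # example stencil coefficients (placeholders)
x = np.arange(10, dtype=float)

# Direct slicing: one pass over three shifted views of x
y_slice = a * x[:-2] + b * x[1:-1] + c * x[2:]

# Equivalent via correlate; mode='valid' keeps only full overlaps,
# so the output has the same length as the sliced version
y_corr = np.correlate(x, np.array([a, b, c]), mode='valid')

print(np.allclose(y_slice, y_corr))  # True
```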

### Re: [Numpy-discussion] Correlate with small arrays

> I'm trying to do a PDE-style calculation with numpy arrays
>
> y = a * x[:-2] + b * x[1:-1] + c * x[2:]
>
> with a, b, c constants. I realise I could use correlate for this, i.e.
>
> y = numpy.correlate(x, array((a, b, c)))

The relative performance seems to vary depending on the size,

### Re: [Numpy-discussion] Simple financial functions for NumPy

Right now it looks like there is a mix of attitudes about the financial functions. They are a small enough addition that I don't think it matters terribly much what we do with them. So, it seems to me that keeping them in numpy.lib and following the rule for that namespace

### Re: [Numpy-discussion] Win32 installer: please test it

Yes, I am one of those users with crashes in numpy 1.4. Your build seems to pass for me. For reference my machine is Windows XP, Intel Xeon 5140 - Numpy is installed in C:\Python25\lib\site-packages\numpy Numpy version 1.0.5.dev5008 Python version 2.5.2 (r252:60911, Feb 21 2008,

### Re: [Numpy-discussion] Fast histogram

On 17/04/2008, Zachary Pincus wrote: But even if indices = array, one still needs to do something like:

for index in indices:
    histogram[index] += 1

Which is slow in python and fast in C. I haven't tried this, but if you want the sum in C you could do

for x in unique(indices):
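For the record, numpy already performs exactly this accumulation in C via np.bincount; a minimal sketch:

```python
import numpy as np

indices = np.array([0, 2, 2, 3, 2, 0])

# Explicit python loop (slow for large arrays)
histogram = np.zeros(4, dtype=int)
for index in indices:
    histogram[index] += 1

# np.bincount does the same counting in C;
# minlength guarantees the output length even if high bins are empty
fast = np.bincount(indices, minlength=4)
print(np.array_equal(histogram, fast))  # True
```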

### [Numpy-discussion] Generalised inner product

Hi, Does numpy have some sort of generalised inner product? For example I have arrays

a.shape = (5, 6, 7)
b.shape = (8, 7, 9, 10)

and I want to perform a product over the 3rd axis of a and the 2nd of b, i.e.

c[i,j,k,l,m] = sum (over x) of a[i,j,x] * b[k,x,l,m]

I guess I could do it with swapaxes

### Re: [Numpy-discussion] Generalised inner product

> c[i,j,k,l,m] = sum (over x) of a[i,j,x] * b[k,x,l,m]

Try tensordot. Chuck

That was exactly what I needed. Thanks! Peter

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
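For reference, a sketch of the tensordot call for the shapes in the question:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((5, 6, 7))
b = rng.random((8, 7, 9, 10))

# Contract axis 2 of a against axis 1 of b:
# c[i,j,k,l,m] = sum over x of a[i,j,x] * b[k,x,l,m]
c = np.tensordot(a, b, axes=([2], [1]))
print(c.shape)  # (5, 6, 8, 9, 10)
```

tensordot places the uncontracted axes of a first, then those of b, which is exactly the index order c[i,j,k,l,m] asked for.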

### Re: [Numpy-discussion] Multiple Boolean Operations

Hi Andrea,

2008/5/23 Andrea Gavana [EMAIL PROTECTED]:
> And so on. The problem with this approach is that I lose the original
> indices for which I want all the inequality tests to succeed:

To have the original indices you just need to re-index your indices, as it were

idx = flatnonzero(xCent =
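A sketch of the re-indexing idea (xCent, yCent and the bounds are made-up names for illustration; the original snippet is truncated):

```python
import numpy as np

rng = np.random.default_rng(1)
xCent = rng.random(100)
yCent = rng.random(100)

# Combine all the inequality tests into one boolean mask,
# then recover the original indices that satisfy every test
mask = (xCent >= 0.2) & (xCent <= 0.8) & (yCent >= 0.5)
idx = np.flatnonzero(mask)

# idx holds positions into the original (unfiltered) arrays
print(np.all(mask[idx]))  # True
```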

### [Numpy-discussion] Installation info

Is numpy v1.1 going to come out in egg format? I ask because I only see the superpack installers on the sourceforge page, and we have users who we are delivering to via egg - requires. thanks, Peter 2008/5/23 [EMAIL PROTECTED]: Send Numpy-discussion mailing list submissions to

### Re: [Numpy-discussion] Installation info

2008/5/30 Peter Creasey [EMAIL PROTECTED]:
> Is numpy v1.1 going to come out in egg format?

Oops, I didn't mean to mail with an entire numpy digest in the body. sorry, Peter

### Re: [Numpy-discussion] Where is Jaime?

> Is the interp fix in the google pipeline or do we need a workaround?

Oooh, if someone is looking at changing interp, is there any chance that fp could be extended to take complex128 rather than just float values? I.e. so that I could write:

>>> y = interp(mu,
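Until interp accepts complex fp directly, a common workaround is to interpolate the real and imaginary parts separately (a sketch; the sample data here is invented):

```python
import numpy as np

xp = np.linspace(0.0, 1.0, 5)
fp = np.exp(2j * np.pi * xp)      # complex sample values
mu = np.array([0.1, 0.35, 0.7])   # points to interpolate at

# np.interp historically only accepted float fp, so split the parts
# and recombine; linear interpolation commutes with taking real/imag
y = np.interp(mu, xp, fp.real) + 1j * np.interp(mu, xp, fp.imag)
print(y.dtype)  # complex128
```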

### Re: [Numpy-discussion] FeatureRequest: support for array

> from itertools import chain
>
> def fromiter_awesome_edition(iterable):
>     elem = next(iterable)
>     dtype = whatever_numpy_does_to_infer_dtypes_from_lists(elem)
>     return np.fromiter(chain([elem], iterable), dtype=dtype)

I think this would be a huge win for
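A runnable version of that sketch; `whatever_numpy_does_to_infer_dtypes_from_lists` is a placeholder in the original, so np.asarray is used here to stand in for the inference step:

```python
import numpy as np
from itertools import chain

def fromiter_awesome_edition(iterable):
    # Peek at the first element to infer a dtype,
    # then chain it back onto the front of the stream
    iterator = iter(iterable)
    elem = next(iterator)
    dtype = np.asarray(elem).dtype
    return np.fromiter(chain([elem], iterator), dtype=dtype)

result = fromiter_awesome_edition(x * x for x in range(5))
print(result)  # 0, 1, 4, 9, 16
```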

### Re: [Numpy-discussion] How to find indices of values in an array (indirect in1d) ?

> In the end, I've only got the list comprehension to work as expected
>
> A = [0,0,1,3]
> B = np.arange(8)
> np.random.shuffle(B)
> I = [list(B).index(item) for item in A if item in B]
>
> But Mark's and Sebastian's methods do not seem to work...

The function you want is also in the open
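A vectorised alternative to the list comprehension, using argsort plus searchsorted (a sketch; it assumes every value of A occurs in B and B has no duplicates, as in the example):

```python
import numpy as np

A = np.array([0, 0, 1, 3])
B = np.arange(8)
np.random.shuffle(B)

# Positions in B of each value of A, without a python loop:
# searchsorted finds each value in the sorted view of B,
# and order maps those positions back to B's original indexing
order = np.argsort(B)
I = order[np.searchsorted(B, A, sorter=order)]

print(np.array_equal(B[I], A))  # True
```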

### [Numpy-discussion] PR for complex np.interp, question about assert_almost_equal

Hi all, I submitted a PR (#6872) for using complex numbers in np.lib.interp. The tests pass on my machine, but I see that the TravisCI builds are giving assertion failures (on my own test) with python 3.3 and 3.5 of the form:

> assert_almost_equal
> TypeError: Cannot cast array data from

### Re: [Numpy-discussion] PR for complex np.interp, question about assert_almost_equal

>>> The tests pass on my machine, but I see that the TravisCI builds are
>>> giving assertion failures (on my own test) with python 3.3 and 3.5 of the form:
>>>
>>> > assert_almost_equal
>>> > TypeError: Cannot cast array data from dtype('complex128') to
>>> > dtype('float64') according to the rule

### Re: [Numpy-discussion] PR for complex np.interp, question about assert_almost_equal

>> > assert_almost_equal
>> > TypeError: Cannot cast array data from dtype('complex128') to
>> > dtype('float64') according to the rule 'safe'

Hi Peter, that error is unrelated to assert_almost_equal. What happens is that when you pass in a complex argument `fp` to your modified
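The casting rule behind that message can be inspected directly with np.can_cast (a sketch):

```python
import numpy as np

# Complex cannot be cast to float under the 'safe' rule,
# because the imaginary part would be silently discarded
print(np.can_cast(np.complex128, np.float64, casting='safe'))    # False

# The 'unsafe' rule allows it (dropping the imaginary part)
print(np.can_cast(np.complex128, np.float64, casting='unsafe'))  # True

# The other direction is value-preserving, hence safe
print(np.can_cast(np.float64, np.complex128, casting='safe'))    # True
```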

### Re: [Numpy-discussion] Misleading/erroneous TypeError message

> I just upgraded my numpy and started to receive a TypeError from one of
> my codes that relied on the old, less strict, casting behaviour. The error
> message, however, left me scratching my head when trying to debug something
> like this:
>
> >>> a = array([0], dtype=uint64)
> >>>

### [Numpy-discussion] Misleading/erroneous TypeError message

Hi, I just upgraded my numpy and started to receive a TypeError from one of my codes that relied on the old, less strict, casting behaviour. The error message, however, left me scratching my head when trying to debug something like this:

>>> a = array([0], dtype=uint64)
>>> a +=

### Re: [Numpy-discussion] Integers to integer powers, let's make a decision

> +1
>
> On Sat, Jun 4, 2016 at 10:22 AM, Charles R Harris wrote:
>> Hi All,
>>
>> I've made a new post so that we can make an explicit decision. AFAICT, the
>> two proposals are
>>
>> Integers to negative integer powers raise an error.
>> Integers to integer powers
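For context, a sketch of the behaviour the proposals settle on (as implemented in released numpy, 1.12 and later: integer-to-integer powers stay integer, and negative integer exponents raise):

```python
import numpy as np

# Integer base, non-negative integer exponent: result stays integer
print(np.int64(2) ** np.int64(3))  # 8

# Negative integer exponents raise rather than silently truncate to 0
try:
    np.int64(2) ** np.int64(-2)
    raised = False
except ValueError:
    raised = True
print(raised)  # True

# Floats are unaffected
print(np.float64(2.0) ** -2)  # 0.25
```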

### Re: [Numpy-discussion] Behavior of np.random.uniform

>> I would also point out that requiring open vs closed intervals (in
>> doubles) is already an extremely specialised use case. In terms of
>> *sampling the reals*, there is no difference between the intervals
>> (a,b) and [a,b], because the endpoints have measure 0, and even with

### Re: [Numpy-discussion] Behavior of np.random.uniform

+1 for the deprecation warning for low>high, I think the cases where that is called are more likely to be unintentional rather than someone trying to use uniform(closed_end, open_end) and you might help users find bugs - i.e. the idioms of ‘explicit is better than implicit’ and ‘fail early and

### Re: [Numpy-discussion] proposal: new logspace without the log in the argument

> Some questions it'd be good to get feedback on:
>
> - any better ideas for naming it than "geomspace"? It's really too bad
>   that the 'logspace' name is already taken.
>
> - I guess the alternative interface might be something like
>
>   np.linspace(start, stop, steps, spacing="log")
>
> what do
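The function eventually landed as np.geomspace; a sketch of it next to logspace:

```python
import numpy as np

# geomspace takes the endpoints directly...
g = np.geomspace(1.0, 1000.0, 4)

# ...whereas logspace takes their base-10 logarithms
l = np.logspace(0.0, 3.0, 4)

print(np.allclose(g, [1.0, 10.0, 100.0, 1000.0]))  # True
print(np.allclose(g, l))                           # True
```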

### Re: [Numpy-discussion] Reflect array?

> On Tue, Mar 29, 2016 at 1:46 PM, Benjamin Root wrote:
> > Is there a quick-n-easy way to reflect a NxM array that represents a
> > quadrant into a 2Nx2M array? Essentially, I am trying to reduce the size
> > of an expensive calculation by taking
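One sketch of reflecting an NxM quadrant into the full 2Nx2M array with flips and np.block (the orientation is an assumption; adjust the flips to put the quadrant in whichever corner the calculation produced):

```python
import numpy as np

q = np.arange(6).reshape(2, 3)  # the NxM quadrant (N=2, M=3)

# Mirror across both axes so q sits in the bottom-right corner
full = np.block([[q[::-1, ::-1], q[::-1, :]],
                 [q[:, ::-1],    q]])

print(full.shape)  # (4, 6)
```

The result is symmetric under flipping either axis, which is the defining property of the reflected array.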

### Re: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python

> Date: Wed, 31 Aug 2016 13:28:21 +0200
> From: Michael Bieri
>
> I'm not quite sure which approach is state-of-the-art as of 2016. How would
> you do it if you had to make a C/C++ library available in Python right now?
>
> In my case, I have a C library with some scientific
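As one of the lighter-weight options discussed in such threads, a ctypes sketch wrapping a C function, here cos from the system math library (the library name and path are platform-dependent assumptions; the fallback soname is glibc-specific):

```python
import ctypes
import ctypes.util

# Locate the C math library; fall back to the common glibc soname
path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(path)

# Declare the C signature: double cos(double)
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0
```

For larger APIs, cffi or a compiled extension (e.g. Cython or pybind11) scales better than declaring every signature by hand.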

### Re: [Numpy-discussion] padding options for diff

> Date: Wed, 26 Oct 2016 09:05:41 -0400
> From: Matthew Harrigan
>
> np.cumsum(np.diff(x, to_begin=x.take([0], axis=axis), axis=axis), axis=axis)
>
> That's certainly not going to win any beauty contests. The 1d case is clean though:
>
> np.cumsum(np.diff(x,
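The same 1d round-trip already works with the released ediff1d API, which has the to_begin keyword the thread proposes adding to diff:

```python
import numpy as np

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])

# Prepend the first element so cumsum(diff) reconstructs x exactly
d = np.ediff1d(x, to_begin=x[0])
print(np.allclose(np.cumsum(d), x))  # True
```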

### Re: [Numpy-discussion] padding options for diff

> Date: Wed, 26 Oct 2016 16:18:05 -0400
> From: Matthew Harrigan
>
> Would it be preferable to have to_begin='first' as an option under the
> existing kwarg to avoid overlapping?

>> if keep_left:
>>     if to_begin is None:
>>         to_begin = np.take(a, [0],

### Re: [Numpy-discussion] padding options for diff

> Date: Mon, 24 Oct 2016 08:44:46 -0400
> From: Matthew Harrigan
>
> I posted a pull request which adds optional padding kwargs "to_begin" and
> "to_end" to diff. Those options are based on what's available in ediff1d. It

### Re: [Numpy-discussion] Integers to negative integer powers, time for a decision.

> On Sun, Oct 9, 2016 at 12:59 PM, Stephan Hoyer wrote:
>>
>> I agree with Sebastian and Nathaniel. I don't think we can deviate from
>> the existing behavior (int ** int -> int) without breaking lots of existing
>> code, and if we did, yes, we would need a new integer power