[Numpy-discussion] doing zillions of 3x3 determinants fast

2008-08-24 Thread Daniel Lenski
Hi all, I need to take the determinants of a large number of 3x3 matrices in order to determine, for each of N points, which of M tetrahedral cells it lies in. I arrange the matrices in an ndarray of shape (N, M, 5, 3, 3). As far as I can tell, Numpy doesn't have a function to do determinants ...
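One way to vectorize this, as a minimal sketch (the shape (N, M, 5, 3, 3) is from the post; the function name and the cofactor-expansion approach are illustrative, not necessarily what the thread settled on): expand each 3x3 determinant along its first row, so NumPy broadcasts the arithmetic over all leading axes at once instead of looping.

    import numpy as np

    def det3x3(m):
        # Determinants over the last two axes of m, by cofactor
        # expansion along the first row.  Works for any leading
        # shape, e.g. (N, M, 5, 3, 3) -> (N, M, 5).
        return (m[..., 0, 0] * (m[..., 1, 1] * m[..., 2, 2] - m[..., 1, 2] * m[..., 2, 1])
              - m[..., 0, 1] * (m[..., 1, 0] * m[..., 2, 2] - m[..., 1, 2] * m[..., 2, 0])
              + m[..., 0, 2] * (m[..., 1, 0] * m[..., 2, 1] - m[..., 1, 1] * m[..., 2, 0]))

    # e.g. 100,000 determinants in one call:
    d = det3x3(np.random.rand(100000, 3, 3))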

Re: [Numpy-discussion] weights parameter of np.average() doesn't work (BUG?)

2008-08-24 Thread Daniel Lenski
On Sun, 24 Aug 2008 20:57:43 -0600, Charles R Harris wrote: On Sun, Aug 24, 2008 at 8:03 PM, Dan Lenski [EMAIL PROTECTED] wrote: This has been fixed in later versions:

In [2]: a = arange(100).reshape(10, 10)

In [3]: average(a, axis=1, weights=ones(10))
Out[3]: array([ 4.5, 14.5, 24.5, ...
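Uniform weights can't distinguish a working weights parameter from an ignored one, so a quick sanity check with non-uniform weights (my own example, not from the thread):

    import numpy as np

    a = np.arange(100).reshape(10, 10)
    w = np.arange(10, dtype=float)           # non-uniform weights along axis 1
    # each row's weighted mean: sum(row * w) / sum(w)
    print(np.average(a, axis=1, weights=w))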

Re: [Numpy-discussion] doing zillions of 3x3 determinants fast

2008-08-24 Thread Daniel Lenski
On Mon, 25 Aug 2008 03:48:54 +0000, Daniel Lenski wrote: * it's fast enough for 100,000 determinants, but it bogs down due to all the temporary arrays when I try to do 1,000,000 determinants (= a 72 MB array). I've managed to reduce the memory usage significantly by getting the number ...
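The message is cut off above, but a common attack on exactly this problem is to cut the temporaries by reusing preallocated scratch arrays via NumPy's out= arguments and in-place operators. A sketch under that assumption (not necessarily the poster's actual fix): at any moment only the output and two scratch buffers are alive, rather than a fresh array per subexpression.

    import numpy as np

    def det3x3_lowmem(m):
        # same cofactor expansion, with explicit scratch reuse
        out = np.empty(m.shape[:-2], dtype=m.dtype)
        t1 = np.empty_like(out)
        t2 = np.empty_like(out)

        # out = m00 * (m11*m22 - m12*m21)
        np.multiply(m[..., 1, 1], m[..., 2, 2], out=out)
        np.multiply(m[..., 1, 2], m[..., 2, 1], out=t1)
        out -= t1
        out *= m[..., 0, 0]

        # out -= m01 * (m10*m22 - m12*m20)
        np.multiply(m[..., 1, 0], m[..., 2, 2], out=t1)
        np.multiply(m[..., 1, 2], m[..., 2, 0], out=t2)
        t1 -= t2
        t1 *= m[..., 0, 1]
        out -= t1

        # out += m02 * (m10*m21 - m11*m20)
        np.multiply(m[..., 1, 0], m[..., 2, 1], out=t1)
        np.multiply(m[..., 1, 1], m[..., 2, 0], out=t2)
        t1 -= t2
        t1 *= m[..., 0, 2]
        out += t1
        return out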

Re: [Numpy-discussion] reading *big* inhomogenous text matrices *fast*?

2008-08-13 Thread Daniel Lenski
On Wed, 13 Aug 2008 16:57:32 -0400, Zachary Pincus wrote: Your approach generates numerous large temporary arrays and lists. If the files are large, the slowdown could be because all that memory allocation is causing some VM thrashing. I've run into that at times parsing large text files.
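The usual remedy, sketched below: preallocate the whole array once and fill it while streaming the file, so no per-line temporary lists pile up. The file name, row count, and fixed column count are assumptions for illustration; the poster's files are inhomogeneous, which needs extra bookkeeping.

    import numpy as np

    nrows, ncols = 1000000, 4               # assumed known in advance
    data = np.empty((nrows, ncols), dtype=float)

    with open('big_mesh.txt') as f:         # hypothetical file
        for i, line in enumerate(f):
            # assign the split strings directly; NumPy converts to float
            data[i] = line.split()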

Re: [Numpy-discussion] non-linear array manipulation

2008-08-13 Thread Daniel Lenski
On Tue, 12 Aug 2008 10:37:51 -0400, Gong, Shawn (Contractor) wrote: The following array manipulation takes a long time because I can't find a way to do it by rows/columns, and have to do it cell by cell. Would you check to see if there is a nicer/faster way to do this non-linear operation? for i in ...
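The loop itself is cut off above, so as a stand-in, here is the general shape of the fix: replace the per-cell if/else with a boolean mask so the whole operation runs in C. The particular formula (log of the positive entries) is made up for illustration.

    import numpy as np

    a = np.random.randn(1000, 1000)

    # loop version (slow):
    #   for i in range(a.shape[0]):
    #       for j in range(a.shape[1]):
    #           b[i][j] = log(a[i][j]) if a[i][j] > 0 else 0.0

    # vectorized version: index with a boolean mask instead
    mask = a > 0
    b = np.zeros_like(a)
    b[mask] = np.log(a[mask])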

Re: [Numpy-discussion] reading *big* inhomogenous text matrices *fast*?

2008-08-13 Thread Daniel Lenski
On Wed, 13 Aug 2008 20:55:02 -0500, Robert Kern wrote: This is similar to what I tried originally! Unfortunately, repeatedly appending to a list seems to be very slow... I guess Python keeps reallocating and copying the list as it grows. (It would be nice to be able to tune the increments by ...
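As the follow-ups below explain, list appends are in fact amortized O(1); the pattern that really is quadratic is growing a NumPy array itself. A sketch contrasting the two (my own illustration, not code from the thread):

    import numpy as np

    # quadratic: np.append copies the whole array on every call
    a = np.zeros(0)
    for x in range(10000):
        a = np.append(a, x)

    # amortized linear: grow a plain list, convert once at the end
    rows = []
    for x in range(10000):
        rows.append(float(x))
    a = np.array(rows)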

Re: [Numpy-discussion] reading *big* inhomogenous text matrices *fast*?

2008-08-13 Thread Daniel Lenski
On Wed, 13 Aug 2008 21:42:51 -0500, Robert Kern wrote: Here is the appropriate snippet in Objects/listobject.c:

/* This over-allocates proportional to the list size, making room
 * for additional growth.  The over-allocation is mild, but is
 * enough to give ...
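The quoted comment is truncated, but the rule it describes boils down to over-allocating by roughly one-eighth of the new size plus a small constant. A sketch of the CPython 2.x formula from list_resize() (the exact constants have shifted between versions):

    def list_new_allocated(newsize):
        # growth rule from list_resize() in Objects/listobject.c:
        # newsize + newsize/8 + (3 if small else 6)
        return newsize + (newsize >> 3) + (3 if newsize < 9 else 6)

    # capacity jumps as a list grows one append at a time:
    cap = 0
    for n in range(1, 100):
        if n > cap:
            cap = list_new_allocated(n)
            print('len %2d -> capacity %3d' % (n, cap))

Running this prints the familiar capacity sequence 4, 8, 16, 25, 35, 46, 58, 72, 88, ..., which is why most append() calls never touch the allocator at all.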

Re: [Numpy-discussion] reading *big* inhomogenous text matrices *fast*?

2008-08-13 Thread Daniel Lenski
On Wed, 13 Aug 2008 22:11:07 -0400, Zachary Pincus wrote: Try profiling the code just to make sure that it is the list append that's slow, and not something else happening on that line. From what you and others have pointed out, I'm pretty sure I must have been doing something else wrong ...
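Zachary's suggestion in concrete form, as a minimal sketch with the standard-library profiler (the function and file names are placeholders):

    import cProfile
    import pstats

    def parse_file(path):
        ...                      # the parsing code under suspicion

    # save the profile, then show the ten most expensive calls
    cProfile.run('parse_file("big_mesh.txt")', 'parse.prof')
    pstats.Stats('parse.prof').sort_stats('cumulative').print_stats(10)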