### [Numpy-discussion] supporting quad precision

Hi folks, I recently came across an application I needed quad precision for (high-accuracy solution of a differential equation). I found a C++ library (odeint) that worked for the integration itself, but unfortunately it appears numpy is not able to work with quad precision arrays. For my

### Re: [Numpy-discussion] supporting quad precision

, 2013 at 10:07 AM, Anne Archibald archib...@astron.nl wrote: Hi folks, I recently came across an application I needed quad precision for (high-accuracy solution of a differential equation). I found a C++ library (odeint) that worked for the integration itself, but unfortunately

### Re: [Numpy-discussion] Assigning complex value to real array

On 7 October 2010 13:01, Pauli Virtanen p...@iki.fi wrote: to, 2010-10-07 kello 12:08 -0400, Andrew P. Mullhaupt kirjoitti: [clip] No. You can define the arrays as backed by mapped files with real and imaginary parts separated. Then the imaginary part, being initially zero, is a sparse part

### Re: [Numpy-discussion] Assigning complex value to real array

On 7 October 2010 19:46, Andrew P. Mullhaupt d...@zen-pharaohs.com wrote: It wouldn't be the first time I suggested rewriting the select and choose operations. I spent months trying to get Guido to let anything more than slice indexing in arrays. And now, in the technologically advanced

### Re: [Numpy-discussion] Stacking a 2d array onto a 3d array

On 26 October 2010 21:02, Dewald Pieterse dewald.piete...@gmail.com wrote: I see my slicing was the problem, np.vstack((test[:1], test)) works perfectly. Yes and no. np.newaxis (or None for short) is a very useful tool; you just stick it in an index expression and it adds an axis of length one
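The np.newaxis idiom described above can be sketched as follows (a minimal illustration, not the thread's exact code):

```python
import numpy as np

test = np.arange(12).reshape(3, 4)

# np.newaxis (an alias for None) inserts an axis of length one
# wherever it appears in an index expression:
row = test[0, np.newaxis, :]      # shape (1, 4), equivalent to test[:1]
stacked = np.vstack((row, test))  # shape (4, 4)

plane = test[np.newaxis]          # shape (1, 3, 4): ready to stack onto a 3d array
```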

### Re: [Numpy-discussion] Solving Ax = b: inverse vs cholesky factorization

On 8 November 2010 14:38, Joon groups.and.li...@gmail.com wrote: Oh I see. So I guess in invA = solve(Ax, I) and then x = dot(invA, b) case, there are more places where numerical errors occur, than just x = solve(Ax, b) case. That's the heart of the matter, but one can be more specific. You
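The numerical point can be sketched as follows (an illustrative comparison with a random well-conditioned system; for ill-conditioned matrices the direct solve is typically the more accurate route):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)

# One solve: a single factorization plus back-substitution.
x_direct = np.linalg.solve(A, b)

# Via the explicit inverse: solve for all 50 columns of the identity,
# then a matrix-vector product -- two stages where rounding error enters.
x_via_inv = np.dot(np.linalg.inv(A), b)
```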

### Re: [Numpy-discussion] sample without replacement

I know this question came up on the mailing list some time ago (19/09/2008), and the conclusion was that yes, you can do it more or less efficiently in pure python; the trick is to use two different methods. If your sample is more than, say, a quarter the size of the set you're drawing from, you
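The two-method strategy can be sketched like this (an illustrative implementation; the function name and the 1/4 threshold are my choices, not the thread's):

```python
import random

def sample_without_replacement(n, k):
    """Draw k distinct integers from range(n)."""
    if k > n:
        raise ValueError("cannot draw more items than exist")
    if 4 * k > n:
        # Large sample: shuffle-and-slice, O(n), cheap when k is close to n.
        pool = list(range(n))
        random.shuffle(pool)
        return pool[:k]
    # Small sample: rejection -- redraw on collision, expected O(k) draws.
    seen = set()
    while len(seen) < k:
        seen.add(random.randrange(n))
    return list(seen)
```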

### Re: [Numpy-discussion] ragged array implimentation

On 7 March 2011 15:29, Christopher Barker chris.bar...@noaa.gov wrote: On 3/7/11 11:18 AM, Francesc Alted wrote: but, instead of returning a numpy array of 'object' elements, plain python lists are returned instead. which gives you the append option -- I can see how that would be useful.

### Re: [Numpy-discussion] RFC: Detecting array changes (NumPy 2.0?)

On 11 March 2011 15:34, Charles R Harris charlesr.har...@gmail.com wrote: On Fri, Mar 11, 2011 at 1:06 PM, Dag Sverre Seljebotn d.s.seljeb...@astro.uio.no wrote: On Fri, 11 Mar 2011 19:37:42 + (UTC), Pauli Virtanen p...@iki.fi wrote: On Fri, 11 Mar 2011 11:47:58 -0700, Charles R

### Re: [Numpy-discussion] Numeric integration of higher order integrals

When the dimensionality gets high, grid methods like you're describing start to be a problem (the curse of dimensionality). The standard approaches are simple Monte Carlo integration or its refinements (Metropolis-Hastings, for example). These converge somewhat slowly, but are not much affected by

### Re: [Numpy-discussion] Quaternion dtype for NumPy - initial implementation available

What a useful package! Apart from helping all the people who know they need quaternions, this package removes one major family of use cases for vectorized small-matrix operations, namely, 3D rotations. Quaternions are the canonical way to represent orientation and rotation in three dimensions, and

### Re: [Numpy-discussion] Can I index array starting with 1?

Don't forget the everything-looks-like-a-nail approach: make all your arrays one bigger than you need and ignore element zero. Anne On 7/28/11, Stéfan van der Walt ste...@sun.ac.za wrote: Hi Jeremy On Thu, Jul 28, 2011 at 3:19 PM, Jeremy Conlin jlcon...@gmail.com wrote: I have a need to

### Re: [Numpy-discussion] Can I index array starting with 1?

, implementation would be pretty easy - just make a subclass of ndarray that replaces the indexing function. Anne On 28 July 2011 19:26, Derek Homeier de...@astro.physik.uni-goettingen.de wrote: On 29.07.2011, at 1:19AM, Stéfan van der Walt wrote: On Thu, Jul 28, 2011 at 4:10 PM, Anne Archibald

### Re: [Numpy-discussion] [SciPy-User] recommendation for saving data

In astronomy we tend to use FITS, which is well-supported by pyfits, but a little limited. Some new instruments are beginning to use HDF5. All these generic formats allow very general data storage, so you will need to come up with a standardized way to represent your own data. Used well, these

### Re: [Numpy-discussion] Efficient way to load a 1Gb file?

There was also some work on a semi-mutable array type that allowed appending along one axis, then 'freezing' to yield a normal numpy array (unfortunately I'm not sure how to find it in the mailing list archives). One could write such a setup by hand, using mmap() or realloc(), but I'd be inclined

### Re: [Numpy-discussion] Managing Rolling Data

On 21/02/07, Alexander Michael [EMAIL PROTECTED] wrote: On 2/21/07, Mike Ressler [EMAIL PROTECTED] wrote: Would loading your data via memmap, then slicing it, do your job (using numpy.memmap)? ... Interesting idea. I think Anne's suggestion that sliced assignment will reduce to an

### Re: [Numpy-discussion] what goes wrong with cos(), sin()

On 21/02/07, Robert Kern [EMAIL PROTECTED] wrote: Well, you can always use long double if it is implemented on your platform. You will have to construct a value for π yourself, though. I'm afraid that we don't really make that easy. If the trig functions are implemented at all, you can

### Re: [Numpy-discussion] Managing Rolling Data

On 23/02/07, Alexander Michael [EMAIL PROTECTED] wrote: I still find the ring buffer solution appealing, but I did not see a way to stack two arrays together without creating copies. Am I missing a bit of numpy cleverness? The short answer is no; the stride in memory from one element to the

### Re: [Numpy-discussion] Draft PEP for the new buffer interface to be in Python 3000

On 27/02/07, Travis Oliphant [EMAIL PROTECTED] wrote: The problem is that we aren't really specifying floating-point standards, we are specifying float, double and long double as whatever the compiler understands. There are some platforms which don't follow the IEEE 754 standard. This

### Re: [Numpy-discussion] In-place fancy selection

On 01/03/07, Francesc Altet [EMAIL PROTECTED] wrote: Hi, I don't think there is a solution for this, but perhaps anybody may offer some idea. Given: In [79]:a=numpy.arange(9,-1,-1) In [80]:b=numpy.arange(10) In [81]:numpy.random.shuffle(b) In [82]:b Out[82]:array([2, 6, 3, 5, 4, 9, 0, 8, 7,

### Re: [Numpy-discussion] in place random generation

On 07/03/07, Daniel Mahler [EMAIL PROTECTED] wrote: My problem is not space, but time. I am creating a small array over and over, and this is turning out to be a bottleneck. My experiments suggest that problem is the allocation, not the random number generation. Allocating all the arrays as

### Re: [Numpy-discussion] in place random generation

On 09/03/07, Robert Kern [EMAIL PROTECTED] wrote: Mark P. Miller wrote: As an aside, are the random number generators from scipy.random the same as those for numpy.random? If not, will those of us who need to just use a few random numbers here and there throughout our code (we don't need

### Re: [Numpy-discussion] zoom FFT with numpy?

On 15/03/07, Ray Schumacher [EMAIL PROTECTED] wrote: The desired band is rather narrow, as the goal is to determine the f of a peak that always occurs in a narrow band of about 1kHz around 7kHz 2) frequency shift, {low pass}, and downsample By this I would take it to mean, multiply by a

### Re: [Numpy-discussion] Array of Callables

On 21/03/07, Andrew Corrigan [EMAIL PROTECTED] wrote: This is a feature I've been wanting for a long time, so I'm really glad that Shane brought this up. While I was hoping for a gain in speed, that isn't the only reason that I would like to see this added. In fact, the most compelling

### Re: [Numpy-discussion] Array of Callables

On 21/03/07, Andrew Corrigan [EMAIL PROTECTED] wrote: Thanks for pointing that out. Technically that works, but it doesn't really express this operation as concisely and as naturally as I'd like to be able to. What I really want is to be able to write: a = array([lambda x: x**2, lambda

### Re: [Numpy-discussion] nd_image.affine_transform edge effects

On 22/03/07, James Turner [EMAIL PROTECTED] wrote: So, it's not really a bug, it's an undesired feature... It is curable, though painful - you can pad the image out, given an estimate of the size of the window. Yes, this sucks. Anne.

### Re: [Numpy-discussion] Simple multi-arg wrapper for dot()

On 24/03/07, Bill Baxter [EMAIL PROTECTED] wrote: I mentioned in another thread Travis started on the scipy list that I would find it useful if there were a function like dot() that could multiply more than just two things. Here's a sample implementation called 'mdot'. mdot(a,b,c,d) ==
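A left-to-right mdot along these lines can be a one-liner (a sketch; this is not necessarily the implementation posted in the thread):

```python
from functools import reduce
import numpy as np

def mdot(*arrays):
    """Multiply any number of arrays left to right:
    mdot(a, b, c, d) == dot(dot(dot(a, b), c), d)."""
    return reduce(np.dot, arrays)
```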

### Re: [Numpy-discussion] Detect subclass of ndarray

On 23/03/07, Charles R Harris [EMAIL PROTECTED] wrote: Anyone, What is the easiest way to detect in python/C if an object is a subclass of ndarray? Um, how about isinstance or issubclass? (if you want strictness you can look at whether x.__class__ is zeros(1).__class__) Anne

### Re: [Numpy-discussion] Simple multi-arg wrapper for dot()

On 24/03/07, Bill Baxter [EMAIL PROTECTED] wrote: Nice, but how does that fare on things like mdot(a,(b,c),d) ? I'm pretty sure it doesn't handle it. I think an mdot that can only multiply things left to right comes up short compared to an infix operator that can easily use parentheses to

### Re: [Numpy-discussion] Detect subclass of ndarray

On 24/03/07, Travis Oliphant [EMAIL PROTECTED] wrote: My opinion is that a 1-d array in matrix-multiplication should always be interpreted as a row vector. Is this not what is currently done? If not, then it is a bug in my mind. An alternative approach, in line with the usual usage, is

### Re: [Numpy-discussion] New Operators in Python

On 24/03/07, Charles R Harris [EMAIL PROTECTED] wrote: Yes indeed, this is an old complaint. Just having an infix operator would be an improvement: A dot B dot C Not that I am suggesting dot in this regard ;) In particular, it wouldn't parse without spaces. What about division? Matlab has

### [Numpy-discussion] vectorized methods

Hi, What is the current idiom for vectorizing instance methods? I don't need vectorization over self. For functions: from numpy import * @vectorize def f(x): if x > 0: return 1 else: return 2 print f(array([-1,0,1])) does the right thing. But for instance methods: class

### Re: [Numpy-discussion] .data doesn't account for .transpose()?

On 29/03/07, Robert Kern [EMAIL PROTECTED] wrote: Glen W. Mabey wrote: So, would that imply that a .copy() should be done first on any array that you want to access .data on? Or even ascontiguousarray(). I'd like to point out that the numpy usage of the word contiguous is a bit

### Re: [Numpy-discussion] newbie question - large dataset

On 07/04/07, Steve Staneff [EMAIL PROTECTED] wrote: Hi, I'm looking for a better solution to managing a very large calculation. Set A is composed of tuples a, each of the form a = [float, string]; set B is composed of tuples of similar structure (b = [float, string]). For each possible

### [Numpy-discussion] degree to which numpy releases threads

On 07/04/07, Fernando Perez [EMAIL PROTECTED] wrote: You are correct. If g,h in the OP's description satisfy: a) they are bloody expensive b) they release the GIL internally via the proper C API calls, which means they are promising not to modify any shared python objects the pure python

### Re: [Numpy-discussion] detecting shared data

On 11/04/07, Bill Baxter [EMAIL PROTECTED] wrote: Must be pretty recent. I'm using 1.0.2.dev3520 (enthought egg) and the function's not there. It is. I've never been quite happy with it, though; I realize it's not very feasible to write one that efficiently checks all possible overlaps, but

### Re: [Numpy-discussion] detecting shared data

On 12/04/07, Stefan van der Walt [EMAIL PROTECTED] wrote: Thank you for taking the time to write those tests! Failures may be expressed using NumpyTestCase.failIf(self, expr, msg=None) That's not quite what I mean. There are situations, with the current code, that it gets the answer wrong

### Re: [Numpy-discussion] Question about Optimization (Inline and Pyrex)

On 17/04/07, Lou Pecora [EMAIL PROTECTED] wrote: You should probably look over your code and see if you can eliminate loops by using the built in vectorization of NumPy. I've found this can really speed things up. E.g. given element by element multiplication of two n-dimensional arrays x
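The element-by-element multiplication example mentioned above, in both forms (a minimal sketch):

```python
import numpy as np

x = np.arange(9, dtype=float).reshape(3, 3)
y = 2 * np.ones((3, 3))

# Vectorized: one C-level loop over all elements.
z = x * y

# The equivalent explicit Python loop, orders of magnitude slower on
# large arrays because every element goes through the interpreter:
z_loop = np.empty_like(x)
for i in range(3):
    for j in range(3):
        z_loop[i, j] = x[i, j] * y[i, j]
```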

### Re: [Numpy-discussion] Question about Optimization (Inline and Pyrex)

On 17/04/07, Francesc Altet [EMAIL PROTECTED] wrote: Finally, don't let benchmarks fool you. If you can, it is always better to run your own benchmarks made of your own problems. A tool that can be killer for one application can be just mediocre for another (that's somewhat extreme, but I

### Re: [Numpy-discussion] Question about Optimization (Inline and Pyrex)

On 17/04/07, Lou Pecora [EMAIL PROTECTED] wrote: Now, I didn't know that. That's cool because I have a new dual core Intel Mac Pro. I see I have some learning to do with multithreading. Thanks. No problem. I had completely forgotten about the global interpreter lock, wrote a little

### Re: [Numpy-discussion] Question about Optimization (Inline and Pyrex)

On 17/04/07, Lou Pecora [EMAIL PROTECTED] wrote: I get what you are saying, but I'm not even at the Stupidly Easy Parallel level, yet. Eventually. Well, it's hardly wonderful, but I wrote a little package to make idioms like: d = {} def work(f): d[f] = sum(exp(2.j*pi*f*times))

### Re: [Numpy-discussion] Question about Optimization (Inline, and Pyrex)

On 17/04/07, James Turner [EMAIL PROTECTED] wrote: Hi Anne, Your reply to Lou raises a naive follow-up question of my own... Normally, python's multithreading is effectively cooperative, because the interpreter's data structures are all stored under the same lock, so only one thread can

### Re: [Numpy-discussion] Question about Optimization (Inline, and Pyrex)

On 18/04/07, Robert Kern [EMAIL PROTECTED] wrote: Sebastian Haase wrote: Hi, I don't know much about ATLAS -- would there be other numpy functions that *could* or *should* be implemented using ATLAS !? Any ? Not really, no. ATLAS is a library designed to implement linear algebra

### Re: [Numpy-discussion] Question about Optimization (Inline and Pyrex)

On 18/04/07, Sebastian Haase [EMAIL PROTECTED] wrote: Hi Anne, I'm just starting to look into your code (sound very interesting - should probably be put onto the wiki) -- quick note: you are mixing tabs and spaces :-( what editor are you using !? Agh. vim is misbehaving. Sorry about that.

### Re: [Numpy-discussion] efficient use of numpy.where() and .any()

On 23/04/07, Pierre GM [EMAIL PROTECTED] wrote: Note that in addition of the bitwise operators, you can use the logical_ functions. OK, you'll still end up w/ temporaries, but I wonder whether there couldn't be some tricks to bypass that... If you're really determined not to make many temps,
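One trick for bypassing temporaries is the out= argument that the logical_ functions accept (a sketch; whether this matters depends on array sizes):

```python
import numpy as np

a = np.array([1.0, -2.0, 3.0, -4.0])

# Naive: (a > 0) & (a < 3) allocates two boolean temporaries plus the result.
naive = (a > 0) & (a < 3)

# With out=, buffers are reused instead of freshly allocated:
buf = np.empty(a.shape, dtype=bool)
np.greater(a, 0, out=buf)           # buf = a > 0
buf2 = np.less(a, 3)                # one temporary we keep around
np.logical_and(buf, buf2, out=buf)  # result written back into buf
```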

### Re: [Numpy-discussion] arctan2 with complex args

On 29/04/07, David Goldsmith [EMAIL PROTECTED] wrote: Far be it from me to challenge the mighty Wolfram, but I'm not sure that using the *formula* for calculating the arctan of a *single* complex argument from its real and imaginary parts makes any sense if x and/or y are themselves complex

### Re: [Numpy-discussion] matlab vs. python question

On 08/05/07, Gael Varoquaux [EMAIL PROTECTED] wrote: On Tue, May 08, 2007 at 12:18:56PM +0200, Giorgio Luciano wrote: A good workspace (with an interactive button) just to not get figures freezed I am not sure what you mean by figures freezed but I would like to check that you are aware of

### Re: [Numpy-discussion] numpy version of Interactive Data Analysis tutorial available

On 10/05/07, Perry Greenfield [EMAIL PROTECTED] wrote: I have updated the Using Python for Interactive Data Analysis tutorial to use numpy instead of numarray (finally!). There are further improvements I would like to make in its organization and formatting (in the process including

### Re: [Numpy-discussion] very large matrices.

On 12/05/07, Dave P. Novakovic [EMAIL PROTECTED] wrote: core 2 duo with 4gb RAM. I've heard about iterative svd functions. I actually need a complete svd, with all eigenvalues (not LSI). I'm actually more interested in the individual eigenvectors. As an example, a single row could probably

### [Numpy-discussion] Unfortunate user experience with max()

Hi, Numpy has a max() function. It takes an array, and possibly some extra arguments (axis and default). Unfortunately, this means that numpy.max(-1.3, 2, 7) returns -1.3. This can lead to surprising bugs in code that either explicitly expects it to behave like python's max() or implicitly expects that
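The contrast, and the unambiguous spellings, can be shown in a few lines (a sketch):

```python
import numpy as np

# Python's builtin max compares all of its arguments:
builtin = max(-1.3, 2, 7)

# numpy.max reduces over a single array; extra positional arguments are
# taken as axis/out, not as more values to compare.  Unambiguous forms:
pairwise = np.maximum(-1.3, np.maximum(2, 7))  # elementwise maximum
reduced = np.max(np.array([-1.3, 2, 7]))       # reduce an explicit array
```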

### Re: [Numpy-discussion] Unfortunate user experience with max()

On 16/05/07, Alan G Isaac [EMAIL PROTECTED] wrote: On Wed, 16 May 2007, Anne Archibald apparently wrote: numpy.max(-1.3,2,7) -1.3 Is that new behavior? I get a TypeError on the last argument. (As expected.) For which version of numpy? In [2]: numpy.max(-1.3,2.7) Out[2]: -1.3 In [3

### Re: [Numpy-discussion] best way for storing extensible data?

On 18/05/07, David M. Cooke [EMAIL PROTECTED] wrote: It'll act like appending to a list, where it will grow the array (by doubling, I think) when it needs to, so appending each value is amortized to O(1) time. A list though would use more memory per element as each element is a full Python
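The grow-by-doubling scheme can be sketched by hand (an illustrative class of my own naming, not code from the thread):

```python
import numpy as np

class GrowableArray:
    """Append scalars in amortized O(1) time by doubling a backing buffer."""
    def __init__(self, dtype=float):
        self._buf = np.empty(4, dtype=dtype)
        self._n = 0

    def append(self, value):
        if self._n == len(self._buf):
            # Double the backing store; over a long run of appends each
            # element is copied only O(1) times on average.
            bigger = np.empty(2 * len(self._buf), dtype=self._buf.dtype)
            bigger[:self._n] = self._buf
            self._buf = bigger
        self._buf[self._n] = value
        self._n += 1

    def array(self):
        return self._buf[:self._n]
```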

### Re: [Numpy-discussion] Question about flags of fancy indexed array

On 23/05/07, Albert Strasheim [EMAIL PROTECTED] wrote: Consider the following example: First a comment: almost nobody needs to care how the data is stored internally. Try to avoid looking at the flags unless you're interfacing with a C library. The nice feature of numpy is that it hides all

### Re: [Numpy-discussion] Question about flags of fancy indexed array

On 23/05/07, Albert Strasheim [EMAIL PROTECTED] wrote: If you are correct that this is in fact a fresh new array, I really don't understand where the values of these flags. To recap: In [19]: x = N.zeros((3,2)) In [20]: x.flags Out[20]: C_CONTIGUOUS : True F_CONTIGUOUS : False

### Re: [Numpy-discussion] byteswap() leaves dtype unchanged

On 30/05/07, Matthew Brett [EMAIL PROTECTED] wrote: I think the point is that you can have several different situations with byte ordering: 1) Your data and dtype endianess match, but you want the data swapped and the dtype to reflect this 2) Your data and dtype endianess don't match, and

### Re: [Numpy-discussion] SciPy Journal

On 31/05/07, Travis Oliphant [EMAIL PROTECTED] wrote: 2) I think it's scope should be limited to papers that describe algorithms and code that are in NumPy / SciPy / SciKits. Perhaps we could also accept papers that describe code that depends on NumPy / SciPy that is also easily available.

### Re: [Numpy-discussion] flatten() without copy - is this possible?

On 01/06/07, dmitrey [EMAIL PROTECTED] wrote: y = x.flatten(1) turn array into vector (note that this forces a copy) Is there any way to do the trick without copying? What are the problems here? Just other way of array elements indexing... It is sometimes possible to flatten an array
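When the memory layout permits, ravel() gives a flat view with no copy; flatten() always copies (a minimal sketch):

```python
import numpy as np

x = np.array([[0, 1, 2], [3, 4, 5]])  # C-contiguous, owns its data

# ravel() returns a flat *view* when the layout allows it:
y = x.ravel()

# ravel() must copy when no flat view exists, e.g. on the transpose
# of a C-contiguous array:
z = x.T.ravel()
```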

### Re: [Numpy-discussion] flatten() without copy - is this possible?

On 05/06/07, Charles R Harris [EMAIL PROTECTED] wrote: On 6/5/07, dmitrey [EMAIL PROTECTED] wrote: Thank you, but all your examples deal with 3-dimensional arrays. and I still misunderstood, is it possible somehow for 2-dimensional arrays or no? D. There is nothing special about the

### Re: [Numpy-discussion] randint for long type (permutations)

On 14/06/07, Will Woods [EMAIL PROTECTED] wrote: I want to choose a subset of all possible permutations of a sequence of length N, with each element of the subset unique. This is then going to be scattered across multiple machines using mpi. Since there is a one-to-one mapping between the

### Re: [Numpy-discussion] fancy indexing/broadcasting question

On 07/07/07, Mark.Miller [EMAIL PROTECTED] wrote: A quick question for the group. I'm working with some code to generate some arrays of random numbers. The random numbers, however, need to meet certain criteria. So for the moment, I have things that look like this (code is just an

### Re: [Numpy-discussion] expm

On 20/07/07, Nils Wagner [EMAIL PROTECTED] wrote: lorenzo bolla wrote: hi all. is there a function in numpy to compute the exp of a matrix, similar to expm in matlab? for example: expm([[0,0],[0,0]]) = eye(2) Numpy doesn't provide expm but scipy does. from scipy.linalg import expm,

### Re: [Numpy-discussion] numpy arrays, data allocation and SIMD alignement

On 04/08/07, David Cournapeau [EMAIL PROTECTED] wrote: Here's a hack that google turned up: I'd avoid hacks in favour of posix_memalign (which allows arbitrary degrees of alignment). For one thing, freeing becomes a headache (you can't free a pointer you've jiggered!). - Check whether a

### Re: [Numpy-discussion] numpy arrays, data allocation and SIMD alignement

On 06/08/07, David Cournapeau [EMAIL PROTECTED] wrote: Well, when I proposed the SIMD extension, I was willing to implement the proposal, and this was for a simple goal: enabling better integration with many numeric libraries which need SIMD alignment. As nice as a custom allocator might be,

### Re: [Numpy-discussion] numpy arrays, data allocation and SIMD alignement

On 07/08/07, David Cournapeau [EMAIL PROTECTED] wrote: Anne, you said previously that it was easy to allocate buffers for a given alignment at runtime. Could you point me to a document which explains how ? For platforms without posix_memalign, I don't see how to implement a memory allocator

### Re: [Numpy-discussion] numpy arrays, data allocation and SIMD alignement

On 08/08/2007, Stefan van der Walt [EMAIL PROTECTED] wrote: On Tue, Aug 07, 2007 at 01:33:24AM -0400, Anne Archibald wrote: Well, it can be done in Python: just allocate a too-big ndarray and take a slice that's the right shape and has the right alignment. But this sucks. Could you

### Re: [Numpy-discussion] numpy arrays, data allocation and SIMD alignement

On 08/08/2007, Charles R Harris [EMAIL PROTECTED] wrote: On 8/8/07, Anne Archibald [EMAIL PROTECTED] wrote: Oh. Well, it's not *terrible*; it gets you an aligned array. But you have to allocate the original array as a 1D byte array (to allow for arbitrary realignments) and then align

### Re: [Numpy-discussion] vectorized function inside a class

On 08/08/2007, mark [EMAIL PROTECTED] wrote: Thanks for the ideas to circumvent vectorization. But the real function I need to vectorize is quite a bit more complicated. So I would really like to use vectorize. Are there any reasons against vectorization? Is it slow? The way Tim suggests I

### Re: [Numpy-discussion] .transpose() of memmap array fails to close()

On 13/08/07, Glen W. Mabey [EMAIL PROTECTED] wrote: As I have tried to think through what should be the appropriate behavior for the returned value of __getitem__, I have not been able to see an appropriate solution (let alone know how to implement it) to this issue. Is the problem one of

### Re: [Numpy-discussion] .transpose() of memmap array fails to close()

On 15/08/07, Glen W. Mabey [EMAIL PROTECTED] wrote: On Tue, Aug 14, 2007 at 12:23:26AM -0400, Anne Archibald wrote: On 13/08/07, Glen W. Mabey [EMAIL PROTECTED] wrote: As I have tried to think through what should be the appropriate behavior for the returned value of __getitem__, I have

### Re: [Numpy-discussion] .transpose() of memmap array fails to close()

On 16/08/07, Glen W. Mabey [EMAIL PROTECTED] wrote: On Wed, Aug 15, 2007 at 08:50:28PM -0400, Anne Archibald wrote: But to be pythonic, or numpythonic, when the original A is garbage-collected, the garbage collection should certainly close the mmap. Humm, this would be less than ideal

### Re: [Numpy-discussion] Extended Outer Product

On 21/08/07, Timothy Hochberg [EMAIL PROTECTED] wrote: This is just a general comment on recent threads of this type and not directed specifically at Chuck or anyone else. IMO, the emphasis on avoiding FOR loops at all costs is misplaced. It is often more memory friendly and thus faster to

### [Numpy-discussion] Bug or surprising undocumented behaviour in irfft

Hi, numpy's Fourier transforms have the handy feature of being able to upsample and downsample signals; for example the documentation cites irfft(rfft(A),16*len(A)) as a way to get a Fourier interpolation of A. However, there is a peculiarity with the way numpy handles the highest-frequency
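The interpolation idiom, with the rescaling that irfft's 1/n normalization requires, can be sketched like this (the Nyquist-coefficient subtlety discussed in the thread doesn't arise here because the test signal has no power at the Nyquist frequency):

```python
import numpy as np

a = np.cos(2 * np.pi * np.arange(8) / 8.0)   # one cycle, 8 samples

# Zero-pad the half-spectrum by asking irfft for more output points.
# irfft divides by the *output* length, so rescale by n_new/n_old to
# preserve the amplitude.
n_old, n_new = len(a), 4 * len(a)
upsampled = np.fft.irfft(np.fft.rfft(a), n_new) * (n_new / n_old)

# Every 4th sample of the interpolation reproduces the original signal.
```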

### Re: [Numpy-discussion] Bug or surprising undocumented behaviour in irfft

On 29/08/2007, Charles R Harris [EMAIL PROTECTED] wrote: What is going on is that the coefficient at the Nyquist frequency appears once in the unextended array, but twice when the array is extended with zeros because of the Hermitean symmetry. That should probably be fixed in the upsampling

### Re: [Numpy-discussion] Bug or surprising undocumented behaviour in irfft

On 29/08/2007, Charles R Harris [EMAIL PROTECTED] wrote: Is this also appropriate for the other FFTs? (inverse real, complex, hermitian, what have you) I have written a quick hack (attached) that should do just that rescaling, but I don't know that it's a good idea, as implemented.

### Re: [Numpy-discussion] Accessing a numpy array in a mmap fashion

On 30/08/2007, Brian Donovan [EMAIL PROTECTED] wrote: Hello all, I'm wondering if there is a way to use a numpy array that uses disk as a memory store rather than ram. I'm looking for something like mmap but which can be used like a numpy array. The general idea is this. I'm simulating a

### Re: [Numpy-discussion] Docstring improvements for numpy.where?

On 12/09/2007, Robert Kern [EMAIL PROTECTED] wrote: That sentence applies to the 3-argument form, which has nothing to do with nonzero() and does not yield a tuple. But in general, yes, the docstring leaves much to be desired. Well, here's what I hope is a step in the right direction. Anne

### Re: [Numpy-discussion] arange and floating point arguments

On 14/09/2007, Robert Kern [EMAIL PROTECTED] wrote: Ed Schofield wrote: Using arange in this way is a fundamentally unreliable thing to do, but is there anything we want to do about this? Tell people to use linspace(). Yes, it does a slightly different thing; that's why it works. Most uses

### Re: [Numpy-discussion] arange and floating point arguments

On 14/09/2007, Charles R Harris [EMAIL PROTECTED] wrote: Since none of the numbers are exactly represented in IEEE floating point, this sort of oddity is expected. If you look at the exact values, (.4 + .2)/.1 > 6 and .6/.1 < 6 . That said, I would expect something like ceil(interval/delta -

### Re: [Numpy-discussion] arange and floating point arguments

On 15/09/2007, Christopher Barker [EMAIL PROTECTED] wrote: Oh, and could someone post an actual example of a use for which FP arange is required (with fudges to try to accommodate decimal to binary conversion errors), and linspace won't do? Well, here's one: evaluating a function we know to
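The usual safe idiom looks like this (a sketch of the linspace-vs-arange contrast):

```python
import numpy as np

# linspace takes the point count explicitly, so the result does not
# depend on how 0.1 rounds in binary, and the endpoint is exact:
x = np.linspace(0.0, 0.6, 7)

# arange's length is ceil((stop - start)/step), which is at the mercy
# of floating-point rounding of the endpoints and the step -- here it
# yields 6 points, but near-identical inputs can be off by one:
a = np.arange(0.0, 0.6, 0.1)
```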

### Re: [Numpy-discussion] Extended Outer Product

On 19/09/2007, Travis E. Oliphant [EMAIL PROTECTED] wrote: Anne Archibald wrote: vectorize, of course, is a good example of my point above: it really just loops, in python IIRC, but conceptually it's extremely handy for doing exactly what the OP wanted. Unfortunately vectorize() does

### Re: [Numpy-discussion] Casting a float array into a string array

On 05/10/2007, Matthieu Brucher [EMAIL PROTECTED] wrote: I'd like to have the '2.', because if the number is negative, only '-' is returned, not the real value. For string arrays you need to specify the length of the string as part of the data type (and it defaults to length 1): In [11]:
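The truncation effect can be shown directly (a sketch; byte-string dtypes shown, hence the b'' literals):

```python
import numpy as np

vals = np.array([2.0, -2.0])

# A 1-byte string dtype truncates: '-2.0' survives only as '-':
short = vals.astype('S1')

# With enough room, the full text is kept:
wide = vals.astype('S8')
```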

### [Numpy-discussion] bug in vectorize? (was: Re: Casting a float array into a string array)

On 05/10/2007, Christopher Barker [EMAIL PROTECTED] wrote: I don't know how to generalize this to n-d though -- maybe numpy.vectorize? Oops! Looks like there's a bug somewhere: In [1]: from numpy import * In [2]: vectorize(lambda x: "%5.3g" % x)(ones((2,2,2))) Out[2]: array([[[' ', '\xc1'],

### Re: [Numpy-discussion] appending extra items to arrays

On 11/10/2007, Robert Kern [EMAIL PROTECTED] wrote: Appending to a list then converting the list to an array is the most straightforward way to do it. If the performance of this isn't a problem, I recommend leaving it alone. Just a speculation: Python strings have a similar problem - they're

### Re: [Numpy-discussion] fortran array storage question

On 26/10/2007, Travis E. Oliphant [EMAIL PROTECTED] wrote: There is an optimization where-in the inner-loops are done over the dimension with the smallest stride. What other cache-coherent optimizations do you recommend? That sounds like a very good first step. I'm far from an expert on this

### Re: [Numpy-discussion] numpy FFT memory accumulation

On 31/10/2007, Ray S [EMAIL PROTECTED] wrote: I am using fftRes = abs(fft.rfft(data_array[end-2**15:end])) to do running analysis on streaming data. The N never changes. It sucks memory up at ~1MB/sec with 70kHz data rate and 290 ffts/sec. (Interestingly, Numeric FFT accumulates much

### Re: [Numpy-discussion] Unnecessarily bad performance of elementwise operators with Fortran-arrays

On 08/11/2007, David Cournapeau [EMAIL PROTECTED] wrote: For copy and array creation, I understand this, but for element-wise operations (mean, min, and max), this is not enough to explain the difference, no ? For example, I can understand a 50 % or 100 % time increase for simple operations

### Re: [Numpy-discussion] numpy : your experiences?

On 16/11/2007, Rahul Garg [EMAIL PROTECTED] wrote: It would be awesome if you guys could respond to some of the following questions : a) Can you guys tell me briefly about the kind of problems you are tackling with numpy and scipy? b) Have you ever felt that numpy/scipy was slow and had to

### Re: [Numpy-discussion] OT: A Way to Approximate and Compress a 3D Surface

On 20/11/2007, Geoffrey Zhu [EMAIL PROTECTED] wrote: I have N tabulated data points { (x_i, y_i, z_i) } that describes a 3D surface. The surface is pretty smooth. However, the number of data points is too large to be stored and manipulated efficiently. To make it easier to deal with, I am

### Re: [Numpy-discussion] RAdian -- degres conversion

On 16/12/2007, Hans Meine [EMAIL PROTECTED] wrote: (*: It's similar with math.hypot, which I have got to know and appreciate nowadays.) I'd like to point out that math.hypot is a nontrivial function which is easy to get wrong: In [6]: x=1e200; y=1e200; In [7]: math.hypot(x,y) Out[7]:
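The overflow behaviour alluded to above (the naive formula squares its arguments and blows past the double range, while hypot rescales internally):

```python
import math

x = y = 1e200

# Naive formula: x*x overflows to inf, so the whole result is inf.
naive = math.sqrt(x * x + y * y)

# math.hypot avoids the intermediate overflow and returns ~1.414e200.
good = math.hypot(x, y)
```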

### Re: [Numpy-discussion] any better way to normalise a matrix

On 27/12/2007, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: in my code i am trying to normalise a matrix as below mymatrix=matrix(..# items are of double type..can be negative values) numrows,numcols=mymatrix.shape for i in range(numrows): temp=mymatrix[i].max() for j in
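The nested loops can be replaced with one broadcasted division (a sketch; keepdims keeps the row maxima as a column so the shapes line up):

```python
import numpy as np

m = np.array([[1.0, -4.0, 2.0],
              [-3.0, 6.0, 1.5]])

# Divide each row by that row's maximum in a single vectorized step.
normalized = m / m.max(axis=1, keepdims=True)
```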

### Re: [Numpy-discussion] any better way to normalize a matrix

On 28/12/2007, Christopher Barker [EMAIL PROTECTED] wrote: I like the array methods a lot -- is there any particular reason there is no ndarray.abs(), or has it just not been added? Here I have to disagree with you. Numpy provides ufuncs as general powerful tools for operating on matrices.

### Re: [Numpy-discussion] any better way to normalize a matrix

On 28/12/2007, Christopher Barker [EMAIL PROTECTED] wrote: Anne Archibald wrote: Numpy provides ufuncs as general powerful tools for operating on matrices. More can be added relatively easily, they provide not just the basic apply operation but also outer and others. Adding another way

### Re: [Numpy-discussion] fast iteration (I think I've got it)

On 01/01/2008, Neal Becker [EMAIL PROTECTED] wrote: This is a c-api question. I'm trying to get iterators that are both fast and reasonably general. I did confirm that iterating using just the general PyArrayIterObject protocol is not as fast as using c-style pointers for contiguous arrays.

### Re: [Numpy-discussion] Nasty bug using pre-initialized arrays

On 07/01/2008, Charles R Harris [EMAIL PROTECTED] wrote: One place where Numpy differs from MatLab is the way memory is handled. MatLab is always generating new arrays, so for efficiency it is worth preallocating arrays and then filling in the parts. This is not the case in Numpy where lists

### Re: [Numpy-discussion] CASTABLE flag

On 07/01/2008, Timothy Hochberg [EMAIL PROTECTED] wrote: I'm fairly dubious about assigning float to ints as is. First off it looks like a bug magnet to me due to accidentally assigning a floating point value to a target that one believes to be float but is in fact integer. Second, C-style

### Re: [Numpy-discussion] Does float16 exist?

On 08/01/2008, Charles R Harris [EMAIL PROTECTED] wrote: I'm starting to get interested in implementing float16 support ;) My tentative program goes something like this: 1) Add the operators to the scalar type. This will give sorting, basic printing, addition, etc. 2) Add conversions to

### Re: [Numpy-discussion] Does float16 exist?

On 08/01/2008, Charles R Harris [EMAIL PROTECTED] wrote: Well, at a minimum people will want to read, write, print, and promote them. That would at least let people work with the numbers, and since my understanding is that the main virtue of the format is compactness for storage and

### Re: [Numpy-discussion] Can not update a submatrix

On 30/01/2008, Francesc Altet [EMAIL PROTECTED] wrote: A Wednesday 30 January 2008, Nadav Horesh escrigué: In the following piece of code: import numpy as N R = N.arange(9).reshape(3,3) ax = [1,2] R array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) R[ax,:][:,ax] =

### Re: [Numpy-discussion] Stride of 2 for correlate()

On 05/02/2008, Chris Finley [EMAIL PROTECTED] wrote: After searching the archives, I was unable to find a good method for changing the stride of the correlate or convolve routines. I am doing a Daubechies analysis of some sample data, say data = arange(0:80). The coefficient array or four

### Re: [Numpy-discussion] Bug in numpy all() function

On 06/02/2008, Robert Kern [EMAIL PROTECTED] wrote: I guess the all function doesn't know about generators? Yup. It works on arrays and things it can turn into arrays by calling the C API equivalent of numpy.asarray(). There's a ton of magic and special cases in asarray() in order to