Re: [Numpy-discussion] ANNOUNCE: EPD with Py2.5 version 4.0.30002 RC2 available for testing

2008-12-01 Thread Gael Varoquaux
On Mon, Dec 01, 2008 at 12:44:10PM +0900, David Cournapeau wrote: On Mon, Dec 1, 2008 at 7:00 AM, Darren Dale [EMAIL PROTECTED] wrote: I tried installing 4.0.300x on a machine running 64-bit Windows Vista Home Edition and ran into problems with PyQt and some related packages. So I

[Numpy-discussion] optimising single value functions for array calculations

2008-12-01 Thread Timmie
Hello, I am developing a module which bases its calculations on another specialised module. My module uses numpy arrays a lot. The problem is that the other module I am building upon does not work with (whole) arrays but with single values. Therefore, I am currently forced to loop over the

Re: [Numpy-discussion] optimising single value functions for array calculations

2008-12-01 Thread Emmanuelle Gouillart
Hello Timmie, numpy.vectorize(myfunc) should do what you want. Cheers, Emmanuelle Hello, I am developing a module which bases its calculations on another specialised module. My module uses numpy arrays a lot. The problem is that the other module I am building upon, does not work with
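For readers skimming the archive, here is a minimal sketch of the suggestion; the function body is a made-up stand-in for Timmie's scalar-only routine, which is not shown in the thread:

    import math
    import numpy as np

    def myfunc(x):                 # hypothetical scalar-only function, as in the thread
        return math.sin(x) + 1.0

    vfunc = np.vectorize(myfunc)   # wrap it so it accepts and returns arrays
    a = np.arange(1, 20)
    result = vfunc(a)              # myfunc applied element-wise; result is an ndarray

Note that np.vectorize is a convenience wrapper around a Python-level loop, so it does not by itself speed anything up -- which is the point Nadav raises below.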

Re: [Numpy-discussion] optimising single value functions for array calculations

2008-12-01 Thread Nadav Horesh
It does not solve the slowness problem. I think I read on the list about an experimental code for fast vectorization. Nadav. -----Original Message----- From: [EMAIL PROTECTED] on behalf of Emmanuelle Gouillart Sent: Mon 01-Dec-08 12:28 To: Discussion of Numerical Python Subject: Re: [Numpy-discussion]

Re: [Numpy-discussion] optimising single value functions for array calculations

2008-12-01 Thread Matthieu Brucher
2008/12/1 Timmie [EMAIL PROTECTED]: Hello, I am developing a module which bases its calculations on another specialised module. My module uses numpy arrays a lot. The problem is that the other module I am building upon, does not work with (whole) arrays but with single values. Therefore, I

[Numpy-discussion] memmap dtype issue

2008-12-01 Thread Wim Bakker
For a long time now, numpy's memmap has me puzzled by its behavior. When I use memmap straightforward on a file it seems to work fine, but whenever I try to do a memmap using a dtype it seems to gobble up the whole file into memory. This, of course, makes the use of memmap futile. I would expect
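Wim's actual code and data are not part of this digest; as a point of reference, a typical structured-dtype memmap call looks like the sketch below (file name and record layout are made up). Access through the map is expected to be lazy, reading only the pages that are touched.

    import numpy as np

    rec_dtype = np.dtype([('x', '<f4'), ('y', '<f4'), ('flag', '<i2')])  # hypothetical layout

    mm = np.memmap('data.bin', dtype=rec_dtype, mode='r')  # map the file as an array of records
    first = mm[0]   # touching one record should only fault in the pages it needs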

Re: [Numpy-discussion] optimising single value functions for array calculations

2008-12-01 Thread Stéfan van der Walt
2008/12/1 Nadav Horesh [EMAIL PROTECTED]: It does not solve the slowness problem. I think I read on the list about an experimental code for fast vectorization. The choices are basically weave, fast_vectorize (http://projects.scipy.org/scipy/scipy/ticket/727), ctypes, cython or f2py. Any I left

Re: [Numpy-discussion] optimising single value functions for array calculations

2008-12-01 Thread Timmie
Hi, thanks for all your answers. I will certainly test it. numpy.vectorize(myfunc) should do what you want. Just to add a better example based on a recent discussion here on this list [1]: def myfunc(x): res = math.sin(x); return res; a = numpy.arange(1,20); myfunc(a) will not

Re: [Numpy-discussion] memmap dtype issue

2008-12-01 Thread Travis E. Oliphant
Wim Bakker wrote: For a long time now, numpy's memmap has me puzzled by its behavior. When I use memmap straightforward on a file it seems to work fine, but whenever I try to do a memmap using a dtype it seems to gobble up the whole file into memory. I don't understand your question.

Re: [Numpy-discussion] ANNOUNCE: EPD with Py2.5 version 4.0.30002 RC2 available for testing

2008-12-01 Thread Darren Dale
On Mon, Dec 1, 2008 at 3:12 AM, Gael Varoquaux [EMAIL PROTECTED] wrote: On Mon, Dec 01, 2008 at 12:44:10PM +0900, David Cournapeau wrote: On Mon, Dec 1, 2008 at 7:00 AM, Darren Dale [EMAIL PROTECTED] wrote: I tried installing 4.0.300x on a machine running 64-bit Windows Vista Home

[Numpy-discussion] np.loadtxt : yet a new implementation...

2008-12-01 Thread Pierre GM
All, Please find attached to this message another implementation of np.loadtxt, which focuses on missing values. It's basically a combination of John Hunter et al.'s mlab.csv2rec, Ryan May's patches and pieces of code I'd been working on over the last few weeks. Besides some helper classes
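The proposal itself arrives later in the thread (the attachment was too large to post). For comparison, the existing np.loadtxt can already patch over missing values column by column through its converters argument; a minimal sketch, with the file contents and the NaN fill value as assumptions:

    import numpy as np
    from io import StringIO          # StringIO.StringIO in the Python 2 of that era

    data = StringIO(u"1,2,3\n4,,6\n7,8,\n")   # hypothetical CSV with empty (missing) fields

    # Replace empty fields with NaN in every column.
    conv = {i: (lambda s: float(s.strip() or 'nan')) for i in range(3)}
    arr = np.loadtxt(data, delimiter=',', converters=conv)

The proposed genloadtxt goes further: besides substituting a default, it also returns a mask recording which entries were missing.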

Re: [Numpy-discussion] np.loadtxt : yet a new implementation...

2008-12-01 Thread Pierre GM
And now for the tests: Proposal : Here's an extension to np.loadtxt, designed to take missing values into account. from genload_proposal import * from numpy.ma.testutils import * import StringIO class TestLineSplitter(TestCase): # def test_nodelimiter(self): Test

Re: [Numpy-discussion] np.loadtxt : yet a new implementation...

2008-12-01 Thread Stéfan van der Walt
2008/12/1 Pierre GM [EMAIL PROTECTED]: Please find attached to this message another implementation of Struggling to comply! Cheers Stéfan

Re: [Numpy-discussion] np.loadtxt : yet a new implementation...

2008-12-01 Thread Pierre GM
Well, looks like the attachment is too big, so here's the implementation. The tests will come in another message. Proposal : Here's an extension to np.loadtxt, designed to take missing values into account. import itertools import numpy as np import numpy.ma as ma def

Re: [Numpy-discussion] np.loadtxt : yet a new implementation...

2008-12-01 Thread John Hunter
On Mon, Dec 1, 2008 at 12:21 PM, Pierre GM [EMAIL PROTECTED] wrote: Well, looks like the attachment is too big, so here's the implementation. The tests will come in another message. It looks like I am doing something wrong -- trying to parse a CSV file with dates formatted like '2008-10-14',

[Numpy-discussion] Fwd: np.loadtxt : yet a new implementation...

2008-12-01 Thread Pierre GM
(Sorry about that, I pressed Reply instead of Reply all. Not my day for emails...) On Dec 1, 2008, at 1:54 PM, John Hunter wrote: It looks like I am doing something wrong -- trying to parse a CSV file with dates formatted like '2008-10-14', with:: import datetime, sys import

Re: [Numpy-discussion] Fwd: np.loadtxt : yet a new implementation...

2008-12-01 Thread John Hunter
On Mon, Dec 1, 2008 at 1:14 PM, Pierre GM [EMAIL PROTECTED] wrote: The problem you have is that the default dtype is 'float' (for backwards compatibility w/ the original np.loadtxt). What you want is to automatically change the dtype according to the content of your file: you should use
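Pierre's point is that the float default is what breaks on the date column. For the archive, one workaround with plain np.loadtxt is to hand the date column a converter that maps it onto a number; the file contents below are made up to mirror John's description:

    import datetime
    import numpy as np
    from io import StringIO

    data = StringIO(u"2008-10-14,1.0,2.0\n2008-10-15,3.0,4.0\n")  # hypothetical file

    def to_ordinal(s):
        if isinstance(s, bytes):     # some numpy versions hand converters bytes
            s = s.decode()
        return float(datetime.datetime.strptime(s.strip(), '%Y-%m-%d').toordinal())

    arr = np.loadtxt(data, delimiter=',', converters={0: to_ordinal})
    # arr is a plain float array; automatic dtype detection, as discussed above,
    # is what would let the column stay a date instead.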

Re: [Numpy-discussion] Fwd: np.loadtxt : yet a new implementation...

2008-12-01 Thread Pierre GM
On Dec 1, 2008, at 2:26 PM, John Hunter wrote: OK, that worked great. I do think a default impl in np.rec which returned a recarray would be nice. It might also be nice to have a method like np.rec.fromcsv which defaults to delimiter=',', names=True and dtype=None. Since csv is one

[Numpy-discussion] fromiter typo?

2008-12-01 Thread Neal Becker
Says it takes a default dtype arg, but doesn't act like it's an optional arg: fromiter (iterator or generator, dtype=None) Construct an array from an iterator or a generator. Only handles 1-dimensional cases. By default the data-type is determined from the objects returned from the iterator. ---
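In other words, the signature shown in the docstring overstates things: passing the dtype is effectively required. A short illustration:

    import numpy as np

    squares = (i * i for i in range(10))
    a = np.fromiter(squares, dtype=np.int64)            # works: dtype given explicitly

    # np.fromiter(i * i for i in range(10))             # TypeError: dtype is required

    # count is the genuinely optional argument; it lets fromiter preallocate:
    b = np.fromiter((i * i for i in range(10)), dtype=float, count=10)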

Re: [Numpy-discussion] fromiter typo?

2008-12-01 Thread Pauli Virtanen
Mon, 01 Dec 2008 14:43:11 -0500, Neal Becker wrote: Says it takes a default dtype arg, but doesn't act like it's an optional arg: fromiter (iterator or generator, dtype=None) Construct an array from an iterator or a generator. Only handles 1-dimensional cases. By default the data-type is

Re: [Numpy-discussion] np.loadtxt : yet a new implementation...

2008-12-01 Thread Stéfan van der Walt
Hi Pierre 2008/12/1 Pierre GM [EMAIL PROTECTED]: * `genloadtxt` is the base function that does all the work. It outputs 2 arrays, one for the data (missing values being substituted by the appropriate default) and one for the mask. It would go in np.lib.io I see the code length increased

Re: [Numpy-discussion] np.loadtxt : yet a new implementation...

2008-12-01 Thread Ryan May
Stéfan van der Walt wrote: Hi Pierre 2008/12/1 Pierre GM [EMAIL PROTECTED]: * `genloadtxt` is the base function that does all the work. It outputs 2 arrays, one for the data (missing values being substituted by the appropriate default) and one for the mask. It would go in np.lib.io I

Re: [Numpy-discussion] np.loadtxt : yet a new implementation...

2008-12-01 Thread Stéfan van der Walt
2008/12/1 Ryan May [EMAIL PROTECTED]: I've wondered about this being an issue. On one hand, you hate to make existing code noticeably slower. On the other hand, if speed is important to you, why are you using ascii I/O? More I than O! But I think numpy.fromfile, once fixed up, could fill

Re: [Numpy-discussion] np.loadtxt : yet a new implementation...

2008-12-01 Thread Pierre GM
I agree, genloadtxt is a bit bloated, and it's not a surprise it's slower than the initial one. I think that in order to be fair, comparisons must be performed with matplotlib.mlab.csv2rec, which also implements autodetection of the dtype. I'm quite in favor of keeping a lite version

[Numpy-discussion] bug in ma.masked_all()?

2008-12-01 Thread Eric Firing
Pierre, ma.masked_all does not seem to work with fancy dtypes and more than one dimension: In [1]: import numpy as np In [2]: dt = np.dtype({'names': ['a', 'b'], 'formats': ['f', 'f']}) In [3]: x = np.ma.masked_all((2,), dtype=dt) In [4]: x Out[4]: masked_array(data = [(--, --) (--, --)],
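The interpreter session above is flattened; reassembled into a runnable snippet, with the 2-D call being the case the report is about (fixed in r6130 per the follow-up below):

    import numpy as np

    dt = np.dtype({'names': ['a', 'b'], 'formats': ['f', 'f']})

    x = np.ma.masked_all((2,), dtype=dt)     # 1-D with a structured dtype: fine
    y = np.ma.masked_all((2, 2), dtype=dt)   # more than one dimension: the failure reported here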

Re: [Numpy-discussion] np.loadtxt : yet a new implementation...

2008-12-01 Thread Christopher Barker
Stéfan van der Walt wrote: important to you, why are you using ascii I/O? ascii I/O is slow, so that's a reason in itself to want it not to be slower! More I than O! But I think numpy.fromfile, once fixed up, could fill this niche nicely. I agree -- for the simple cases, fromfile() could
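For the simple whitespace-separated case being discussed, fromfile's text mode already covers roughly this much (file name and contents below are made up):

    import numpy as np

    with open('plain.txt', 'w') as f:                 # hypothetical flat numeric file
        f.write("1.0 2.0 3.0\n4.0 5.0 6.0\n")

    flat = np.fromfile('plain.txt', dtype=float, sep=' ')   # 1-D array of 6 values
    table = flat.reshape(-1, 3)                             # the caller restores the shape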

Re: [Numpy-discussion] np.loadtxt : yet a new implementation...

2008-12-01 Thread Christopher Barker
Pierre GM wrote: Another issue comes from the possibility to define the dtype automatically: Does all that get bypassed if the dtype(s) is specified? Is it still slow in that case? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/ORR

[Numpy-discussion] fast way to convolve a 2d array with 1d filter

2008-12-01 Thread frank wang
Hi, I need to convolve a 1d filter with 8 coefficients with a 2d array of the shape (6,7). I can use convolve to perform the operation for each row. This will involve a for loop with a counter of 6. I wonder whether there is a fast way to do this in numpy without using a for loop. Does anyone know how
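One loop-free way to write this down, assuming the filter runs along each row and using the shapes from the post (the data here is random placeholder values):

    import numpy as np

    x = np.random.randn(6, 7)     # 2-D input, shape (6, 7)
    h = np.random.randn(8)        # 1-D filter with 8 coefficients

    rows = np.apply_along_axis(np.convolve, 1, x, h)   # np.convolve on every row -> shape (6, 14)

    # Note: apply_along_axis hides the Python loop rather than removing it.
    # scipy.ndimage.convolve1d(x, h, axis=1) does the row-wise filtering in
    # compiled code, though with different output size and boundary handling.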

[Numpy-discussion] ANN: HDF5 for Python 1.0

2008-12-01 Thread Andrew Collette
= Announcing HDF5 for Python (h5py) 1.0 = What is h5py? HDF5 for Python (h5py) is a general-purpose Python interface to the Hierarchical Data Format library, version 5. HDF5 is a versatile, mature scientific
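For context, a minimal sketch of the kind of usage h5py offers, written against the present-day API rather than the 1.0 release announced here (file and dataset names are made up):

    import numpy as np
    import h5py

    a = np.arange(12.0).reshape(3, 4)

    with h5py.File('example.h5', 'w') as f:
        f.create_dataset('measurements', data=a)      # store the array in the HDF5 file

    with h5py.File('example.h5', 'r') as f:
        block = f['measurements'][1:, :2]             # slicing reads back only the requested part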

Re: [Numpy-discussion] bug in ma.masked_all()?

2008-12-01 Thread Pierre GM
On Dec 1, 2008, at 6:09 PM, Eric Firing wrote: Pierre, ma.masked_all does not seem to work with fancy dtypes and more than one dimension: Eric, Should be fixed in SVN (r6130). There were indeed problems with nested dtypes. Tricky beasts they are. Thanks for reporting!

Re: [Numpy-discussion] ANN: HDF5 for Python 1.0

2008-12-01 Thread josef . pktd
Requires * UNIX-like platform (Linux or Mac OS-X); Windows version is in progress I installed version 0.3.0 back in August on Windows XP, and as far as I remember there were no problems at all with the install, and all tests passed. I thought the interface was really easy to use. But

Re: [Numpy-discussion] [SciPy-user] os x, intel compilers mkl, and fink python

2008-12-01 Thread David Warde-Farley
On 28-Nov-08, at 5:38 PM, Gideon Simpson wrote: Has anyone gotten the combination of OS X with a fink python distribution to successfully build numpy/scipy with the intel compilers and the mkl? If so, how'd you do it? IIRC David Cournapeau has had some success building numpy with MKL on

Re: [Numpy-discussion] fast way to convolve a 2d array with 1d filter

2008-12-01 Thread Stéfan van der Walt
Hi Frank 2008/12/2 frank wang [EMAIL PROTECTED]: I need to convolve a 1d filter with 8 coefficients with a 2d array of the shape (6,7). I can use convolve to perform the operation for each row. This will involve a for loop with a counter of 6. I wonder whether there is a fast way to do this in numpy

Re: [Numpy-discussion] fast way to convolve a 2d array with 1d filter

2008-12-01 Thread frank wang
This is what I thought to do. However, I am not sure whether this is a fast way to do it and also I want to find a more general way to do it. I thought there may be a more elegant way to do it. Thanks Frank Date: Tue, 2 Dec 2008 07:42:27 +0200 From: [EMAIL PROTECTED] To:

Re: [Numpy-discussion] bug in ma.masked_all()?

2008-12-01 Thread Eric Firing
Pierre, Your change fixed masked_all for the example I gave, but I think it introduced a new failure in zeros: dt = np.dtype([((' Pressure, Digiquartz [db]', 'P'), 'f4'), ((' Depth [salt water, m]', 'D'), 'f4'), ((' Temperature [ITS-90, deg C]', 'T'), 'f4'), ((' Descent Rate [m/s]', 'w'),
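The dtype in the report is flattened above; a shortened, cleaned-up version of the kind of titled dtype involved, keeping only two of the fields (whether the failing call was np.zeros or np.ma.zeros is not visible in the snippet):

    import numpy as np

    # Each field is ((title, name), format).
    dt = np.dtype([((' Pressure, Digiquartz [db]', 'P'), 'f4'),
                   ((' Depth [salt water, m]', 'D'), 'f4')])

    z = np.zeros((2,), dtype=dt)        # plain zeros with titled fields
    mz = np.ma.zeros((2,), dtype=dt)    # the masked variant is presumably what the ma change touches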

Re: [Numpy-discussion] fast way to convolve a 2d array with 1d filter

2008-12-01 Thread Charles R Harris
On Mon, Dec 1, 2008 at 11:14 PM, frank wang [EMAIL PROTECTED] wrote: This is what I thought to do. However, I am not sure whether this is a fast way to do it and also I want to find a more general way to do it. I thought there may be a more elegant way to do it. Thanks Frank Well, for