Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-08 Thread dmitrey
Matthieu Brucher wrote: Matlab surely relies on MKL to do this (Matlab ships with MKL or ACML now). The latest Intel library handles multiprocessing, so if you want to use multithreading, use MKL (and it can handle quad-cores with no sweat). So Numpy is multithreaded. I have an AMD processor

[Numpy-discussion] how to work with mercurial and numpy right now

2008-01-08 Thread Ondrej Certik
Hi, if you want to play with Mercurial now (without forcing everyone else to leave svn), I suggest this: http://cheeseshop.python.org/pypi/hgsvn I tried that and it works. It's a very easy way to create an hg mirror on your computer. And then you can take this as the official upstream repository

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-08 Thread David Cournapeau
Matthieu Brucher wrote: Oops, save for the /trunk:1-2871 part. This should be deleted before a commit to the trunk, I think. Yes, that's what I (quite unclearly) meant: since revision numbers are per-repository in svn, I don't understand the point of tracking trunk

Re: [Numpy-discussion] how to work with mercurial and numpy right now

2008-01-08 Thread David Cournapeau
Ondrej Certik wrote: Hi, if you want to play with Mercurial now (without forcing everyone else to leave svn), I suggest this: http://cheeseshop.python.org/pypi/hgsvn I tried that and it works. It's a very easy way to create an hg mirror on your computer. And then you can take this as the

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-08 Thread Matthieu Brucher
In fact, the trunk should be tracked from all the branches, although there will be the problem with merging the different branches (I did not have many troubles with that, but I only tried with a few differences) into the trunk. I don't think only one branch wants to be up to date with

Re: [Numpy-discussion] [C++-sig] Overloading sqrt(5.5)*myvector

2008-01-08 Thread Konrad Hinsen
On Jan 7, 2008, at 19:57, Travis E. Oliphant wrote: Alternatively, the generic scalar operations should probably not be so inclusive and should allow the other object a chance to perform the operation more often (by returning NotImplemented). That would be great. In fact, this has been (and

Re: [Numpy-discussion] how to work with mercurial and numpy right now

2008-01-08 Thread David M. Cooke
On Jan 8, 2008, at 04:36 , David Cournapeau wrote: Ondrej Certik wrote: Hi, if you want to play with Mercurial now (without forcing everyone else to leave svn), I suggest this: http://cheeseshop.python.org/pypi/hgsvn I tried that and it works. It's a very easy way to create a hg mirror

Re: [Numpy-discussion] how to work with mercurial and numpy right now

2008-01-08 Thread David Cournapeau
David M. Cooke wrote: On Jan 8, 2008, at 04:36 , David Cournapeau wrote: Ondrej Certik wrote: Hi, if you want to play with Mercurial now (without forcing everyone else to leave svn), I suggest this: http://cheeseshop.python.org/pypi/hgsvn I tried that and it works. It's a very

Re: [Numpy-discussion] how to work with mercurial and numpy right now

2008-01-08 Thread David M. Cooke
On Jan 8, 2008, at 07:16 , David Cournapeau wrote: David M. Cooke wrote: AFAIK, all the tools can specify a svn revision to start from, if you don't need history (or just recent history). Are you sure? bzr-svn does not do it (logically, since bzr-svn can pull/push), and I don't see any

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-08 Thread Matthieu Brucher
MKL does the multithreading on its own for level 3 BLAS instructions (OpenMP). For ACML, the problem is that AMD does not provide a CBLAS interface and is not interested in doing so. With ACML, the compilation fails with the current Numpy, but hopefully with Scons it will work, at least

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-08 Thread Kevin Jacobs [EMAIL PROTECTED]
On 1/8/08, Matthieu Brucher [EMAIL PROTECTED] wrote: I have an AMD processor so I guess I should use ACML somehow instead. However, first I would prefer my code to be platform-independent, and second, unfortunately I haven't found anything in the numpy documentation (on the scipy.org website and

Re: [Numpy-discussion] parallel numpy - any info?

2008-01-08 Thread Ray Schumacher
At 04:27 AM 1/8/2008, you wrote: 4. Re: parallel numpy (by Brian Granger) - any info? (Matthieu Brucher) From: Matthieu Brucher [EMAIL PROTECTED] MKL does the multithreading on its own for level 3 BLAS instructions (OpenMP). There was brief debate yesterday among the Pythonians in the

Re: [Numpy-discussion] parallel numpy - any info?

2008-01-08 Thread Matthieu Brucher
2008/1/8, Ray Schumacher [EMAIL PROTECTED]: At 04:27 AM 1/8/2008, you wrote: 4. Re: parallel numpy (by Brian Granger) - any info? (Matthieu Brucher) From: Matthieu Brucher [EMAIL PROTECTED] MKL does the multithreading on its own for level 3 BLAS instructions (OpenMP). There

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-08 Thread David Cournapeau
Matthieu Brucher wrote: MKL does the multithreading on its own for level 3 BLAS instructions (OpenMP). For ACML, the problem is that AMD does not provide a CBLAS interface and is not interested in doing so. With ACML, the compilation fails with the current

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-08 Thread Brian Granger
As others have mentioned, the quickest and easiest way of getting these things is to build numpy against a LAPACK/BLAS that has threading support enabled. I have not played with this, but there is no reason it shouldn't work out of the box. On Jan 7, 2008 2:26 PM, dmitrey [EMAIL PROTECTED]
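
A quick way to see which BLAS/LAPACK numpy was actually built against, and whether a level-3 BLAS call picks up multiple cores, is a sketch along these lines (numpy.show_config() and numpy.dot are standard; the matrix size and the expectation that threading shows up here are only illustrative):

    import numpy as np
    import time

    # Report the BLAS/LAPACK libraries numpy was linked against
    # (MKL, ACML, ATLAS, or the bundled lapack_lite fallback).
    np.show_config()

    # Matrix-matrix multiplication is a level-3 BLAS operation; if the
    # underlying library is threaded, the extra cores should show up here.
    n = 2000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    t0 = time.time()
    c = np.dot(a, b)
    print("dot of two %dx%d matrices took %.2f s" % (n, n, time.time() - t0))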

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-08 Thread Brian Granger
Yes, the problem in this implementation is that it uses pthreads for synchronization instead of spin locks with a work pool implementation tailored to numpy. The thread synchronization overhead is horrible (300,000-400,000 clock cycles) and swamps anything other than very large arrays. I

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Charles R Harris
On Jan 7, 2008 1:09 PM, Travis E. Oliphant [EMAIL PROTECTED] wrote: Darren Dale wrote: One of my collaborators would like to use 16bit float arrays. According to http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in numpy.core.numerictypes, it appears that this

Re: [Numpy-discussion] parallel numpy (by Brian Granger) - any info?

2008-01-08 Thread Brian Granger
On Jan 8, 2008 3:33 AM, Matthieu Brucher [EMAIL PROTECTED] wrote: I have an AMD processor so I guess I should use ACML somehow instead. However, first I would prefer my code to be platform-independent, and second, unfortunately I haven't found anything in the numpy documentation (on the website

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Anne Archibald
On 08/01/2008, Charles R Harris [EMAIL PROTECTED] wrote: I'm starting to get interested in implementing float16 support ;) My tentative program goes something like this: 1) Add the operators to the scalar type. This will give sorting, basic printing, addition, etc. 2) Add conversions to
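
For reference, the format under discussion (IEEE 754 half precision) uses 1 sign bit, 5 exponent bits, and 10 mantissa bits with an exponent bias of 15. A rough pure-Python sketch of decoding a 16-bit pattern to a Python float, purely to illustrate the layout and not any proposed numpy implementation:

    def half_to_float(bits):
        # Illustrative decoder for a 16-bit IEEE 754 half-precision pattern:
        # 1 sign bit, 5 exponent bits, 10 mantissa bits, exponent bias 15.
        sign = -1.0 if (bits >> 15) & 1 else 1.0
        exponent = (bits >> 10) & 0x1F
        mantissa = bits & 0x3FF
        if exponent == 0:                      # zero or subnormal
            return sign * mantissa * 2.0 ** -24
        if exponent == 0x1F:                   # infinity or NaN
            return sign * float('inf') if mantissa == 0 else float('nan')
        return sign * (1.0 + mantissa / 1024.0) * 2.0 ** (exponent - 15)

    print(half_to_float(0x3C00))   # 1.0
    print(half_to_float(0xC000))   # -2.0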

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Charles R Harris
On Jan 8, 2008 12:03 PM, Anne Archibald [EMAIL PROTECTED] wrote: On 08/01/2008, Charles R Harris [EMAIL PROTECTED] wrote: I'm starting to get interested in implementing float16 support ;) My tentative program goes something like this: 1) Add the operators to the scalar type. This will

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-08 Thread Robert Kern
David Cournapeau wrote: Matthieu Brucher wrote: Oops, save for the /trunk:1-2871 part. This should be deleted before a commit to the trunk, I think. Yes, that's what I (quite unclearly) meant: since revision numbers are per-repository in svn, I don't understand the

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Anne Archibald
On 08/01/2008, Charles R Harris [EMAIL PROTECTED] wrote: Well, at a minimum people will want to read, write, print, and promote them. That would at least let people work with the numbers, and since my understanding is that the main virtue of the format is compactness for storage and

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Charles R Harris
On Jan 8, 2008 1:09 PM, Anne Archibald [EMAIL PROTECTED] wrote: On 08/01/2008, Charles R Harris [EMAIL PROTECTED] wrote: Well, at a minimum people will want to read, write, print, and promote them. That would at least let people work with the numbers, and since my understanding is that

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Darren Dale
On Tuesday 08 January 2008 03:24:49 pm Charles R Harris wrote: On Jan 8, 2008 1:09 PM, Anne Archibald [EMAIL PROTECTED] wrote: On 08/01/2008, Charles R Harris [EMAIL PROTECTED] wrote: Well, at a minimum people will want to read, write, print, and promote them. That would at least

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Bill Baxter
If you're really going to try to do it, Charles, there's an implementation of float16 in the OpenEXR toolkit. http://www.openexr.com/ Or more precisely it's in the files in the Half/ directory of this: http://download.savannah.nongnu.org/releases/openexr/ilmbase-1.0.1.tar.gz I don't know if it's

[Numpy-discussion] major changes in matplotlib svn

2008-01-08 Thread John Hunter
Apologies for the off-topic post to the numpy list, but we have just committed some potentially code-breaking changes to the matplotlib svn repository, and we want to give as wide a notification to people as possible. Please do not reply to the numpy list, but rather to a matplotlib mailing list.

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Charles R Harris
On Jan 8, 2008 1:58 PM, Bill Baxter [EMAIL PROTECTED] wrote: If you're really going to try to do it, Charles, there's an implementation of float16 in the OpenEXR toolkit. http://www.openexr.com/ Or more precisely it's in the files in the Half/ directory of this:

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Bill Baxter
On Jan 9, 2008 8:03 AM, Charles R Harris [EMAIL PROTECTED] wrote: On Jan 8, 2008 1:58 PM, Bill Baxter [EMAIL PROTECTED] wrote: If you're really going to try to do it, Charles, there's an implementation of float16 in the OpenEXR toolkit. http://www.openexr.com/ Or more precisely it's in

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Charles R Harris
On Jan 8, 2008 5:01 PM, Bill Baxter [EMAIL PROTECTED] wrote: On Jan 9, 2008 8:03 AM, Charles R Harris [EMAIL PROTECTED] wrote: On Jan 8, 2008 1:58 PM, Bill Baxter [EMAIL PROTECTED] wrote: If you're really going to try to do it, Charles, there's an implementation of float16 in the

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Bill Baxter
On Jan 9, 2008 9:18 AM, Charles R Harris [EMAIL PROTECTED] wrote: On Jan 8, 2008 5:01 PM, Bill Baxter [EMAIL PROTECTED] wrote: On Jan 9, 2008 8:03 AM, Charles R Harris [EMAIL PROTECTED] wrote: On Jan 8, 2008 1:58 PM, Bill Baxter [EMAIL PROTECTED] wrote: If you're really going to try to

Re: [Numpy-discussion] def of var of complex

2008-01-08 Thread Robert Kern
Neal Becker wrote: I noticed that if I generate complex rv i.i.d. with var=1, that numpy says: var (real part) -> (close to 1.0) var (imag part) -> (close to 1.0) but var (complex array) -> (close to complex 0) Is that not a strange definition? There is some discussion on this in the

[Numpy-discussion] def of var of complex

2008-01-08 Thread Neal Becker
I noticed that if I generate complex rv i.i.d. with var=1, that numpy says: var (real part) -> (close to 1.0) var (imag part) -> (close to 1.0) but var (complex array) -> (close to complex 0) Is that not a strange definition?
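
The two candidate definitions being debated are easy to compare numerically; in the following sketch the first expression (the second moment of the deviations without a complex conjugate) comes out close to 0+0j for circularly symmetric noise, which is the surprising result reported, while the second (the mean squared modulus of the deviations) comes out close to 2.0, the sum of the real and imaginary variances:

    import numpy as np

    # Complex i.i.d. samples: real and imaginary parts each with variance 1.
    z = np.random.randn(100000) + 1j * np.random.randn(100000)
    m = z.mean()

    print(((z - m) ** 2).mean())        # no conjugate: close to 0+0j
    print((np.abs(z - m) ** 2).mean())  # squared modulus: close to 2.0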

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Eric Firing
Bill Baxter wrote: On Jan 9, 2008 9:18 AM, Charles R Harris [EMAIL PROTECTED] wrote: On Jan 8, 2008 5:01 PM, Bill Baxter [EMAIL PROTECTED] wrote: On Jan 9, 2008 8:03 AM, Charles R Harris [EMAIL PROTECTED] wrote: On Jan 8, 2008 1:58 PM, Bill Baxter [EMAIL PROTECTED] wrote: If you're really

[Numpy-discussion] numpy.linalg.eigvals crashes when calling lapack_lite.pyd

2008-01-08 Thread Simon
Newbie here. Trying to generate eigenvalues from a matrix using: print numpy.linalg.eigvals(matrix) This works with small matrices, say 5 x 5, but causes python to crash on larger matrices, say 136 x 136, which is not really very large. Setup: Win XP SP2 Python 2.5.1 (from .msi) numpy 1.0.4
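
A minimal way to check whether the crash depends on the particular data (which was not posted) or on the linear algebra backend is to feed numpy.linalg.eigvals a random matrix of the reported size; the random input here is only a stand-in for the original matrix:

    import numpy as np

    # A small case, which works for the original poster.
    print(np.linalg.eigvals(np.random.rand(5, 5)))

    # The size reported to crash the interpreter on
    # Win XP SP2 / Python 2.5.1 / numpy 1.0.4.
    print(np.linalg.eigvals(np.random.rand(136, 136)))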

Re: [Numpy-discussion] def of var of complex

2008-01-08 Thread Neal Becker
Robert Kern wrote: Neal Becker wrote: I noticed that if I generate complex rv i.i.d. with var=1, that numpy says: var (real part) -> (close to 1.0) var (imag part) -> (close to 1.0) but var (complex array) -> (close to complex 0) Is that not a strange definition? There is some

Re: [Numpy-discussion] def of var of complex

2008-01-08 Thread Robert Kern
Charles R Harris wrote: Suppose you have a set of z_i and want to choose z to minimize the average square error $ \sum_i |z_i - z|^2 $. The solution is that $z=\mean{z_i}$ and the resulting average error is given by 2). Note that I didn't mention Gaussians anywhere. No distribution is
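
The claim that the mean minimizes the average squared modulus of the deviations follows from a short computation; with $\bar{z} = \frac{1}{N}\sum_i z_i$ denoting the sample mean,

$$ \frac{1}{N}\sum_i |z_i - z|^2 = \frac{1}{N}\sum_i |z_i - \bar{z}|^2 + |z - \bar{z}|^2 , $$

since the cross term vanishes ($\sum_i (z_i - \bar{z}) = 0$). The minimum is therefore attained at $z = \bar{z}$, with residual $\frac{1}{N}\sum_i |z_i - \bar{z}|^2 = \frac{1}{N}\sum_i |z_i|^2 - |\bar{z}|^2$, which is definition 2) from the thread.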

Re: [Numpy-discussion] def of var of complex

2008-01-08 Thread Robert Kern
Neal Becker wrote: 2 is what I expected. Suppose I have a complex signal x, with additive Gaussian noise (i.i.d, real and imag are independent). y = x + n Not only do the real and imag marginal distributions have to be independent, but also of the same scale, i.e. Re(n) ~ Gaussian(0,

Re: [Numpy-discussion] Using svnmerge on numpy: am I missing something ?

2008-01-08 Thread David Cournapeau
Robert Kern wrote: David Cournapeau wrote: Matthieu Brucher wrote: Oops, save for the /trunk:1-2871 part. This should be deleted before a commit to the trunk, I think. Yes, that's what I (quite unclearly) meant: since revision numbers are per-repository in

Re: [Numpy-discussion] def of var of complex

2008-01-08 Thread Charles R Harris
On Jan 8, 2008 7:48 PM, Robert Kern [EMAIL PROTECTED] wrote: Charles R Harris wrote: Suppose you have a set of z_i and want to choose z to minimize the average square error $ \sum_i |z_i - z|^2 $. The solution is that $z=\mean{z_i}$ and the resulting average error is given by 2). Note

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread David Cournapeau
Charles R Harris wrote: On Jan 8, 2008 1:58 PM, Bill Baxter [EMAIL PROTECTED] wrote: If you're really going to try to do it, Charles, there's an implementation of float16 in the OpenEXR toolkit. http://www.openexr.com/ Or more precisely it's in

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Charles R Harris
On Jan 8, 2008 6:49 PM, Eric Firing [EMAIL PROTECTED] wrote: Bill Baxter wrote: On Jan 9, 2008 9:18 AM, Charles R Harris [EMAIL PROTECTED] wrote: On Jan 8, 2008 5:01 PM, Bill Baxter [EMAIL PROTECTED] wrote: On Jan 9, 2008 8:03 AM, Charles R Harris [EMAIL PROTECTED] wrote: On Jan 8,

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread David Cournapeau
Charles R Harris wrote: On Jan 8, 2008 6:49 PM, Eric Firing [EMAIL PROTECTED] wrote: Bill Baxter wrote: On Jan 9, 2008 9:18 AM, Charles R Harris [EMAIL PROTECTED] wrote: On Jan 8, 2008 5:01 PM, Bill Baxter

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Charles R Harris
On Jan 8, 2008 8:42 PM, David Cournapeau [EMAIL PROTECTED] wrote: Charles R Harris wrote: On Jan 8, 2008 6:49 PM, Eric Firing [EMAIL PROTECTED] wrote: Bill Baxter wrote: On Jan 9, 2008 9:18 AM, Charles R Harris [EMAIL PROTECTED]

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread David Cournapeau
Charles R Harris wrote: On Jan 8, 2008 8:42 PM, David Cournapeau [EMAIL PROTECTED] wrote: Charles R Harris wrote: On Jan 8, 2008 6:49 PM, Eric Firing [EMAIL PROTECTED]

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Charles R Harris
On Jan 8, 2008 8:55 PM, David Cournapeau [EMAIL PROTECTED] wrote: Charles R Harris wrote: On Jan 8, 2008 8:42 PM, David Cournapeau [EMAIL PROTECTED] wrote: Charles R Harris wrote: On Jan 8, 2008 6:49 PM, Eric Firing [EMAIL PROTECTED]

Re: [Numpy-discussion] def of var of complex

2008-01-08 Thread Travis E. Oliphant
Robert Kern wrote: Neal Becker wrote: I noticed that if I generate complex rv i.i.d. with var=1, that numpy says: var (real part) -> (close to 1.0) var (imag part) -> (close to 1.0) but var (complex array) -> (close to complex 0) Is that not a strange definition? 2. Take a

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Robert Kern
Charles R Harris wrote: I see that there are already a number of parsers available for Python; SPARK, for instance, is included in the 2.5.1 distribution. No, it isn't. -- Robert Kern

Re: [Numpy-discussion] def of var of complex

2008-01-08 Thread Robert Kern
Charles R Harris wrote: On Jan 8, 2008 7:48 PM, Robert Kern [EMAIL PROTECTED] wrote: Charles R Harris wrote: Suppose you have a set of z_i and want to choose z to minimize the average square error $ \sum_i |z_i - z|^2 $. The solution is that

Re: [Numpy-discussion] numpy.linalg.eigvals crashes when calling lapack_lite.pyd

2008-01-08 Thread Charles R Harris
On Jan 8, 2008 6:49 PM, Simon [EMAIL PROTECTED] wrote: Newbie here. Trying to generate eigenvalues from a matrix using: print numpy.linalg.eigvals(matrix) This works with small matrices, say 5 x 5, but causes python to crash on larger matrices, say 136 x 136, which is not really very

Re: [Numpy-discussion] def of var of complex

2008-01-08 Thread Robert Kern
Travis E. Oliphant wrote: Robert Kern wrote: Neal Becker wrote: I noticed that if I generate complex rv i.i.d. with var=1, that numpy says: var (real part) -> (close to 1.0) var (imag part) -> (close to 1.0) but var (complex array) -> (close to complex 0) Is that not a strange

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Charles R Harris
On Jan 8, 2008 10:00 PM, Robert Kern [EMAIL PROTECTED] wrote: Charles R Harris wrote: I see that there are already a number of parsers available for Python, SPARK, for instance is included in the 2.5.1 distribution. No, it isn't. Oops, so it isn't. Looks like this news item at the spark

Re: [Numpy-discussion] parallel numpy - any info?

2008-01-08 Thread Albert Strasheim
Hello On Jan 8, 2008 5:31 PM, Ray Schumacher [EMAIL PROTECTED] wrote: At 04:27 AM 1/8/2008, you wrote: 4. Re: parallel numpy (by Brian Granger) - any info? (Matthieu Brucher) From: Matthieu Brucher [EMAIL PROTECTED] MKL does the multithreading on its own for level 3 BLAS

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread David Cournapeau
Charles R Harris wrote: On Jan 8, 2008 8:55 PM, David Cournapeau [EMAIL PROTECTED] wrote: Charles R Harris wrote: On Jan 8, 2008 8:42 PM, David Cournapeau [EMAIL PROTECTED]

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Matthieu Brucher
This can also be true of C code unless you use compilers in the same family. There are also issues, but they are much simpler. The C++ name mangling can be worked around. Name mangling is just the tip of the iceberg. There are problems wrt static initialization, exceptions, etc.; ABI

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread David Cournapeau
Matthieu Brucher wrote: This can also be true of C code unless you use compilers in the same family. There are also issues, but they are much simpler. The C++ name mangling can be worked around. Name mangling is just the tip of the iceberg. There are problems wrt

Re: [Numpy-discussion] numpy.linalg.eigvals crashes when calling lapack_lite.pyd

2008-01-08 Thread Simon
Charles R Harris charlesr.harris at gmail.com writes: On Jan 8, 2008 6:49 PM, Simon simonpy2008 at gmail.com wrote: Newbie here. Trying to generate eigenvalues from a matrix using: print numpy.linalg.eigvals(matrix) This works with small matrices, say 5 x 5, but causes python to crash on

Re: [Numpy-discussion] Does float16 exist?

2008-01-08 Thread Matthieu Brucher
This is no longer the case, honestly. I have used C++ compilers from different vendors for some time, and I have had almost no problems, save for some template depth issues. C++ ability is not so much a problem with recent compilers, I agree. But not all platforms are or can use a recent C++