Matthieu Brucher wrote:
Matlab surely relies on MKL to do this (Matlab ships with MKL or ACML
now). The latest Intel library handles multiprocessing, so if you want
to use multithreading, use MKL (and it can handle quad-cores with no
sweat). So Numpy is multithreaded.
I have an AMD processor
Hi,
if you want to play with Mercurial now (without forcing everyone else
to leave svn), I suggest this:
http://cheeseshop.python.org/pypi/hgsvn
I tried that and it works. It's a very easy way to create a hg mirror
at your computer. And then you can take this
as the official upstream repository
Matthieu Brucher wrote:
Oops, save for the /trunk:1-2871 part. This should be deleted before
a commit to the trunk, I think.
Yes, that's what I (quite unclearly) meant: since revision numbers are
per-repository in svn, I don't understand the point of tracking trunk
Ondrej Certik wrote:
Hi,
if you want to play with Mercurial now (without forcing everyone else
to leave svn), I suggest this:
http://cheeseshop.python.org/pypi/hgsvn
I tried that and it works. It's a very easy way to create a hg mirror
at your computer. And then you can take this
as the
In fact, the trunk should be tracked from all the branches, although
there will be the problem with merging the different branches (I did
not have many troubles with that, but I only tried with a few
differences) into the trunk. I don't think only one branch wants to be
up to date with
On Jan 7, 2008, at 19:57, Travis E. Oliphant wrote:
Alternatively, the generic scalar operations should probably not be so
inclusive and should allow the other object a chance to perform the
operation more often (by returning NotImplemented).
That would be great. In fact, this has been (and
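The mechanism being discussed here is Python's standard binary-operator protocol: when `__add__` returns NotImplemented, the interpreter gives the other operand a chance via its reflected method (`__radd__`). A minimal sketch of that handoff (the `Scalar` and `Wrapper` class names are hypothetical, purely for illustration):

```python
class Scalar:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # Only handle types we recognize; for anything else, defer to
        # the other operand by returning NotImplemented.
        if isinstance(other, Scalar):
            return Scalar(self.value + other.value)
        return NotImplemented


class Wrapper:
    def __init__(self, value):
        self.value = value

    def __radd__(self, other):
        # Called by Python after Scalar.__add__ returns NotImplemented.
        return Wrapper(other.value + self.value)


result = Scalar(1) + Wrapper(2)
print(type(result).__name__, result.value)  # Wrapper 3
```

The less inclusive the left operand's `__add__` is, the more often the right operand gets to perform the operation, which is exactly the behavior proposed for the generic scalar operations.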
On Jan 8, 2008, at 04:36 , David Cournapeau wrote:
Ondrej Certik wrote:
Hi,
if you want to play with Mercurial now (without forcing everyone else
to leave svn), I suggest this:
http://cheeseshop.python.org/pypi/hgsvn
I tried that and it works. It's a very easy way to create a hg mirror
David M. Cooke wrote:
On Jan 8, 2008, at 04:36 , David Cournapeau wrote:
Ondrej Certik wrote:
Hi,
if you want to play with Mercurial now (without forcing everyone else
to leave svn), I suggest this:
http://cheeseshop.python.org/pypi/hgsvn
I tried that and it works. It's a very
On Jan 8, 2008, at 07:16 , David Cournapeau wrote:
David M. Cooke wrote:
AFAIK, all the tools can specify a svn revision to start from, if you
don't need history (or just recent history).
Are you sure? bzr-svn does not do it (logically, since bzr-svn can
pull/push), and I don't see any
MKL does the multithreading on its own for level 3 BLAS instructions
(OpenMP). For ACML, the problem is that AMD does not provide a CBLAS
interface and is not interested in doing so. With ACML, the compilation
fails with the current Numpy, but hopefully with Scons it will work, at
least
On 1/8/08, Matthieu Brucher [EMAIL PROTECTED] wrote:
I have an AMD processor so I guess I should use ACML somehow instead.
However, first, I would prefer my code to be platform-independent, and
second, unfortunately I haven't found anything in the numpy documentation (on
the website scipy.org and
At 04:27 AM 1/8/2008, you wrote:
4. Re: parallel numpy (by Brian Granger) - any info?
(Matthieu Brucher)
From: Matthieu Brucher [EMAIL PROTECTED]
MKL does the multithreading on its own for level 3 BLAS instructions
(OpenMP).
There was brief debate yesterday among the Pythonians in the
2008/1/8, Ray Schumacher [EMAIL PROTECTED]:
At 04:27 AM 1/8/2008, you wrote:
4. Re: parallel numpy (by Brian Granger) - any info?
(Matthieu Brucher)
From: Matthieu Brucher [EMAIL PROTECTED]
MKL does the multithreading on its own for level 3 BLAS instructions
(OpenMP).
There
Matthieu Brucher wrote:
MKL does the multithreading on its own for level 3 BLAS
instructions (OpenMP). For ACML, the problem is that AMD does
not provide a CBLAS interface and is not interested in doing
so. With ACML, the compilation fails with the current
As others have mentioned, the quickest and easiest way of getting
these things is to build numpy against a LAPACK/BLAS that has
threading support enabled. I have not played with this, but there is
no reason it shouldn't work out of the box.
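A quick sanity check that numpy actually picked up an optimized BLAS/LAPACK at build time is `numpy.show_config()`, which prints the libraries it was linked against; a sketch:

```python
import numpy as np

# Print which BLAS/LAPACK libraries numpy was built against
# (MKL, ACML, ATLAS, ... will show up here if they were found).
np.show_config()

# Level-3 BLAS calls such as matrix-matrix multiply are the operations
# a threaded BLAS will parallelize:
a = np.random.rand(500, 500)
b = np.random.rand(500, 500)
c = np.dot(a, b)  # dispatched to *gemm in the linked BLAS
print(c.shape)
```

If the config output shows only the unoptimized fallback, rebuilding numpy against a threaded BLAS is the step that enables the parallelism discussed above.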
On Jan 7, 2008 2:26 PM, dmitrey [EMAIL PROTECTED]
Yes, the problem in this implementation is that it uses pthreads for
synchronization instead of spin locks with a work pool implementation
tailored to numpy. The thread synchronization overhead is horrible
(300,000-400,000 clock cycles) and swamps anything other than very large
arrays. I
On Jan 7, 2008 1:09 PM, Travis E. Oliphant [EMAIL PROTECTED] wrote:
Darren Dale wrote:
One of my collaborators would like to use 16bit float arrays. According to
http://www.scipy.org/Tentative_NumPy_Tutorial, and references to float16 in
numpy.core.numerictypes, it appears that this
On Jan 8, 2008 3:33 AM, Matthieu Brucher [EMAIL PROTECTED] wrote:
I have an AMD processor so I guess I should use ACML somehow instead.
However, first, I would prefer my code to be platform-independent, and
second, unfortunately I haven't found anything in the numpy documentation (on the
website
On 08/01/2008, Charles R Harris [EMAIL PROTECTED] wrote:
I'm starting to get interested in implementing float16 support ;) My
tentative program goes something like this:
1) Add the operators to the scalar type. This will give sorting, basic
printing, addition, etc.
2) Add conversions to
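For reference, the IEEE 754 half-precision format such a float16 type would implement is one sign bit, five exponent bits (bias 15), and ten fraction bits. A pure-Python decoder sketch (the function name is mine, not from the thread):

```python
def half_to_float(h):
    """Decode a 16-bit IEEE 754 half-precision bit pattern to a Python float."""
    sign = (h >> 15) & 0x1
    exp = (h >> 10) & 0x1F
    frac = h & 0x3FF
    if exp == 0:                      # zero or subnormal: no implicit leading 1
        val = (frac / 1024.0) * 2.0 ** -14
    elif exp == 31:                   # all-ones exponent: infinity or NaN
        val = float("inf") if frac == 0 else float("nan")
    else:                             # normal number: implicit leading 1
        val = (1.0 + frac / 1024.0) * 2.0 ** (exp - 15)
    return -val if sign else val


print(half_to_float(0x3C00))  # 1.0
print(half_to_float(0xC000))  # -2.0
```

The encode direction needs the usual rounding and overflow handling, which is where an existing implementation like OpenEXR's Half class (mentioned later in the thread) saves work.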
On Jan 8, 2008 12:03 PM, Anne Archibald [EMAIL PROTECTED] wrote:
On 08/01/2008, Charles R Harris [EMAIL PROTECTED] wrote:
I'm starting to get interested in implementing float16 support ;) My
tentative program goes something like this:
1) Add the operators to the scalar type. This will
David Cournapeau wrote:
Matthieu Brucher wrote:
Oops, save for the /trunk:1-2871 part. This should be deleted before
a commit to the trunk, I think.
Yes, that's what I (quite unclearly) meant: since revision numbers are
per-repository in svn, I don't understand the
On 08/01/2008, Charles R Harris [EMAIL PROTECTED] wrote:
Well, at a minimum people will want to read, write, print, and promote them.
That would at least let people work with the numbers, and since my
understanding is that the main virtue of the format is compactness for
storage and
On Jan 8, 2008 1:09 PM, Anne Archibald [EMAIL PROTECTED] wrote:
On 08/01/2008, Charles R Harris [EMAIL PROTECTED] wrote:
Well, at a minimum people will want to read, write, print, and promote
them.
That would at least let people work with the numbers, and since my
understanding is that
On Tuesday 08 January 2008 03:24:49 pm Charles R Harris wrote:
On Jan 8, 2008 1:09 PM, Anne Archibald [EMAIL PROTECTED] wrote:
On 08/01/2008, Charles R Harris [EMAIL PROTECTED] wrote:
Well, at a minimum people will want to read, write, print, and promote
them.
That would at least
If you're really going to try to do it, Charles, there's an
implementation of float16 in the OpenEXR toolkit.
http://www.openexr.com/
Or more precisely it's in the files in the Half/ directory of this:
http://download.savannah.nongnu.org/releases/openexr/ilmbase-1.0.1.tar.gz
I don't know if it's
Apologies for the off-topic post to the numpy list, but we have just
committed some potentially code-breaking changes to the matplotlib svn
repository, and we want to give as wide a notification to people as
possible. Please do not reply to the numpy list, but rather to a
matplotlib mailing list.
On Jan 8, 2008 1:58 PM, Bill Baxter [EMAIL PROTECTED] wrote:
If you're really going to try to do it, Charles, there's an
implementation of float16 in the OpenEXR toolkit.
http://www.openexr.com/
Or more precisely it's in the files in the Half/ directory of this:
On Jan 9, 2008 8:03 AM, Charles R Harris [EMAIL PROTECTED] wrote:
On Jan 8, 2008 1:58 PM, Bill Baxter [EMAIL PROTECTED] wrote:
If you're really going to try to do it, Charles, there's an
implementation of float16 in the OpenEXR toolkit.
http://www.openexr.com/
Or more precisely it's in
On Jan 8, 2008 5:01 PM, Bill Baxter [EMAIL PROTECTED] wrote:
On Jan 9, 2008 8:03 AM, Charles R Harris [EMAIL PROTECTED]
wrote:
On Jan 8, 2008 1:58 PM, Bill Baxter [EMAIL PROTECTED] wrote:
If you're really going to try to do it, Charles, there's an
implementation of float16 in the
On Jan 9, 2008 9:18 AM, Charles R Harris [EMAIL PROTECTED] wrote:
On Jan 8, 2008 5:01 PM, Bill Baxter [EMAIL PROTECTED] wrote:
On Jan 9, 2008 8:03 AM, Charles R Harris [EMAIL PROTECTED]
wrote:
On Jan 8, 2008 1:58 PM, Bill Baxter [EMAIL PROTECTED] wrote:
If you're really going to try to
Neal Becker wrote:
I noticed that if I generate complex rv i.i.d. with var=1, that numpy says:
var (real part) -> (close to 1.0)
var (imag part) -> (close to 1.0)
but
var (complex array) -> (close to complex 0)
Is that not a strange definition?
There is some discussion on this in the
I noticed that if I generate complex rv i.i.d. with var=1, that numpy says:
var (real part) -> (close to 1.0)
var (imag part) -> (close to 1.0)
but
var (complex array) -> (close to complex 0)
Is that not a strange definition?
Bill Baxter wrote:
On Jan 9, 2008 9:18 AM, Charles R Harris [EMAIL PROTECTED] wrote:
On Jan 8, 2008 5:01 PM, Bill Baxter [EMAIL PROTECTED] wrote:
On Jan 9, 2008 8:03 AM, Charles R Harris [EMAIL PROTECTED]
wrote:
On Jan 8, 2008 1:58 PM, Bill Baxter [EMAIL PROTECTED] wrote:
If you're really
Newbie here. Trying to generate eigenvalues from a matrix using:
print numpy.linalg.eigvals(matrix)
This works with small matrices, say 5 x 5, but causes python to crash on larger
matrices, say 136 x 136, which is not really very large.
Setup:
Win XP SP2
Python 2.5.1 (from .msi)
numpy 1.0.4
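A minimal script reproducing the report is below; on a build with a healthy LAPACK it completes without trouble, so a hard interpreter crash at this size generally points at the BLAS/LAPACK binary numpy was linked against (on Windows, ATLAS binaries built for a newer SSE instruction set than the CPU supports were a commonly reported culprit at the time):

```python
import numpy as np

# Reproduce the report: eigenvalues of a modest 136 x 136 matrix.
n = 136
matrix = np.random.rand(n, n)
w = np.linalg.eigvals(matrix)
print(w.shape)  # one eigenvalue per row: (136,)
```

A 136 x 136 eigenvalue problem is tiny by LAPACK standards, so a crash here is almost certainly an installation issue rather than a size limit.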
Robert Kern wrote:
Neal Becker wrote:
I noticed that if I generate complex rv i.i.d. with var=1, that numpy
says:
var (real part) -> (close to 1.0)
var (imag part) -> (close to 1.0)
but
var (complex array) -> (close to complex 0)
Is that not a strange definition?
There is some
Charles R Harris wrote:
Suppose you have a set of z_i and want to choose z to minimize the
average square error $ \sum_i |z_i - z|^2 $. The solution is that
$z=\overline{z_i}$ and the resulting average error is given by 2). Note that
I didn't mention Gaussians anywhere. No distribution is
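The definition being argued for here, var(z) = E|z - Ez|^2, can be checked numerically; with unit-variance real and imaginary parts the total variance comes out near 2, while the un-conjugated version E(z - Ez)^2 collapses toward complex zero (sketch, using a modern numpy RNG):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
# i.i.d. complex samples: real and imaginary parts each ~ N(0, 1)
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)

var_abs = np.mean(np.abs(z - z.mean()) ** 2)  # E|z - Ez|^2, near 2.0
var_raw = np.mean((z - z.mean()) ** 2)        # E(z - Ez)^2, near 0+0j
print(var_abs, var_raw)
```

Modern numpy's `np.var` uses the absolute-value form for complex input, which matches the minimizer argument above.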
Neal Becker wrote:
2 is what I expected. Suppose I have a complex signal x, with additive
Gaussian noise (i.i.d, real and imag are independent).
y = x + n
Not only do the real and imag marginal distributions have to be independent,
but
also of the same scale, i.e. Re(n) ~ Gaussian(0,
Robert Kern wrote:
David Cournapeau wrote:
Matthieu Brucher wrote:
Oops, save for the /trunk:1-2871 part. This should be deleted before
a commit to the trunk, I think.
Yes, that's what I (quite unclearly) meant: since revision numbers are
per-repository in
On Jan 8, 2008 7:48 PM, Robert Kern [EMAIL PROTECTED] wrote:
Charles R Harris wrote:
Suppose you have a set of z_i and want to choose z to minimize the
average square error $ \sum_i |z_i - z|^2 $. The solution is that
$z=\overline{z_i}$ and the resulting average error is given by 2). Note
Charles R Harris wrote:
On Jan 8, 2008 1:58 PM, Bill Baxter [EMAIL PROTECTED] wrote:
If you're really going to try to do it, Charles, there's an
implementation of float16 in the OpenEXR toolkit.
http://www.openexr.com/
Or more precisely it's in
On Jan 8, 2008 6:49 PM, Eric Firing [EMAIL PROTECTED] wrote:
Bill Baxter wrote:
On Jan 9, 2008 9:18 AM, Charles R Harris [EMAIL PROTECTED]
wrote:
On Jan 8, 2008 5:01 PM, Bill Baxter [EMAIL PROTECTED] wrote:
On Jan 9, 2008 8:03 AM, Charles R Harris [EMAIL PROTECTED]
wrote:
On Jan 8,
Charles R Harris wrote:
On Jan 8, 2008 6:49 PM, Eric Firing [EMAIL PROTECTED] wrote:
Bill Baxter wrote:
On Jan 9, 2008 9:18 AM, Charles R Harris [EMAIL PROTECTED] wrote:
On Jan 8, 2008 5:01 PM, Bill Baxter [EMAIL
On Jan 8, 2008 8:42 PM, David Cournapeau [EMAIL PROTECTED]
wrote:
Charles R Harris wrote:
On Jan 8, 2008 6:49 PM, Eric Firing [EMAIL PROTECTED] wrote:
Bill Baxter wrote:
On Jan 9, 2008 9:18 AM, Charles R Harris [EMAIL PROTECTED]
Charles R Harris wrote:
On Jan 8, 2008 8:42 PM, David Cournapeau [EMAIL PROTECTED] wrote:
Charles R Harris wrote:
On Jan 8, 2008 6:49 PM, Eric Firing [EMAIL PROTECTED]
On Jan 8, 2008 8:55 PM, David Cournapeau [EMAIL PROTECTED]
wrote:
Charles R Harris wrote:
On Jan 8, 2008 8:42 PM, David Cournapeau [EMAIL PROTECTED] wrote:
Charles R Harris wrote:
On Jan 8, 2008 6:49 PM, Eric Firing [EMAIL PROTECTED]
Robert Kern wrote:
Neal Becker wrote:
I noticed that if I generate complex rv i.i.d. with var=1, that numpy says:
var (real part) -> (close to 1.0)
var (imag part) -> (close to 1.0)
but
var (complex array) -> (close to complex 0)
Is that not a strange definition?
2. Take a
Charles R Harris wrote:
I see that there are already a number of parsers available for Python,
SPARK, for instance is included in the 2.5.1 distribution.
No, it isn't.
--
Robert Kern
I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our
Charles R Harris wrote:
On Jan 8, 2008 7:48 PM, Robert Kern [EMAIL PROTECTED] wrote:
Charles R Harris wrote:
Suppose you have a set of z_i and want to choose z to minimize the
average square error $ \sum_i |z_i - z|^2 $. The solution is that
On Jan 8, 2008 6:49 PM, Simon [EMAIL PROTECTED] wrote:
Newbie here. Trying to generate eigenvalues from a matrix using:
print numpy.linalg.eigvals(matrix)
This works with small matrices, say 5 x 5, but causes python to crash on
larger
matrices, say 136 x 136, which is not really very
Travis E. Oliphant wrote:
Robert Kern wrote:
Neal Becker wrote:
I noticed that if I generate complex rv i.i.d. with var=1, that numpy says:
var (real part) -> (close to 1.0)
var (imag part) -> (close to 1.0)
but
var (complex array) -> (close to complex 0)
Is that not a strange
On Jan 8, 2008 10:00 PM, Robert Kern [EMAIL PROTECTED] wrote:
Charles R Harris wrote:
I see that there are already a number of parsers available for Python,
SPARK, for instance is included in the 2.5.1 distribution.
No, it isn't.
Oops, so it isn't. Looks like this news item at the spark
Hello
On Jan 8, 2008 5:31 PM, Ray Schumacher [EMAIL PROTECTED] wrote:
At 04:27 AM 1/8/2008, you wrote:
4. Re: parallel numpy (by Brian Granger) - any info?
(Matthieu Brucher)
From: Matthieu Brucher [EMAIL PROTECTED]
MKL does the multithreading on its own for level 3 BLAS
Charles R Harris wrote:
On Jan 8, 2008 8:55 PM, David Cournapeau [EMAIL PROTECTED] wrote:
Charles R Harris wrote:
On Jan 8, 2008 8:42 PM, David Cournapeau [EMAIL PROTECTED]
This can also be true of C code unless you use compilers in the same
family.
There are also issues, but they are much simpler.
The C++ name mangling can be worked around.
name mangling is just the tip of the iceberg. There are problems with
respect to static initialization, exceptions, etc.; ABI
Matthieu Brucher wrote:
This can also be true of C code unless you use compilers in the same
family.
There are also issues, but they are much simpler.
The C++ name mangling can be worked around.
name mangling is just the tip of the iceberg. There are problems
with respect to
Charles R Harris charlesr.harris at gmail.com writes:
On Jan 8, 2008 6:49 PM, Simon simonpy2008 at gmail.com wrote:
Newbie here. Trying to generate eigenvalues from a matrix using:
print numpy.linalg.eigvals(matrix)
This works with small matrices, say 5 x 5, but causes python to crash on
This is no longer the case, honestly. I have used C++ compilers from
different vendors for some time, and I have had almost no problems save for
some template-depth issues.
C++ support is not so much of a problem with recent compilers, I agree. But
not all platforms have or can use a recent C++