On Wed, Jan 26, 2011 at 12:28 PM, Mark Wiebe mwwi...@gmail.com wrote:
On Tue, Jan 25, 2011 at 5:18 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
On Tue, Jan 25, 2011 at 1:13 PM, Travis Oliphant
oliph...@enthought.com wrote:
It may make sense for a NumPy 1.6 to come out in
Hi,
I just noticed a document/implementation conflict with tril and triu.
According to the tril documentation, it should return an array of the same shape and data-type
as the input. But this is not the case, at least with dtype bool.
The input shape is referred to as (M, N) in tril and triu, but as (N, M) in
tri.
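The report can be reproduced with a short check (a sketch, not code from the original message; array values are arbitrary). The docs promise that tril returns an array of the same shape and dtype as the input, which was reportedly not the case for bool in NumPy 1.5:

```python
import numpy as np

# The docs promise tril returns an array with the same shape and dtype
# as the input; check that with a boolean array.
m = np.ones((3, 4), dtype=bool)
lower = np.tril(m)
print(lower.shape, lower.dtype)
assert lower.shape == m.shape  # shape is preserved

# np.tri's shape arguments are documented as (N, M), but they mean the
# same (rows, columns) order that tril/triu call (M, N):
assert np.tri(3, 4).shape == (3, 4)
```

Current releases preserve the bool dtype here; the thread's point is that the 1.5 behavior and the documentation disagreed.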
On Wed, Jan 26, 2011 at 7:22 AM, eat e.antero.ta...@gmail.com wrote:
Hi,
I just noticed a document/implementation conflict with tril and triu.
According to the tril documentation, it should return an array of the same shape and data-type
as the input. But this is not the case, at least with dtype bool.
The input
Hi,
On Wed, Jan 26, 2011 at 2:35 PM, josef.p...@gmail.com wrote:
On Wed, Jan 26, 2011 at 7:22 AM, eat e.antero.ta...@gmail.com wrote:
Hi,
I just noticed a document/implementation conflict with tril and triu.
According to the tril documentation, it should return an array of the same shape and data-type
as
On Wed, Jan 26, 2011 at 6:47 PM, Dag Sverre Seljebotn
da...@student.matnat.uio.no wrote:
On 01/26/2011 02:05 AM, David wrote:
On 01/26/2011 01:42 AM, Charles R Harris wrote:
Hi All,
Just thought it was time to start discussing a release schedule for
numpy 2.0 so we have something to aim at.
Hi there!
I have successfully built numpy 1.5 on ubuntu lucid (32 for now).
I think I got ATLAS/lapack/BLAS support, and if I
ldd linalg/lapack_lite.so
I see that my libptf77blas.so etc. are successfully linked. :-)
However, how do I find out if (and where) libamd.a and libumfpack.a
have been
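ldd shows what the extension module links against at runtime; numpy can also report its own build configuration. A minimal check (note: libamd and libumfpack are SuiteSparse/UMFPACK libraries, typically pulled in by scipy's sparse solvers rather than by numpy itself):

```python
import numpy as np

# Print the BLAS/LAPACK libraries numpy was built against.
# This complements `ldd linalg/lapack_lite.so`, which shows only
# what the shared object is dynamically linked to.
np.show_config()
```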
On 01/25/2011 10:28 PM, Mark Wiebe wrote:
On Tue, Jan 25, 2011 at 5:18 PM, Charles R Harris
charlesr.har...@gmail.com mailto:charlesr.har...@gmail.com wrote:
On Tue, Jan 25, 2011 at 1:13 PM, Travis Oliphant
oliph...@enthought.com mailto:oliph...@enthought.com wrote:
On Jan
Hi,
I just confirmed Stefan's answer on one of the examples in
http://www.mathworks.co.jp/matlabcentral/newsreader/view_thread/262996
matlab:
A = randn(100,2)*[2 0;3 0;-1 2]';
A = A + randn(size(A))/3;
[U,S,V] = svd(A);
X = V(:,end)
python:
from numpy import *
A =
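The Python translation is cut off above; a minimal numpy sketch of the same computation (variable names chosen here, not taken from the original message) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
# Same construction as the MATLAB snippet: 100 points that lie in a
# 2-D subspace of R^3, plus noise.  M corresponds to [2 0;3 0;-1 2]'.
M = np.array([[2, 0], [3, 0], [-1, 2]]).T          # shape (2, 3)
A = rng.standard_normal((100, 2)) @ M
A = A + rng.standard_normal(A.shape) / 3

U, s, Vh = np.linalg.svd(A)    # note: numpy returns V transposed
x = Vh[-1, :]                  # MATLAB's V(:,end) is the last row of Vh

# |A @ x| equals the smallest singular value exactly, so x is the
# (approximate) normal of the plane the data lies in.
print(x, np.linalg.norm(A @ x), s[-1])
```

The one pitfall when porting is that MATLAB's `[U,S,V] = svd(A)` returns V, while `np.linalg.svd` returns its transpose, so `V(:,end)` becomes `Vh[-1, :]`.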
On Wed, Jan 26, 2011 at 2:23 AM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:
On Wed, Jan 26, 2011 at 12:28 PM, Mark Wiebe mwwi...@gmail.com wrote:
On Tue, Jan 25, 2011 at 5:18 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
On Tue, Jan 25, 2011 at 1:13 PM, Travis Oliphant
I wrote a new function, einsum, which implements Einstein summation
notation, and I'd like comments/thoughts from people who might be interested
in this kind of thing.
In testing it, it is also faster than many of NumPy's built-in functions,
except for dot and inner. At the bottom of this email
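The announcement above can be illustrated with a few einsum identities against released numpy (a sketch; the array values are arbitrary):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)
v = np.arange(3.0)
sq = np.arange(9).reshape(3, 3)

# Matrix product: contract over the shared index j
assert np.allclose(np.einsum('ij,jk->ik', a, b), np.dot(a, b))
# Inner product of two vectors
assert np.allclose(np.einsum('i,i->', v, v), np.inner(v, v))
# Trace: a repeated index on one operand sums the diagonal
assert np.einsum('ii->', sq) == np.trace(sq)
# Transpose via index reordering
assert np.array_equal(np.einsum('ij->ji', a), a.T)
```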
On Wed, Jan 26, 2011 at 11:27 AM, Mark Wiebe mwwi...@gmail.com wrote:
I wrote a new function, einsum, which implements Einstein summation
notation, and I'd like comments/thoughts from people who might be interested
in this kind of thing.
This sounds really cool! I've definitely considered
On Wed, Jan 26, 2011 at 1:36 PM, Joshua Holbrook josh.holbr...@gmail.com wrote:
On Wed, Jan 26, 2011 at 11:27 AM, Mark Wiebe mwwi...@gmail.com wrote:
I wrote a new function, einsum, which implements Einstein summation
notation, and I'd like comments/thoughts from people who might be
On Wed, Jan 26, 2011 at 12:48 PM, Mark Wiebe mwwi...@gmail.com wrote:
On Wed, Jan 26, 2011 at 1:36 PM, Joshua Holbrook josh.holbr...@gmail.com
wrote:
On Wed, Jan 26, 2011 at 11:27 AM, Mark Wiebe mwwi...@gmail.com wrote:
I wrote a new function, einsum, which implements Einstein summation
On Wed, Jan 26, 2011 at 2:01 PM, Joshua Holbrook josh.holbr...@gmail.com wrote:
snip
How closely coupled is this new code with numpy's internals? That is,
could you factor it out into its own package? If so, then people could
have immediate use out of it without having to integrate it into
On Wed, Jan 26, 2011 at 16:43, Mark Wiebe mwwi...@gmail.com wrote:
On Wed, Jan 26, 2011 at 2:01 PM, Joshua Holbrook josh.holbr...@gmail.com
wrote:
snip
How closely coupled is this new code with numpy's internals? That is,
could you factor it out into its own package? If so, then people could
I think his real question is whether einsum() and the iterator stuff
can live in a separate module that *uses* a released version of numpy
rather than a development branch.
--
Robert Kern
Indeed, I would like to be able to install and use einsum() without
having to install another
Mark,
interesting idea. Given the fact that in 2-d euclidean metric, the
Einstein summation conventions are only a way to write out
conventional matrix multiplications, do you consider at some point to
include a non-euclidean metric in this thing? (As you have in special
relativity, for
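One way to answer the metric question within einsum as proposed: a non-Euclidean metric needs no special support, because the metric tensor can simply appear as an extra operand (a sketch; the vectors here are illustrative values, not from the thread):

```python
import numpy as np

# Minkowski metric with signature (+,-,-,-), as used in special relativity.
g = np.diag([1.0, -1.0, -1.0, -1.0])
x = np.array([2.0, 1.0, 0.0, 3.0])
y = np.array([1.0, 4.0, 2.0, 0.0])

# The invariant product x^i g_ij y^j, written directly in index notation:
s = np.einsum('i,ij,j->', x, g, y)
assert np.isclose(s, x[0] * y[0] - x[1:] @ y[1:])
```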
On Thu, Jan 27, 2011 at 12:18:30AM +0100, Hanno Klemm wrote:
interesting idea. Given the fact that in 2-d euclidean metric, the
Einstein summation conventions are only a way to write out
conventional matrix multiplications, do you consider at some point to
include a non-euclidean metric
On Wed, Jan 26, 2011 at 3:05 PM, Joshua Holbrook josh.holbr...@gmail.com wrote:
I think his real question is whether einsum() and the iterator stuff
can live in a separate module that *uses* a released version of numpy
rather than a development branch.
--
Robert Kern
Indeed, I
On Wed, Jan 26, 2011 at 3:18 PM, Hanno Klemm kl...@phys.ethz.ch wrote:
Mark,
interesting idea. Given the fact that in 2-d euclidean metric, the
Einstein summation conventions are only a way to write out
conventional matrix multiplications, do you consider at some point to
include a
Am 27.01.2011 um 00:29 schrieb Mark Wiebe:
On Wed, Jan 26, 2011 at 3:18 PM, Hanno Klemm kl...@phys.ethz.ch
wrote:
Mark,
interesting idea. Given the fact that in 2-d euclidean metric, the
Einstein summation conventions are only a way to write out
conventional matrix multiplications, do you
On Wednesday, January 26, 2011, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
On Thu, Jan 27, 2011 at 12:18:30AM +0100, Hanno Klemm wrote:
interesting idea. Given the fact that in 2-d euclidean metric, the
Einstein summation conventions are only a way to write out
conventional matrix
Ah, sorry for misunderstanding. That would actually be very difficult,
as the iterator required a fair bit of fixes and adjustments to the core.
The new_iterator branch should be 1.5 ABI compatible, if that helps.
I see. Perhaps the fixes and adjustments can/should be included with
numpy
On Wed, Jan 26, 2011 at 5:02 PM, Joshua Holbrook
josh.holbr...@gmail.com wrote:
Ah, sorry for misunderstanding. That would actually be very difficult,
as the iterator required a fair bit of fixes and adjustments to the core.
The new_iterator branch should be 1.5 ABI compatible, if that helps.
On Wed, Jan 26, 2011 at 7:35 PM, Benjamin Root ben.r...@ou.edu wrote:
On Wednesday, January 26, 2011, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
On Thu, Jan 27, 2011 at 12:18:30AM +0100, Hanno Klemm wrote:
interesting idea. Given the fact that in 2-d euclidean metric, the
Einstein
Nice function, and wonderful that it speeds some tasks up.
some feedback: the following notation is a little counterintuitive to me:
np.einsum('i...->', a)
array([50, 55, 60, 65, 70])
np.sum(a, axis=0)
array([50, 55, 60, 65, 70])
Since there is nothing after the ->, I expected a
On Wed, Jan 26, 2011 at 1:10 PM, Mark Wiebe mwwi...@gmail.com wrote:
On Wed, Jan 26, 2011 at 2:23 AM, Ralf Gommers ralf.gomm...@googlemail.com
wrote:
On Wed, Jan 26, 2011 at 12:28 PM, Mark Wiebe mwwi...@gmail.com wrote:
On Tue, Jan 25, 2011 at 5:18 PM, Charles R Harris
On Wed, Jan 26, 2011 at 6:41 PM, Jonathan Rocher jroc...@enthought.com wrote:
Nice function, and wonderful that it speeds some tasks up.
some feedback: the following notation is a little counterintuitive to me:
np.einsum('i...->', a)
array([50, 55, 60, 65, 70])
np.sum(a,
On Wed, Jan 26, 2011 at 5:23 PM, josef.p...@gmail.com wrote:
snip
So, if I read the examples correctly we finally get dot along an axis
np.einsum('ijk,ji->', a, b)
np.einsum('ijk,jik->k', a, b)
or something like this.
the notation might require getting used to but it doesn't look worse
The only disadvantage I see, is that choosing the axes to operate on
in a program or function requires string manipulation.
One possibility would be for the Python exposure to accept lists or tuples
of integers. The subscript 'ii' could be [(0,0)], and 'ij,jk->ik' could be
[(0,1), (1,2),
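Both points in this exchange can be checked against released numpy: the "dot along an axis" reading of the subscripts, and the interleaved-list interface that numpy's einsum does accept alongside the string form (a sketch; array shapes and values chosen here for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 5, 6))
b = rng.standard_normal((5, 4))

# "dot along an axis": contract i and j, keep k
out = np.einsum('ijk,ji->k', a, b)
expected = np.zeros(6)
for i in range(4):
    for j in range(5):
        expected += a[i, j, :] * b[j, i]
assert np.allclose(out, expected)

# The interleaved-list interface: operands alternate with axis-label
# lists, with an optional output list last.  No string manipulation is
# needed to pick axes programmatically.
m = np.arange(9).reshape(3, 3)
n = np.arange(12).reshape(3, 4)
assert np.einsum(m, [0, 0]) == np.trace(m)                   # 'ii'
assert np.allclose(np.einsum(m, [0, 1], n, [1, 2], [0, 2]),  # 'ij,jk->ik'
                   m @ n)
```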
On Wed, Jan 26, 2011 at 8:29 PM, Joshua Holbrook josh.holbr...@gmail.com wrote:
The only disadvantage I see, is that choosing the axes to operate on
in a program or function requires string manipulation.
One possibility would be for the Python exposure to accept lists or
tuples
of
Samuel John, on 2011-01-26 15:08, wrote:
Hi there!
I have successfully built numpy 1.5 on ubuntu lucid (32 for now).
I think I got ATLAS/lapack/BLAS support, and if I
ldd linalg/lapack_lite.so
I see that my libptf77blas.so etc. are successfully linked. :-)
However, how do I find out,