Gael Varoquaux wrote:
On Tue, Feb 17, 2009 at 10:18:11AM -0600, Robert Kern wrote:
On Tue, Feb 17, 2009 at 10:16, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
On Tue, Feb 17, 2009 at 09:09:38AM -0600, Robert Kern wrote:
np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
stride_tricks are
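The truncated reply above mentions stride_tricks; a minimal sketch of what such an approach could look like (an assumption on my part, not the original poster's code): build a zero-stride view of 2x2 blocks, then collapse the block axes with a reshape (which copies).

import numpy as np
from numpy.lib.stride_tricks import as_strided

x = np.arange(6).reshape(2, 3)

# Plain approach from the thread: repeat along both axes.
y = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

# stride_tricks sketch: each element viewed as a 2x2 block via zero strides.
r, c = x.shape
blocks = as_strided(x, shape=(r, 2, c, 2),
                    strides=(x.strides[0], 0, x.strides[1], 0))
z = blocks.reshape(2 * r, 2 * c)   # the reshape makes the actual copy
assert (y == z).all()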
Hi numpist people,
I discovered ufuncs and their ability to compute results into
preallocated arrays:
>>> a = arange(10, dtype=float32)
>>> b = arange(10, dtype=float32) + 1
>>> c = add(a, b, a)
>>> c is a
True
>>> a
array([  1.,   3.,   5.,   7.,   9.,  11.,  13.,  15.,  17.,  19.], dtype=float32)
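For reference, the third argument to add() above is the output array; it can also be spelled with the out keyword (a minimal equivalent, assuming current NumPy):

import numpy as np

a = np.arange(10, dtype=np.float32)
b = np.arange(10, dtype=np.float32) + 1

# Write the sum into a's preallocated buffer; no new array is allocated.
np.add(a, b, out=a)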
Matthieu Brucher matthieu.brucher at gmail.com writes:
B has a reference to A.
Could you be more specific? Where is this reference stored? What C api
functions are used?
Matthieu
2009/2/18 Neal Becker ndbecker2 at gmail.com:
How is it ensured, at the C api level, that when I have an
2009/2/18 Neal Becker ndbeck...@gmail.com:
Matthieu Brucher matthieu.brucher at gmail.com writes:
B has a reference to A.
Could you be more specific? Where is this reference stored? What C api
functions are used?
I don't remember, and I don't have the Numpy book here. But if B is a
view
On Wed, Feb 18, 2009 at 01:02:54PM +, Neal Becker wrote:
B has a reference to A.
Could you be more specific? Where is this reference stored?
In [1]: import numpy as np
In [2]: a = np.empty(10)
In [3]: b = a[::2]
In [4]: b.base is a
Out[4]: True
Gaël
2009/2/18 Neal Becker ndbeck...@gmail.com:
Matthieu Brucher matthieu.brucher at gmail.com writes:
B has a reference to A.
Could you be more specific? Where is this reference stored? What C api
functions are used?
I'm probably not qualified to be much more specific; these links should
Neal Becker wrote:
How is it ensured, at the C api level, that when I have an array A, and a view
of it B, that the data is not destroyed until both A and B are?
One array, A, owns the data and will deallocate it only when its
reference count goes to 0. The view, B, has a reference to A.
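A small sketch illustrating that ownership chain from Python (illustrative names, not from the original message):

import numpy as np

A = np.empty(10)          # A owns its buffer
B = A[::2]                # B is a strided view into the same buffer
print(A.flags.owndata)    # True
print(B.flags.owndata)    # False
print(B.base is A)        # True: B holds the reference that keeps A alive
del A                     # B's reference still pins the data
print(B)                  # safe: the buffer lives until B is gone too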
Thanks Chuck; that's perfect.
Ken
--
Message: 1
Date: Tue, 17 Feb 2009 11:04:56 -0700
From: Charles R Harris charlesr.har...@gmail.com
Subject: Re: [Numpy-discussion] Compute multiple outer products without a
On Tue, Feb 17, 2009 at 11:37 PM, David Cournapeau
da...@ar.media.kyoto-u.ac.jp wrote:
Charles R Harris wrote:
Oh, and this should be avoided:
if (endptr != NULL) *endptr = (char*)p;
Folks have different views about whether the single statement should
be in brackets but no
Hi all
I'm fairly pleased to announce the release of Ironclad v0.8.1; it's not
an enormous technical leap above v0.8, but it does now enable you to
import and use SciPy and Matplotlib with IronPython on Win32 (with some
restrictions; see project page). Downloads, and more details, are
Hi Paul, list:
Thanks for the reply.
numpy.sum() does indeed have a dtype argument for the accumulator.
numpy.sum() does not implement an inner (dot) product, just a
straight summation.
The feature I'm requesting is to add a similar accumulator type
argument for numpy.dot().
After
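Until such an argument exists, a workaround sketch (assuming the goal is float64 accumulation over float32 inputs):

import numpy as np

a = np.arange(10, dtype=np.float32)
b = np.arange(10, dtype=np.float32)

# Upcast the operands so dot() accumulates in float64.
acc = np.dot(a.astype(np.float64), b.astype(np.float64))

# Or use sum(), which does take a dtype for the accumulator; note the
# elementwise product itself is still computed in float32 here.
acc2 = np.sum(a * b, dtype=np.float64)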
On Thu, Feb 19, 2009 at 2:37 AM, David Henderson
dav...@ipac.caltech.edu wrote:
Hi Paul, list:
Thanks for the reply.
numpy.sum() does indeed have a dtype argument for the accumulator.
numpy.sum() does not implement an inner (dot) product, just a
straight summation.
You may be
Hello,
Would it be possible to include the following rgb to hsv conversion code in
scipy (probably in misc along with misc.imread, etc.)?
What do you think?
Thanks in advance.
Best regards,
--
Nicolas Pinto
Ph.D. Candidate, Brain Computer Sciences
Massachusetts Institute of Technology, USA
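For context, a minimal vectorized sketch of such a conversion (mirroring the scalar colorsys.rgb_to_hsv recipe; the function name and the (..., 3) array layout are assumptions, not necessarily the code that was proposed):

import numpy as np

def rgb_to_hsv(rgb):
    # rgb: float array in [0, 1] with shape (..., 3); returns HSV in [0, 1].
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc = rgb.max(axis=-1)
    minc = rgb.min(axis=-1)
    delta = maxc - minc
    v = maxc
    s = np.where(maxc > 0, delta / np.where(maxc == 0, 1, maxc), 0)
    safe = np.where(delta == 0, 1, delta)   # avoid divide-by-zero for grays
    rc = (maxc - r) / safe
    gc = (maxc - g) / safe
    bc = (maxc - b) / safe
    h = np.where(maxc == r, bc - gc,
                 np.where(maxc == g, 2.0 + rc - bc, 4.0 + gc - rc))
    h = np.where(delta == 0, 0.0, (h / 6.0) % 1.0)
    return np.stack([h, s, v], axis=-1)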
Hi,
I have a shiny new computer with 8 cores and numpy still takes forever
to compile --- is there a way to compile it in parallel (make -j9)?
Does distutils allow that? If not, should we move to some build system
that does? Just wanted to check if there is some reason for that,
apart from
On Wed, Feb 18, 2009 at 18:14, Ondrej Certik ond...@certik.cz wrote:
Hi,
I have a shiny new computer with 8 cores and numpy still takes forever
to compile --- is there a way to compile it in parallel (make -j9)?
Does distutils allow that?
No. numscons will, though.
If not, should we move to
I have a shiny new computer with 8 cores and numpy still takes forever
to compile
Yes, forever/8 = forever.
Sturla Molden
On Thu, Feb 19, 2009 at 02:50:01AM +0100, Sturla Molden wrote:
I have a shiny new computer with 8 cores and numpy still takes forever
to compile
Yes, forever/8 = forever.
Good point. nan_to_num() could be helpful here.
--
Matthew Miller mat...@mattdm.org
On Thu, Feb 19, 2009 at 9:14 AM, Ondrej Certik ond...@certik.cz wrote:
Hi,
I have a shiny new computer with 8 cores and numpy still takes forever
to compile
Forever? It takes one minute to build :) scipy takes forever, but that
is because of C++ more than anything else.
--- is there a way
On Thu, Feb 19, 2009 at 10:50 AM, Sturla Molden stu...@molden.no wrote:
I have a shiny new computer with 8 cores and numpy still takes forever
to compile
Yes, forever/8 = forever.
Not if you are a physician: my impression in undergrad was that
infinity / 8 could be anything from 0 to
On Wed, Feb 18, 2009 at 8:00 PM, David Cournapeau courn...@gmail.comwrote:
On Thu, Feb 19, 2009 at 10:50 AM, Sturla Molden stu...@molden.no wrote:
I have a shiny new computer with 8 cores and numpy still takes forever
to compile
Yes, forever/8 = forever.
Not if you are a physician:
On Thu, Feb 19, 2009 at 11:07 AM, Ryan May rma...@gmail.com wrote:
Not to nitpick, but this is the second time I've seen this lately:
physician == medical doctor != physicist :)
You're right of course - the French word for physicist being
physicien, it may be one more mistake perpetuated by
David Cournapeau wrote:
No, and it never will. Parallel builds require building with
dependency handling. Even make does not handle it well: it works most
of the time by accident, but there are numerous problems (try, for
example, building lapack with make -j8 on your 8-core machine - it
will
On Wed, Feb 18, 2009 at 8:19 PM, David Cournapeau courn...@gmail.comwrote:
On Thu, Feb 19, 2009 at 11:07 AM, Ryan May rma...@gmail.com wrote:
Not to nitpick, but this is the second time I've seen this lately:
physician == medical doctor != physicist :)
You're right of course - the
Hi all,
Is there a way to calculate the norm along an axis? For example:
a = np.array((
(3,4),
(6,8)))
And I want to get:
array([5.0, 10.0])
I currently use a for loop to achieve this. Is there a more elegant way
to do it?
--
Cheers,
Grissiom
Christian Heimes wrote:
David Cournapeau wrote:
No, and it never will. Parallel builds require building with
dependency handling. Even make does not handle it well: it works most
of the time by accident, but there are numerous problems (try, for
example, building lapack with make -j8 on
Grissiom,
Using the following doesn't require any loop:
In [9]: sqrt((a**2.).sum(1))
Out[9]: array([ 5., 10.])
Best,
On Wed, Feb 18, 2009 at 10:23 PM, Grissiom chaos.pro...@gmail.com wrote:
Hi all,
Is there a way to calculate the norm along an axis? For example:
a = np.array((
On Thu, Feb 19, 2009 at 12:12, Nicolas Pinto pi...@mit.edu wrote:
Grissiom,
Using the following doesn't require any loop:
In [9]: sqrt((a**2.).sum(1))
Out[9]: array([ 5., 10.])
Best,
Got it~ Thanks really ;)
--
Cheers,
Grissiom
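For reference, later NumPy versions grew an axis keyword on np.linalg.norm that does the same thing (added in NumPy 1.8, well after this thread):

import numpy as np

a = np.array([[3., 4.],
              [6., 8.]])

# Row-wise Euclidean norm, same result as sqrt((a**2).sum(1)).
print(np.linalg.norm(a, axis=1))   # [  5.  10.]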
Michael Abshoff wrote:
David Cournapeau wrote:
Christian Heimes wrote:
David Cournapeau wrote:
Hi,
You may call me naive and ignorant, but is it really that hard to achieve
some kind of poor man's concurrency? You don't have to parallelize
everything to get a speedup on
David Cournapeau wrote:
Michael Abshoff wrote:
David Cournapeau wrote:
Hi David,
With Sage we do the cythonization in parallel and for now build
extensions serially, but we have code to do that in parallel, too. Given
that we are building 180 extensions or so, the speedup is linear. I often
Michael Abshoff wrote:
Sure, it also works for incremental builds and I do that many, many
times a day, i.e. for each patch I merge into the Sage library. What
gets recompiled is decided by our own dependency tracking code which we
want to push into Cython itself. Figuring out dependencies
David Cournapeau wrote:
* Integration with setuptools and eggs, which enables things like
namespace packages.
This is not. Eggs are not specified, and are totally implementation-defined.
I tried some time ago to add an egg builder to scons, but I gave up.
And I don't think you can reuse
On Thu, Feb 19, 2009 at 3:24 PM, Andrew Straw straw...@astraw.com wrote:
David Cournapeau wrote:
* Integration with setuptools and eggs, which enables things like
namespace packages.
This is not. Eggs are not specified, and are totally implementation-defined.
I tried some time ago to add an
David Cournapeau wrote:
Michael Abshoff wrote:
Hi David,
Sure, it also works for incremental builds and I do that many, many
times a day, i.e. for each patch I merge into the Sage library. What
gets recompiled is decided by our own dependency tracking code which we
want to push into
David Cournapeau wrote:
On Thu, Feb 19, 2009 at 3:24 PM, Andrew Straw straw...@astraw.com wrote:
It's an interesting idea to build Python package distributions without
distutils. For pure Python installables, if all you seek to improve on is
distutils, the bar seems fairly low.
:) Being
On Thu, Feb 19, 2009 at 4:26 PM, Andrew Straw straw...@astraw.com wrote:
Maybe if you need a level of backward compatibility (and really, to
gain a decent audience for this idea, I think you do need some level of
backward compatibility), the new tool could emit setup.py files for
consumption