Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread David Cournapeau
On Thu, Feb 19, 2009 at 4:26 PM, Andrew Straw wrote: > Maybe if you need a level of backward compatibility (and really, to gain a decent audience for this idea, I think you do need some level of backward compatibility), the new tool could emit setup.py files for consumption by distutils as…

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Andrew Straw
David Cournapeau wrote: > On Thu, Feb 19, 2009 at 3:24 PM, Andrew Straw wrote: >> It's an interesting idea to build Python package distributions without distutils. For pure Python installables, if all you seek to improve on is distutils, the bar seems fairly low. > :) Being better th…

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Michael Abshoff
Hi David, David Cournapeau wrote: > Michael Abshoff wrote: >> Sure, it also works for incremental builds and I do that many, many times a day, i.e. for each patch I merge into the Sage library. What gets recompiled is decided by our own dependency tracking code, which we want to push i…

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread David Cournapeau
On Thu, Feb 19, 2009 at 3:24 PM, Andrew Straw wrote: > David Cournapeau wrote: >>> * Integration with setuptools and eggs, which enables things like namespace packages. >> This is not: eggs are not specified, and are totally implementation-defined. I tried some time ago to add an egg bu…

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Andrew Straw
David Cournapeau wrote: >> * Integration with setuptools and eggs, which enables things like namespace packages. > This is not: eggs are not specified, and are totally implementation-defined. I tried some time ago to add an egg builder to scons, but I gave up. And I don't think you can…

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread David Cournapeau
Michael Abshoff wrote: > Sure, it also works for incremental builds and I do that many, many times a day, i.e. for each patch I merge into the Sage library. What gets recompiled is decided by our own dependency tracking code, which we want to push into Cython itself. Figuring out dependen…

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread David Cournapeau
Brian Granger wrote: > The reason that Ondrej and I are asking all of these questions is that > we are currently exploring build systems for a couple of projects we > are working on (together). In general, my sense is that distutils > won't be sufficient for our build needs. Thus, we are looking

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Michael Abshoff
Hi David, David Cournapeau wrote: > Michael Abshoff wrote: >> With Sage we do the cythonization in parallel and for now build extensions serially, but we have code to do that in parallel, too. Given that we are building 180 extensions or so, the speedup is linear.

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread David Cournapeau
Michael Abshoff wrote: > David Cournapeau wrote: >> Christian Heimes wrote: >>> You may call me naive and ignorant. Is it really that hard to achieve some kind of poor man's concurrency? You don't have to parallelize everyth…

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Brian Granger
David, Thank you very much for writing this summary up. It collects a lot of useful information about the subtle and difficult issues related to building multi-language python packages. The reason that Ondrej and I are asking all of these questions is that we are currently exploring build system

Re: [Numpy-discussion] linalg.norm along axis?

2009-02-18 Thread Grissiom
On Thu, Feb 19, 2009 at 12:12, Nicolas Pinto wrote: > Grissiom, using the following doesn't require any loop: In [9]: sqrt((a**2.).sum(1)) Out[9]: array([ 5., 10.]) Best, Got it~ Thanks really ;) -- Cheers, Grissiom

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Michael Abshoff
Hi, David Cournapeau wrote: > Christian Heimes wrote: >> You may call me naive and ignorant. Is it really that hard to achieve some kind of poor man's concurrency? You don't have to parallelize everything to get a speed-up on multi-core machines. Usually the compi…
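A minimal sketch of the "poor man's concurrency" idea quoted here: compile independent C sources concurrently and leave the linking step serial. The source file names are hypothetical, this is nobody's actual build tool, and it assumes distutils' CCompiler interface (compilation runs in subprocesses, so plain Python threads parallelize it fine):

    from concurrent.futures import ThreadPoolExecutor
    from distutils.ccompiler import new_compiler
    from distutils.sysconfig import customize_compiler

    sources = ["foo.c", "bar.c", "baz.c"]  # hypothetical, mutually independent sources

    cc = new_compiler()
    customize_compiler(cc)  # pick up the platform's compiler settings

    # each compile() call handles one translation unit; the resulting object
    # files are independent, so running the calls concurrently is safe
    with ThreadPoolExecutor(max_workers=8) as pool:
        objects = list(pool.map(lambda src: cc.compile([src]), sources))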

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread David Cournapeau
On Thu, Feb 19, 2009 at 9:14 AM, Ondrej Certik wrote: > Hi, I have a shiny new computer with 8 cores and numpy still takes forever to compile --- is there a way to compile it in parallel (make -j9)? Do distutils allow that? If not, let's move to some build system that allows that? Sin…

Re: [Numpy-discussion] linalg.norm along axis?

2009-02-18 Thread Nicolas Pinto
Grissiom, using the following doesn't require any loop: In [9]: sqrt((a**2.).sum(1)) Out[9]: array([ 5., 10.]) Best, On Wed, Feb 18, 2009 at 10:23 PM, Grissiom wrote: > Hi all, is there any possibility to calculate the norm along an axis? For example: a = np.array(((3,4), (6,8)))…
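The one-liner above, plus the equivalent built-in call in later NumPy releases (linalg.norm only grew an axis argument in NumPy 1.8, well after this thread):

    import numpy as np

    a = np.array([[3., 4.], [6., 8.]])
    print(np.sqrt((a ** 2).sum(axis=1)))   # [  5.  10.], the loop-free recipe
    print(np.linalg.norm(a, axis=1))       # same result, NumPy >= 1.8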

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread David Cournapeau
Christian Heimes wrote: > David Cournapeau wrote: >> No, and it never will. Parallel builds require proper dependency handling. Even make does not handle it well: it works most of the time by accident, but there are numerous problems (try for example building lapack with make…

[Numpy-discussion] linalg.norm along axis?

2009-02-18 Thread Grissiom
Hi all, is there any possibility to calculate the norm along an axis? For example: a = np.array(((3,4), (6,8))) And I want to get: array([5.0, 10.0]) I currently use a for loop to achieve this; is there any more elegant way to do it? -- Cheers, Grissiom

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Ryan May
On Wed, Feb 18, 2009 at 8:19 PM, David Cournapeau wrote: > On Thu, Feb 19, 2009 at 11:07 AM, Ryan May wrote: >> Not to nitpick, but this is the second time I've seen this lately: physician == medical doctor != physicist :) > You're right of course - the French word for physicist…

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Christian Heimes
David Cournapeau wrote: > No, and it never will. Parallel builds require proper dependency handling. Even make does not handle it well: it works most of the time by accident, but there are numerous problems (try for example building lapack with make -j8 on your 8-core machine - it…

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread David Cournapeau
On Thu, Feb 19, 2009 at 11:07 AM, Ryan May wrote: > Not to nitpick, but this is the second time I've seen this lately: physician == medical doctor != physicist :) You're right of course - the French word for physicist being "physicien", it may be one more mistake perpetuated by the French…

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Ryan May
On Wed, Feb 18, 2009 at 8:00 PM, David Cournapeau wrote: > On Thu, Feb 19, 2009 at 10:50 AM, Sturla Molden wrote: >>> I have a shiny new computer with 8 cores and numpy still takes forever to compile >> Yes, forever/8 = forever. > Not if you are a physician: my impression in unde…

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread David Cournapeau
On Thu, Feb 19, 2009 at 10:50 AM, Sturla Molden wrote: >> I have a shiny new computer with 8 cores and numpy still takes forever to compile > Yes, forever/8 = forever. Not if you are a physician: my impression in undergrad was that infinity / 8 could be anything from 0 to infinity in physi…

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread David Cournapeau
On Thu, Feb 19, 2009 at 9:14 AM, Ondrej Certik wrote: > Hi, I have a shiny new computer with 8 cores and numpy still takes forever to compile Forever? It takes one minute to build :) scipy takes forever, but that is because of C++ more than anything else. > --- is there a way to compile it…

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Matthew Miller
On Thu, Feb 19, 2009 at 02:50:01AM +0100, Sturla Molden wrote: >> I have a shiny new computer with 8 cores and numpy still takes forever to compile > Yes, forever/8 = forever. Good point. nan_to_num() could be helpful here. -- Matthew Miller mat...@mattdm.org

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Sturla Molden
> I have a shiny new computer with 8 cores and numpy still takes forever to compile Yes, forever/8 = forever. Sturla Molden

Re: [Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Robert Kern
On Wed, Feb 18, 2009 at 18:14, Ondrej Certik wrote: > Hi, I have a shiny new computer with 8 cores and numpy still takes forever to compile --- is there a way to compile it in parallel (make -j9)? > Do distutils allow that? No. numscons will, though. > If not, let's move to some build sy…

[Numpy-discussion] parallel compilation of numpy

2009-02-18 Thread Ondrej Certik
Hi, I have a shiny new computer with 8 cores and numpy still takes forever to compile --- is there a way to compile it in parallel (make -j9)? Do distutils allow that? If not, let's move to some build system that allows that? Just wanted to check if there is some reason for that, apart from patch

[Numpy-discussion] rgb_to_hsv in scipy.misc ? (was: Optimizing speed for large-array inter-element algorithms (specifically, color space conversion))

2009-02-18 Thread Nicolas Pinto
Hello, Would it be possible to include the following RGB-to-HSV conversion code in scipy (probably in misc, along with misc.imread, etc.)? What do you think? Thanks in advance. Best regards, -- Nicolas Pinto Ph.D. Candidate, Brain & Computer Sciences Massachusetts Institute of Technology, USA
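The conversion code itself is cut off in this preview, so what follows is only a sketch of the kind of vectorized RGB-to-HSV routine being proposed, not Nicolas's actual code. It assumes float input of shape (..., 3) with channel values in [0, 1]:

    import numpy as np

    def rgb_to_hsv(rgb):
        # rgb: float array of shape (..., 3), values in [0, 1]
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        maxc = rgb.max(axis=-1)
        minc = rgb.min(axis=-1)
        delta = maxc - minc
        v = maxc
        s = np.where(maxc > 0, delta / np.where(maxc > 0, maxc, 1.0), 0.0)
        d = np.where(delta > 0, delta, 1.0)   # dummy divisor for grey pixels
        h = np.where(maxc == r, (g - b) / d,
                     np.where(maxc == g, (b - r) / d + 2.0, (r - g) / d + 4.0))
        h = np.where(delta > 0, (h / 6.0) % 1.0, 0.0)  # hue is 0 for grey pixels
        return np.stack([h, s, v], axis=-1)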

Re: [Numpy-discussion] additional dtype argument to numpy.dot() (Re: Numpy-discussion Digest, Vol 29, Issue 48)

2009-02-18 Thread David Cournapeau
On Thu, Feb 19, 2009 at 2:37 AM, David Henderson wrote: > Hi Paul, list: Thanks for the reply. numpy.sum() does indeed have a dtype argument for the sum's accumulator, but numpy.sum() does not implement an inner (dot) product, just a straight summation. You may be interested in xblas, which…

[Numpy-discussion] additional dtype argument to numpy.dot() (Re: Numpy-discussion Digest, Vol 29, Issue 48)

2009-02-18 Thread David Henderson
Hi Paul, list: Thanks for the reply. numpy.sum() does indeed have a dtype argument for the sum's accumulator, but it does not implement an inner (dot) product, just a straight summation. The feature I'm requesting is a similar accumulator-type argument for numpy.dot(). After tra…
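To make the request concrete, here is the existing numpy.sum() behaviour being referred to; the dtype argument below is real NumPy API, while the analogous argument for numpy.dot() is the feature being requested and does not exist:

    import numpy as np

    a = np.full(10**6, 0.1, dtype=np.float32)
    print(a.sum())                  # accumulated at float32 precision
    print(a.sum(dtype=np.float64))  # float64 accumulator: low-order digits differ
    # np.dot(a, a) offers no such control; it accumulates in the input precision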

[Numpy-discussion] Ironclad v0.8.1 released

2009-02-18 Thread William Reade
Hi all, I'm fairly pleased to announce the release of Ironclad v0.8.1; it's not an enormous technical leap above v0.8, but it does now enable you to import and use SciPy and Matplotlib with IronPython on Win32 (with some restrictions; see the project page). Downloads and more details are availab…

Re: [Numpy-discussion] Warnings in current trunk

2009-02-18 Thread Charles R Harris
On Tue, Feb 17, 2009 at 11:37 PM, David Cournapeau <da...@ar.media.kyoto-u.ac.jp> wrote: > Charles R Harris wrote: >> Oh, and this should be avoided: if (endptr != NULL) *endptr = (char*)p; >> Folks have different views about whether the single statement should be in…

Re: [Numpy-discussion] Compute multiple outer products, without a loop?

2009-02-18 Thread Ken Basye
Thanks Chuck; that's perfect. Ken
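Charles's actual answer is not shown in this digest excerpt, but the standard loop-free way to get one outer product per pair of rows is a broadcast multiply; a small sketch:

    import numpy as np

    a = np.random.rand(5, 3)   # 5 vectors of length 3
    b = np.random.rand(5, 4)   # 5 vectors of length 4
    outers = a[:, :, None] * b[:, None, :]   # shape (5, 3, 4): one outer product per row
    assert np.allclose(outers[2], np.outer(a[2], b[2]))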

Re: [Numpy-discussion] views and object lifetime

2009-02-18 Thread Travis E. Oliphant
Neal Becker wrote: > How is it ensured, at the C API level, that when I have an array A, and a view of it B, that the data is not destroyed until both A and B are? One array, A, owns the data and will deallocate it only when its reference count goes to 0. The view, B, has a reference to…
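The same guarantee is easy to see from Python, where the view's base attribute holds the reference Travis describes (a small illustration of the behaviour, not the C-level mechanics themselves):

    import numpy as np

    a = np.arange(10)
    b = a[::2]      # a view; b.base is the array that owns the data
    assert b.base is a
    del a           # drops one reference; b.base still keeps the owner alive
    print(b.base)   # the owning array is intact, so b's data remains valid
    print(b[:3])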

Re: [Numpy-discussion] views and object lifetime

2009-02-18 Thread Scott Sinclair
2009/2/18 Neal Becker: > Matthieu Brucher writes: >> B has a reference to A. > Could you be more specific? Where is this reference stored? What C API functions are used? I'm probably not qualified to be much more specific; these links should provide the necessary detail:

Re: [Numpy-discussion] views and object lifetime

2009-02-18 Thread Gael Varoquaux
On Wed, Feb 18, 2009 at 01:02:54PM, Neal Becker wrote: >> B has a reference to A. > Could you be more specific? Where is this reference stored? In [1]: import numpy as np In [2]: a = np.empty(10) In [3]: b = a[::2] In [4]: b.base is a Out[4]: True Gaël

Re: [Numpy-discussion] views and object lifetime

2009-02-18 Thread Matthieu Brucher
2009/2/18 Neal Becker: > Matthieu Brucher writes: >> B has a reference to A. > Could you be more specific? Where is this reference stored? What C API functions are used? I don't remember, and I don't have the NumPy book here. But if B is a view on A, a flag indicates that B…

Re: [Numpy-discussion] views and object lifetime

2009-02-18 Thread Neal Becker
Matthieu Brucher writes: > B has a reference to A. Could you be more specific? Where is this reference stored? What C API functions are used? > Matthieu > 2009/2/18 Neal Becker: >> How is it ensured, at the C API level, that when I have an array A, and a view…

Re: [Numpy-discussion] views and object lifetime

2009-02-18 Thread Matthieu Brucher
B has a reference to A. Matthieu 2009/2/18 Neal Becker: > How is it ensured, at the C API level, that when I have an array A, and a view of it B, that the data is not destroyed until both A and B are?

[Numpy-discussion] views and object lifetime

2009-02-18 Thread Neal Becker
How is it ensured, at the C API level, that when I have an array A, and a view of it B, that the data is not destroyed until both A and B are?

[Numpy-discussion] inplace dot products

2009-02-18 Thread Olivier Grisel
Hi numpist people, I discovered ufuncs and their ability to compute results into preallocated arrays: >>> a = arange(10, dtype=float32) >>> b = arange(10, dtype=float32) + 1 >>> c = add(a, b, a) >>> c is a True >>> a array([ 1., 3., 5., 7., 9., 11., 13., 15., 17., 19.],
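Ufuncs take the output array as shown above; for dot products specifically, later NumPy releases added an out= argument to numpy.dot (it was not available when this was posted). A minimal sketch:

    import numpy as np

    a = np.random.rand(100, 100)
    b = np.random.rand(100, 100)
    out = np.empty((100, 100))   # preallocated result buffer
    np.dot(a, b, out=out)        # the product is written into `out`; no new array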

Re: [Numpy-discussion] fancy view question

2009-02-18 Thread Vincent Schut
Gael Varoquaux wrote: > On Tue, Feb 17, 2009 at 10:18:11AM -0600, Robert Kern wrote: >> On Tue, Feb 17, 2009 at 10:16, Gael Varoquaux wrote: >>> On Tue, Feb 17, 2009 at 09:09:38AM -0600, Robert Kern wrote: >>>> np.repeat(np.repeat(x, 2, axis=0), 2, axis=1) > stride_tricks are fun, but thi…
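A quick sketch of the call quoted above: repeating along each axis upsamples every element into a 2x2 block (x here is a hypothetical small array, not one from the thread):

    import numpy as np

    x = np.arange(6).reshape(2, 3)
    y = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)  # each element becomes a 2x2 block
    print(y.shape)   # (4, 6)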