[Numpy-discussion] building inplace with numpy.distutils?

2009-05-13 Thread Robert Cimrman
Hi (David)!

I am evaluating numpy.distutils as a build/install system for my project 
- is it possible to build the extension modules in-place so that the 
project can be used without installing it?  A pointer to documentation 
concerning this would be handy... Currently I use a regular Makefile for 
the build, which works quite well, but is not very portable and does not 
solve the package installation.

Otherwise let me say that numpy.distutils works very well, much better 
than the plain old distutils.

Best regards,
r.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] copy and paste arrays from matlab

2009-05-13 Thread Robin
[crossposted to numpy-discussion and mlabwrap-user]

Hi,

Please find attached Python code for the opposite direction - ie
format Python arrays for copy and pasting into an interactive Matlab
session.

It doesn't look as nice because newlines are row separators in Matlab,
so I put everything on one line. Also, there's no way that I know of to
input arrays with more than two dimensions in Matlab without using reshape.

In [286]: from mmat import mmat
In [289]: x = rand(4,2)
In [290]: mmat(x,'%2.3f')
[ 0.897 0.074 ;   0.005 0.174 ;   0.207 0.736 ;   0.453 0.111 ]
In [287]: mmat(x,'%2.3f')
reshape([  [ 0.405 0.361 0.609 ;   0.249 0.275 0.620 ;   0.740 0.754
0.699 ;   0.280 0.053 0.181 ] [ 0.796 0.114 0.720 ;   0.296 0.692
0.352 ;   0.218 0.894 0.818 ;   0.709 0.946 0.860 ] ],[ 4 3 2 ])
In [288]: mmat(x)
reshape([  [ 4.046905655728e-01 3.605995195844e-01 6.089653771166e-01
;   2.491999503702e-01 2.751880043180e-01 6.199629932480e-01 ;
7.401974485581e-01 7.537929345351e-01 6.991798908866e-01 ;
2.800494872019e-01 5.258468515210e-02 1.812706305994e-01 ] [
7.957907133899e-01 1.144010574386e-01 7.203522053853e-01 ;
2.962977637560e-01 6.920657079182e-01 3.522371076632e-01 ;
2.181950954650e-01 8.936401263709e-01 8.177351741233e-01 ;
7.092517323839e-01 9.458774967489e-01 8.595104463863e-01 ] ],[ 4 3 2
])

Hope someone else finds it useful.

Cheers

Robin

On Tue, May 12, 2009 at 2:12 PM, Robin robi...@gmail.com wrote:
 [crossposted to numpy-discussion and mlabwrap-user]

 Hi,

 I wrote a little utility class in Matlab that inherits from double and
 overloads the display function so you can easily print matlab arrays
 of arbitrary dimension in Numpy format for easy copy and pasting.

 I have to work a lot with other people's code - and while mlabwrap and
 reading and writing is great, sometimes I find it easier and quicker
 just to copy and paste smaller arrays between interactive sessions.

 Anyway, you put it in your Matlab path and then you can do
 x = rand(2,3,4,5);
 a = array(x)

 You can specify the fprintf style format string either in the
 constructor or after:
 a = array(x,'%2.6f')
 a.format = '%2.2f'

 eg:
 x = rand(4,3,2);
 array(x)
 ans =

 array([[[2.071566461449581e-01, 3.501602151029837e-02],
        [1.589135260727248e-01, 3.766891927380323e-01],
        [8.757206127846399e-01, 7.259276565938600e-01]],

       [[7.570839415557700e-01, 3.974969411279816e-02],
        [8.109207856487061e-01, 5.043242527988604e-01],
        [6.351863794630047e-01, 7.013280585980169e-01]],

       [[8.863281096304466e-01, 9.885678912262633e-01],
        [4.765077527169480e-01, 7.634956792870943e-01],
        [9.728134909163066e-02, 4.588908258125032e-01]],

       [[4.722298594969571e-01, 6.861815984603373e-01],
        [1.162875322461844e-01, 4.887479677951201e-02],
        [9.084394562396312e-01, 5.822948089552498e-01]]])

 It's been a while since I've tried to do anything like this in Matlab and I
 must admit I found it pretty painful, so I hope it can be useful to
 someone else!

 I will try and do one for Python for copying and pasting to Matlab,
 but I'm expecting that to be a lot easier!

 Cheers

 Robin



mmat.py
Description: Binary data


Re: [Numpy-discussion] copy and paste arrays from matlab

2009-05-13 Thread josef . pktd
On Wed, May 13, 2009 at 12:39 PM, Robin robi...@gmail.com wrote:
 [crossposted to numpy-discussion and mlabwrap-user]

 Hi,

 Please find attached Python code for the opposite direction - ie
 format Python arrays for copy and pasting into an interactive Matlab
 session.

 It doesn't look as nice because newlines are row separators in Matlab,
 so I put everything on one line. Also, there's no way that I know of to
 input arrays with more than two dimensions in Matlab without using reshape.

You could use ``...`` as row continuation, and the Matlab help
mentions ``cat`` to build multi-dimensional arrays.
But cat seems to require nesting for more than 3 dimensions, so it is
not really an improvement over reshape.
>> C = cat(4, cat(3,[1,1;2,3],[1,2;3,3]), cat(3,[1,1;2,3],[1,2;3,3]));
>> size(C)
ans =
     2     2     2     2
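Incidentally, getting the element order right matters here: Matlab's reshape consumes its input column-major, so a formatter like mmat has to emit elements in Fortran order for the round-trip to come out right. A numpy-side sketch of the principle (illustrative; not mmat's actual code):

```python
import numpy as np

x = np.arange(24, dtype=float).reshape(4, 3, 2)  # C-ordered numpy array

# Matlab's reshape(v, [4 3 2]) fills column-major, so the formatter must
# emit elements in Fortran (column-major) order for the round-trip to work.
flat = x.ravel(order='F')
back = flat.reshape(4, 3, 2, order='F')
assert (back == x).all()
```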

Thanks, it will be useful.

Josef




[Numpy-discussion] (no subject)

2009-05-13 Thread David J Strozzi
Hi,

[You may want to edit the numpy homepage numpy.scipy.org to tell 
people they must subscribe to post, and add a link to 
http://www.scipy.org/Mailing_Lists]


Many of you probably know of the interpreter yorick by Dave Munro. As 
a Livermoron, I use it all the time.  There are some built-in 
functions there, analogous to but above and beyond numpy's sum() and 
diff(), which are quite useful for common operations on gridded data. 
Of course one can write their own, but maybe they should be cleanly 
canonized?

For instance:

x = linspace(0,10,10)
y = sin(x)

It is common, say when integrating y(x), to take point-centered 
data and want to zone-center it:

def zcen(x): return 0.5*(x[0:-1] + x[1:])

I = sum(zcen(y)*diff(x))

Besides zcen, yorick has builtins for point centering, un-zone 
centering, etc.  Also, due to its slick syntax you can give these 
things as array indexes:

x(zcen), y(dif), z(:,sum,:)
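A runnable numpy version of the zcen idea above (zcen is yorick's name, not an existing numpy function):

```python
import numpy as np

def zcen(x):
    """Zone-center point data: average adjacent points (length n -> n-1)."""
    x = np.asarray(x)
    return 0.5 * (x[:-1] + x[1:])

x = np.linspace(0, 10, 10)
y = np.sin(x)

# Integral of y over x, written with zcen + diff
I = np.sum(zcen(y) * np.diff(x))
```

This is just the trapezoidal rule written with array operations.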


Just some thoughts,
David Strozzi


Re: [Numpy-discussion] Are masked arrays slower for processing than ndarrays?

2009-05-13 Thread Pierre GM
All,
I just committed (r6994) some modifications to numpy.ma.getdata (Eric  
Firing's patch) and to the ufunc wrappers that were too slow with  
large arrays. We're roughly 3 times faster than we used to be, but still 
slower than the equivalent classic ufuncs (no surprise here).

Here's the catch: it's basically cheating. I got rid of the
pre-processing (where a mask was calculated depending on the domain, and
the input was set to a filling value depending on this mask, before the
actual computation). Instead, I force
np.seterr(divide='ignore', invalid='ignore') before calling the ufunc
on the .data part, then mask the invalid values (if any) and reset the
corresponding entries in .data to the input. Finally, I reset the
error status. All in all, we're still data-friendly, meaning that the
value below a masked entry is the same as the input, but we can't say
that values initially masked are discarded (they're used in the
computation but reset to their initial value)...
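In outline, the approach described above could be sketched like this (illustrative code using the np.errstate context manager; masked_divide is a made-up name, not the actual numpy.ma implementation):

```python
import numpy as np

def masked_divide(data, other):
    # Skip the domain pre-processing entirely: compute first with error
    # reporting silenced, then mask whatever came out invalid and put the
    # original input values back under the mask ("data-friendly").
    with np.errstate(divide='ignore', invalid='ignore'):
        result = np.true_divide(data, other)
    invalid = ~np.isfinite(result)
    result[invalid] = data[invalid]
    return result, invalid

r, mask = masked_divide(np.array([1.0, 2.0]), np.array([0.0, 4.0]))
# r is [1.0, 0.5] with mask [True, False]
```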

This playing around with the error status may (or may not, I don't  
know) cause some problems down the road.
It's still faaar faster than computing the domain (especially  
_DomainSafeDivide) when the inputs are large...
I'd be happy if you could give it a try and send some feedback.

Cheers
P.

On May 9, 2009, at 8:17 PM, Eric Firing wrote:

 Eric Firing wrote:

 Pierre,

 ... I pressed send too soon.  There are test failures with the  
 patch I attached to my last message.  I think the basic ideas are  
 correct, but evidently there are wrinkles to be worked out.  Maybe  
 putmask() has to be used instead of where() (putmask is much faster)  
 to maintain the ability to do *= and similar, and maybe there are  
 other adjustments. Somehow, though, it should be possible to get  
 decent speed for simple multiplication and division; a 10x penalty  
 relative to ndarray operations is just too much.
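As an aside on the putmask() point: unlike where(), putmask() modifies its argument in place, which is what in-place operators like *= need. A small illustration:

```python
import numpy as np

a = np.arange(6, dtype=float)
mask = a > 3

# where() builds and returns a new array...
b = np.where(mask, -1.0, a)

# ...whereas putmask() modifies its first argument in place, which is
# what an in-place operator like *= needs.
np.putmask(a, mask, -1.0)
assert (a == b).all()
```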

 Eric


 Eli Bressert wrote:
 Hi,

 I'm using masked arrays to compute large-scale standard deviation,
 multiplication, gaussian, and weighted averages. At first I thought
 using the masked arrays would be a great way to sidestep looping
 (which it is), but it's still slower than expected. Here's a snippet
 of the code that I'm using it for.
 [...]
 # Like the spatial_weight section, this takes about 20 seconds
 W = spatial_weight / Rho2

 # Takes less than one second.
 Ave = np.average(av_good,axis=1,weights=W)

 Any ideas on why it would take such a long time for processing?
 A part of the slowdown is what looks to me like unnecessary copying  
 in _MaskedBinaryOperation.__call__.  It is using getdata, which  
 applies numpy.array to its input, forcing a copy.  I think the copy  
 is actually unintentional, in at least one sense, and possibly two:  
 first, because the default argument of getattr is always evaluated,  
 even if it is not needed; and second, because the call to np.array  
 is used where np.asarray or equivalent would suffice.
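Both effects are easy to demonstrate in a small sketch (the getattr pattern below stands in for the one getdata uses; expensive_default is an illustrative name):

```python
import numpy as np

x = np.arange(5)

# np.array copies its input by default; np.asarray passes an ndarray
# through untouched.
assert np.array(x) is not x   # fresh copy
assert np.asarray(x) is x     # no copy

# The default argument of getattr is evaluated unconditionally, even
# when the attribute exists and the default is then thrown away:
calls = []
def expensive_default():
    calls.append(1)
    return np.array(x)

_ = getattr(x, 'T', expensive_default())
assert calls == [1]           # the "unused" default was still built
```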
 The first file attached below shows the kernprof in the case of  
 multiplying two masked arrays, shape (10,50), with no masked  
 elements; 2/3 of the time is taken copying the data.
 Now, if there are actually masked elements in the arrays, it gets  
 much worse: see the second attachment.  The total time has  
 increased by more than a factor of 3, and the culprit is  
 numpy.where(), a very slow function.  It looks to me like it is  
 doing nothing useful at all; the numpy binary operation is still  
 being executed for all elements, regardless of mask, contrary to  
 the intention implied by the comment in the code.
 The third attached file has a patch that fixes the getdata problem  
 and eliminates the where() call.
 With this patch applied we get the profile in the 4th file, to be  
 compared to the second profile.  Much better.  I am pretty sure it  
 could still be sped up quite a bit, though.  It looks like the  
 masks are essentially being calculated twice for no good reason,  
 but I don't completely understand all the mask considerations, so  
 at this point I am not trying to fix that problem.
 Eric
 Especially the spatial_weight and W variables? Would there be a faster
 way to do this? Or is there a way that numpy.std can ignore
 nan's when processing?

 Thanks,

 Eli Bressert


Re: [Numpy-discussion] Are masked arrays slower for processing than ndarrays?

2009-05-13 Thread Stéfan van der Walt
Hi Pierre

2009/5/14 Pierre GM pgmdevl...@gmail.com:
 This playing around with the error status may (or may not, I don't
 know) cause some problems down the road.

I see the buildbot is complaining on SPARC.  Not sure if it is
complaining about your commit, but it might be worth checking out
nonetheless.

Cheers
Stéfan


Re: [Numpy-discussion] FAIL: Test bug in reduceat with structured arrays

2009-05-13 Thread David Warde-Farley
On 11-May-09, at 10:55 AM, Pauli Virtanen wrote:

 Wonder why buildbot's 64-bit SPARC boxes don't see this if it's  
 something
 connected to 64-bitness...

Different endianness, maybe? That seems even weirder, honestly.

David


Re: [Numpy-discussion] Are masked arrays slower for processing than ndarrays?

2009-05-13 Thread Matt Knox
Hi Pierre,

 Here's the catch: it's basically cheating. I got rid of the pre- 
 processing (where a mask was calculated depending on the domain and  
 the input set to a filling value depending on this mask, before the  
 actual computation). Instead, I  force  
 np.seterr(divide='ignore',invalid='ignore') before calling the ufunc  

This isn't a thread-safe approach and could cause weird side effects in a
multi-threaded application. I think modifying global options/variables inside
any function where it generally wouldn't be expected by the user is a bad idea.

- Matt



Re: [Numpy-discussion] Are masked arrays slower for processing than ndarrays?

2009-05-13 Thread Pierre GM

On May 13, 2009, at 7:36 PM, Matt Knox wrote:

 Here's the catch: it's basically cheating. I got rid of the pre-
 processing (where a mask was calculated depending on the domain and
 the input set to a filling value depending on this mask, before the
 actual computation). Instead, I  force
 np.seterr(divide='ignore',invalid='ignore') before calling the ufunc

 This isn't a thread-safe approach and could cause weird side effects in a
 multi-threaded application. I think modifying global options/variables
 inside any function where it generally wouldn't be expected by the user
 is a bad idea.

Whine. I was afraid of something like that...
Two options, then:
* We revert to computing a mask beforehand. That looks like the part 
that takes the most time w/ domained operations (according to Robert 
K's profiler. Robert, you deserve a statue for this tool). And that 
doesn't solve the problem of power, anyway: how do you compute the 
domain of power?
* We reimplement masked versions of the ufuncs in C. Won't happen from 
me anytime soon (this fall or winter, maybe...)
Also, importing numpy.ma currently calls numpy.seterr(all='ignore') 
anyway...

So that's a -1 from Matt. Anybody else ?



Re: [Numpy-discussion] Are masked arrays slower for processing than ndarrays?

2009-05-13 Thread Matthew Brett
Hi,

 Whine. I was afraid of something like that...
 2 options, then:
 * We revert to computing a mask beforehand. That looks like the part
 that takes the most time w/ domained operations (according to Robert
 K's profiler. Robert, you deserve a statue for this tool). And that
 doesn't solve the problem of power, anyway: how do you compute the domain
 of power?
 * We reimplement masked versions of the ufuncs in C. Won't happen from
 me anytime soon (this fall or winter, maybe...)
 Also, importing numpy.ma currently calls numpy.seterr(all='ignore')
 anyway...

I'm afraid I don't know the code at all, so count this as a "seems good",
but I had the feeling that the change is good for speed but possibly
bad for stability / readability?

In that case it seems right not to do that, and wait until someone
needs speed enough to write it in C or similar...

Best,

Matthew


Re: [Numpy-discussion] Are masked arrays slower for processing than ndarrays?

2009-05-13 Thread Robert Kern
On Wed, May 13, 2009 at 18:36, Matt Knox mattknox...@gmail.com wrote:
 Hi Pierre,

 Here's the catch: it's basically cheating. I got rid of the pre-
 processing (where a mask was calculated depending on the domain and
 the input set to a filling value depending on this mask, before the
 actual computation). Instead, I  force
 np.seterr(divide='ignore',invalid='ignore') before calling the ufunc

 This isn't a thread-safe approach and could cause weird side effects in a
 multi-threaded application. I think modifying global options/variables inside
 any function where it generally wouldn't be expected by the user is a bad
 idea.

seterr() uses thread-local storage.
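For what it's worth, np.errstate wraps seterr in a context manager, so the change can be scoped to a block and the previous (per-thread) state restored on exit; a sketch:

```python
import numpy as np

before = np.geterr()
with np.errstate(divide='ignore', invalid='ignore'):
    # no divide-by-zero warning is emitted inside the block
    r = np.array([1.0]) / np.array([0.0])
assert np.isinf(r[0])
assert np.geterr() == before   # previous error state restored on exit
```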

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Are masked arrays slower for processing than ndarrays?

2009-05-13 Thread Matt Knox
 Robert Kern robert.kern at gmail.com writes:

 seterr() uses thread-local storage.

Oh. I stand corrected. Ignore my earlier objections then.

 Pierre GM pgmdevlist at gmail.com writes:

 Also, importing numpy.ma currently calls numpy.seterr(all='ignore')  
 anyway...

hmm. While this doesn't affect me personally... I wonder if everyone is aware of
this. Importing modules generally shouldn't have side effects either, I would
think. Has this always been the case for the masked array module?

- Matt



Re: [Numpy-discussion] Are masked arrays slower for processing than ndarrays?

2009-05-13 Thread Pierre GM

On May 13, 2009, at 8:07 PM, Matt Knox wrote:

 hmm. While this doesn't affect me personally... I wonder if everyone  
 is aware of
 this. Importing modules generally shouldn't have side effects either  
 I would
 think. Has this always been the case for the masked array module?

Well, I can't remember, actually... I was indeed surprised to see it was 
there. I guess I must have added it when working on the power section. I 
will get rid of it in the next commit; this is clearly bad practice on 
my part. Bad, bad Pierre.


Re: [Numpy-discussion] FAIL: Test bug in reduceat with structured arrays

2009-05-13 Thread Charles R Harris
On Wed, May 13, 2009 at 5:18 PM, David Warde-Farley d...@cs.toronto.eduwrote:

 On 11-May-09, at 10:55 AM, Pauli Virtanen wrote:

  Wonder why buildbot's 64-bit SPARC boxes don't see this if it's
  something
  connected to 64-bitness...

 Different endianness, maybe? That seems even weirder, honestly.


I managed to get an error on 32-bit Fedora, but it was a one-off sort of
thing. I'll see if it shows up again.

Chuck


Re: [Numpy-discussion] building inplace with numpy.distutils?

2009-05-13 Thread David Cournapeau
Robert Cimrman wrote:
 Hi (David)!

 I am evaluating numpy.distutils as a build/install system for my project 
 - is it possible to build the extension modules in-place so that the 
 project can be used without installing it?  A pointer to documentation 
 concerning this would be handy... Currently I use a regular Makefile for 
 the build, which works quite well, but is not very portable and does not 
 solve the package installation.

 Otherwise let me say that numpy.distutils work very well, much better 
 than the plain old distutils.
   

In-place builds can be set up with the -i option:

python setup.py build_ext -i

I think it is a plain distutils option.

cheers,

David


[Numpy-discussion] numpy slices limited to 32 bit values?

2009-05-13 Thread Glenn Tarbox, PhD
I'm using the latest version of Sage (3.4.2), which is Python 2.5 and numpy
something or other (I will do more digging presently).

I'm able to map large files and access all the elements, unless I'm using
slices.

so, for example:

fp = np.memmap('/mnt/hdd/data/mmap/numpy1e10.mmap', dtype='float64',
               mode='r+', shape=(10000000000,))

which is 1e10 doubles, if you don't wanna count the zeros

gives full access to a 75 GB memory image

But when I do:

fp[:] = 1.0
np.sum(fp)

I get 1410065408.0  as the result
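For what it's worth, the anomalous sum is exactly 10**10 reduced modulo 2**32, which is consistent with a 32-bit counter being used somewhere in the slicing path:

```python
# The observed element count is 10**10 wrapped to an unsigned 32-bit value:
assert 10**10 % 2**32 == 1410065408
```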

Interestingly, I can do:

fp[9999999999] = 3.0

and get the proper result stored and can read it back.

So, it appears to me that slicing is limited to 32 bit values

Trying to push it a bit, I tried making my own slice

myslice = slice(1410065408, 9999999999)

and using it like
fp[myslice]=1.0

but it returns immediately, having changed nothing. The slice creation
appears to work, in that I can get the values back out and all... but
inside numpy it seems to get thrown out.

My guess is that internally the Python slice in 2.5 is 32-bit, even on my
64-bit version of Python / numpy.

The good news is that it looks like the hard stuff (i.e. very large mmapped
files) works... but slicing is, for some reason, limited to 32 bits.

Am I missing something?

-glenn

-- 
Glenn H. Tarbox, PhD ||  206-274-6919
http://www.tarbox.org


Re: [Numpy-discussion] building inplace with numpy.distutils?

2009-05-13 Thread Robert Cimrman
David Cournapeau wrote:
 
 In-place builds can be setup with the -i option:
 
 python setup.py build_ext -i
 
 I think it is a plain distutils option.

I have tried

python setup.py build --inplace

which did not work, and --help didn't help either; that is why I asked 
here. But I was close :)

thank you!
r.