Re: [Numpy-discussion] DVCS at PyCon

2009-03-31 Thread David Cournapeau
Eric Firing wrote:

 I agree.  The PEP does not show overwhelming superiority (or, arguably, 
 even mild superiority) of any alternative.

I think this PEP was poorly written. You can't see any of the
advantage/differences of the different systems. Some people even said
they don't see the differences with svn. I think that is partly because
the PEP focused on existing python workflows, but the whole point,
at least for me, is to change the general workflow (for reviews, code
contributions, etc...). Stephen J. Turnbull sums it up nicely:

http://mail.python.org/pipermail/python-dev/2009-March/087968.html

FWIW, I tend to agree that Hg is less disruptive than git when coming
from svn, at least for the simple tasks (I don't know hg enough to have
a really informed opinion for more advanced workflows).

cheers,

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Optical autocorrelation calculated with numpy is slow

2009-03-31 Thread Jochen S
On Tue, Mar 31, 2009 at 8:54 PM, Jochen S cycoma...@gmail.com wrote:

 On Tue, Mar 31, 2009 at 7:13 AM, João Luís Silva jsi...@fc.up.pt wrote:

 Hi,



 I wrote a script to calculate the *optical* autocorrelation of an
 electric field. It's like the autocorrelation, but sums the fields
 instead of multiplying them. I'm calculating

 I(tau) = integral( abs(E(t)+E(t-tau))**2,t=-inf..inf)


 An autocorrelation is just a convolution, which is a multiplication in
 frequency space. Thus you can do:
 FT_E = fft(E)
 FT_ac=FT_E*FT_E.conj()
 ac = fftshift(ifft(FT_ac))

 where E is your field and ac is your autocorrelation. Also, what sort of
 autocorrelation are you talking about? For instance, SHG autocorrelation is
 an intensity autocorrelation, so the first line should be:
 FT_E = fft(abs(E)**2)


Sorry, I was reading over your example too quickly earlier; you're obviously
using intensity autocorrelation, so what you should be doing is:
FT_E=fft(abs(E)**2)
FT_ac = FT_E*FT_E.conj()
ac = fftshift(ifft(FT_ac))
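For completeness, a self-contained sketch of this FFT approach (the pulse parameters below are made up for illustration; note that a bare product of FFTs gives a *circular* correlation, so the signal is zero-padded first):

```python
import numpy as np

# toy Gaussian pulse on a toy carrier (parameters are made up)
n = 1024
t = np.linspace(-50.0, 50.0, n)
E = np.exp(-(t / 10.0) ** 2) * np.exp(1j * 2.4 * t)

# intensity autocorrelation: correlate I(t) = |E(t)|**2 with itself.
# Zero-pad to length 2n before transforming so the correlation is
# linear (non-wrapping) rather than circular.
I = np.abs(E) ** 2
F = np.fft.fft(I, 2 * n)
ac_lin = np.fft.ifft(F * F.conj()).real   # ac_lin[k] = sum_t I[t] * I[t-k]
ac = np.fft.fftshift(ac_lin)              # zero delay moved to the middle

# spot-check one lag against the direct O(N**2) definition
k = 37
direct = np.dot(I[k:], I[:n - k])
```

The whole curve costs O(N log N) instead of the O(N**2) of the explicit loop.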




 HTH
 Jochen


 with script appended at the end. It's too slow for my purposes (takes ~5
 seconds, and scales ~O(N**2)). numpy's correlate is fast enough, but
 isn't what I need as it multiplies instead of adding the fields. Could you
 help me get this script to run faster (without having to write it in
 another programming language)?

 Thanks,
 João Silva

 #

 import numpy as np
 #import matplotlib.pyplot as plt

 n = 2**12
 n_autocorr = 3*n-2

 c = 3E2
 w0 = 2.0*np.pi*c/800.0
 t_max = 100.0
 t = np.linspace(-t_max/2.0,t_max/2.0,n)

 E = np.exp(-(t/10.0)**2)*np.exp(1j*w0*t)#Electric field

 dt = t[1]-t[0]
 t_autocorr=np.linspace(-dt*n_autocorr/2.0,dt*n_autocorr/2.0,n_autocorr)
 E1 = np.zeros(n_autocorr,dtype=E.dtype)
 E2 = np.zeros(n_autocorr,dtype=E.dtype)
 Ac = np.zeros(n_autocorr,dtype=np.float64)

 E2[n-1:n-1+n] = E[:]

 for i in range(2*n-2):
     E1[:] = 0.0
     E1[i:i+n] = E[:]

     Ac[i] = np.sum(np.abs(E1+E2)**2)

 Ac *= dt

 #plt.plot(t_autocorr,Ac)
 #plt.show()

 #



Re: [Numpy-discussion] Optical autocorrelation calculated with numpy is slow

2009-03-31 Thread João Luís Silva
Charles R Harris wrote:
 That should work. The first two integrals are actually the same, but 
 need to be E(t)*E(t).conj(). The second integral needs twice the real 
 part of E(t)*E(t-tau).conj(). Numpy correlate should really have the 
 conjugate built in, but it doesn't.
 
 Chuck
 

It worked, thanks.

João Silva
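It may help future readers to see that fix spelled out. A runnable sketch, reusing the pulse from the original script: |E1+E2|**2 = |E1|**2 + |E2|**2 + 2 Re(E1 conj(E2)), and with zero padding the first two terms are the constant pulse energy, so only the cross term (the field autocorrelation) depends on the delay:

```python
import numpy as np

# the same toy pulse as in the original script
n = 2 ** 12
c = 3e2
w0 = 2.0 * np.pi * c / 800.0
t = np.linspace(-50.0, 50.0, n)
dt = t[1] - t[0]
E = np.exp(-(t / 10.0) ** 2) * np.exp(1j * w0 * t)

# cross term: field autocorrelation r[k] = sum_t E[t] * conj(E[t-k]),
# an O(N log N) FFT job once the signal is zero-padded
energy = np.sum(np.abs(E) ** 2)
F = np.fft.fft(E, 2 * n)             # pad to avoid circular wrap-around
r = np.fft.ifft(F * F.conj())        # linear field autocorrelation
Ac = (2.0 * energy + 2.0 * r.real) * dt   # I(tau) at lag k = tau/dt

# direct check at one lag, straight from the |E1 + E2|**2 definition
k = 123
direct = (np.sum(np.abs(E[k:] + E[:n - k]) ** 2)
          + np.sum(np.abs(E[:k]) ** 2)
          + np.sum(np.abs(E[n - k:]) ** 2)) * dt
```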



Re: [Numpy-discussion] array of matrices

2009-03-31 Thread Alexandre Fayolle
Le Friday 27 March 2009 23:38:25 Bryan Cole, vous avez écrit :
 I have a number of arrays of shape (N,4,4). I need to perform a
 vectorised matrix-multiplication between pairs of them I.e.
 matrix-multiplication rules for the last two dimensions, usual
 element-wise rule for the 1st dimension (of length N).

 (How) is this possible with numpy?

I think dot will work, though you'll need to work a little bit to get the 
answer:

>>> import numpy as np
>>> a = np.array([[1,2], [3,4]], np.float)
>>> aa = np.array([a,a+1,a+2])
>>> bb = np.array((a*5, a*6, a*7, a*8))
>>> np.dot(aa, bb).shape
(3, 2, 4, 2)
>>> for i, a_ in enumerate(aa):
...     for j, b_ in enumerate(bb):
...         print (np.dot(a_, b_) == np.dot(aa, bb)[i,:,j,:]).all()
...
True
True
True
True
True
True
True
True
True
True
True
True



-- 
Alexandre Fayolle  LOGILAB, Paris (France)
Formations Python, Zope, Plone, Debian:  http://www.logilab.fr/formations
Développement logiciel sur mesure:   http://www.logilab.fr/services
Informatique scientifique:   http://www.logilab.fr/science


Re: [Numpy-discussion] Numpy 1.3.0 rc1 fails find_duplicates on Solaris

2009-03-31 Thread Mark Sienkiewicz
Pauli Virtanen wrote:

 Probably they are both related to unspecified sort order for
 the duplicates. There were some sort-order ignoring missing in the test.

 I think the test is now fixed in trunk:

   http://projects.scipy.org/numpy/changeset/6827
   

The test passes in 1.4.0.dev6827.  Tested on Solaris 8, Mac OSX 10.4 
(Tiger) on x86 and ppc, and both 32 and 64 bit Red Hat Enterprise, all 
with Python 2.5.1.

Thanks for fixing this.

Mark S.



[Numpy-discussion] Windows buildbot

2009-03-31 Thread Charles R Harris
Hi David, Stefan,

The windows buildbot is back online but seems to have a configuration
problem. It would be nice to see that build working before the release;
could you two take a look at the error messages and/or contact
Heller?

Chuck


Re: [Numpy-discussion] array of matrices

2009-03-31 Thread Bryan Cole

 
 I think dot will work, though you'll need to work a little bit to get the 
 answer:
 
  >>> import numpy as np
  >>> a = np.array([[1,2], [3,4]], np.float)
  >>> aa = np.array([a,a+1,a+2])
  >>> bb = np.array((a*5, a*6, a*7, a*8))
  >>> np.dot(aa, bb).shape
 (3, 2, 4, 2)
  >>> for i, a_ in enumerate(aa):
 ...     for j, b_ in enumerate(bb):
 ...         print (np.dot(a_, b_) == np.dot(aa, bb)[i,:,j,:]).all()
 ...
 True
 

Thanks. Your comment has helped me understand what dot() does for
ndims > 2. It's a pity this will be too memory inefficient for large N.

Bryan
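For what it's worth, a sketch of a loop-free alternative that avoids dot()'s (N, m, N, p) intermediate: broadcast the two stacks against each other and sum over the shared axis (the shapes below are just for illustration):

```python
import numpy as np

rng = np.random.RandomState(0)
N = 100
aa = rng.randn(N, 4, 4)
bb = rng.randn(N, 4, 4)

# out[n] = aa[n] matrix-times bb[n]:
# insert broadcast axes so the elementwise product has shape (N, 4, 4, 4),
# then sum over the contraction axis.  The temporary is O(N * 4**3),
# not the O(N**2 * 4**2) of np.dot(aa, bb).
out = (aa[:, :, :, None] * bb[:, None, :, :]).sum(axis=2)
```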

 




Re: [Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer

2009-03-31 Thread Christopher Barker
David Cournapeau wrote:
 Chris Barker wrote:
 Well, neither Apple nor python.org's builds are 64 bit anyway at this 
 point. There is talk of quad (i386, x86_64, ppc, ppc_64) builds in the 
 future, though.

 Yes, but that's something that has to be supported sooner rather
 than later.

It does, but we don't need a binary installer for a python that doesn't 
have a binary installer.

 Well, maybe we need to hack bdist_mpkg to support this, we're pretty 
 sure that it is possible.

 Yes  - more exactly, there should be a way to guarantee that if I create
 a virtual env from a given python interpreter, I can target a .mpkg to
 this python interpreter.

Hmmm -- I don't know virtualenv enough to know what the virtualenv knows 
about how it was created...

However, I'm not sure you need to do what you're saying here. I imagine 
this workflow:

set up a virtualenv for, say numpy x.y.rc-z

play around with it, get everything to build, etc. with plain old 
setup.py build, setup.py install, etc.

Once you are happy, run:

/Library/Frameworks/Python.framework/Versions/2.5/bin/bdist_mpkg

(or the 2.6 equivalent, etc)

I THINK you'd get a .mpkg that was all set for the user to install in 
their Framework python. As long as you don't run the installer, you 
won't end up with it in your virtualenv.

Or is this what you've tried and has failed for you?

By the way, if you run bdist_mpkg from a version installed into your 
virtualenv, you will get an installer that will install into your 
virtualenv, with the path hard coded, so it's really useless.

 Installing is necessary to build the doc correctly, and I don't want to
 mess my system with setuptools stuff.

ah -- maybe that's the issue then -- darn. Are the docs included in the 
.mpkg? Do they need to be built for that?

 I have started writing something to organize my thought a
 bit better - I can keep you posted if you are interested).

yes, I am.

 By the way, for the libgfortran issue, while statically linking it may 
 be the best option, it wouldn't be too hard to have the mpkg include and 
 install /usr/local/lib/ligfortran.dylib  (or whatever).
   
 I don't think it is a good idea: it would overwrite existing
 libgfortran.dylib, which would cause a lot of issues because libgfortran
 and gfortran have to be consistent. I know I would be very pissed if
 after installing a software, some unrelated software would be broken or
 worse overwritten.

True. In that case we could put the dylib somewhere obscure:

/usr/local/lib/scipy1.6/lib/

or even:

/Library/Frameworks/Python.framework/Versions/2.5/lib/

But using static linking is probably better.


Actually, and I betray my ignorance here, but IIUC:

  - There are a bunch of different scipy extensions that use libgfortran
  - Many of them are built more-or-less separately
  - So each of them would get their own copy of the static libgfortran
  - Just how many separate copies of libgfortran is that?
- Enough to care?
- How big is libgfortran?

This is making me think solving the dynamic linking problem makes sense.


Also, would it break anything if the libgfortran installed were properly 
versioned:

  libgfortran.a.b.c

Isn't that the point of versioned libs?

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


[Numpy-discussion] Numpy Positional Array

2009-03-31 Thread Ian Mallett
Hello,
I'm trying to make an array of size n*n*2.  It should be of the form:
[[[0,0],[1,0],[2,0],[3,0],[4,0], ... ,[n,0]],
 [[0,1],[1,1],[2,1],[3,1],[4,1], ... ,[n,1]],
 [[0,2],[1,2],[2,2],[3,2],[4,2], ... ,[n,2]],
 [[0,3],[1,3],[2,3],[3,3],[4,3], ... ,[n,3]],
 [[0,4],[1,4],[2,4],[3,4],[4,4], ... ,[n,4]],
   ...   ...   ...   ...   ...   ...   ...
 [[0,n],[1,n],[2,n],[3,n],[4,n], ... ,[n,n]]]
Each vec2 represents the x,y position of the vec in the array itself.  I'm
completely stuck on implementing this setup in numpy.  Any pointers?

Thanks,
Ian


Re: [Numpy-discussion] Numpy Positional Array

2009-03-31 Thread Robert Kern
2009/3/31 Ian Mallett geometr...@gmail.com:
 Hello,
 I'm trying to make an array of size n*n*2.  It should be of the form:
 [[[0,0],[1,0],[2,0],[3,0],[4,0], ... ,[n,0]],
  [[0,1],[1,1],[2,1],[3,1],[4,1], ... ,[n,1]],
  [[0,2],[1,2],[2,2],[3,2],[4,2], ... ,[n,2]],
  [[0,3],[1,3],[2,3],[3,3],[4,3], ... ,[n,3]],
  [[0,4],[1,4],[2,4],[3,4],[4,4], ... ,[n,4]],
    ...   ...   ...   ...   ...   ...   ...
  [[0,n],[1,n],[2,n],[3,n],[4,n], ... ,[n,n]]]
 Each vec2 represents the x,y position of the vec in the array itself.  I'm
 completely stuck on implementing this setup in numpy.  Any pointers?

How do you want to fill in the array? If you are typing it in
literally into your code, you would do basically the above, without
the ...'s, and wrap it in numpy.array(...).

Otherwise, you can create empty arrays with numpy.empty((n,n,2)), or
filled in versions using zeros() and ones().

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Numpy Positional Array

2009-03-31 Thread Ian Mallett
On Tue, Mar 31, 2009 at 5:32 PM, Robert Kern robert.k...@gmail.com wrote:

 How do you want to fill in the array? If you are typing it in
 literally into your code, you would do basically the above, without
 the ...'s, and wrap it in numpy.array(...).

I know that, but in some cases, n will be quite large, perhaps 1000 on a
side.  I'm trying to generate an array of that form in numpy entirely for
speed and aesthetic reasons.

Ian


Re: [Numpy-discussion] Numpy Positional Array

2009-03-31 Thread Robert Kern
2009/3/31 Ian Mallett geometr...@gmail.com:
 On Tue, Mar 31, 2009 at 5:32 PM, Robert Kern robert.k...@gmail.com wrote:

 How do you want to fill in the array? If you are typing it in
 literally into your code, you would do basically the above, without
 the ...'s, and wrap it in numpy.array(...).

 I know that, but in some cases, n will be quite large, perhaps 1000 on a
 side.  I'm trying to generate an array of that form in numpy entirely for
 speed and aesthetic reasons.

Again: How do you want to fill in the array? What is the process that
generates the data?

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Numpy Positional Array

2009-03-31 Thread Robert Kern
2009/3/31 Ian Mallett geometr...@gmail.com:
 The array follows a pattern: each array of length 2 represents the x,y index
 of that array within the larger array.

Ah, right. Use dstack(mgrid[0:n,0:n]).

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Numpy Positional Array

2009-03-31 Thread Ian Mallett
Thanks!


Re: [Numpy-discussion] Numpy Positional Array

2009-03-31 Thread Partridge, Matthew BGI SYD
 
 The array follows a pattern: each array of length 2 represents the x,y
index of that array within the larger array.  

Is this what you are after?

 numpy.array(list(numpy.ndindex(n,n))).reshape(n,n,2)
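Both suggestions put the row index first, whereas the layout in the original post has each pair as [x, y] = [column, row]. A sketch that matches the post exactly (simply reverse mgrid's pair before stacking):

```python
import numpy as np

n = 5
# np.mgrid[0:n, 0:n] yields (row_indices, col_indices); reversing the
# pair before dstack puts the column (x) coordinate first, so that
# grid[y, x] == [x, y] as in the original post
grid = np.dstack(np.mgrid[0:n, 0:n][::-1])
```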

 


Re: [Numpy-discussion] Numpy Positional Array

2009-03-31 Thread Ian Mallett
Same.  Thanks, too.


Re: [Numpy-discussion] array of matrices

2009-03-31 Thread Hans-Andreas Engel
Robert Kern robert.kern at gmail.com writes:
 On Sat, Mar 28, 2009 at 23:15, Anne Archibald peridot.faceted at gmail.com
wrote:
  2009/3/28 Geoffrey Irving irving at naml.us:
  On Sat, Mar 28, 2009 at 12:47 AM, Robert Kern robert.kern at gmail.com
wrote:
  2009/3/27 Charles R Harris charlesr.harris at gmail.com:
 
  On Fri, Mar 27, 2009 at 4:43 PM, Robert Kern robert.kern at gmail.com
wrote:
 
  On Fri, Mar 27, 2009 at 17:38, Bryan Cole bryan at cole.uklinux.net
wrote:
   I have a number of arrays of shape (N,4,4). I need to perform a
   vectorised matrix-multiplication between pairs of them I.e.
   matrix-multiplication rules for the last two dimensions, usual
   element-wise rule for the 1st dimension (of length N).
  
(...)
 
  It'd be great if this operation existed as a primitive.  
(...)
 
  The infrastructure to support such generalized ufuncs has been added
  to numpy, but as far as I know no functions yet make use of it.
 
 I don't think there is a way to do it in general with dot(). Some
 cases are ambiguous. I think you will need separate matrix-matrix,
 matrix-vector, and vector-vector gufuncs, to coin a term.
 

By the way, matrix multiplication is one of the testcases for the generalized
ufuncs in numpy 1.3 -- this makes playing around with it easy:

  In [1]: N = 10; a = randn(N, 4, 4); b = randn(N, 4, 4)

  In [2]: import numpy.core.umath_tests

  In [3]: (numpy.core.umath_tests.matrix_multiply(a, b) == [dot(ai, bi) for (ai,
bi) in zip(a, b)]).all()
  Out[3]: True

Best, Hansres





Re: [Numpy-discussion] Numpy 1.3.0 rc1 OS X Installer

2009-03-31 Thread David Cournapeau
Christopher Barker wrote:
 It does, but we don't need a binary installer for a python that doesn't 
 have a binary installer.
   

Yes, not now - but I would prefer to avoid having to change the process
again when the time comes. It may not look like it, but getting a build
process which works well on all platforms, including windows, took me
several days, and I am in no hurry to go through this again :)

 Hmmm -- I don't know virtualenv enough to know what the virtualenv knows 
 about how it was created...

 However, I'm not sure you need to do what you're saying here. I imagine 
 this workflow:

 set up a virtualenv for, say numpy x.y.rc-z

 play around with it, get everything to build, etc. with plain old 
 setup.py build, setup.py install, etc.

 Once you are happy, run:

 /Library/Frameworks/Python.framework/Versions/2.5/bin/bdist_mpkg
   

This means building the same thing twice. Now, for numpy, it is not that
a big deal, but for scipy, not so much. If/when we have a good, reliable
build farm for mac os x, this point becomes moot, so.

 By the way, if you run bdist_mpkg from a version installed into your 
 virtualenv, you will get an installer that will install into your 
 virtualenv, with the path hard coded, so it's really useless.
   


That's exactly the problem in the current binary :)

 ah -- maybe that's the issue then -- darn. Are the docs included in the 
 .mpkg? Do they need to be built for that?
   

The docs are included in the .dmg, and yes, the doc needs to be built
from the same installation (or more exactly the same source).


 yes, I am.
   

I have not tackled the uninstall part, but I already wrote this to set
down my POV on the whole python packaging situation:

http://cournape.wordpress.com/2009/04/01/python-packaging-a-few-observations-cabal-for-a-solution/


 True. In that case we could put the dylib somewhere obscure:

 /usr/local/lib/scipy1.6/lib/
   

Hm, that's strange - why /usr/local/lib ? It is outside the scipy
installation.

 or even:

 /Library/Frameworks/Python.framework/Versions/2.5/lib/
   

That's potentially dangerous: since this directory is likely to be in
LIBDIR, it means libgfortran will be taken there or from /usr/local/lib
if the user builds numpy/scipy after installing numpy. If it is
incompatible with the user gfortran, it will lead to weird issues, hard
to debug.

This problem bit me on windows 64 bits recently: we did something
similar (creating a libpython*.a and put in C:\python*\libs), but the
installed library was not 64 bits compatible - I assumed this library
was built by python itself, and I have wasted several hours looking
elsewhere for a problem caused by numpy.distutils.

If we install something like libgfortran, it should be installed
privately - but dynamically linking against private libraries is hard,
because that's very platform dependent (in particular on windows, I have
yet to see a sane solution - everyone just copy the private .dll
alongside the binaries, AFAICS).

Now, if you bring me a solution to this problem, I would be *really* glad.

 Actually, and I betray my ignorance here, but IIUC:

   - There are a bunch of different scipy extensions that use libgfortran
   - Many of them are built more-or-less separately
   - So each of them would get their own copy of the static libgfortran
   

AFAIK, statically linking a library does not mean the whole copy is put
into the binary. I guess different binary formats do it differently, but
for example, on linux:

gfortran hello.f -> a.out is 8 kb
gfortran hello.f -static-libgfortran -> a.out is 130 kb
libgfortran.a -> ~1.3 MB

Of course, this depends on the function you need to pull out from
libgfortran - but I guess we do not pull so much, because we mainly use
intrinsics (gfortran math functions, etc...) which should be very small.
I don't think we use the fortran IO runtime much - actually, we
should explicitly avoid it, since it causes trouble: the C and
fortran runtimes would 'fight' each other with unspeakable consequences.

And thinking about it: mac os x rather encourages big binaries - fat
binaries - so I am not sure it is a big concern.

 This is making me think solving the dynamic linking problem makes sense.
   

It makes sense for a whole lot of reasons, but it is hard. The problem
is much bigger on windows (where almost *everything* is statically
linked), and I tried to tackle this to ship only one numpy installer
with 3 dynamically loaded atlas at runtime - I did not find a workable
solution.


 Also, would it break anything if the libgfortran installed were properly 
 versioned:

   libgfortran.a.b.c

 Isn't that the point of versioned libs?
   

versioned libraries only make sense for shared libraries, I think. On
Linux, the static library is not even publicly available (it is in
/usr/lib/gcc/4.3.3). I wonder whether the mac os x gfortran binary did
not make a mistake there, actually.

cheers,

David