Re: [Numpy-discussion] Building on WinXP 64-bit, Intel Compilers

2009-01-30 Thread Hanni Ali
I have been meaning to chip in but so far hadn't got to it, so here goes.

In response to this particular issue, I currently use numpy (1.2.1) built
with MSVC (VS 2008) by simply commenting out these definitions in
numpy\core\src\umathmodule.c.src

That works just fine and allows me to use the built-in LAPACK lite that
comes with numpy on 64-bit Windows, no problem.

I have spent many hours working on compiling a different LAPACK/BLAS
implementation for Windows with numpy, so far with no joy, so I would be
very pleased if we can finally figure this out.

I have previously posted this link on the list:

http://icl.cs.utk.edu/lapack-for-windows/index.html

Using this package, the Intel Visual Fortran compiler and the MSVC C compiler,
I have managed to get most of numpy to compile against LAPACK/BLAS, but the
process still trips up at linking with the following message:

warning: build_ext: extension 'numpy.linalg.lapack_lite' has Fortran libraries but no Fortran linker found, using default linker

It complains about missing external symbols.

   Creating library build\temp.win-amd64-2.6\Release\numpy\linalg\lapack_lite.lib and object build\temp.win-amd64-2.6\Release\numpy\linalg\lapack_lite.exp
lapack_litemodule.obj : error LNK2019: unresolved external symbol dgeev_ referenced in function lapack_lite_dgeev
lapack_litemodule.obj : error LNK2019: unresolved external symbol dsyevd_ referenced in function lapack_lite_dsyevd
lapack_litemodule.obj : error LNK2019: unresolved external symbol zheevd_ referenced in function lapack_lite_zheevd
lapack_litemodule.obj : error LNK2019: unresolved external symbol dgelsd_ referenced in function lapack_lite_dgelsd
lapack_litemodule.obj : error LNK2019: unresolved external symbol dgesv_ referenced in function lapack_lite_dgesv
lapack_litemodule.obj : error LNK2019: unresolved external symbol dgesdd_ referenced in function lapack_lite_dgesdd
lapack_litemodule.obj : error LNK2019: unresolved external symbol dgetrf_ referenced in function lapack_lite_dgetrf
lapack_litemodule.obj : error LNK2019: unresolved external symbol dpotrf_ referenced in function lapack_lite_dpotrf
lapack_litemodule.obj : error LNK2019: unresolved external symbol dgeqrf_ referenced in function lapack_lite_dgeqrf
lapack_litemodule.obj : error LNK2019: unresolved external symbol dorgqr_ referenced in function lapack_lite_dorgqr
lapack_litemodule.obj : error LNK2019: unresolved external symbol zgeev_ referenced in function lapack_lite_zgeev
lapack_litemodule.obj : error LNK2019: unresolved external symbol zgelsd_ referenced in function lapack_lite_zgelsd
lapack_litemodule.obj : error LNK2019: unresolved external symbol zgesv_ referenced in function lapack_lite_zgesv
lapack_litemodule.obj : error LNK2019: unresolved external symbol zgesdd_ referenced in function lapack_lite_zgesdd
lapack_litemodule.obj : error LNK2019: unresolved external symbol zgetrf_ referenced in function lapack_lite_zgetrf
lapack_litemodule.obj : error LNK2019: unresolved external symbol zpotrf_ referenced in function lapack_lite_zpotrf
lapack_litemodule.obj : error LNK2019: unresolved external symbol zgeqrf_ referenced in function lapack_lite_zgeqrf
lapack_litemodule.obj : error LNK2019: unresolved external symbol zungqr_ referenced in function lapack_lite_zungqr
build\lib.win-amd64-2.6\numpy\linalg\lapack_lite.pyd : fatal error LNK1120: 18 unresolved externals
error: Command C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\Program Files (x86)\University Of Tennessee\LAPACK 3.1.1\lib\x64 /LIBPATH:C:\Python26\libs /LIBPATH:C:\Python26\PCbuild\amd64 LAPACK.lib BLAS.lib /EXPORT:initlapack_lite build\temp.win-amd64-2.6\Release\numpy\linalg\lapack_litemodule.obj /OUT:build\lib.win-amd64-2.6\numpy\linalg\lapack_lite.pyd /IMPLIB:build\temp.win-amd64-2.6\Release\numpy\linalg\lapack_lite.lib /MANIFESTFILE:build\temp.win-amd64-2.6\Release\numpy\linalg\lapack_lite.pyd.manifest failed with exit status 1120

I suspect persuading setup.py to use the appropriate linker will sort this
out, but I haven't had time to address what - I hope - could be the final
hurdle.

Hanni


2009/1/29 Michael Colonno mcolo...@gmail.com

OK, some progress here. I have two questions: 1) Let's forget about the
 Intel compiler(s) for the moment and focus on a standard Windows build.
 Python 2.6 comes with two classes in distutils: msvccompiler.py and
 msvc9compiler.py. In reading through these, it appears msvc9compiler.py is
 ideally suited for what I'm trying to do (use VS 2008 tools). Is it possible
 to select this via command line flags with config --compiler=xxx? Choosing
 msvc *seems* to go for msvccompiler.py (I'm just trying to trace the
 magic as things build). 2) when using the standard msvc setup, things do
 seem to work re: setting up the build environment (see below). Now, the VS
 compiler kicks out a syntax error (output copied below). Any thoughts? This
 looks like a 

[Numpy-discussion] minor improvement to ones

2009-01-30 Thread Neal Becker
A nit, but it would be nice if 'ones' could fill with a value other than 1.

Maybe an optional val= keyword?


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] minor improvement to ones

2009-01-30 Thread David Cournapeau
Neal Becker wrote:
 A nit, but it would be nice if 'ones' could fill with a value other than 1.

 Maybe an optional val= keyword?
   

What would be the advantage compared to fill ? I would guess ones and
zeros are special because those two values are special (they can be
defined for many types, as  neutral elements for + and *),

David

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] minor improvement to ones

2009-01-30 Thread Scott Sinclair
 2009/1/30 David Cournapeau da...@ar.media.kyoto-u.ac.jp:
 Neal Becker wrote:
 A nit, but it would be nice if 'ones' could fill with a value other than 1.

 Maybe an optional val= keyword?

 What would be the advantage compared to fill ? I would guess ones and
 zeros are special because those two values are special (they can be
 defined for many types, as  neutral elements for + and *),

I couldn't find the numpy fill function, until my tiny brain realized
you meant the ndarray method:

http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.fill.html

Cheers,
Scott
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] minor improvement to ones

2009-01-30 Thread Grissiom
On Fri, Jan 30, 2009 at 21:54, Scott Sinclair
scott.sinclair...@gmail.comwrote:

  2009/1/30 David Cournapeau da...@ar.media.kyoto-u.ac.jp:
  Neal Becker wrote:
  A nit, but it would be nice if 'ones' could fill with a value other than
 1.
 
  Maybe an optional val= keyword?
 
  What would be the advantage compared to fill ? I would guess ones and
  zeros are special because those two values are special (they can be
  defined for many types, as  neutral elements for + and *),

 I couldn't find the numpy fill function, until my tiny brain realized
 you meant the ndarray method:

 http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.fill.html

 Cheers,
 Scott


Does the fill function have any advantage over ones(size)*x ?

-- 
Cheers,
Grissiom
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] minor improvement to ones

2009-01-30 Thread Sturla Molden
On 1/30/2009 2:18 PM, Neal Becker wrote:
 A nit, but it would be nice if 'ones' could fill with a value other than 1.
 
 Maybe an optional val= keyword?
 

I am -1 on this. Ones should fill with ones, zeros should fill with 
zeros. Anything else is counter-intuitive. Calling numpy.ones to fill 
with fives makes no sense to me. But I would be +1 on having a function 
called numpy.values or numpy.fill that would create and fill an ndarray 
with arbitrary values.


S.M.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] minor improvement to ones

2009-01-30 Thread Matthieu Brucher
 Does the fill function have any advantage over ones(size)*x ?

No intermediate array (inplace) ?

Matthieu
-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] minor improvement to ones

2009-01-30 Thread Sturla Molden
On 1/30/2009 3:07 PM, Grissiom wrote:

 Does the fill function have any advantage over ones(size)*x ?

You avoid filling with ones, all the multiplications, and creating a 
temporary array. It can be done like this:

a = empty(size)
a[:] = x

Which would be slightly faster and more memory efficient.
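A minimal sketch of the idiom, with the caveat that later NumPy releases (1.8+) added numpy.full, which does exactly what this thread asks for in one call:

```python
import numpy as np

size = (2, 3)

# The empty-then-assign idiom from the post: allocate uninitialized
# memory, then fill in place; no temporary array is created.
a = np.empty(size)
a[:] = 7.5

# In NumPy >= 1.8 the same thing is a one-liner.
b = np.full(size, 7.5)

assert (a == b).all()
```

Either form avoids the extra allocation and multiplication that `ones(size) * x` performs.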


S.M.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] minor improvement to ones

2009-01-30 Thread Neal Becker
Right now there are 2 options to create an array of constant value:

1) empty (size); fill (val)

2) ones (size) * val

Option 1 has the disadvantage of not being an expression, so it can't be an 
argument to a function call. It is also probably slower than creating and 
filling at the same time.

Option 2 is probably slower than creating and filling at the same time.

Now what would be _really_ cool is a special array type that would represent 
a constant array without wasting memory.  boost::ublas, for example, has 
this feature.



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] minor improvement to ones

2009-01-30 Thread Grissiom
On Fri, Jan 30, 2009 at 22:16, Sturla Molden stu...@molden.no wrote:

 On 1/30/2009 3:07 PM, Grissiom wrote:

  Does the fill function have any advantage over ones(size)*x ?

 You avoid filling with ones, all the multiplications, and creating a
 temporary array. It can be done like this:

 a = empty(size)
 a[:] = x

 Which would be slightly faster and more memory efficient.


Hmm,  I +1 on this one. It's more pythonic ;)

-- 
Cheers,
Grissiom
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] minor improvement to ones

2009-01-30 Thread Ryan May
Sturla Molden wrote:
 On 1/30/2009 2:18 PM, Neal Becker wrote:
 A nit, but it would be nice if 'ones' could fill with a value other than 1.

 Maybe an optional val= keyword?

 
 I am -1 on this. Ones should fill with ones, zeros should fill with 
 zeros. Anything else is counter-intuitive. Calling numpy.ones to fill 
 with fives makes no sense to me. But I would be +1 on having a function 
 called numpy.values or numpy.fill that would create and fill an ndarray 
 with arbitrary values.

I completely agree here.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] minor improvement to ones

2009-01-30 Thread Sturla Molden
On 1/30/2009 3:22 PM, Neal Becker wrote:

 Now what would be _really_ cool is a special array type that would represent 
 a constant array without wasting memory.

Which again is a special case of something even cooler: lazy evaluation.

This would require arrays to have immutable buffers. Then an expression 
like a*x + y would result in a symbolic representation of

a * x.buffer + y.buffer

Then the array could evaluate itself (it could even use numexpr or a JIT 
compiler) when needed.

The scheme would be very fragile and complicated if arrays were allowed 
to have mutable data buffers like those of numpy.


Sturla Molden
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] example reading binary Fortran file

2009-01-30 Thread David Froger
Hi,

My question is about reading Fortran binary files (oh no, this question
again...)

Until now, I was using struct's unpack like this:

def lread(f, fourBeginning, fourEnd, *arrays):
    """Read a Fortran binary file in little-endian byte order."""
    from numpy import transpose
    from struct import unpack
    if fourBeginning: f.seek(4, 1)
    for array in arrays:
        for elt in xrange(array.size):
            transpose(array).flat[elt] = unpack(array.dtype.char, f.read(array.itemsize))[0]
    if fourEnd: f.seek(4, 1)

After googling, I read that fopen and npfile were deprecated and that we should
use numpy.fromfile and ndarray.tofile, but despite the documentation, the
cookbook, the mailing list and Google, I haven't succeeded in making a simple
example work. Considering the simple Fortran code below, what is the Python
script to read the arrays? What about if my PC is little-endian and the file
big-endian?

I think it would be a good idea to put the Fortran array-writing code and
the Python array-reading script in the cookbook, and maybe a page to help
people coming from Fortran get started with Python?
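On the endianness question: NumPy dtypes carry an explicit byte order, so a file written on a big-endian machine can be read on a little-endian PC just by spelling the byte order out in the dtype string. A small self-contained sketch (round-tripping through raw bytes rather than a real file):

```python
import numpy as np

# '>f4' is big-endian float32, '<f4' is little-endian float32.
big = np.array([1.5, 2.5, 3.5], dtype='>f4')

# Pretend these bytes came back from a big-endian file on disk.
raw = big.tobytes()
read_back = np.frombuffer(raw, dtype='>f4')

# astype makes a byte-swapped copy in the requested (here: little-endian) order.
native = read_back.astype('<f4')

assert (native == [1.5, 2.5, 3.5]).all()
```

With a real file you would pass the same `'>f4'` dtype to `np.fromfile`.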

Best,

David Froger


program makeArray

implicit none

integer,parameter:: nx=10,ny=20
real(4),dimension(nx,ny):: ux,uy,p
integer :: i,j

open(11,file='uxuyp.bin',form='unformatted')

do i = 1,nx
do j = 1,ny
   ux(i,j) = real(i*j)
   uy(i,j) = real(i)/real(j)
   p (i,j)  = real(i) + real(j)
enddo
enddo

write(11) ux,uy
write(11) p

close(11)

end program makeArray
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] example reading binary Fortran file

2009-01-30 Thread Sturla Molden
On 1/30/2009 5:03 PM, David Froger wrote:

 I think it will be a good idea to put the Fortran writting-arrays code 
 and the Python reading-array script in the cookbook and maybe a page to 
 help people comming from Fortran to start with Python ?

If you want to be completely safe, read the file in Fortran, then send 
it as an array to Python (use f2py). Aside from that, assuming your 
compiler only writes the raw bytes in Fortran order to the file:

If you have

real(4),dimension(nx,ny):: ux

in Fortran and write ux to disk, it can be retrieved like this in NumPy:

ux = np.fromfile(nx*ny, dtype=np.float32).view((nx,ny), order='F')

assuming real(kind=4) is equivalent to np.float32.


Sturla Molden
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] concatenate trouble

2009-01-30 Thread Neal Becker
What's the problem here?

 print np.concatenate (np.ones (10, dtype=complex), np.zeros (10, 
dtype=complex))
TypeError: only length-1 arrays can be converted to Python scalars



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] example reading binary Fortran file

2009-01-30 Thread Sturla Molden
On 1/30/2009 5:23 PM, Sturla Molden wrote:

 ux = np.fromfile(nx*ny, dtype=np.float32).view((nx,ny), order='F')

oops.. this should be

ux = np.fromfile(file, count=nx*ny, dtype=np.float32).view((nx,ny),
   order='F')



S.M.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] concatenate trouble

2009-01-30 Thread Robert Cimrman
Neal Becker wrote:
 What's the problem here?
 
  print np.concatenate (np.ones (10, dtype=complex), np.zeros (10, 
 dtype=complex))
 TypeError: only length-1 arrays can be converted to Python scalars

You should enclose the arrays you concatenate into a tuple: 
np.concatenate((a,b)).
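A minimal sketch of the fix: concatenate takes one sequence of arrays, not the arrays as separate positional arguments (the second positional argument is the axis, which is why passing an array there raises the confusing scalar-conversion error).

```python
import numpy as np

a = np.ones(10, dtype=complex)
b = np.zeros(10, dtype=complex)

# Correct: a single tuple of arrays.
c = np.concatenate((a, b))

assert c.shape == (20,)
assert c.dtype == complex
```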

r.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] example reading binary Fortran file

2009-01-30 Thread Sturla Molden
On 1/30/2009 5:27 PM, Sturla Molden wrote:
 On 1/30/2009 5:23 PM, Sturla Molden wrote:
 
 ux = np.fromfile(nx*ny, dtype=np.float32).view((nx,ny), order='F')
 
 oops.. this should be
 
 ux = np.fromfile(file, count=nx*ny, dtype=np.float32).view((nx,ny),
order='F')

fu*k


ux = np.fromfile(file, count=nx*ny,
 dtype=np.float32).reshape((nx,ny), order='F')


Sorry for the previous typos, it's Friday and soon weekend...



Sturla Molden
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Data file format choice.

2009-01-30 Thread Gary Pajer
It's time for me to select a data format.

My data are (more or less) spectra ( a couple of thousand samples), six
channels, each channel running around 10 Hz, collecting for a minute or so.
Plus all the settings on the instrument.

I don't see any significant differences between netCDF4 and HDF5.
Similarly, I don't see significant differences between pytables and h5py.
Does one play better with numpy?  What are the best numpy solutions for
netCDF4?

Can anyone provide thoughts, pros and cons, etc, that I can mull over?

-gary
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] minor improvement to ones

2009-01-30 Thread Christopher Barker
 On 1/30/2009 3:22 PM, Neal Becker wrote:
 
 Now what would be _really_ cool is a special array type that would represent 
 a constant array without wasting memory.

Can't you do that with scary stride tricks? I think I remember some 
discussion of this a while back.

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] minor improvement to ones

2009-01-30 Thread Ryan May
Christopher Barker wrote:
 On 1/30/2009 3:22 PM, Neal Becker wrote:

 Now what would be _really_ cool is a special array type that would 
 represent 
 a constant array without wasting memory.
 
 Can't you do that with scary stride tricks? I think I remember some 
 discussion of this a while back.

I think that's right, but at that point, what gain is that over using a regular
constant and relying on numpy's broadcasting?
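The stride trick Chris refers to can be sketched with numpy.lib.stride_tricks.as_strided: a view with both strides set to zero makes every element alias one scalar, so an arbitrarily large "constant array" costs one element of storage. The view must be treated as read-only, since writing to any element writes to all of them at once.

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

x = np.array(5.0)  # one scalar's worth of storage

# Zero strides: every (i, j) index resolves to the same underlying element.
const = as_strided(x, shape=(1000, 1000), strides=(0, 0))

assert const.shape == (1000, 1000)
assert const.sum() == 5.0 * 1000 * 1000
```

As Ryan notes, for most arithmetic a plain Python scalar plus broadcasting achieves the same effect with less machinery.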

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] example reading binary Fortran file

2009-01-30 Thread Christopher Barker
 If you want to be completely safe, read the file in Fortran, then send 
 it as an array to Python (use f2py). Aside from that, assuming your 
 compiler only writes the raw bytes in Fortran order to the file:

Careful -- the last time I read a Fortran-written binary file, I found 
that the various structures (is that what you call them in Fortran?) 
were padded with, I think, 4 bytes.

I did it by reading through the file bit by bit and parsing it out with 
the struct module. Not very efficient, but easy to control.

In that case, there were a bunch of things written, where a few values 
described the size of the next array, then the array, etc. If you've got 
a single array, it'll be easier.

You might start that way, and then, when you've got it figured out, 
translate to fromfile().

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] puzzle: generate index with many ranges

2009-01-30 Thread Raik Gruenberg
Hi there,

perhaps someone has a bright idea for this one:

I want to concatenate ranges of numbers into a single array (for indexing). So I
have generated an array a with starting positions, for example:

a = [4, 0, 11]

I have an array b with stop positions:

b = [11, 4, 15]

and I would like to generate an index array that takes 4..11, then 0..4, then
11..15.

In reality, a and b have 1+ elements and the arrays to be sliced are very
large so I want to avoid any for loops etc. Any idea how this could be done? I
thought some combination of *repeat* and adding of *arange* should do the trick
but just cannot nail it down.

Thanks in advance for any hints!

Greetings,
Raik


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] example reading binary Fortran file

2009-01-30 Thread David Froger
Thank Sturla and Christopher,

yes, with the Fortran code :

!=
program makeArray

implicit none

integer,parameter:: nx=2,ny=5
real(4),dimension(nx,ny):: ux,uy,p
integer :: i,j

open(11,file='uxuyp.bin',form='unformatted')

do i = 1,nx
do j = 1,ny
   ux(i,j) = 100  + j+(i-1)*10
   uy(i,j) = 200. + j+(i-1)*10
   p (i,j) = 300. + j+(i-1)*10
enddo
enddo

write(11) ux,uy
write(11) p

close(11)

write(*,*) 'ux='
do i = 1,nx
   write(*,*) ( ux(i,j) , j =1,ny)
enddo
write(*,*)

write(*,*) 'uy='
do i = 1,nx
   write(*,*) ( uy(i,j) , j =1,ny)
enddo
write(*,*)

write(*,*) 'p='
do i = 1,nx
   write(*,*) ( p(i,j) , j =1,ny)
enddo
write(*,*)

end program makeArray
!=

the size of the 'uxuyp.bin' file is:

  4bytes + ux_bytes + uy_bytes + 4bytes
+ 4bytes + p_bytes + 4bytes

= 4 + nx*ny*4 + nx*ny*4 + 4 + 4 +nx*ny*4 + 4 =  136 bytes


the arrays are :

 ux=
   101.0   102.0   103.0   104.0   105.0
   111.0   112.0   113.0   114.0   115.0

 uy=
   201.0   202.0   203.0   204.0   205.0
   211.0   212.0   213.0   214.0   215.0

 p=
   301.0   302.0   303.0   304.0   305.0
   311.0   312.0   313.0   314.0   315.0


and with the Python script :

#===
import numpy as np

nx,ny = 2,5

fourBytes = np.fromfile('uxuyp.bin', count=1, dtype=np.float32)
ux = np.fromfile('uxuyp.bin', count=nx*ny, dtype=np.float32).reshape((nx,ny), order='F')

print ux
#===

I get :

[[  1.12103877e-43   1.1100e+02   1.1200e+02   1.1300e+02   1.1400e+02]
 [  1.0100e+02   1.0200e+02   1.0300e+02   1.0400e+02   1.0500e+02]]


This function does the trick, but is it optimized?

#===
def lread(f, fourBeginning, fourEnd, *arrays):
    """Read a Fortran binary file in little-endian byte order."""
    from numpy import transpose
    from struct import unpack
    if fourBeginning: f.seek(4, 1)
    for array in arrays:
        for elt in xrange(array.size):
            transpose(array).flat[elt] = unpack(array.dtype.char, f.read(array.itemsize))[0]
    if fourEnd: f.seek(4, 1)
#===
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] puzzle: generate index with many ranges

2009-01-30 Thread Jim Vickroy
Raik Gruenberg wrote:
 Hi there,

 perhaps someone has a bright idea for this one:

 I want to concatenate ranges of numbers into a single array (for indexing). 
 So I
 have generated an array a with starting positions, for example:

 a = [4, 0, 11]

 I have an array b with stop positions:

 b = [11, 4, 15]

 and I would like to generate an index array that takes 4..11, then 0..4, then
 11..15.
   
Does this help?

Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> a = [4, 0, 11]
>>> b = [11, 4, 15]
>>> zip(a,b)
[(4, 11), (0, 4), (11, 15)]
 

Apologies if I'm stating the obvious.

-- jv
 In reality, a and b have 1+ elements and the arrays to be sliced are 
 very
 large so I want to avoid any for loops etc. Any idea how this could be done? I
 thought some combination of *repeat* and adding of *arange* should do the 
 trick
 but just cannot nail it down.

 Thanks in advance for any hints!

 Greetings,
 Raik


 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] using numpy functions on an array of objects

2009-01-30 Thread Sebastian Walter
Hey,

What is the best solution to get this code working?
Does anyone have a good idea?

-- test.py ---
import numpy
import numpy.linalg

class afloat:
def __init__(self,x):
self.x = x

def __add__(self,rhs):
return self.x + rhs.x

def sin(self):
return numpy.sin(self.x)

def inv(self):
return numpy.linalg.inv(self.x)

def trace(self):
return 0

y = afloat(numpy.eye(3))
z = afloat(numpy.ones(3))
print y + z # works
print numpy.sin(y) # works
print numpy.trace(y) # doesn't work...???
print numpy.linalg.inv(y)  # doesn't work ...???

 end test.py 


=== Explanation why I need that ===

I have the following problem. I need to do numerical calculations on
generalized versions of real numbers.
In particular with  truncated Taylor polynomials. I've implemented
that as a class that I called TC.
To define what the multiplication of two Taylor polynomials is I use
operator overloading.
Additionally, I need to compute sine, cosine, exp, etc. of Taylor polynomials.

For that, I can use the numpy functions. Numpy is apparently smart
enough to call the member function sin(self) of my class afloat
when it realizes that the argument of numpy.sin is not a known type.

This is really great. However, some functions are not as smart: Among them
trace
inv
dot


As a workaround, i could this:

def inv(X):
if X.__class__ == afloat:
return X.inv()
else:
return numpy.linalg.inv(X)

This is somewhat OK, but I'd like to use already existing Python code
that uses NumPy internally. So I have to rely on numpy.linalg.inv(X)
calling X.inv() when X is an object of my class.


best regards,
Sebastian
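The dispatch Sebastian asks for was eventually standardized, long after this thread, as the __array_function__ protocol (NEP 18, NumPy >= 1.17). A hedged sketch of how np.trace and np.linalg.inv can be intercepted under that protocol (the class name and the chosen return values mirror the example above, not any real API):

```python
import numpy as np

class AFloat:
    def __init__(self, x):
        self.x = np.asarray(x)

    def __array_function__(self, func, types, args, kwargs):
        # NEP 18: numpy consults this hook when an AFloat is passed
        # to a dispatched numpy function.
        if func is np.trace:
            return 0
        if func is np.linalg.inv:
            return AFloat(np.linalg.inv(self.x))
        return NotImplemented

y = AFloat(np.eye(3))
assert np.trace(y) == 0
assert isinstance(np.linalg.inv(y), AFloat)
```

Ufuncs like numpy.sin are handled by the separate __array_ufunc__ protocol (or, historically, by the method-fallback behaviour the post describes).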
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] puzzle: generate index with many ranges

2009-01-30 Thread Raik Gruenberg
Jim Vickroy wrote:
 Raik Gruenberg wrote:
 Hi there,

 perhaps someone has a bright idea for this one:

 I want to concatenate ranges of numbers into a single array (for indexing). 
 So I
 have generated an array a with starting positions, for example:

 a = [4, 0, 11]

 I have an array b with stop positions:

 b = [11, 4, 15]

 and I would like to generate an index array that takes 4..11, then 0..4, then
 11..15.
   
 Does this help?

 Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32
 Type "help", "copyright", "credits" or "license" for more information.
 >>> a = [4, 0, 11]
 >>> b = [11, 4, 15]
 >>> zip(a,b)
 [(4, 11), (0, 4), (11, 15)]
  

Mhm, I got this far. But how do I get from here to a single index array

[ 4, 5, 6, ... 10, 0, 1, 2, 3, 11, 12, 13, 14 ] ?

Greetings
Raik

 



 Apologies if I'm stating the obvious.
 
 -- jv
 In reality, a and b have 1+ elements and the arrays to be sliced are 
 very
 large so I want to avoid any for loops etc. Any idea how this could be done? 
 I
 thought some combination of *repeat* and adding of *arange* should do the 
 trick
 but just cannot nail it down.

 Thanks in advance for any hints!

 Greetings,
 Raik


 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   
 
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
 
 

-- 


Dr. Raik Gruenberg
http://www.raiks.de/contact.html

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] example reading binary Fortran file

2009-01-30 Thread Ryan May
David Froger wrote:
 import numpy as np
 
 nx,ny = 2,5
 
 fourBytes = np.fromfile('uxuyp.bin', count=1, dtype=np.float32)
 ux = np.fromfile('uxuyp.bin', count=nx*ny, dtype=np.float32).reshape((nx,ny), order='F')
 
 print ux
 #===
 
 I get :
 
 [[  1.12103877e-43   1.1100e+02   1.1200e+02   1.1300e+02   1.1400e+02]
  [  1.0100e+02   1.0200e+02   1.0300e+02   1.0400e+02   1.0500e+02]]
 
 
 This function does the trick, but is it optimized?

 #===
 def lread(f, fourBeginning, fourEnd, *arrays):
     """Read a Fortran binary file in little-endian byte order."""
     from numpy import transpose
     from struct import unpack
     if fourBeginning: f.seek(4, 1)
     for array in arrays:
         for elt in xrange(array.size):
             transpose(array).flat[elt] = unpack(array.dtype.char, f.read(array.itemsize))[0]
     if fourEnd: f.seek(4, 1)
 #===

I'm not sure whether or not it's optimized, but I can tell you that the
mystery 4 bytes are the record length (the number of bytes Fortran wrote
out), followed by that number of bytes of data.
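Ryan's description can be sketched end-to-end with a synthetic record. This assumes the common sequential unformatted layout (a 4-byte little-endian length, the payload, then the length repeated); compilers differ, so verify the marker size against your own files:

```python
import struct
import numpy as np

nx, ny = 2, 5
ux = np.arange(nx * ny, dtype='<f4').reshape((nx, ny), order='F')

# Build a record the way a little-endian Fortran runtime typically would:
# 4-byte length marker, raw payload in Fortran order, 4-byte length marker.
payload = ux.tobytes(order='F')
record = struct.pack('<i', len(payload)) + payload + struct.pack('<i', len(payload))

# Reading back: the leading marker says how many payload bytes follow.
nbytes = struct.unpack_from('<i', record, 0)[0]
data = np.frombuffer(record, dtype='<f4', count=nbytes // 4, offset=4)
ux2 = data.reshape((nx, ny), order='F')

assert nbytes == nx * ny * 4
assert (ux == ux2).all()
```

This also explains the `1.12103877e-43` in the output above: it is the 40-byte length marker misread as a float32.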

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Data file format choice.

2009-01-30 Thread Jeff Whitaker
Gary Pajer wrote:
 It's time for me to select a data format.

 My data are (more or less) spectra ( a couple of thousand samples), 
 six channels, each channel running around 10 Hz, collecting for a 
 minute or so. Plus all the settings on the instrument.

 I don't see any significant differences between netCDF4 and HDF5.   
Gary:  netCDF4 is just a thin wrapper on top of HDF5 1.8 - think of it 
as a higher level API.
 Similarly, I don't see significant differences between pytables and 
 h5py.  Does one play better with numpy?  
pytables has been around longer and is well-tested, has nice pythonic 
features, but files you write with it may not be readable by C or 
fortran clients.  h5py works only with python 2.5/2.6, and writes 
'vanilla' hdf5 files readable by anybody.
 What are the best numpy solutions for netCDF4?

There's only one that I know of - http://code.google.com/p/netcdf4-python.

-Jeff

-- 
Jeffrey S. Whitaker Phone  : (303)497-6313
Meteorologist   FAX: (303)497-6449
NOAA/OAR/PSD  R/PSD1Email  : jeffrey.s.whita...@noaa.gov
325 BroadwayOffice : Skaggs Research Cntr 1D-113
Boulder, CO, USA 80303-3328 Web: http://tinyurl.com/5telg

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] puzzle: generate index with many ranges

2009-01-30 Thread Jim Vickroy

Raik Gruenberg wrote:

Jim Vickroy wrote:
  

Raik Gruenberg wrote:


Hi there,

perhaps someone has a bright idea for this one:

I want to concatenate ranges of numbers into a single array (for indexing). So I
have generated an array a with starting positions, for example:

a = [4, 0, 11]

I have an array b with stop positions:

b = [11, 4, 15]

and I would like to generate an index array that takes 4..11, then 0..4, then
11..15.
  
  
Does this help?

Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> a = [4, 0, 11]
>>> b = [11, 4, 15]
>>> zip(a,b)
[(4, 11), (0, 4), (11, 15)]
 



Mhm, I got this far. But how do I get from here to a single index array

[ 4, 5, 6, ... 10, 0, 1, 2, 3, 11, 12, 13, 14 ] ?

  

not sure I understand your goal ... is this what you want:
 >>> [range(i,j) for i,j in zip(a,b)]
[[4, 5, 6, 7, 8, 9, 10], [0, 1, 2, 3], [11, 12, 13, 14]]


Greetings
Raik

  




  

Apologies if I'm stating the obvious.

-- jv


In reality, a and b have 1+ elements and the arrays to be sliced are very
large so I want to avoid any for loops etc. Any idea how this could be done? I
thought some combination of *repeat* and adding of *arange* should do the trick
but just cannot nail it down.

Thanks in advance for any hints!

Greetings,
Raik







  




Re: [Numpy-discussion] puzzle: generate index with many ranges

2009-01-30 Thread Pierre GM

On Jan 30, 2009, at 1:11 PM, Raik Gruenberg wrote:


 Mhm, I got this far. But how do I get from here to a single index  
 array

 [ 4, 5, 6, ... 10, 0, 1, 2, 3, 11, 12, 13, 14 ] ?

np.concatenate([np.arange(aa,bb) for (aa,bb) in zip(a,b)])


Re: [Numpy-discussion] puzzle: generate index with many ranges

2009-01-30 Thread Raik Gruenberg
Pierre GM wrote:
 On Jan 30, 2009, at 1:11 PM, Raik Gruenberg wrote:
 
 Mhm, I got this far. But how do I get from here to a single index  
 array

 [ 4, 5, 6, ... 10, 0, 1, 2, 3, 11, 12, 13, 14 ] ?
 
 np.concatenate([np.arange(aa,bb) for (aa,bb) in zip(a,b)])

exactly! Now, the question was, is there a way to do this only using numpy 
functions (sum, repeat, ...), that means without any python for loop?

Sorry about being so insistent on this one but, in my experience, eliminating 
those for loops makes a huge difference in terms of speed. The zip is probably 
also quite costly on a very large data set.

Thanks!
Raik


-- 


Dr. Raik Gruenberg
http://www.raiks.de/contact.html



Re: [Numpy-discussion] help on fast slicing on a grid

2009-01-30 Thread frank wang

I have created a test example for the question using a for loop and hope someone
can help me find a fast solution. My data set is about 200 data.
 
However, I have a problem running the code; the Out[i]=cnstl[j] line gives me
an error saying:
 
In [107]: Out[0]=cnstl[0]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
C:\Frank_share\qamslicer.py in <module>()
TypeError: can't convert complex to float; use abs(z)

In [108]: cnstl.dtype
Out[108]: dtype('complex128')

I do not know why, since my data is complex128 already. Can anyone help
figure out why?
 
Thanks
 
Frank
 
from numpy import *
a = arange(-15,16,2)
cnstl = a.reshape(16,1) + 1j*a
cnstl = cnstl.reshape(256,1)
X = array([1.4 + 1j*2.7, -4.9 + 1j*8.3])
Out = array(X)
error = array(X)
for i in xrange(2):
    for j in xrange(256):
        a0 = real(X[i]) < (real(cnstl[j])+1)
        a1 = real(X[i]) > (real(cnstl[j])-1)
        a2 = imag(X[i]) > (imag(cnstl[j])-1)
        a3 = imag(X[i]) < (imag(cnstl[j])+1)
        if (a0 & a1 & a2 & a3):
            Out[i] = cnstl[j]
            error[i] = X[i] - cnstl[j]

From: f...@hotmail.com
To: numpy-discussion@scipy.org
Subject: RE: [Numpy-discussion] help on fast slicing on a grid
Date: Wed, 28 Jan 2009 23:28:47 -0700

Hi Bob,

Thanks for your help. I am sorry for my typo. The qam array is the X array in
my example. cnstl is a complex array containing the constellation (x,y)
points. I will try to make a workable example. Also I will try to find out
about the zeros_like function. I guess that zeros_like(X) will create an array
the same size as X. If it does, then the two lines Out=X and error=X should be
Out=zeros_like(X) and error=zeros_like(X). Also, can the where command handle
the logic:

aa = np.where((real(X)<real(cnstl[j])+1) & (real(X)>real(cnstl[j])-1)
              & (imag(X)<imag(cnstl[j])+1) & (imag(X)>imag(cnstl[j])-1))

For example, if cnstl[j]=3+1j*5, then the where command is the same as:

aa = np.where((real(X)<4) & (real(X)>2) & (imag(X)<6) & (imag(X)>4))

Thanks

Frank

 Date: Thu, 29 Jan 2009 00:15:48 -0600
 From: robert.k...@gmail.com
 To: numpy-discussion@scipy.org
 Subject: Re: [Numpy-discussion] help on fast slicing on a grid

 On Thu, Jan 29, 2009 at 00:09, frank wang f...@hotmail.com wrote:
  Here is the for loop that I am thinking about. Also, I do not know
  whether the where commands can handle the complicated logic.
  The where command basically finds the data in the square around the
  point cnstl[j]. cnstl is a 2D array from your previous description.
  Let the data array be qam with size N

 I don't see qam anywhere. Did you mean X?

  Out = X
  error = X

 Don't you want something like zeros_like(X) for these?

  for i in arange(N):
      for j in arange(L):
          aa = np.where((real(X)<real(cnstl[j])+1) & (real(X)>real(cnstl[j])-1)
                        & (imag(X)<imag(cnstl[j])+1) & (imag(X)>imag(cnstl[j])-1))
          Out[aa]=cnstl[j]
          error[aa]=abs(X)**2 - abs(cnstl[j])**2

 I'm still confused. Can you show me a complete, working script with
 possibly fake data?

 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it
 had an underlying truth. -- Umberto Eco



Re: [Numpy-discussion] puzzle: generate index with many ranges

2009-01-30 Thread Pierre GM

On Jan 30, 2009, at 1:53 PM, Raik Gruenberg wrote:

 Pierre GM wrote:
 On Jan 30, 2009, at 1:11 PM, Raik Gruenberg wrote:

 Mhm, I got this far. But how do I get from here to a single index
 array

 [ 4, 5, 6, ... 10, 0, 1, 2, 3, 11, 12, 13, 14 ] ?

 np.concatenate([np.arange(aa,bb) for (aa,bb) in zip(a,b)])

 exactly! Now, the question was, is there a way to do this only using  
 numpy
 functions (sum, repeat, ...), that means without any python for  
 loop?

Can't really see it right now. Make np.arange(max(b)) and take the  
slices you need ? But you still have to look in 2 arrays to find the  
beginning and end of slices, so...


 Sorry about being so insistent on this one but, in my experience,  
 eliminating
 those for loops makes a huge difference in terms of speed. The zip  
 is probably
 also quite costly on a very large data set.

yeah, but it's in a list comprehension, which may make things a tad  
faster. If you prefer, use itertools.izip instead of zip, but I wonder  
where the advantage would be. Anyway, are you sure this particular  
part is your bottleneck ? You know the saying about premature  
optimization...


Re: [Numpy-discussion] using numpy functions on an array of objects

2009-01-30 Thread Christopher Barker
I think you want to subclass ndarray here. It's a bit tricky to do so,
but if you look in the wiki and these mailing list archives, you'll find
advice on how to do it.


-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] Data file format choice.

2009-01-30 Thread Francesc Alted
On Friday 30 January 2009, Jeff Whitaker wrote:
 Gary Pajer wrote:
  It's time for me to select a data format.
 
  My data are (more or less) spectra ( a couple of thousand samples),
  six channels, each channel running around 10 Hz, collecting for a
  minute or so. Plus all the settings on the instrument.
 
  I don't see any significant differences between netCDF4 and HDF5.

 Gary:  netCDF4 is just a thin wrapper on top of HDF5 1.8 - think of
 it as a higher level API.

  Similarly, I don't see significant differences between pytables and
  h5py.  Does one play better with numpy?

 pytables has been around longer and is well-tested, has nice pythonic
 features, but files you write with it may not be readable by C or
 fortran clients.

Just to be clear: PyTables will only write pickled objects to a file if 
it is not possible to reasonably represent them as native HDF5 objects.  
If you save NumPy objects or regular Python scalars, they are written as 
native HDF5 objects (see [1]).

Regarding a comparison with h5py (disclaimer: I'm the main author of 
PyTables), I'd say that h5py aims for a direct mapping to NumPy array 
capabilities, but doesn't try to go further.  It is also worth noting 
that h5py offers access to the low-level HDF5 functions, which can be 
interesting if you want to get deeper into HDF5 intricacies, and which 
can be great for some users.

PyTables, on the other hand, doesn't go this low-level and, besides 
supporting general NumPy objects, is more focused on implementing 
advanced features that are normally only available in database-oriented 
approaches, like enumerated types, flexible query iterators for tables 
(the on-disk equivalent of recarrays), indexing (Pro version only), do/undo 
features and natural naming (for an enhanced interactive experience).  
PyTables also tries hard to be a high-performance interface to HDF5, 
implementing niceties like internal LRU caches for nodes, automatic 
chunksizes for the datasets, and the use of numexpr internally to 
accelerate queries as much as possible.

Finally, although h5py is relatively recent, I'm really impressed by 
the work that Andrew has already done, and in fact I'm looking forward 
to backporting some h5py features (like general NumPy-like fancy 
indexing capabilities) to PyTables.  At any rate, it is clear that the 
h5py/PyTables pair will benefit users, with the only handicap being that 
they have to choose their preferred API to HDF5 (or use both, 
which could really be a lot of fun ;-).  NetCDF4-based interfaces are 
also probably a good approach and, as they are based on HDF5, 
compatibility is ensured.

HTH,

[1] http://www.pytables.org/docs/manual/ch04.html#id2553542

-- 
Francesc Alted


Re: [Numpy-discussion] example reading binary Fortran file

2009-01-30 Thread Sturla Molden

 Careful -- the last time I read a Fortran-written binary file, I found
 that the various structures (is that what you call them in Fortran?)
 were padded with, I think, 4 bytes.

That is precisely why I suggested using f2py. If you let Fortran read the
file (being careful to use the same compiler!), it is easy to pass the data on
to Python.

Otherwise, you will have to figure out how your Fortran program writes the
file. I.e. what padding, metainformation, etc. that are used. If you
switch Fortran compiler, or even compiler version from the same vendor,
you must start over again. On the other hand, an f2py'd solution just
requires a recompile.

For my own work, I just make sure NEVER to do any I/O in Fortran! It is
asking for trouble. I leave the I/O to Python or C, where it belongs. That
way I know what data are written and what data are read.

S. M.



[Numpy-discussion] NumPy distutils limitation?

2009-01-30 Thread Francesc Alted
Hi,

Gregor and I are trying to give support for Intel's VML (Vector
Mathematical Library) in numexpr.  For this, we are trying to make use
of the weaponery in NumPy's distutils so as to be able to discover
where the MKL (the package that contains VML) is located.  The
libraries that we need to link with are: mkl_gf_lp64, mkl_gnu_thread,
mkl_core *and* iomp5 (Intel's OpenMP library).

The problem is that I have installed MKL as part as the Intel compiler
for Unix.  In this setup, most of the libraries are in one place,
namely:

/opt/intel/Compiler/11.0/074/mkl/lib/em64t/

However, the OpenMP library is in another directory:

/opt/intel/Compiler/11.0/074/lib/intel64

So, I need to specify *two* directories to get the complete set of
libraries.  My first attempt was setting a site.cfg like:

[DEFAULT]
#libraries = gfortran

[mkl]
library_dirs= 
/opt/intel/Compiler/11.0/074/mkl/lib/em64t/:/opt/intel/Compiler/11.0/074/lib/intel64
include_dirs =  /opt/intel/Compiler/11.0/074/mkl/include/
mkl_libs = mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5


Unfortunately, distutils complains and says that it cannot find the
complete set of libraries:

mkl_info:
  libraries mkl_gf_lp64,mkl_gnu_thread,mkl_core,iomp5 not found 
in /opt/intel/Compiler/11.0/074/mkl/lib/em64t/
  libraries mkl_gf_lp64,mkl_gnu_thread,mkl_core,iomp5 not found 
in /opt/intel/Compiler/11.0/074/lib/intel64
  NOT AVAILABLE

After some debugging of the problem, it seems that distutils needs to
find *all* the required libraries in *one* single directory.  As iomp5
is on a different directory, distutils thinks that the requisites
are not fulfilled.

I've solved this by requiring iomp5 to be found in the DEFAULT
section.  Something like:

[DEFAULT]
library_dirs = /opt/intel/Compiler/11.0/074/lib/intel64
#libraries = gfortran, iomp5
libraries = iomp5

[mkl]
library_dirs = /opt/intel/Compiler/11.0/074/mkl/lib/em64t/
include_dirs =  /opt/intel/Compiler/11.0/074/mkl/include/
mkl_libs = mkl_gf_lp64, mkl_gnu_thread, mkl_core


However, in case one needs to specify several other directories
in the DEFAULT section for finding other hypothetical necessary
libraries (like gfortran or others), we may run into the same problem 
as above.

My question is: is there an elegant way to handle this problem, or is it
a limitation of the current distutils?  If the latter, it would be
nice if that could be solved in a future version, so that several libraries 
can be found in *several* directories.

Thanks,

-- 
Francesc Alted


Re: [Numpy-discussion] minor improvment to ones

2009-01-30 Thread Robert Kern
On Fri, Jan 30, 2009 at 11:26, Ryan May rma...@gmail.com wrote:
 Christopher Barker wrote:
 On 1/30/2009 3:22 PM, Neal Becker wrote:

 Now what would be _really_ cool is a special array type that would 
 represent
 a constant array without wasting memory.

 Can't you do that with scary stride tricks? I think I remember some
 discussion of this a while back.

 I think that's right, but at that point, what gain is that over using a 
 regular
 constant and relying on numpy's broadcasting?

The filled array may be providing some of the shape information that's
not in the other arrays.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] minor improvment to ones

2009-01-30 Thread Robert Kern
On Fri, Jan 30, 2009 at 08:22, Neal Becker ndbeck...@gmail.com wrote:
 Right now there are 2 options to create an array of constant value:

 1) empty (size); fill (val)

 2) ones (size) * val

 1 has disadvantage of not being an expression, so can't be an arg to a
 function call.

So wrap it in a function.

 Also probably slower than create+fill @ same time

Only marginally. In any case, (1) is exactly how ones() and zeros()
are implemented. I would be +1 on a patch that adds a filled()
function along the lines of ones() and zeros(), but I'm -1 on adding
this functionality to ones() or zeros().

 2 is probably slower than create+fill @ same time

 Now what would be _really_ cool is a special array type that would represent
 a constant array without wasting memory.  boost::ublas, for example, has
 this feature.

In [2]: from numpy.lib.stride_tricks import as_strided

In [3]: def hollow_filled(shape, value, dtype=None):
   ...: x = asarray(value, dtype=dtype)
   ...: return as_strided(x, shape, [0]*len(shape))
   ...:

In [5]: hollow_filled([2,3,4], 5)
Out[5]:
array([[[5, 5, 5, 5],
[5, 5, 5, 5],
[5, 5, 5, 5]],

   [[5, 5, 5, 5],
[5, 5, 5, 5],
[5, 5, 5, 5]]])

In [6]: hollow_filled([2,3,4], 5.0)
Out[6]:
array([[[ 5.,  5.,  5.,  5.],
[ 5.,  5.,  5.,  5.],
[ 5.,  5.,  5.,  5.]],

   [[ 5.,  5.,  5.,  5.],
[ 5.,  5.,  5.,  5.],
[ 5.,  5.,  5.,  5.]]])

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] help on fast slicing on a grid

2009-01-30 Thread Robert Kern
On Fri, Jan 30, 2009 at 12:58, frank wang f...@hotmail.com wrote:
 I have created a test example for the question using for loop and hope
 someone can help me to get fast solution. My data set is about 200 data.

 However, I have the problem to run the code, the Out[i]=cnstl[j] line gives
 me error says:

 In [107]: Out[0]=cnstl[0]
 ---
 TypeError Traceback (most recent call last)
 C:\Frank_share\qamslicer.py in <module>()
  1
   2
   3
   4
   5
 TypeError: can't convert complex to float; use abs(z)
 In [108]: cnstl.dtype
 Out[108]: dtype('complex128')

 I do not know why that my data is complex128 already. Can anyone help to
 figure why?

It's an odd error message, certainly. The root of the problem is that
you are attempting to put a (1,)-shaped array into a scalar. You don't
want to do that.

By the way, you don't want to use the variable name Out in IPython.
It's already used to capture the output.

 Thanks

 Frank

 from numpy import *
 a = arange(-15,16,2)
 cnstl=a.reshape(16,1)+1j*a
 cnstl=cnstl.reshape(256,1)

Change that line to

  cnstl = cnstl.ravel()

 X = array([1.4 + 1j*2.7, -4.9 + 1j*8.3])
 Out = array(X)
 error =array(X)
  for i in xrange(2):
      for j in xrange(256):
          a0 = real(X[i]) < (real(cnstl[j])+1)
          a1 = real(X[i]) > (real(cnstl[j])-1)
          a2 = imag(X[i]) > (imag(cnstl[j])-1)
          a3 = imag(X[i]) < (imag(cnstl[j])+1)
          if (a0 & a1 & a2 & a3):
              Out[i] = cnstl[j]
              error[i] = X[i] - cnstl[j]

After reindenting this correctly, I get the following results:

In [22]: out
Out[22]: array([ 1.+3.j, -5.+9.j])

In [23]: error
Out[23]: array([ 0.4-0.3j,  0.1-0.7j])


Are those correct?

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
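[As an aside: for this particular constellation the nested loops can be vectorized away entirely, since the grid points are just the odd integers in [-15, 15] and slicing reduces to rounding each coordinate. A sketch, not from the thread; the name qam_slice and the clipping behaviour at the grid edges are assumptions:]

```python
import numpy as np

def qam_slice(X, lo=-15, hi=15):
    """Snap complex samples to the nearest point of a square grid of odd
    integers lo..hi (the 256-QAM constellation from the thread)."""
    def snap(v):
        # nearest odd integer, clipped to the grid edges
        return np.clip(2 * np.round((v - 1) / 2) + 1, lo, hi)
    out = snap(X.real) + 1j * snap(X.imag)
    return out, X - out

X = np.array([1.4 + 2.7j, -4.9 + 8.3j])
out, err = qam_slice(X)
print(out)   # nearest points: 1+3j and -5+9j, matching Robert's loop results
```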


Re: [Numpy-discussion] using numpy functions on an array of objects

2009-01-30 Thread Robert Kern
On Fri, Jan 30, 2009 at 13:18, Christopher Barker chris.bar...@noaa.gov wrote:
 I think you want to subclass an ndarray here. It's a bit tricky to so,
 but if you look in the wiki and these mailing list archives, you'll find
 advise on how to do it.

That still won't work. numpy.linalg.inv() simply does a particular
algorithm on float and complex arrays and nothing else.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] puzzle: generate index with many ranges

2009-01-30 Thread Rick White
Here's a technique that works:

Python 2.4.2 (#5, Nov 21 2005, 23:08:11)
[GCC 4.0.0 20041026 (Apple Computer, Inc. build 4061)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> a = np.array([0,4,0,11])
>>> b = np.array([-1,11,4,15])
>>> rangelen = b-a+1
>>> cumlen = rangelen.cumsum()
>>> c = np.arange(cumlen[-1],dtype=np.int32)
>>> c += np.repeat(a[1:]-c[cumlen[0:-1]], rangelen[1:])
>>> print c
[ 4  5  6  7  8  9 10 11  0  1  2  3  4 11 12 13 14 15]

The basic idea is that the difference of your desired output from a  
simple range is an array with a bunch of constant values appended  
together, and that is what repeat() does.  I'm assuming that you'll  
never have b < a.  Notice the slight ugliness of prepending the  
elements at the beginning so that the cumsum starts with zero.   
(Maybe there is a cleaner way to do that.)

This does create a second array (via the repeat) that is the same  
length as the result.  If that uses too much memory, you could break  
up the repeat and update of c into segments using a loop.  (You  
wouldn't need a loop for every a,b element -- do a bunch in each  
iteration.)
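[Rick's technique, packaged as a reusable function. A sketch: the name multi_arange is made up, and half-open ranges a[i]..b[i]-1 are used to match Raik's original example, which avoids the sentinel-element trick from the transcript:]

```python
import numpy as np

def multi_arange(a, b):
    """Concatenate np.arange(a[i], b[i]) for all i without a Python loop,
    using the repeat/cumsum trick. Assumes b >= a elementwise."""
    a = np.asarray(a)
    b = np.asarray(b)
    lengths = b - a                                   # length of each range
    # prepend 0 so cumsum gives each range's start offset in the output
    offsets = np.concatenate(([0], lengths.cumsum()))
    c = np.arange(offsets[-1])
    # shift each segment so it starts at a[i] instead of offsets[i]
    c += np.repeat(a - offsets[:-1], lengths)
    return c

print(multi_arange([4, 0, 11], [11, 4, 15]))
# [ 4  5  6  7  8  9 10  0  1  2  3 11 12 13 14]
```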

-- Rick

Raik Gruenberg wrote:

 Hi there,

 perhaps someone has a bright idea for this one:

 I want to concatenate ranges of numbers into a single array (for  
 indexing). So I
 have generated an array a with starting positions, for example:

 a = [4, 0, 11]

 I have an array b with stop positions:

 b = [11, 4, 15]

 and I would like to generate an index array that takes 4..11, then  
 0..4, then
 11..15.

 In reality, a and b have 1+ elements and the arrays to be  
 sliced are very
 large so I want to avoid any for loops etc. Any idea how this could  
 be done? I
 thought some combination of *repeat* and adding of *arange* should  
 do the trick
 but just cannot nail it down.

 Thanks in advance for any hints!

 Greetings,
 Raik



Re: [Numpy-discussion] minor improvment to ones

2009-01-30 Thread Neal Becker
Robert Kern wrote:

 On Fri, Jan 30, 2009 at 08:22, Neal Becker ndbeck...@gmail.com wrote:
 Right now there are 2 options to create an array of constant value:

 1) empty (size); fill (val)

 2) ones (size) * val

 1 has disadvantage of not being an expression, so can't be an arg to a
 function call.
 
 So wrap it in a function.
 
 Also probably slower than create+fill @ same time
 
 Only marginally. In any case, (1) is exactly how ones() and zeros()
 are implemented. I would be +1 on a patch that adds a filled()
 function along the lines of ones() and zeros(), but I'm -1 on adding
 this functionality to ones() or zeros().
 
 2 is probably slower than create+fill @ same time

 Now what would be _really_ cool is a special array type that would
 represent
 a constant array without wasting memory.  boost::ublas, for example, has
 this feature.
 
 In [2]: from numpy.lib.stride_tricks import as_strided
 
 In [3]: def hollow_filled(shape, value, dtype=None):
...: x = asarray(value, dtype=dtype)
...: return as_strided(x, shape, [0]*len(shape))
...:
 
 In [5]: hollow_filled([2,3,4], 5)
 Out[5]:
 array([[[5, 5, 5, 5],
 [5, 5, 5, 5],
 [5, 5, 5, 5]],
 
[[5, 5, 5, 5],
 [5, 5, 5, 5],
 [5, 5, 5, 5]]])
 
 In [6]: hollow_filled([2,3,4], 5.0)
 Out[6]:
 array([[[ 5.,  5.,  5.,  5.],
 [ 5.,  5.,  5.,  5.],
 [ 5.,  5.,  5.,  5.]],
 
[[ 5.,  5.,  5.,  5.],
 [ 5.,  5.,  5.,  5.],
 [ 5.,  5.,  5.,  5.]]])


Where can I find doc on stride_tricks?  Nothing here:
http://docs.scipy.org/doc/numpy/search.html?q=stride_tricks&check_keywords=yes&area=default





Re: [Numpy-discussion] minor improvment to ones

2009-01-30 Thread Robert Kern
On Fri, Jan 30, 2009 at 16:32, Neal Becker ndbeck...@gmail.com wrote:
 Robert Kern wrote:

 On Fri, Jan 30, 2009 at 08:22, Neal Becker ndbeck...@gmail.com wrote:
 Right now there are 2 options to create an array of constant value:

 1) empty (size); fill (val)

 2) ones (size) * val

 1 has disadvantage of not being an expression, so can't be an arg to a
 function call.

 So wrap it in a function.

 Also probably slower than create+fill @ same time

 Only marginally. In any case, (1) is exactly how ones() and zeros()
 are implemented. I would be +1 on a patch that adds a filled()
 function along the lines of ones() and zeros(), but I'm -1 on adding
 this functionality to ones() or zeros().

 2 is probably slower than create+fill @ same time

 Now what would be _really_ cool is a special array type that would
 represent
 a constant array without wasting memory.  boost::ublas, for example, has
 this feature.

 In [2]: from numpy.lib.stride_tricks import as_strided

 In [3]: def hollow_filled(shape, value, dtype=None):
...: x = asarray(value, dtype=dtype)
...: return as_strided(x, shape, [0]*len(shape))
...:

 In [5]: hollow_filled([2,3,4], 5)
 Out[5]:
 array([[[5, 5, 5, 5],
 [5, 5, 5, 5],
 [5, 5, 5, 5]],

[[5, 5, 5, 5],
 [5, 5, 5, 5],
 [5, 5, 5, 5]]])

 In [6]: hollow_filled([2,3,4], 5.0)
 Out[6]:
 array([[[ 5.,  5.,  5.,  5.],
 [ 5.,  5.,  5.,  5.],
 [ 5.,  5.,  5.,  5.]],

[[ 5.,  5.,  5.,  5.],
 [ 5.,  5.,  5.,  5.],
 [ 5.,  5.,  5.,  5.]]])


 Where can I find doc on stride_tricks?

Source is always the best place. as_strided is not exposed as such
since you can cause segfaults with it if you have a bug. Rather, it's
useful for devs to make tools that, once debugged, can't cause
segfaults.

 Nothing here:
 http://docs.scipy.org/doc/numpy/search.html?q=stride_tricks&check_keywords=yes&area=default

Use this search box to search the development version of the docs:

  http://docs.scipy.org/numpy/search/

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
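[To see what as_strided buys here, and why Robert says it needs a debugged wrapper: a zero-stride "constant" array is backed by a single element, so it costs no extra memory, but every element aliases the same storage. A small sketch:]

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

x = np.asarray(5.0)
# six logical elements, all views of the one scalar: strides of 0 mean the
# same memory is revisited, so no extra storage is allocated
const = as_strided(x, shape=(2, 3), strides=(0, 0))
print(const.sum())     # 30.0
print(const.strides)   # (0, 0) -- writing through such a view would be a bug
```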


Re: [Numpy-discussion] example reading binary Fortran file

2009-01-30 Thread David Froger
ok for f2py!

 Otherwise, you will have to figure out how your Fortran program writes the
file. I.e. what padding, metainformation, etc. that are used. If you
switch Fortran compiler, or even compiler version from the same vendor,
you must start over again.

In my experience, I have never had this kind of problem. I just have to convert
files between big/little endian with uswap (http://linux.die.net/man/1/uswap),
but I have never seen a Fortran program write data differently depending on
the compiler.

For my own work, I just makes sure NEVER to do any I/O in Fortran! It is
asking for trouble. I leave the I/O to Python or C, where it belongs. That
way I know what data are written and what data are read.

Unfortunately, binary files are mandatory in the context I work in. I use a
scientific code written in Fortran to compute fluid dynamics. Typically the
simulation is run on a supercomputer and generates gigabytes and gigabytes of
data, so we must use the binary format, which requires less storage. Then I
like to post-process the data using Python and Gnuplot.py.

That's why I'm looking for a performant, easy and 'standard' way to read
binary Fortran files. (I think many people have the same need.)

David
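[A minimal reader for the record-marker convention under discussion. This is a sketch, not code from the thread: it assumes the common layout of a 4-byte little-endian record-length marker before and after each record, which is exactly the compiler-dependent part Sturla warns about:]

```python
import io
import struct
import numpy as np

def read_record(f, dtype):
    """Read one Fortran 'unformatted' sequential record from file object f.

    Assumes 4-byte little-endian record-length markers; marker size and
    endianness vary by compiler, so verify against your files first.
    """
    (n,) = struct.unpack("<i", f.read(4))          # leading marker: bytes in record
    data = np.frombuffer(f.read(n), dtype=dtype)
    (n2,) = struct.unpack("<i", f.read(4))         # trailing marker repeats it
    if n != n2:
        raise IOError("record markers disagree: wrong layout or endianness")
    return data

# round-trip check against a synthetic record
payload = np.arange(5, dtype="<f8")
raw = (struct.pack("<i", payload.nbytes) + payload.tobytes()
       + struct.pack("<i", payload.nbytes))
rec = read_record(io.BytesIO(raw), "<f8")
print(rec)   # [0. 1. 2. 3. 4.]
```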


[Numpy-discussion] numpy.distutils and shared libraries

2009-01-30 Thread Brian Granger
Hi,

2 ?'s about numpy.distutils:

1.

I am using config.add_library to build a c++ library that I will link
into some Cython extensions.  This is working fine and generating a .a
library for me.  However, I need a shared library instead.  Is this
possible with numpy.distutils or will I need something like numscons?

2.

When calling add_library, what is the difference between the depends
and headers arguments?

Thanks!

Brian


Re: [Numpy-discussion] numpy.distutils and shared libraries

2009-01-30 Thread Robert Kern
On Fri, Jan 30, 2009 at 18:13, Brian Granger ellisonbg@gmail.com wrote:
 Hi,

 2 ?'s about numpy.distutils:

 1.

 I am using config.add_library to build a c++ library that I will link
 into some Cython extensions.  This is working fine and generating a .a
 library for me.  However, I need a shared library instead.  Is this
 possible with numpy.distutils or will I need something like numscons?

numscons or you can adapt the code from OOF2 and contribute it to
numpy.distutils.

 2.

 When calling add_library, what is the difference between the depends
 and headers arguments?

headers get installed via the distutils install_headers command. This
will install the headers ... somewhere. Not exactly sure.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] numpy.distutils and shared libraries

2009-01-30 Thread Brian Granger
 I am using config.add_library to build a c++ library that I will link
 into some Cython extensions.  This is working fine and generating a .a
 library for me.  However, I need a shared library instead.  Is this
 possible with numpy.distutils or will I need something like numscons?

 numscons or you can adapt the code from OOF2 and contribute it to
 numpy.distutils.

OK, thanks; I hadn't seen the stuff in OOF2.

 When calling add_library, what is the difference between the depends
 and headers arguments?

 headers get installed via the distutils install_headers command. This
 will install the headers ... somewhere. Not exactly sure.

Again, thanks, that helps.

Cheers,

Brian

 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] NumPy distutils limitation?

2009-01-30 Thread David Cournapeau
On Sat, Jan 31, 2009 at 5:30 AM, Francesc Alted fal...@pytables.org wrote:
 Hi,

 Gregor and I are trying to give support for Intel's VML (Vector
 Mathematical Library) in numexpr.  For this, we are trying to make use
 of the weaponery in NumPy's distutils so as to be able to discover
 where the MKL (the package that contains VML) is located.  The
 libraries that we need to link with are: mkl_gf_lp64, mkl_gnu_thread,
 mkl_core *and* iomp5 (Intel's OpenMP library).

 The problem is that I have installed MKL as part as the Intel compiler
 for Unix.  In this setup, most of the libraries are in one place,
 namely:

 /opt/intel/Compiler/11.0/074/mkl/lib/em64t/

 However, the OpenMP library is in another directory:

 /opt/intel/Compiler/11.0/074/lib/intel64

 So, I need to specify *two* directories to get the complete set of
 libraries.  My first attempt was setting a site.cfg like:

 [DEFAULT]
 #libraries = gfortran

 [mkl]
 library_dirs= 
 /opt/intel/Compiler/11.0/074/mkl/lib/em64t/:/opt/intel/Compiler/11.0/074/lib/intel64
 include_dirs =  /opt/intel/Compiler/11.0/074/mkl/include/
 mkl_libs = mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5


 Unfortunately, distutils complains and says that it cannot find the
 complete set of libraries:

 mkl_info:
  libraries mkl_gf_lp64,mkl_gnu_thread,mkl_core,iomp5 not found
 in /opt/intel/Compiler/11.0/074/mkl/lib/em64t/
  libraries mkl_gf_lp64,mkl_gnu_thread,mkl_core,iomp5 not found
 in /opt/intel/Compiler/11.0/074/lib/intel64
  NOT AVAILABLE

 After some debugging of the problem, it seems that distutils needs to
 find *all* the required libraries in *one* single directory.

Yes


 My question is, is there an elegant way to handle this problem, or is it
 a limitation of the current distutils?

The elegant solution is to make symlinks on Unix.
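For example, the symlink workaround could look like this (the target directory name is an arbitrary choice, and the exact .so names depend on the MKL installation; the source paths are the ones from the post above):

```python
import os

# Sketch: gather the MKL libraries and the OpenMP runtime into a single
# directory via symlinks, so that numpy.distutils can find them all
# through one library_dirs entry.
libdir = os.path.expanduser('~/intel-libs')
os.makedirs(libdir, exist_ok=True)

mkl = '/opt/intel/Compiler/11.0/074/mkl/lib/em64t'
omp = '/opt/intel/Compiler/11.0/074/lib/intel64'
for src_dir, name in [(mkl, 'libmkl_gf_lp64.so'),
                      (mkl, 'libmkl_gnu_thread.so'),
                      (mkl, 'libmkl_core.so'),
                      (omp, 'libiomp5.so')]:
    link = os.path.join(libdir, name)
    if not os.path.islink(link):
        os.symlink(os.path.join(src_dir, name), link)

# site.cfg can then point at the single directory:
#   [mkl]
#   library_dirs = /home/<user>/intel-libs
#   mkl_libs = mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5
```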

 If the latter, it would be
 nice if that could be solved in a future version, so that several libraries
 can be found in *several* directories.

Unfortunately, this would mean rewriting system_info, because this
assumption is deeply ingrained in the whole design. Personally, I
think the whole idea of looking for library files is not the right
one - all the other tools I know (autoconf, scons, jam, cmake) try to
link a code snippet instead. But doing it in distutils won't be fun.

David


Re: [Numpy-discussion] example reading binary Fortran file

2009-01-30 Thread Gary Ruben
The only time I've done this, I used numpy.fromfile exactly as follows. 
The file had a header followed by a number of records (one float 
followed by 128 complex numbers), requiring separate calls of 
numpy.fromfile to read each part. The only strange part about this was 
that the Fortran code was only supposed to be outputting 3 header fields 
but was adding an extra integer field for some unknown reason. I used a 
binary file viewer to work out the actual format of the file. To get at 
just the complex data in the records, I viewed the data as a recarray. 
Hopefully it's reasonably self-explanatory:

---

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm

f = open('field.dat', 'rb')

# Read header
a = np.fromfile(f, np.dtype([('f1','i4'), ('f2','i4'), ('f3','f8'),
  ('f4','i4')]), count = 1)

# Read data records
b = np.fromfile(f, np.dtype([('f1','f8'), ('f2','128c16')]))
bv = b.view(np.recarray)

plt.imshow(np.abs(np.fft.fftshift(np.fft.fft(bv.f2, axis=1), axes=[1])),
origin='lower', interpolation='nearest', cmap=cm.Oranges)
plt.show()

---

Gary

David Froger wrote:
 Hi,
 
 My question is about reading Fortran binary files (oh no, this question 
 again...)

snip
 
 After googling, I read that fopen and npfile were deprecated, and we 
 should use numpy.fromfile and ndarray.tofile, but despite the 
 documentation, the cookbook, the mailing list and google, I don't succeed 
 in making a simple example. Considering the simple Fortran code below, 
 what is the Python script to read the four arrays? What about if my pc 
 is little endian and the file big endian?
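On the byte-order question: an explicit '>' in the dtype string forces big-endian interpretation regardless of the host's native order. A self-contained sketch (the record layout - one i4 length marker, three f8 values, one trailing i4 marker - is an illustrative assumption, not the poster's actual file format):

```python
import numpy as np

# Fortran unformatted sequential files bracket each record with 4-byte
# length markers.  '>' forces big-endian interpretation on any host.
rec = np.dtype([('head', '>i4'), ('data', '>f8', 3), ('tail', '>i4')])

# Write one sample record so the example is self-contained.
sample = np.zeros(1, dtype=rec)
sample['head'] = sample['tail'] = 3 * 8      # record length in bytes
sample['data'] = [1.0, 2.0, 3.0]
sample.tofile('bigendian_record.dat')

# Read it back: np.fromfile honours the explicit byte order, so the
# values come out correct even on a little-endian machine.
a = np.fromfile('bigendian_record.dat', dtype=rec)
print(a['data'][0])
```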



Re: [Numpy-discussion] help on fast slicing on a grid

2009-01-30 Thread Robert Kern
On Fri, Jan 30, 2009 at 22:41, frank wang f...@hotmail.com wrote:
 Thanks for the correction. I will learn the ravel() function since I do not
 know it. Moving from the Matlab world into Python is tricky sometimes.

 Your output
 In [22]: out
 Out[22]: array([ 1.+3.j, -5.+9.j])

 In [23]: error
 Out[23]: array([ 0.4-0.3j, 0.1-0.7j])

 are the correct answers.

 However, if my data set is large, this solution takes a long time to run. Is
 there any python/numpy magic to speed it up?

from numpy import *

a = arange(-15,16,2)
cnstl = a[:,newaxis] + 1j*a
cnstl = cnstl.ravel()
X = array([1.4 + 1j*2.7, -3.9 + 1j*8.3])

out = around((X + 1+1j) / 2.0) * 2.0 - (1+1j)
error = X - out

print out
print error

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] help on fast slicing on a grid

2009-01-30 Thread frank wang

Hi, Bob,
 
Thanks. This solution works great. It really helps me a lot. 
 
Frank