Re: [Numpy-discussion] numpy with icc/MKL

2008-05-19 Thread rex
 No, they heavily changed how to link against MKL in 10. There is a whole
 chapter about it in the release notes.

Yes, I read it, but it appears to me that the new layered libraries
are an option, and that the legacy link format still works. From
chapter 3: 

Pure layered libraries give more flexibility to choose the appropriate
combination of libraries but do not have backward compatibility by library
names in link lines. Dummy libraries are introduced to provide backward
compatibility with earlier version of Intel MKL, which did not use layered
libraries.

Dummy libraries do not contain any functionality, but only dependencies on a
set of layered libraries. Placed in a link line, dummy libraries enable
omitting dependent layered libraries, which will be linked automatically.
Dummy libraries contain dependency on the following layered libraries
(default principle):

  · Interface: Intel, LP64
  · Threading: Intel compiled
  · Computational: the computation library.

So, if you employ the above interface and use OpenMP* threading provided by
the Intel® compiler, you may not change your link lines.

 I don't have the same thing at all. You have many more libraries than I
 have, and that's because you were using the intel compiler, I think
 (libguide is from intel cc, imf is also used by icc and ifort).

 Remove the installed numpy AND the build directory, and retry building
 numpy with the MKL.

I always remove the build directory (if I forget the much faster
compilation reminds me). Do you mean remove the installed numpy?
Did that, built numpy again, and it fails numpy.test() exactly as before.

I changed site.cfg to:

[mkl]
library_dirs = /opt/intel/mkl/10.0.3.020/lib/32
lapack_libs = mkl_lapack
mkl_libs = mkl, mkl_vml_p4m, guide

(the vml is appropriate for my CPU). The build log shows:

F2PY Version 2_5189
blas_opt_info:
blas_mkl_info:
  FOUND:
libraries = ['mkl', 'mkl_vml_p4m', 'guide', 'pthread']
library_dirs = ['/opt/intel/mkl/10.0.3.020/lib/32']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['/opt/intel/mkl/10.0.3.020/include']

  FOUND:
libraries = ['mkl', 'mkl_vml_p4m', 'guide', 'pthread']
library_dirs = ['/opt/intel/mkl/10.0.3.020/lib/32']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['/opt/intel/mkl/10.0.3.020/include']

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  FOUND:
libraries = ['mkl', 'mkl_vml_p4m', 'guide', 'pthread']
library_dirs = ['/opt/intel/mkl/10.0.3.020/lib/32']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['/opt/intel/mkl/10.0.3.020/include']

  FOUND:
libraries = ['mkl_lapack', 'mkl', 'mkl_vml_p4m', 'guide', 'pthread']
library_dirs = ['/opt/intel/mkl/10.0.3.020/lib/32']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['/opt/intel/mkl/10.0.3.020/include']

  FOUND:
libraries = ['mkl_lapack', 'mkl', 'mkl_vml_p4m', 'guide', 'pthread']
library_dirs = ['/opt/intel/mkl/10.0.3.020/lib/32']

So it's finding the vector math library, but checking with ldd
shows that of all the *.so libs in site-packages/numpy only
lapack_lite.so is using vml. Any thoughts on why none of the other
libs are using it?

ldd /usr/local/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so
linux-gate.so.1 =>  (0xe000)
libmkl_lapack.so => /opt/intel/mkl/10.0.3.020/lib/32/libmkl_lapack.so (0xb7a48000)
/opt/intel/mkl/10.0.3.020/lib/32/libmkl_intel.so (0xb790a000)
/opt/intel/mkl/10.0.3.020/lib/32/libmkl_intel_thread.so (0xb770d000)
/opt/intel/mkl/10.0.3.020/lib/32/libmkl_core.so (0xb76a9000)
libmkl_vml_p4m.so => /opt/intel/mkl/10.0.3.020/lib/32/libmkl_vml_p4m.so (0xb73bd000)
libguide.so => /opt/intel/mkl/10.0.3.020/lib/32/libguide.so (0xb735a000)
libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb732f000)
libimf.so => /opt/intel/fc/10.1.015/lib/libimf.so (0xb70ff000)
libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb70da000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb70cf000)
libintlc.so.5 => /opt/intel/fc/10.1.015/lib/libintlc.so.5 (0xb708c000)
libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb6f5a000)
libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb6f56000)
/lib/ld-linux.so.2 (0x8000)

As you can see from the above, it uses libmkl_vml_p4m, but it's
the only *.so that does. It would be interesting to see the output
of these commands below on a box running BLAS and LAPACK to see how many
of the *.so libs use them.

ldd /usr/local/lib/python2.5/site-packages/numpy/fft/fftpack_lite.so
ldd /usr/local/lib/python2.5/site-packages/numpy/lib/_compiled_base.so
ldd /usr/local/lib/python2.5/site-packages/numpy//core/multiarray.so
ldd /usr/local/lib/python2.5/site-packages/numpy//core/_dotblas.so
ldd /usr/local/lib/python2.5/site-packages/numpy//core/_sort.so
ldd /usr/local/lib/python2.5/site-packages/numpy/core/scalarmath.so 
ldd /usr/local/lib/python2.5/site-packages/numpy/core/umath.so
ldd 

[Numpy-discussion] Generalised inner product

2008-05-19 Thread Peter Creasey
Hi,

Does numpy have some sort of generalised inner product? For example I have
arrays

a.shape = (5,6,7)
b.shape = (8,7,9,10)

and I want to perform a product over the 3rd axis of a and the 2nd of b,
i.e.

c[i,j,k,l,m] = sum (over x) of a[i,j,x] * b[k,x,l,m]

I guess I could do it with swapaxes and numpy.dot or numpy.inner but I
wondered if there was a general function.

Thanks,
Peter


Re: [Numpy-discussion] Which to test: 1.1.x or 1.1.0rc1?

2008-05-19 Thread James Snyder
I've been using git-svn, so I suppose I'm pulling the last rev that
was in 1.1.x. Checked out the RC, looks like there are more unit
tests, but they all still pass for me:

In [2]: numpy.test()
Numpy is installed in /Library/Python/2.5/site-packages/numpy
Numpy version 1.1.0.dev5142
Python version 2.5.1 (r251:54863, Feb  4 2008, 21:48:13) [GCC 4.0.1
(Apple Inc. build 5465)]
  Found 18/18 tests for numpy.core.defmatrix
  Found 3/3 tests for numpy.core.memmap
  Found 283/283 tests for numpy.core.multiarray
  Found 70/70 tests for numpy.core.numeric
  Found 36/36 tests for numpy.core.numerictypes
  Found 12/12 tests for numpy.core.records
  Found 7/7 tests for numpy.core.scalarmath
  Found 16/16 tests for numpy.core.umath
  Found 5/5 tests for numpy.ctypeslib
  Found 5/5 tests for numpy.distutils.misc_util
  Found 2/2 tests for numpy.fft.fftpack
  Found 3/3 tests for numpy.fft.helper
  Found 24/24 tests for numpy.lib._datasource
  Found 10/10 tests for numpy.lib.arraysetops
  Found 1/1 tests for numpy.lib.financial
  Found 0/0 tests for numpy.lib.format
  Found 53/53 tests for numpy.lib.function_base
  Found 5/5 tests for numpy.lib.getlimits
  Found 6/6 tests for numpy.lib.index_tricks
  Found 15/15 tests for numpy.lib.io
  Found 1/1 tests for numpy.lib.machar
  Found 4/4 tests for numpy.lib.polynomial
  Found 49/49 tests for numpy.lib.shape_base
  Found 15/15 tests for numpy.lib.twodim_base
  Found 43/43 tests for numpy.lib.type_check
  Found 1/1 tests for numpy.lib.ufunclike
  Found 89/89 tests for numpy.linalg
  Found 94/94 tests for numpy.ma.core
  Found 15/15 tests for numpy.ma.extras
  Found 7/7 tests for numpy.random
  Found 16/16 tests for numpy.testing.utils
  Found 0/0 tests for __main__
..
 ..
--
Ran 1004 tests in 1.342s

OK

Out[2]: <unittest._TextTestResult run=1004 errors=0 failures=0>

On Sun, May 18, 2008 at 9:47 PM, Jarrod Millman [EMAIL PROTECTED] wrote:
 On Sun, May 18, 2008 at 12:56 PM, James Snyder [EMAIL PROTECTED] wrote:
 I've been running out of trunk recently, and I've noted that an rc release
 has appeared and the 1.1.x branch has been regenerated.

 Which would be most helpful to provide feedback from?

 Hmmm.  I deleted the 1.1.x branch and it doesn't appear to exist
 anymore.  How did you get it?

 Please test the 1.1.0rc1:
 http://projects.scipy.org/scipy/numpy/browser/tags/1.1.0rc1


 --
 Jarrod Millman
 Computational Infrastructure for Research Labs
 10 Giannini Hall, UC Berkeley
 phone: 510.643.4014
 http://cirl.berkeley.edu/
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion




-- 
James Snyder
Biomedical Engineering
Northwestern University
[EMAIL PROTECTED]


Re: [Numpy-discussion] Generalised inner product

2008-05-19 Thread Charles R Harris
On Mon, May 19, 2008 at 4:23 AM, Peter Creasey 
[EMAIL PROTECTED] wrote:

 Hi,

 Does numpy have some sort of generalised inner product? For example I have
 arrays

 a.shape = (5,6,7)
 b.shape = (8,7,9,10)

 and I want to perform a product over the 3rd axis of a and the 2nd of b,
 i.e.

 c[i,j,k,l,m] = sum (over x) of a[i,j,x] * b[k,x,l,m]

 I guess I could do it with swapaxes and numpy.dot or numpy.inner but I
 wondered if there was a general function.


Try tensordot.
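For the shapes above, a small sketch of what that looks like (the einsum cross-check assumes a NumPy new enough to have it):

import numpy as np

# Contract axis 2 of a with axis 1 of b:
#   c[i,j,k,l,m] = sum over x of a[i,j,x] * b[k,x,l,m]
a = np.random.rand(5, 6, 7)
b = np.random.rand(8, 7, 9, 10)

c = np.tensordot(a, b, axes=([2], [1]))
print(c.shape)        # (5, 6, 8, 9, 10): a's free axes first, then b's

# Optional cross-check with einsum (added to NumPy well after this thread):
print(np.allclose(c, np.einsum('ijx,kxlm->ijklm', a, b)))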

Chuck


Re: [Numpy-discussion] numpy with icc/MKL

2008-05-19 Thread David Cournapeau
rex wrote:

 I always remove the build directory (if I forget the much faster
 compilation reminds me). Do you mean remove the installed numpy?
   

Yes.

 Did that, built numpy again, and it fails numpy.test() exactly as before.

 I changed site.cfg to:

 [mkl]
 library_dirs = /opt/intel/mkl/10.0.3.020/lib/32
 lapack_libs = mkl_lapack
 mkl_libs = mkl, mkl_vml_p4m, guide
   

Please follow exactly my instruction, otherwise, we cannot compare what 
we are doing: use exactly the same site.cfg as me.

 ldd /usr/local/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so
 linux-gate.so.1 =>  (0xe000)
 libmkl_lapack.so => /opt/intel/mkl/10.0.3.020/lib/32/libmkl_lapack.so (0xb7a48000)
 /opt/intel/mkl/10.0.3.020/lib/32/libmkl_intel.so (0xb790a000)
 /opt/intel/mkl/10.0.3.020/lib/32/libmkl_intel_thread.so (0xb770d000)
 /opt/intel/mkl/10.0.3.020/lib/32/libmkl_core.so (0xb76a9000)
 libmkl_vml_p4m.so => /opt/intel/mkl/10.0.3.020/lib/32/libmkl_vml_p4m.so (0xb73bd000)
 libguide.so => /opt/intel/mkl/10.0.3.020/lib/32/libguide.so (0xb735a000)
 libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb732f000)
 libimf.so => /opt/intel/fc/10.1.015/lib/libimf.so (0xb70ff000)
 libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0xb70da000)
 libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb70cf000)
 libintlc.so.5 => /opt/intel/fc/10.1.015/lib/libintlc.so.5 (0xb708c000)
 libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb6f5a000)
 libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb6f56000)
 /lib/ld-linux.so.2 (0x8000)
   

You still have libraries linked to the intel compilers (libintlc, libimf).
If you remove both the build directory and the installed numpy, I don't know
why those are used. As a last resort, can you put aside /opt/intel/fc
temporarily ?

 As you can see from the above, it uses libmkl_vml_p4m, but it's
 the only *.so that does. It would be interesting to see the output
 of these commands below on a box running BlAS and Lapack to see how many
 of the *.so libs use them.

 ldd /usr/local/lib/python2.5/site-packages/numpy/fft/fftpack_lite.so
 ldd /usr/local/lib/python2.5/site-packages/numpy/lib/_compiled_base.so
 ldd /usr/local/lib/python2.5/site-packages/numpy//core/multiarray.so
 ldd /usr/local/lib/python2.5/site-packages/numpy//core/_dotblas.so
 ldd /usr/local/lib/python2.5/site-packages/numpy//core/_sort.so
 ldd /usr/local/lib/python2.5/site-packages/numpy/core/scalarmath.so 
 ldd /usr/local/lib/python2.5/site-packages/numpy/core/umath.so
 ldd /usr/local/lib/python2.5/site-packages/numpy/random/mtrand.so
 ldd /usr/local/lib/python2.5/site-packages/numpy/numarray/_capi.so
   

Only _dotblas and lapack_lite use blas/lapack on numpy, AFAIK. They may 
be linked to other extensions, but won't be used.

 In your list post you show mkl/10.0.1.014/. I'm using 10.0.3.020,
 but I've tried an older 10.0.X version with no better luck. BTW,
 my build runs, i.e., I can run some programs that use NumPy. 

   

You are not using the free version, right ? The problem is that the MKL 
error has no clear meaning, and google did not return anything 
meaningful. Maybe your environment is corrupted in some way, because 
something is likely to influence how MKL is initialized. But what 
exactly, I have no idea.

cheers,

David


[Numpy-discussion] noncentral_chisquare buglet?

2008-05-19 Thread Neal Becker
def noncentral_chisquare(self, df, nonc, size=None):
    """Noncentral Chi^2 distribution.

    noncentral_chisquare(df, nonc, size=None) -> random values
    """
    cdef ndarray odf, ononc
    cdef double fdf, fnonc
    fdf = PyFloat_AsDouble(df)
    fnonc = PyFloat_AsDouble(nonc)
    if not PyErr_Occurred():
        if fdf <= 1:
            raise ValueError("df <= 0")

I think this message should be df <= 1?



Re: [Numpy-discussion] numpy with icc/MKL

2008-05-19 Thread rex
 Please follow exactly my instruction, otherwise, we cannot compare what 
 we are doing: use exactly the same site.cfg as me.

OK, I used the same MKL version you did (10.0.1.014), the same
site.cfg, and set .bashrc to do:

source /opt/intel/mkl/10.0.1.014/tools/environment/mklvars32.sh

and compiled 1.1.0rc1 with:

python setup.py config build_clib build_ext install \
 --prefix=/usr/local/ > buildXX

Then I ran:

# python
Python 2.5 (release25-maint, Dec  9 2006, 14:35:53)
[GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-20)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.test()
Numpy is installed in /usr/local/lib/python2.5/site-packages/numpy
Numpy version 1.1.0rc1
Python version 2.5 (release25-maint, Dec  9 2006, 14:35:53) [GCC 4.1.2 20061115 
(prerelease) (Debian 4.1.1-20)]
  Found 18/18 tests for numpy.core.defmatrix
  Found 3/3 tests for numpy.core.memmap
  Found 283/283 tests for numpy.core.multiarray
  Found 70/70 tests for numpy.core.numeric
  Found 36/36 tests for numpy.core.numerictypes
  Found 12/12 tests for numpy.core.records
  Found 0/0 tests for numpy.lib.format
  Found 53/53 tests for numpy.lib.function_base
  Found 5/5 tests for numpy.lib.getlimits
  Found 6/6 tests for numpy.lib.index_tricks
  Found 15/15 tests for numpy.lib.io
  Found 1/1 tests for numpy.lib.machar
  Found 4/4 tests for numpy.lib.polynomial
  Found 49/49 tests for numpy.lib.shape_base
  Found 15/15 tests for numpy.lib.twodim_base
  Found 43/43 tests for numpy.lib.type_check
  Found 1/1 tests for numpy.lib.ufunclike
  Found 89/89 tests for numpy.linalg
  Found 94/94 tests for numpy.ma.core
  Found 15/15 tests for numpy.ma.extras
  Found 7/7 tests for numpy.random
  Found 16/16 tests for numpy.testing.utils
  Found 0/0 tests for __main__
..
 ..
--
Ran 1004 tests in 1.429s

OK
<unittest._TextTestResult run=1004 errors=0 failures=0>


So it works. :) 

ldd /usr/local/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so
linux-gate.so.1 =>  (0xe000)
libmkl_lapack.so => /opt/intel/mkl/10.0.1.014/lib/32/libmkl_lapack.so (0xb7a53000)
/opt/intel/mkl/10.0.1.014/lib/32/libmkl_intel.so (0xb791f000)
/opt/intel/mkl/10.0.1.014/lib/32/libmkl_intel_thread.so (0xb7735000)
/opt/intel/mkl/10.0.1.014/lib/32/libmkl_core.so (0xb76d9000)
libguide.so => /opt/intel/mkl/10.0.1.014/lib/32/libguide.so (0xb767f000)
libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7653000)
libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7522000)
libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0xb751e000)
/lib/ld-linux.so.2 (0x8000)

No more links to icc libs.


 ldd /usr/local/lib/python2.5/site-packages/numpy/linalg/lapack_lite.so
 linux-gate.so.1 =>  (0xe000)
 libmkl_lapack.so => /opt/intel/mkl/10.0.3.020/lib/32/libmkl_lapack.so (0xb7a48000)
 /opt/intel/mkl/10.0.3.020/lib/32/libmkl_intel.so (0xb790a000)
 /opt/intel/mkl/10.0.3.020/lib/32/libmkl_intel_thread.so (0xb770d000)
 /opt/intel/mkl/10.0.3.020/lib/32/libmkl_core.so (0xb76a9000)
 libmkl_vml_p4m.so => /opt/intel/mkl/10.0.3.020/lib/32/libmkl_vml_p4m.so (0xb73bd000)
 libguide.so => /opt/intel/mkl/10.0.3.020/lib/32/libguide.so (0xb735a000)
 libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb732f000)
 libimf.so => /opt/intel/fc/10.1.015/lib/libimf.so (0xb70ff000)
 libintlc.so.5 => /opt/intel/fc/10.1.015/lib/libintlc.so.5 (0xb708c000)


 You still have libraries linked to the intel compilers (libintlc, libimf).
 If you remove both the build directory and the installed numpy, I don't know
 why those are used. As a last resort, can you put aside /opt/intel/fc
 temporarily ?

They are linked because that build was with the Intel compiler. As
you can see from the new ldd above, those links are not there when
I build with gcc.

 You are not using the free version, 

Re: [Numpy-discussion] svd in numpy

2008-05-19 Thread Nripun Sredar
I am running on Windows XP, Intel Xeon CPU. I'd like to fill in a few more
things here. If I send 0 in the second and third arguments of svd then I get
the singular_values, but if it's 1 then the problem persists. I've tried this
on sparse and non-sparse matrices. This is with the latest Windows binaries,
numpy-1.0.4.win32-py2.5.msi.
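For concreteness, the call pattern being described seems to be the following (a sketch; the second and third positional arguments of numpy.linalg.svd are full_matrices and compute_uv):

import numpy as np

a = np.random.rand(416, 52)           # same shape as the reported matrix

s = np.linalg.svd(a, 0, 0)            # compute_uv=0: singular values only (reported to work)
u, s, vt = np.linalg.svd(a, 1, 1)     # full decomposition (reported to hang on the affected setup)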




On Sat, May 17, 2008 at 2:34 AM, David Cournapeau 
[EMAIL PROTECTED] wrote:

  Nripun Sredar wrote:
  I have a sparse matrix 416x52. I tried to factorize this matrix using
  svd from numpy. But it didn't produce a result and looked like it is
  in an infinite loop.
  I tried a similar operation using random numbers in the matrix. Even
  this is in an infinite loop.
  Did anyone else face a similar problem?
  Can anyone please give some suggestions?

 Are you on windows ? What is the CPU on your machine ? I suspect this is
 caused by windows binaries which shipped blas/lapack without support for
 old CPU.

 cheers,

 David
  ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion



Re: [Numpy-discussion] numpy with icc/MKL

2008-05-19 Thread rex
 Next step is to try icc instead of gcc, and if that works, try
 the latest MKL (10.0.3.020).

OK, either I've got a corrupted copy of MKL 10.0.3.020, or it has
a problem. Building with icc & MKL 10.0.1.014 works.

Erik, are you reading this? If so, roll back to MKL 10.0.1.014 and it
should work, both with gcc and icc.

[EMAIL PROTECTED]:/usr/local/src/1.1.0rc1# python setup.py config 
--compiler=intel build_clib --compiler=intel build_ext --compiler=intel install 
--prefix=/usr/local/ > build25
Running from numpy source directory.
[EMAIL PROTECTED]:/usr/local/src/1.1.0rc1# cd ..
[EMAIL PROTECTED]:/usr/local/src# python
Python 2.5 (release25-maint, Dec  9 2006, 14:35:53)
[GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-20)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.test()
Numpy is installed in /usr/local/lib/python2.5/site-packages/numpy
Numpy version 1.1.0rc1
Python version 2.5 (release25-maint, Dec  9 2006, 14:35:53) [GCC 4.1.2 20061115 
(prerelease) (Debian 4.1.1-20)]
  Found 18/18 tests for numpy.core.defmatrix
  Found 3/3 tests for numpy.core.memmap
  Found 283/283 tests for numpy.core.multiarray
  Found 70/70 tests for numpy.core.numeric
  Found 36/36 tests for numpy.core.numerictypes
  Found 12/12 tests for numpy.core.records
  Found 7/7 tests for numpy.core.scalarmath
  Found 16/16 tests for numpy.core.umath
  Found 5/5 tests for numpy.ctypeslib
  Found 5/5 tests for numpy.distutils.misc_util
  Found 2/2 tests for numpy.fft.fftpack
  Found 3/3 tests for numpy.fft.helper
  Found 24/24 tests for numpy.lib._datasource
  Found 10/10 tests for numpy.lib.arraysetops
  Found 1/1 tests for numpy.lib.financial
  Found 0/0 tests for numpy.lib.format
  Found 53/53 tests for numpy.lib.function_base  
  Found 5/5 tests for numpy.lib.getlimits
  Found 6/6 tests for numpy.lib.index_tricks
  Found 15/15 tests for numpy.lib.io
  Found 1/1 tests for numpy.lib.machar
  Found 4/4 tests for numpy.lib.polynomial
  Found 49/49 tests for numpy.lib.shape_base
  Found 15/15 tests for numpy.lib.twodim_base
  Found 43/43 tests for numpy.lib.type_check
  Found 1/1 tests for numpy.lib.ufunclike
  Found 89/89 tests for numpy.linalg
  Found 94/94 tests for numpy.ma.core
  Found 15/15 tests for numpy.ma.extras
  Found 7/7 tests for numpy.random
  Found 16/16 tests for numpy.testing.utils
  Found 0/0 tests for __main__
..
 ..
--
Ran 1004 tests in 1.041s

OK
<unittest._TextTestResult run=1004 errors=0 failures=0>



Re: [Numpy-discussion] svd in numpy

2008-05-19 Thread Nripun Sredar
Thank you. The problem is resolved.

On Mon, May 19, 2008 at 10:31 AM, Bruce Southey [EMAIL PROTECTED] wrote:

 Nripun Sredar wrote:
  I am running on Windows Xp, Intel Xeon CPU. I'd like to fill in a few
  more things here. If I send 0 in the second and third argument of svd
  then I get the singular_values, but if its 1 then the problem
  persists. I've tried this on sparse and non-sparse matrices. This is
  with the latest windows binaries numpy-1.0.4.win32-py2.5.msi.
 
 
 
 
  On Sat, May 17, 2008 at 2:34 AM, David Cournapeau
  [EMAIL PROTECTED]
  wrote:
 
  Nripun Sredar wrote:
   I have a sparse matrix 416x52. I tried to factorize this matrix
  using
   svd from numpy. But it didn't produce a result and looked like it
 is
   in an infinite loop.
   I tried a similar operation using random numbers in the matrix.
 Even
   this is in an infinite loop.
   Did anyone else face a similar problem?
   Can anyone please give some suggestions?
 
  Are you on windows ? What is the CPU on your machine ? I suspect
  this is
  caused by windows binaries which shipped blas/lapack without
  support for
  old CPU.
 
  cheers,
 
  David
  ___
  Numpy-discussion mailing list
  Numpy-discussion@scipy.org
  http://projects.scipy.org/mailman/listinfo/numpy-discussion
 
 
  
 
  ___
  Numpy-discussion mailing list
  Numpy-discussion@scipy.org
  http://projects.scipy.org/mailman/listinfo/numpy-discussion
 
 Hi,
 The issue relates to the closed ticket 627:
 http://www.scipy.org/scipy/numpy/ticket/627

 Please update to the latest Numpy 1.1 (rc1 is available) or, as a
 temporary measure, use the installer:


 http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy-superpack-python2.5.exe


 That uses numpy version 1.0.5.dev5008 which should solve your problem.

 Regards
 Bruce
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion



Re: [Numpy-discussion] Slicing a numpy array and getting the complement

2008-05-19 Thread Anne Archibald
2008/5/19 Orest Kozyar [EMAIL PROTECTED]:
 Given a slice, such as s_[..., :-2:], is it possible to take the
 complement of this slice?  Specifically, s_[..., ::-2].  I have a
 series of 2D arrays that I need to split into two subarrays via
 slicing where the members of the second array are all the members
 leftover from the slice.  The problem is that the slice itself will
 vary, and could be anything such as s_[..., 1:4:] or s_[..., 1:-4:],
 etc, so I'm wondering if there's a straightforward idiom or routine in
 Numpy that would facilitate taking the complement of a slice?  I've
 looked around the docs, and have not had much luck.

If you are using boolean indexing, of course complements are easy
(just use ~). But if you want slice indexing, so that you get views,
sometimes the complement cannot be expressed as a slice: for example:

A = np.arange(10)
A[2:4]

The complement of A[2:4] is np.concatenate((A[:2],A[4:])). Things
become even more complicated if you start skipping elements.

If you don't mind fancy indexing, you can convert your index arrays
into boolean form:
complement = A==A
complement[idx] = False
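A runnable version of that idiom (a sketch; it uses an explicit boolean array rather than A==A, which behaves the same for these purposes):

import numpy as np

A = np.arange(10)
idx = np.s_[2:4]                          # the slice whose complement is wanted

complement = np.ones(A.shape, dtype=bool)
complement[idx] = False

print(A[idx])         # [2 3]
print(A[complement])  # [0 1 4 5 6 7 8 9]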

Anne


Re: [Numpy-discussion] numpy with icc/MKL

2008-05-19 Thread Erik Nugent
I'm here... I am rolling back now and will post my results...

e

On Mon, May 19, 2008 at 9:22 AM, rex [EMAIL PROTECTED] wrote:
 Next step is to try icc instead of gcc, and if that works, try
 the latest MKL (10.0.3.020).

 OK, either I've got a corrupted copy of MKL 10.0.3.020, or it has
 a problem. Building with icc & MKL 10.0.1.014 works.

 Erik, are you reading this? If so, roll back to MKL 10.0.1.014 and it
 should work, both with gcc and icc.

 [EMAIL PROTECTED]:/usr/local/src/1.1.0rc1# python setup.py config 
 --compiler=intel build_clib --compiler=intel build_ext --compiler=intel 
 install --prefix=/usr/local/ > build25
 Running from numpy source directory.
 [EMAIL PROTECTED]:/usr/local/src/1.1.0rc1# cd ..
 [EMAIL PROTECTED]:/usr/local/src# python
 Python 2.5 (release25-maint, Dec  9 2006, 14:35:53)
 [GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-20)] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
 >>> import numpy
 >>> numpy.test()
 Numpy is installed in /usr/local/lib/python2.5/site-packages/numpy
 Numpy version 1.1.0rc1
 Python version 2.5 (release25-maint, Dec  9 2006, 14:35:53) [GCC 4.1.2 
 20061115 (prerelease) (Debian 4.1.1-20)]
  Found 18/18 tests for numpy.core.defmatrix
  Found 3/3 tests for numpy.core.memmap
  Found 283/283 tests for numpy.core.multiarray
  Found 70/70 tests for numpy.core.numeric
  Found 36/36 tests for numpy.core.numerictypes
  Found 12/12 tests for numpy.core.records
  Found 7/7 tests for numpy.core.scalarmath
  Found 16/16 tests for numpy.core.umath
  Found 5/5 tests for numpy.ctypeslib
  Found 5/5 tests for numpy.distutils.misc_util
  Found 2/2 tests for numpy.fft.fftpack
  Found 3/3 tests for numpy.fft.helper
  Found 24/24 tests for numpy.lib._datasource
  Found 10/10 tests for numpy.lib.arraysetops
  Found 1/1 tests for numpy.lib.financial
  Found 0/0 tests for numpy.lib.format
  Found 53/53 tests for numpy.lib.function_base
  Found 5/5 tests for numpy.lib.getlimits
  Found 6/6 tests for numpy.lib.index_tricks
  Found 15/15 tests for numpy.lib.io
  Found 1/1 tests for numpy.lib.machar
  Found 4/4 tests for numpy.lib.polynomial
  Found 49/49 tests for numpy.lib.shape_base
  Found 15/15 tests for numpy.lib.twodim_base
  Found 43/43 tests for numpy.lib.type_check
  Found 1/1 tests for numpy.lib.ufunclike
  Found 89/89 tests for numpy.linalg
  Found 94/94 tests for numpy.ma.core
  Found 15/15 tests for numpy.ma.extras
  Found 7/7 tests for numpy.random
  Found 16/16 tests for numpy.testing.utils
  Found 0/0 tests for __main__
 
 ..
  ..
 --
 Ran 1004 tests in 1.041s

 OK
 <unittest._TextTestResult run=1004 errors=0 failures=0>

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion




-- 
Erik Nugent
System Administrator
Department of Computer Science
The University of Montana - Missoula
(406) 243-2812 V
(406) 243-5139 F


[Numpy-discussion] Noncentral chi square

2008-05-19 Thread Peck, Jon
Message: 1
Date: Mon, 19 May 2008 09:20:21 -0400
From: Neal Becker [EMAIL PROTECTED]
Subject: [Numpy-discussion] noncentral_chisquare buglet?
To: numpy-discussion@scipy.org
Message-ID: [EMAIL PROTECTED]
Content-Type: text/plain; charset=us-ascii

def noncentral_chisquare(self, df, nonc, size=None):
    """Noncentral Chi^2 distribution.

    noncentral_chisquare(df, nonc, size=None) -> random values
    """
    cdef ndarray odf, ononc
    cdef double fdf, fnonc
    fdf = PyFloat_AsDouble(df)
    fnonc = PyFloat_AsDouble(nonc)
    if not PyErr_Occurred():
        if fdf <= 1:
            raise ValueError("df <= 0")

I think this message should be df <= 1?

[Peck, Jon] 
Isn't it rather that the message is correct but the test is wrong?  Shouldn't
it be
if fdf <= 0  ?

-Jon Peck


[Numpy-discussion] Cookbook/Documentation

2008-05-19 Thread Pierre GM
All,
* I've just noticed that the page describing RecordArrays 
(http://www.scipy.org/RecordArrays) is not listed under the Cookbook: should 
this be changed ? Shouldn't there be at least a link in the documentation 
page ?
* Same problem with Subclasses (http://www.scipy.org/Subclasses)
* I was eventually considering writing down some basic docs for MaskedArrays: 
should I create a page under the Cookbook ? Elsewhere ?
* Congrats for the DocMarathon initiative ! The 3 points I've just raised 
would fit nicely with objective #3 (reference sections): what's the plan for 
that ? Any specific directions to follow ?


Re: [Numpy-discussion] Slicing a numpy array and getting the complement

2008-05-19 Thread Robert Kern
On Mon, May 19, 2008 at 9:34 AM, Orest Kozyar [EMAIL PROTECTED] wrote:
 Given a slice, such as s_[..., :-2:], is it possible to take the
 complement of this slice?  Specifically, s_[..., ::-2].

Hmm, that doesn't look like the complement. Did you mean s_[..., -2:]
and s_[..., :-2]?
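A quick check that those two slices really do partition the last axis (just a sketch):

import numpy as np

A = np.arange(12).reshape(3, 4)
print(A[..., :-2])    # everything except the last two columns
print(A[..., -2:])    # only the last two columns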

 I have a
 series of 2D arrays that I need to split into two subarrays via
 slicing where the members of the second array are all the members
 leftover from the slice.  The problem is that the slice itself will
 vary, and could be anything such as s_[..., 1:4:] or s_[..., 1:-4:],
 etc, so I'm wondering if there's a straightforward idiom or routine in
 Numpy that would facilitate taking the complement of a slice?  I've
 looked around the docs, and have not had much luck.

In general, for any given slice, there may not be a slice giving the
complement. For example, the complement of arange(6)[1:4] should be
array([0,4,5]), but there is no slice which can make that. Things get
even more difficult with start:stop:step slices let alone simultaneous
multidimensional slices. Can you be more specific as to exactly the
variety of slices you need to support?

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] Noncentral chi square

2008-05-19 Thread Robert Kern
On Mon, May 19, 2008 at 11:33 AM, Peck, Jon [EMAIL PROTECTED] wrote:
 Message: 1
 Date: Mon, 19 May 2008 09:20:21 -0400
 From: Neal Becker [EMAIL PROTECTED]
 Subject: [Numpy-discussion] noncentral_chisquare buglet?
 To: numpy-discussion@scipy.org
 Message-ID: [EMAIL PROTECTED]
 Content-Type: text/plain; charset=us-ascii

def noncentral_chisquare(self, df, nonc, size=None):
    """Noncentral Chi^2 distribution.

    noncentral_chisquare(df, nonc, size=None) -> random values
    """
    cdef ndarray odf, ononc
    cdef double fdf, fnonc
    fdf = PyFloat_AsDouble(df)
    fnonc = PyFloat_AsDouble(nonc)
    if not PyErr_Occurred():
        if fdf <= 1:
            raise ValueError("df <= 0")

 I think this message should be df <= 1?

 [Peck, Jon]
 Isn't it rather that the message is correct but the test is wrong?  Shouldn't 
 it be
 if fdf <= 0  ?

<goes back over assumptions> Yes, you are correct.
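A sketch of the agreed fix in plain Python (hypothetical helper; the real check lives in the Pyrex source, numpy/random/mtrand/mtrand.pyx):

def check_df(df):
    # With the corrected condition, the test and the error message agree.
    fdf = float(df)
    if fdf <= 0:
        raise ValueError("df <= 0")
    return fdf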

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] Cookbook/Documentation

2008-05-19 Thread Stéfan van der Walt
Hi Pierre

2008/5/19 Pierre GM [EMAIL PROTECTED]:
 * I've just noticed that the page describing RecordArrays
 (http://www.scipy.org/RecordArrays) is not listed under the Cookbook: should
 this be changed ? Shouldn't there be at least a link in the documentation
 page ?

How about we add those pages to a CookBookCategory and auto-generate
the Cookbook (like I've done with ProposedEnhancements)?

 * I was eventually considering writing down some basic docs for MaskedArrays:
 should I create a page under the Cookbook ? Elsewhere ?

That's a good place for now.  Use ReST (on the wiki, use {{{#!rst
<your stuff here> }}}), then we can incorporate your work into the
user guide later.

Regards
Stéfan


Re: [Numpy-discussion] 1.1.0rc1 tagged

2008-05-19 Thread Neal Becker
Jarrod Millman wrote:

 Please test the release candidate:
 svn co http://svn.scipy.org/svn/numpy/tags/1.1.0rc1 1.1.0rc1
 
 Also please review the release notes:
 http://projects.scipy.org/scipy/numpy/milestone/1.1.0
 
 I am going to ask Chris and David to create Windows and Mac binaries,
 which I hope they will have time to create ASAP.
 
 Sorry that it has taken me so long, I am on vacation with my family
 and am having a difficult time getting on my computer.
 
 Thanks,
 
Built OK on Fedora F9 x86_64 using
 ['lapack', 'f77blas', 'cblas', 'atlas']

Used rpmbuild with slightly modified version of fedora 9 spec file.



[Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread James Snyder
Hi -

First off, I know that optimization is evil, and I should make sure
that everything works as expected prior to bothering with squeezing
out extra performance, but the situation is that this particular block
of code works, but it is about half as fast with numpy as in matlab,
and I'm wondering if there's a better approach than what I'm doing.

I have a chunk of code, below, that generally iterates over 2000
iterations, and the vectors that are being worked on at a given step
generally have ~14000 elements in them.

In matlab, doing pretty much exactly the same thing takes about 6-7
seconds, always around 13-14 with numpy on the same machine.  I've
gotten this on Linux & Mac OS X.

self.aff_input has the bulk of the data in it (2000x14000 array), and
the various steps are for computing the state of some afferent neurons
(if there's any interest, here's a paper that includes the model:
Brandman, R. and Nelson ME (2002) A simple model of long-term spike
train regularization. Neural Computation 14, 1575-1597.)

I've imported numpy as np.

Is there anything in practice here that could be done to speed this
up?  I'm looking more for general numpy usage tips, that I can use
while writing further code, and not so much things that would be obscure or
difficult to maintain in the future.

Also, the results of this are a binary array. I'm wondering if there's
anything more compact for expressing it than using 8 bits to represent
each single bit.  I've poked around, but I haven't come up with any
clean and unhackish ideas :-)

Thanks!

I can provide the rest of the code if needed, but it's basically just
filling some vectors with random and empty data and initializing a few
things.

for n in range(0, time_milliseconds):
    self.u = self.expfac_m * self.prev_u + (1-self.expfac_m) * self.aff_input[n,:]
    self.v = self.u + self.sigma * np.random.standard_normal(size=(1,self.naff))
    self.theta = self.expfac_theta * self.prev_theta - (1-self.expfac_theta)

    idx_spk = np.where(self.v >= self.theta)

    self.S[n,idx_spk] = 1
    self.theta[idx_spk] = self.theta[idx_spk] + self.b

    self.prev_u = self.u
    self.prev_theta = self.theta

-- 
James Snyder
Biomedical Engineering
Northwestern University
[EMAIL PROTECTED]


Re: [Numpy-discussion] Slicing a numpy array and getting the complement

2008-05-19 Thread Orest Kozyar
 If you don't mind fancy indexing, you can convert your index arrays
 into boolean form:
 complement = A==A
 complement[idx] = False

This actually would work perfectly for my purposes.  I don't really
need super-fancy indexing.

 Given a slice, such as s_[..., :-2:], is it possible to take the
 complement of this slice?  Specifically, s_[..., ::-2].

 Hmm, that doesn't look like the complement. Did you mean s_[..., -2:]
 and s_[..., :-2]?

Whoops, yes you're right.

 In general, for any given slice, there may not be a slice giving the
 complement. For example, the complement of arange(6)[1:4] should be
 array([0,4,5]), but there is no slice which can make that. Things get
 even more difficult with start:stop:step slices let alone simultaneous
 multidimensional slices. Can you be more specific as to exactly the
 variety of slices you need to support?

I think Anne's solution will work well for what I need to do.  Thanks!


Re: [Numpy-discussion] 1.1.0rc1 tagged

2008-05-19 Thread David Huard
Ticket 793 has a patch, submitted by Alan McIntyre, waiting for review from
someone versed in the C API.

Cheers,

David



2008/5/19 Neal Becker [EMAIL PROTECTED]:

 Jarrod Millman wrote:

  Please test the release candidate:
  svn co http://svn.scipy.org/svn/numpy/tags/1.1.0rc1 1.1.0rc1
 
  Also please review the release notes:
  http://projects.scipy.org/scipy/numpy/milestone/1.1.0
 
  I am going to ask Chris and David to create Windows and Mac binaries,
  which I hope they will have time to create ASAP.
 
  Sorry that it has taken me so long, I am on vacation with my family
  and am having a difficult time getting on my computer.
 
  Thanks,
 
 Built OK on Fedora F9 x86_64 using
  ['lapack', 'f77blas', 'cblas', 'atlas']

 Used rpmbuild with slightly modified version of fedora 9 spec file.

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion



Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Robin
On Mon, May 19, 2008 at 7:08 PM, James Snyder [EMAIL PROTECTED] wrote:

for n in range(0, time_milliseconds):
    self.u = self.expfac_m * self.prev_u + (1-self.expfac_m) * self.aff_input[n,:]
    self.v = self.u + self.sigma * np.random.standard_normal(size=(1,self.naff))
    self.theta = self.expfac_theta * self.prev_theta - (1-self.expfac_theta)

    idx_spk = np.where(self.v >= self.theta)

    self.S[n,idx_spk] = 1
    self.theta[idx_spk] = self.theta[idx_spk] + self.b

    self.prev_u = self.u
    self.prev_theta = self.theta

Hello,

The only thoughts I had were that depending on how 'vectorised'
random.standard_normal is, it might be better to calculate a big block
of random data outside the loop and index it in the same way as the
aff_input. Not certain if the indexing would be faster than the
function call but it could be worth a try if you have enough memory.

The other thing is there are a lot of self.'s there. I don't have a
lot of practical experience, but I've read (
http://wiki.python.org/moin/PythonSpeed/PerformanceTips#head-aa6c07c46a630a2fa10bd6502510e532806f1f62
) that . based lookups are slower than local variables so another
thing to try would be to rebind everything to a local variable outside
the loop:
u = self.u
v = self.v
etc. which although a bit unsightly actually can make the inner loop
more readable and might speed things up.
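For instance, a toy-sized sketch of those two suggestions (all the names and numbers here are just stand-ins, not from the original code):

import numpy as np

steps, naff = 2000, 14000                 # stand-ins for time_milliseconds, self.naff
aff_input = np.random.rand(steps, naff)
expfac_m, sigma = 0.9, 0.1

# One big block of noise drawn up front, then indexed inside the loop...
noise = sigma * np.random.standard_normal(size=(steps, naff))

# ...and plain local names instead of repeated self.* lookups.
u = np.zeros(naff)
for n in range(steps):
    u = expfac_m * u + (1 - expfac_m) * aff_input[n, :]
    v = u + noise[n, :]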

The only other thing is be careful with things like this when
translating from matlab:
self.prev_u = self.u
since this is a reference not a copy of the data. This is OK because
when you recreate u as a product it creates a new object, but if you
changed u in another way ie self.u[:100] = 10 then self.prev_u would
still be pointing to the same array and also reflect those changes.
In this case it doesn't look like you explicitly need the prev_ values
so it's possible you could do the v and theta updates in place
(although I'm not sure if that's quicker)
u *= expfac_m
u += (1-expfac_m)*aff_input.. etc.
Of course you can also take the (1-)'s outside of the loop although
again I'm not sure how much difference it would make.

So sorry I can't give any concrete advice but I hope I've given some ideas...

Cheers

Robin


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Robin
Also you could use xrange instead of range...

Again, not sure of the size of the effect but it seems to be
recommended by the docstring.

Robin


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Hoyt Koepke
for n in range(0, time_milliseconds):
    self.u = self.expfac_m * self.prev_u + (1-self.expfac_m) * self.aff_input[n,:]
    self.v = self.u + self.sigma * np.random.standard_normal(size=(1,self.naff))
    self.theta = self.expfac_theta * self.prev_theta - (1-self.expfac_theta)

    idx_spk = np.where(self.v >= self.theta)

    self.S[n,idx_spk] = 1
    self.theta[idx_spk] = self.theta[idx_spk] + self.b

    self.prev_u = self.u
    self.prev_theta = self.theta

Copying elements into array objects that already exist will always be
faster than creating a new object with separate data.  However, in
this case, you don't need to do any copying or creation if you use a
flip-flopping index to handle keeping track of the previous. If I drop
the selfs, you can translate the above into (untested):


curidx = 0  # prev will be 2-curidx

u = empty( (2, naff) )
v = empty( naff )
theta = empty( (2, naff) )

stdnormrvs = np.random.standard_normal(size=(time_milliseconds,naff) )

for n in xrange(time_milliseconds):
    u[curidx, :] = expfac_m * u[2-curidx, :] + (1-expfac_m) * aff_input[n,:]
    v[:] = u[curidx, :] + sigma * stdnormrvs[n, :]
    theta[curidx, :] = expfac_theta * theta[2-curidx] - (1-expfac_theta)

    idx_spk = np.where(v >= theta)
    S[n,idx_spk] = 1
    theta[curidx, idx_spk] += b

    # Flop to handle previous stuff
    curidx = 2 - curidx

This should give you a substantial speedup.  Also, I have to say that
this is begging for weave.blitz, which compiles such statements using
templated c++ code to avoid temporaries.  It doesn't work on all
systems, but if it does in your case, here's what your code might look
like:

import scipy.weave as wv

curidx = 0

u = empty( (2, naff) )
v = empty( (1, naff) )
theta = empty( (2, naff) )

stdnormrvs = np.random.standard_normal(size=(time_milliseconds,naff) )

for n in xrange(time_milliseconds):
    wv.blitz("u[curidx, :] = expfac_m * u[2-curidx, :] + (1-expfac_m) * aff_input[n,:]")
    wv.blitz("v[:] = u[curidx, :] + sigma * stdnormrvs[n, :]")
    wv.blitz("theta[curidx, :] = expfac_theta * theta[2-curidx] - (1-expfac_theta)")

    idx_spk = np.where(v >= theta)
    S[n,idx_spk] = 1
    theta[curidx, idx_spk] += b

    # Flop to handle previous stuff
    curidx = 2 - curidx



-- 
+++
Hoyt Koepke
UBC Department of Computer Science
http://www.cs.ubc.ca/~hoytak/
[EMAIL PROTECTED]
+++


On Mon, May 19, 2008 at 11:08 AM, James Snyder [EMAIL PROTECTED] wrote:
 Hi -

 First off, I know that optimization is evil, and I should make sure
 that everything works as expected prior to bothering with squeezing
 out extra performance, but the situation is that this particular block
 of code works, but it is about half as fast with numpy as in matlab,
 and I'm wondering if there's a better approach than what I'm doing.

 I have a chunk of code, below, that generally iterates over 2000
 iterations, and the vectors that are being worked on at a given step
 generally have ~14000 elements in them.

 In matlab, doing pretty much exactly the same thing takes about 6-7
 seconds, always around 13-14 with numpy on the same machine.  I've
 gotten this on Linux  Mac OS X.

 self.aff_input has the bulk of the data in it (2000x14000 array), and
 the various steps are for computing the state of some afferent neurons
 (if there's any interest, here's a paper that includes the model:
 Brandman, R. and Nelson ME (2002) A simple model of long-term spike
 train regularization. Neural Computation 14, 1575-1597.)

 I've imported numpy as np.

 Is there anything in practice here that could be done to speed this
 up?  I'm looking more for general numpy usage tips, that I can use
 while writing further code and not so things that would be obscure or
 difficult to maintain in the future.

 Also, the results of this are a binary array, I'm wondering if there's
 anything more compact for expressing than using 8 bits to represent
 each single bit.  I've poked around, but I haven't come up with any
 clean and unhackish ideas :-)

 Thanks!

 I can provide the rest of the code if needed, but it's basically just
 filling some vectors with random and empty data and initializing a few
 things.

for n in range(0,time_milliseconds):
self.u  =  self.expfac_m  *  self.prev_u +
 (1-self.expfac_m) * self.aff_input[n,:]
self.v = self.u + self.sigma *
 np.random.standard_normal(size=(1,self.naff))
self.theta = self.expfac_theta * self.prev_theta -
 (1-self.expfac_theta)

 idx_spk = np.where(self.v >= self.theta)

self.S[n,idx_spk] = 1
self.theta[idx_spk] = self.theta[idx_spk] + self.b

self.prev_u = self.u
self.prev_theta = self.theta

 --
 James Snyder
 Biomedical Engineering
 Northwestern University
 [EMAIL PROTECTED]
 ___
 Numpy-discussion 

Re: [Numpy-discussion] Slicing a numpy array and getting the complement

2008-05-19 Thread Anne Archibald
2008/5/19 Orest Kozyar [EMAIL PROTECTED]:
 If you don't mind fancy indexing, you can convert your index arrays
 into boolean form:
 complement = A==A
 complement[idx] = False

 This actually would work perfectly for my purposes.  I don't really
 need super-fancy indexing.

Heh. Actually fancy indexing is numpy-speak for indexing with
anything that's not an integer or a slice. In this case, indexing with
a boolean array is fancy indexing. The reason we make this
distinction is that with slices, the new array you get is actually
just a reference to the original array (so you can modify the original
array through it). With fancy indexing, the new array you get is
actually a copy. (Assigning to fancy-indexed arrays is handled
specially in __setitem__, so it works.)
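A small sketch of that view-versus-copy distinction:

import numpy as np

A = np.arange(6)

s = A[1:4]            # slice: a view onto A's data
s[0] = 99
print(A)              # [ 0 99  2  3  4  5] -- modifying the view changed A

f = A[A > 50]         # boolean (fancy) indexing: a copy
f[0] = -1
print(A)              # A is unchanged by modifying the copy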

Anne


[Numpy-discussion] 1.1.0rc1 OSX Installer - please test

2008-05-19 Thread Christopher Burns
I've built a Mac binary for the 1.1 release candidate.  Mac users,
please test it from:

https://cirl.berkeley.edu/numpy/numpy-1.1.0rc1-py2.5-macosx10.5.dmg

This is for the MacPython installed from python.org.

Thanks,
Chris

On Sat, May 17, 2008 at 9:01 PM, Jarrod Millman [EMAIL PROTECTED] wrote:
 Please test the release candidate:
 svn co http://svn.scipy.org/svn/numpy/tags/1.1.0rc1 1.1.0rc1

 Also please review the release notes:
 http://projects.scipy.org/scipy/numpy/milestone/1.1.0

 I am going to ask Chris and David to create Windows and Mac binaries,
 which I hope they will have time to create ASAP.

 Sorry that it has taken me so long, I am on vacation with my family
 and am having a difficult time getting on my computer.

 Thanks,

 --
 Jarrod Millman
 Computational Infrastructure for Research Labs
 10 Giannini Hall, UC Berkeley
 phone: 510.643.4014
 http://cirl.berkeley.edu/
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion




-- 
Christopher Burns
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Robin
Hi,

I think my understanding is somehow incomplete... It's not clear to me
why (simplified case)

a[curidx,:] = scalar * a[2-curidx,:]
should be faster than
a = scalar * b

In both cases I thought the scalar multiplication results in a new
array (new memory allocated) and then the difference between copying
that result into the existing array u[curidx,:] or reassigning the
reference u to that result should be very similar?

If anything I would have thought the direct assignment would be
quicker since then there is no copying.

What am I missing?

 This should give you a substantial speedup.  Also, I have to say that
 this is begging for weave.blitz, which compiles such statements using
 templated c++ code to avoid temporaries.  It doesn't work on all
 systems, but if it does in your case, here's what your code might look
 like:

If you haven't seen it this page gives useful examples of methods to
speed up python code (incuding weave.blitz), which has Hoyt says would
be ideal in this case:
http://scipy.org/PerformancePython

Robin


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Anne Archibald
2008/5/19 James Snyder [EMAIL PROTECTED]:

 First off, I know that optimization is evil, and I should make sure
 that everything works as expected prior to bothering with squeezing
 out extra performance, but the situation is that this particular block
 of code works, but it is about half as fast with numpy as in matlab,
 and I'm wondering if there's a better approach than what I'm doing.

 I have a chunk of code, below, that generally iterates over 2000
 iterations, and the vectors that are being worked on at a given step
 generally have ~14000 elements in them.

With arrays this size, I wouldn't worry about python overhead - things
like range versus xrange or self lookups.

 Is there anything in practice here that could be done to speed this
 up?  I'm looking more for general numpy usage tips, that I can use
 while writing further code and not so things that would be obscure or
 difficult to maintain in the future.

Try using a profiler to find which steps are using most of your time.
With such a simple function it may not be very informative, but it's
worth a try.

 Also, the results of this are a binary array, I'm wondering if there's
 anything more compact for expressing than using 8 bits to represent
 each single bit.  I've poked around, but I haven't come up with any
 clean and unhackish ideas :-)

There's a tradeoff between compactness and speed here. The *fastest*
is probably one boolean per 32-bit integer. It sounds awful, I know,
but most modern CPUs have to work harder to access bytes individually
than they do to access them four at a time. On the other hand, cache
performance can make a huge difference, so compactness might actually
amount to speed. I don't think numpy has a packed bit array data type
(which is a shame, but would require substantial implementation
effort).

 I can provide the rest of the code if needed, but it's basically just
 filling some vectors with random and empty data and initializing a few
 things.

It would kind of help, since it would make it clearer what's a scalar
and what's an array, and what the dimensions of the various arrays
are.

for n in range(0, time_milliseconds):
    self.u = self.expfac_m * self.prev_u + (1-self.expfac_m) * self.aff_input[n,:]
    self.v = self.u + self.sigma * np.random.standard_normal(size=(1,self.naff))

You can use scale to rescale the random numbers on creation; that'll
save you a temporary.
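For example, the line quoted above could become something like this (just a sketch):

self.v = self.u + np.random.normal(scale=self.sigma, size=(1, self.naff))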

    self.theta = self.expfac_theta * self.prev_theta - (1-self.expfac_theta)

    idx_spk = np.where(self.v >= self.theta)

You can probably skip the where; the result of the expression
self.v >= self.theta is a boolean array, which you can use directly for
indexing.

self.S[n,idx_spk] = 1
self.theta[idx_spk] = self.theta[idx_spk] + self.b

+= here might speed things up, not just in terms of temporaries but by
saving a fancy-indexing operation.
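Putting the last two suggestions together, those lines might become (a sketch, assuming v, theta and S[n] line up as 1-D arrays):

spk = self.v >= self.theta          # boolean mask, no np.where needed
self.S[n, spk] = 1
self.theta[spk] += self.b           # += indexes theta only once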

self.prev_u = self.u
self.prev_theta = self.theta



Anne


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Eric Firing
Robin wrote:
 Also you could use xrange instead of range...
 
 Again, not sure of the size of the effect but it seems to be
 recommended by the docstring.

No, it is going away in Python 3.0, and its only real benefit is a 
memory saving in extreme cases.

 From the Python library docs:
The advantage of xrange() over range() is minimal (since xrange() still 
has to create the values when asked for them) except when a very large 
range is used on a memory-starved machine or when all of the range's 
elements are never used (such as when the loop is usually terminated 
with break).

Eric


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Hoyt Koepke
On Mon, May 19, 2008 at 12:53 PM, Robin [EMAIL PROTECTED] wrote:
 Hi,

 I think my understanding is somehow incomplete... It's not clear to me
 why (simplified case)

 a[curidx,:] = scalar * a[2-curidx,:]
 should be faster than
 a = scalar * b

 In both cases I thought the scalar multiplication results in a new
 array (new memory allocated) and then the difference between copying
 that result into the existing array u[curidx,:] or reassigning the
 reference u to that result should be very similar?

 If anything I would have thought the direct assignment would be
 quicker since then there is no copying.

 What am I missing?

Actually, I think you are correct.  My bad.  I was mainly thinking in
terms of weave.blitz, where it would make a difference, then
translating back...

--Hoyt

+++
Hoyt Koepke
UBC Department of Computer Science
http://www.cs.ubc.ca/~hoytak/
[EMAIL PROTECTED]
+++


Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test

2008-05-19 Thread Tommy Grav
On May 19, 2008, at 3:39 PM, Christopher Burns wrote:

 I've built a Mac binary for the 1.1 release candidate.  Mac users,
 please test it from:

 https://cirl.berkeley.edu/numpy/numpy-1.1.0rc1-py2.5-macosx10.5.dmg

 This is for the MacPython installed from python.org.

 Thanks,
 Chris

I tried this build on my PPC running 10.5.2. It works, apart from
the two failed tests given below.

[*:~] tgrav% python
ActivePython 2.5.1.1 (ActiveState Software Inc.) based on
Python 2.5.1 (r251:54863, May  1 2007, 17:40:00)
[GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin
Type "help", "copyright", "credits" or "license" for more information.


[*:~] tgrav% python -c 'import numpy; numpy.test()'
Numpy is installed in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy
Numpy version 1.1.0rc1
Python version 2.5.1 (r251:54863, May  1 2007, 17:40:00) [GCC 4.0.1 (Apple Computer, Inc. build 5250)]

[Test log snipped]

==
FAIL: test_basic (numpy.core.tests.test_multiarray.TestView)
--
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/tests/test_multiarray.py", line 843, in test_basic
    assert_array_equal(y, [67305985, 134678021])
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 248, in assert_array_equal
    verbose=verbose, header='Arrays are not equal')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 240, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not equal
AssertionError:
Arrays are not equal

(mismatch 100.0%)
  x: array([16909060, 84281096])
  y: array([ 67305985, 134678021])

==
FAIL: test_keywords (numpy.core.tests.test_multiarray.TestView)
--
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/tests/test_multiarray.py", line 852, in test_keywords
    assert_array_equal(y, [[513]])
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 248, in assert_array_equal
    verbose=verbose, header='Arrays are not equal')
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 240, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not equal
AssertionError:
Arrays are not equal

(mismatch 100.0%)
  x: array([[258]], dtype=int16)
  y: array([[513]])

--
Ran 1004 tests in 2.569s

FAILED (failures=2)

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test

2008-05-19 Thread Robert Kern
On Mon, May 19, 2008 at 3:20 PM, Tommy Grav [EMAIL PROTECTED] wrote:
 ==
 FAIL: test_basic (numpy.core.tests.test_multiarray.TestView)
 --
 Traceback (most recent call last):
   File /Library/Frameworks/Python.framework/Versions/2.5/lib/
 python2.5/site-packages/numpy/core/tests/test_multiarray.py, line
 843, in test_basic
 assert_array_equal(y, [67305985, 134678021])
   File /Library/Frameworks/Python.framework/Versions/2.5/lib/
 python2.5/site-packages/numpy/testing/utils.py, line 248, in
 assert_array_equal
 verbose=verbose, header='Arrays are not equal')
   File /Library/Frameworks/Python.framework/Versions/2.5/lib/
 python2.5/site-packages/numpy/testing/utils.py, line 240, in
 assert_array_compare
 assert cond, msg
 AssertionError:
 Arrays are not equal

 (mismatch 100.0%)
  x: array([16909060, 84281096])
  y: array([ 67305985, 134678021])

Endianness issues. Probably bugs in the code.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test

2008-05-19 Thread Robert Kern
On Mon, May 19, 2008 at 3:35 PM, Robert Kern [EMAIL PROTECTED] wrote:
 Endianness issues. Probably bugs in the code.

By which I meant test code. numpy itself is fine and is working
correctly. The tests themselves incorrectly assume little-endianness.
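
For anyone wondering where those particular numbers come from, here is a
rough illustration of what the test checks (not the actual test code, but
the same byte arithmetic):

import numpy as np

# Eight consecutive bytes 1..8 reinterpreted as two 32-bit integers; the
# values depend on the byte order of the machine running the test:
x = np.arange(1, 9, dtype=np.int8)
print(x.view(np.int32))
# little-endian (Intel): [ 67305985 134678021]  i.e. 0x04030201, 0x08070605
# big-endian (PPC):      [ 16909060  84281096]  i.e. 0x01020304, 0x05060708

# The second failure is the same effect on int16: the byte pair 01 02 reads
# as 513 (0x0201) on little-endian machines and as 258 (0x0102) on PPC.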

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test

2008-05-19 Thread Robert Kern
On Mon, May 19, 2008 at 3:38 PM, Robert Kern [EMAIL PROTECTED] wrote:
 On Mon, May 19, 2008 at 3:35 PM, Robert Kern [EMAIL PROTECTED] wrote:
 Endianness issues. Probably bugs in the code.

 By which I meant test code. numpy itself is fine and is working
 correctly. The tests themselves incorrectly assume little-endianness.

And now fixed on the trunk. I believe that's the correct place to fix
bugs for 1.1.0 at this time.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test

2008-05-19 Thread Tommy Grav

On May 19, 2008, at 4:38 PM, Robert Kern wrote:

 On Mon, May 19, 2008 at 3:35 PM, Robert Kern [EMAIL PROTECTED]  
 wrote:
 Endianness issues. Probably bugs in the code.

 By which I meant test code. numpy itself is fine and is working
 correctly. The tests themselves incorrectly assume little-endianness.

I am just a silent newbie of the numpy list, so I hope that someone
will put this in as a ticket if it is warranted :)

Cheers
   Tommy
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Christopher Barker
Anne Archibald wrote:
 2008/5/19 James Snyder [EMAIL PROTECTED]:
 I can provide the rest of the code if needed, but it's basically just
 filling some vectors with random and empty data and initializing a few
 things.
 
 It would kind of help, since it would make it clearer what's a scalar
 and what's an array, and what the dimensions of the various arrays
 are.

It would also help if you provided a complete example (as little code as 
possible), so we could try out and time our ideas before suggesting them.

 np.random.standard_normal(size=(1,self.naff))

Anyone know how fast this is compared to Matlab? That could be the 
difference right there.

-Chris

-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test

2008-05-19 Thread Christopher Burns
Thanks Tommy!  Robert has already committed a fix.

On Mon, May 19, 2008 at 1:42 PM, Tommy Grav [EMAIL PROTECTED] wrote:

 On May 19, 2008, at 4:38 PM, Robert Kern wrote:

 On Mon, May 19, 2008 at 3:35 PM, Robert Kern [EMAIL PROTECTED]
 wrote:
 Endianness issues. Probably bugs in the code.

 By which I meant test code. numpy itself is fine and is working
 correctly. The tests themselves incorrectly assume little-endianness.

 I am just a silent newbie of the numpy list, so I hope that someone
 will put this in as a ticket if it is warranted :)

 Cheers
   Tommy
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion




-- 
Christopher Burns
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Charles R Harris
On Mon, May 19, 2008 at 2:53 PM, Christopher Barker [EMAIL PROTECTED]
wrote:

 Anne Archibald wrote:
  2008/5/19 James Snyder [EMAIL PROTECTED]:
  I can provide the rest of the code if needed, but it's basically just
  filling some vectors with random and empty data and initializing a few
  things.
 
  It would kind of help, since it would make it clearer what's a scalar
  and what's an array, and what the dimensions of the various arrays
  are.

 It would also help if you provided a complete example (as little code as
 possible), so we could try out and time our ideas before suggesting them.

  np.random.standard_normal(size=(1,self.naff))

 Anyone know how fast this is compared to Matlab? That could be the
 difference right there.


The latest versions of Matlab use the ziggurat method to generate random
normals and it is faster than the method used in numpy. I have ziggurat code
at hand, but IIRC, Robert doesn't trust the method ;) I don't know if it
would actually speed things up, though.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Robert Kern
On Mon, May 19, 2008 at 5:27 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
 The latest versions of Matlab use the ziggurat method to generate random
 normals and it is faster than the method used in numpy. I have ziggurat code
 at hand, but IIRC, Robert doesn't trust the method ;)

Well, I outlined the tests that would satisfy me, but I don't think
you ever responded.

  http://projects.scipy.org/pipermail/scipy-dev/2005-December/004405.html
  which references
  http://projects.scipy.org/pipermail/scipy-dev/2005-December/004400.html

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread James Snyder
Separating the response into 2 emails, here's the aspect that comes
from implementations of random:

In short, that's part of the difference. I ran these a few times to
check for consistency.

MATLAB (R2008a):
tic
for i = 1:2000
a = randn(1,13857);
end
toc

Runtime: ~0.733489 s

NumPy (1.1.0rc1):
import numpy as np
import time

t1 = time.time()
for n in xrange(0,2000):
    a = np.random.standard_normal(size=(1,14000))
t2 = time.time()

print 'Runtime: %1.3f s' % ((t2-t1))

Runtime: ~2.716 s
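
(Roughly the same measurement via the timeit module, in case the explicit
loop and attribute lookups add noise; treat the numbers as machine-dependent:)

from timeit import Timer

# 2000 draws of a (1, 14000) standard-normal array, matching the loop above.
t = Timer("np.random.standard_normal(size=(1, 14000))", "import numpy as np")
print("Runtime: %1.3f s" % min(t.repeat(repeat=3, number=2000)))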

On Mon, May 19, 2008 at 3:53 PM, Christopher Barker
[EMAIL PROTECTED] wrote:
 Anne Archibald wrote:
 2008/5/19 James Snyder [EMAIL PROTECTED]:
 I can provide the rest of the code if needed, but it's basically just
 filling some vectors with random and empty data and initializing a few
 things.

 It would kind of help, since it would make it clearer what's a scalar
 and what's an array, and what the dimensions of the various arrays
 are.

 It would also help if you provided a complete example (as little code as
 possible), so we could try out and time our ideas before suggesting them.

 np.random.standard_normal(size=(1,self.naff))

 Anyone know how fast this is compared to Matlab? That could be the
 difference right there.

 -Chris

 --
 Christopher Barker, Ph.D.
 Oceanographer

 Emergency Response Division
 NOAA/NOS/ORR(206) 526-6959   voice
 7600 Sand Point Way NE   (206) 526-6329   fax
 Seattle, WA  98115   (206) 526-6317   main reception

 [EMAIL PROTECTED]
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion




-- 
James Snyder
Biomedical Engineering
Northwestern University
[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Charles R Harris
On Mon, May 19, 2008 at 4:36 PM, Robert Kern [EMAIL PROTECTED] wrote:

 On Mon, May 19, 2008 at 5:27 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
  The latest versions of Matlab use the ziggurat method to generate random
  normals and it is faster than the method used in numpy. I have ziggurat
 code
  at hand, but IIRC, Robert doesn't trust the method ;)

 Well, I outlined the tests that would satisfy me, but I don't think
 you ever responded.

  http://projects.scipy.org/pipermail/scipy-dev/2005-December/004405.html
  which references
  http://projects.scipy.org/pipermail/scipy-dev/2005-December/004400.html


It's been tested in the literature. It's basically just sampling a Gaussian,
but done so that most of the tests for being under the curve are trivial and
there are few misses: the Gaussian is covered with a stack (ziggurat) of
slices of equal area, a slice is chosen at random, and then the position
along the slice is chosen at random. Most of those points will be under the
curve except near the ends, and it is only those last points that require
computation. However, like all sampling it has to be carefully implemented,
and the samples are discretized differently than with the current method.
Floats are strange that way because they are on a log scale. The tails will
be fine; the real question is how much precision you want when doubles are
returned, i.e., how fine the discretization of the resulting samples should
be. The same method also works for the exponential distribution.
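
For readers who haven't seen the method, here is a minimal pure-Python
sketch of that idea -- not the implementation discussed above, and using the
commonly quoted table constants for 256 layers rather than deriving them; a
production version would vectorize this and work with integer compares:

import numpy as np

# Minimal ziggurat sampler for the standard normal (Marsaglia & Tsang style).
N = 256
R = 3.6541528853610088    # rightmost layer boundary (assumed standard value)
V = 4.92867323399e-3      # common area of every layer (assumed standard value)

def f(x):
    return np.exp(-0.5 * x * x)

# Layer boundaries, xs[1] = R decreasing to xs[N] = 0.  xs[0] is a "virtual"
# width for the base layer, chosen so the fast-accept test below has the
# right odds of returning a point from the base rectangle.
xs = np.empty(N + 1)
xs[0] = V / f(R)
xs[1] = R
for i in range(2, N):
    xs[i] = np.sqrt(-2.0 * np.log(V / xs[i - 1] + f(xs[i - 1])))
xs[N] = 0.0

def zig_normal(rng=np.random):
    while True:
        i = rng.randint(N)               # pick one of the equal-area layers
        u = 2.0 * rng.uniform() - 1.0    # horizontal position in (-1, 1)
        z = u * xs[i]
        if abs(z) < xs[i + 1]:           # strictly under the curve: accept
            return z
        if i == 0:                       # base layer: sample the tail beyond R
            while True:
                xt = -np.log(rng.uniform()) / R
                yt = -np.log(rng.uniform())
                if 2.0 * yt > xt * xt:
                    return R + xt if u > 0 else -(R + xt)
        elif rng.uniform() * (f(xs[i + 1]) - f(xs[i])) < f(z) - f(xs[i]):
            return z                     # wedge: accept against the density

samples = np.array([zig_normal() for _ in range(10000)])
print("mean %.3f  std %.3f" % (samples.mean(), samples.std()))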

I don't feel this is a pressing issue; when I need fast normals I use my own
code. But if we are in competition with Matlab, maybe we should give it a
shot.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Robert Kern
On Mon, May 19, 2008 at 6:39 PM, Charles R Harris
[EMAIL PROTECTED] wrote:

 On Mon, May 19, 2008 at 4:36 PM, Robert Kern [EMAIL PROTECTED] wrote:

 On Mon, May 19, 2008 at 5:27 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
  The latest versions of Matlab use the ziggurat method to generate random
  normals and it is faster than the method used in numpy. I have ziggurat
  code
  at hand, but IIRC, Robert doesn't trust the method ;)

 Well, I outlined the tests that would satisfy me, but I don't think
 you ever responded.

  http://projects.scipy.org/pipermail/scipy-dev/2005-December/004405.html
  which references
  http://projects.scipy.org/pipermail/scipy-dev/2005-December/004400.html

 It's been tested in the literature.

And it happened to fail such tests. Hence the Doornik paper which
improves Marsaglia's method to pass the appropriate tests.
Consequently, I want to see the tests performed on the actual
implementation before using it. It's a complicated algorithm that is
*demonstrably* easy to get wrong.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread James Snyder
On to the code, here's a current implementation, attached.  I make no
claims about it being great code, I've modified it so that there is a
weave version and a sans-weave version.

Many of the suggestions make things a bit faster.  The weave version
bombs out with a rather long log, which can be found at:
http://pastebin.com/m79699c04

I can tell it's failing for the second weave.blitz line, but I don't
understand why exactly.

What does this mean?
error: no match for call to '(blitz::FastArrayIterator<double, 1>)
(const blitz::TinyVector<int, 2>)'

Also note, I'm not asking to match MATLAB performance.  It'd be nice,
but again I'm just trying to put together decent, fairly efficient
numpy code.

On Mon, May 19, 2008 at 3:53 PM, Christopher Barker
[EMAIL PROTECTED] wrote:
 Anne Archibald wrote:
 2008/5/19 James Snyder [EMAIL PROTECTED]:
 I can provide the rest of the code if needed, but it's basically just
 filling some vectors with random and empty data and initializing a few
 things.

 It would kind of help, since it would make it clearer what's a scalar
 and what's an array, and what the dimensions of the various arrays
 are.

 It would also help if you provided a complete example (as little code as
 possible), so we could try out and time our ideas before suggesting them.

 np.random.standard_normal(size=(1,self.naff))

 Anyone know how fast this is compared to Matlab? That could be the
 difference right there.

 -Chris

 --
 Christopher Barker, Ph.D.
 Oceanographer

 Emergency Response Division
 NOAA/NOS/ORR(206) 526-6959   voice
 7600 Sand Point Way NE   (206) 526-6329   fax
 Seattle, WA  98115   (206) 526-6317   main reception

 [EMAIL PROTECTED]
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion




-- 
James Snyder
Biomedical Engineering
Northwestern University
[EMAIL PROTECTED]
import os.path as path
import sys
import time
import numpy as np
import scipy.weave as wv

class AfferentData(object):
    """Compute afferent response to transdermal potentials"""

    def __init__(self, b=0.09, sigma=0.04, tau_m=2, aff_input=np.zeros((2000,13857)), runnum=1):
        super(AfferentData, self).__init__()

        # Data + dimensions
        self.aff_input = aff_input
        self.endtime = aff_input.shape[0]
        self.naff = aff_input.shape[1]
        self.runnum = runnum

        # Main Params
        self.b = b
        self.sigma = sigma
        self.tau_m = tau_m
        rnd_exp = -np.log(np.random.uniform(size=(1,self.naff)))

        # Implement approach in the future to hold over previous rates
        self.tau_theta = 21 + 18 * rnd_exp
        self.expfac_m = np.exp(-1/self.tau_m)
        self.expfac_theta = np.exp(-1/self.tau_theta)

        # Set initial states
        self.prev_u = np.zeros((1,self.naff))
        prev_rand = np.random.standard_normal(size=(1,self.naff))
        self.prev_theta = 0.053 + 0.040*prev_rand

        # Pre-allocation
        self.u = np.empty((1,self.naff))
        self.v = np.empty((1,self.naff))
        self.theta = np.empty((1,self.naff))

        self.S = np.uint8(np.zeros((self.endtime, self.naff)))

    def gen_spikes(self, runtype="numpy"):
        """generate all spikes for available transdermal voltages"""
        self.runtype = runtype

        print "Starting Run: %d" % self.runnum
        t1 = time.time()
        self.gen_spikes_to_time(self.endtime)
        t2 = time.time()
        self.runtime = t2 - t1
        print 'Run %d took %1.3f s' % ((self.runnum, self.runtime))

    def gen_spikes_to_time(self, time_milliseconds):
        """generate spikes up until a particular time index"""
        # Unpack Data
        # I suppose this could be programmatic with locals() or something else?
        u = self.u
        v = self.v
        theta = self.theta
        S = self.S
        aff_input = self.aff_input
        prev_u = self.prev_u
        prev_theta = self.prev_theta
        expfac_theta = self.expfac_theta
        expfac_m = float(self.expfac_m)
        sigma = self.sigma
        naff = self.naff
        b = self.b

        stdnormrvs = np.random.standard_normal(size=(time_milliseconds, naff))

        if (self.runtype == "weave"):
            u = np.empty((2, naff))
            v = np.empty((1, naff))
            theta = np.empty((2, naff))
            curidx = int(0)

            for n in xrange(time_milliseconds):
                wv.blitz("u[curidx, :] = expfac_m * u[1-curidx, :] + (1-expfac_m) * aff_input[n,:]")
                wv.blitz("v[:] = u[curidx, :] + sigma * stdnormrvs[n, :]")
                wv.blitz("theta[curidx, :] = expfac_theta * theta[1-curidx] - (1-expfac_theta)")

                idx_spk = np.where(v >= theta)
                S[n, idx_spk] = 1
                theta[curidx, idx_spk] += b
 

Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Charles R Harris
On Mon, May 19, 2008 at 5:52 PM, Robert Kern [EMAIL PROTECTED] wrote:

 On Mon, May 19, 2008 at 6:39 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
 
  On Mon, May 19, 2008 at 4:36 PM, Robert Kern [EMAIL PROTECTED]
 wrote:
 
  On Mon, May 19, 2008 at 5:27 PM, Charles R Harris
  [EMAIL PROTECTED] wrote:
   The latest versions of Matlab use the ziggurat method to generate
 random
   normals and it is faster than the method used in numpy. I have
 ziggurat
   code
   at hand, but IIRC, Robert doesn't trust the method ;)
 
  Well, I outlined the tests that would satisfy me, but I don't think
  you ever responded.
 
 
 http://projects.scipy.org/pipermail/scipy-dev/2005-December/004405.html
   which references
 
 http://projects.scipy.org/pipermail/scipy-dev/2005-December/004400.html
 
  It's been tested in the literature.

 And it happened to fail such tests. Hence the Doornik paper which
 improves Marsaglia's method to pass the appropriate tests.


Exactly. Doornik was more careful about using independent samples and also
used a better random number generator (MWC8222), not exactly rocket
science.  Believe it or not, I had read Doornik's paper before I did my
implementation. I also used a better ziggurat, IMHO, than Marsaglia and
Doornik.

MWC8222 is also about twice as fast on AMD hardware as the Mersenne Twister,
but does require more careful initialization. On Pentium V they are about
the same. I haven't benchmarked either on Core2.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Robert Kern
On Mon, May 19, 2008 at 6:55 PM, James Snyder [EMAIL PROTECTED] wrote:
 Also note, I'm not asking to match MATLAB performance.  It'd be nice,
 but again I'm just trying to put together decent, fairly efficient
 numpy code.

I can cut the time by about a quarter by just using the boolean mask
directly instead of using where().

for n in range(0,time_milliseconds):
    u = expfac_m * prev_u + (1-expfac_m) * aff_input[n,:]
    v = u + sigma * stdnormrvs[n, :]
    theta = expfac_theta * prev_theta - (1-expfac_theta)

    mask = (v >= theta)

    S[n,np.squeeze(mask)] = 1
    theta[mask] += b

    prev_u = u
    prev_theta = theta


There aren't any good line-by-line profiling tools in Python, but you
can fake it by making a local function for each line:

def f1():
    u = expfac_m * prev_u + (1-expfac_m) * aff_input[n,:]
    return u
def f2():
    v = u + sigma * stdnormrvs[n, :]
    return v
def f3():
    theta = expfac_theta * prev_theta - (1-expfac_theta)
    return theta
def f4():
    mask = (v >= theta)
    return mask
def f5():
    S[n,np.squeeze(mask)] = 1
def f6():
    theta[mask] += b

# Run Standard, Unoptimized Model
for n in range(0,time_milliseconds):
    u = f1()
    v = f2()
    theta = f3()
    mask = f4()
    f5()
    f6()

    prev_u = u
    prev_theta = theta

I get f6() as being the biggest bottleneck, followed by the general
time spent in the loop (about the same), followed by f5(), f1(), and
f3() (each about half of f6()), followed by f2() (about half of f5()).
f4() is negligible.

Masked operations are inherently slow. They mess up the CPU's branch
prediction. Worse, the use of iterators in that part of the code
frustrates compilers' attempts to optimize that away in the case of
contiguous arrays.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Robert Kern
On Mon, May 19, 2008 at 7:30 PM, Charles R Harris
[EMAIL PROTECTED] wrote:

 On Mon, May 19, 2008 at 5:52 PM, Robert Kern [EMAIL PROTECTED] wrote:

 On Mon, May 19, 2008 at 6:39 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
 
  On Mon, May 19, 2008 at 4:36 PM, Robert Kern [EMAIL PROTECTED]
  wrote:
 
  On Mon, May 19, 2008 at 5:27 PM, Charles R Harris
  [EMAIL PROTECTED] wrote:
   The latest versions of Matlab use the ziggurat method to generate
   random
   normals and it is faster than the method used in numpy. I have
   ziggurat
   code
   at hand, but IIRC, Robert doesn't trust the method ;)
 
  Well, I outlined the tests that would satisfy me, but I don't think
  you ever responded.
 
 
   http://projects.scipy.org/pipermail/scipy-dev/2005-December/004405.html
   which references
 
   http://projects.scipy.org/pipermail/scipy-dev/2005-December/004400.html
 
  It's been tested in the literature.

 And it happened to fail such tests. Hence the Doornik paper which
 improves Marsaglia's method to pass the appropriate tests.

 Exactly. Doornik was more careful about using independent samples and also
 used a better random number generator (MWC8222), not exactly rocket
 science.  Believe it or not, I had read Doornik's paper before I did my
 implementation.

Good! I believe this is the first time you've mentioned it.

 I also used a better ziggurat, IMHO, than Marsaglia and
 Doornik.

Ah. HOs need testing. dieharder is probably de rigueur these days, and
it will accept input from a file.

  http://www.phy.duke.edu/~rgb/General/dieharder.php

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Quick Question about Optimization

2008-05-19 Thread Eric Firing
Robert Kern wrote:
 On Mon, May 19, 2008 at 6:55 PM, James Snyder [EMAIL PROTECTED] wrote:
 Also note, I'm not asking to match MATLAB performance.  It'd be nice,
 but again I'm just trying to put together decent, fairly efficient
 numpy code.
 
 I can cut the time by about a quarter by just using the boolean mask
 directly instead of using where().
 
 for n in range(0,time_milliseconds):
 u  =  expfac_m  *  prev_u + (1-expfac_m) * aff_input[n,:]
 v = u + sigma * stdnormrvs[n, :]
 theta = expfac_theta * prev_theta - (1-expfac_theta)
 
 mask = (v >= theta)
 
 S[n,np.squeeze(mask)] = 1
 theta[mask] += b
 
 prev_u = u
 prev_theta = theta
 
 
 There aren't any good line-by-line profiling tools in Python, but you
 can fake it by making a local function for each line:
 
 def f1():
 u  =  expfac_m  *  prev_u + (1-expfac_m) * aff_input[n,:]
 return u
 def f2():
 v = u + sigma * stdnormrvs[n, :]
 return v
 def f3():
 theta = expfac_theta * prev_theta - (1-expfac_theta)
 return theta
 def f4():
 mask = (v >= theta)
 return mask
 def f5():
 S[n,np.squeeze(mask)] = 1
 def f6():
 theta[mask] += b
 
 # Run Standard, Unoptimized Model
 for n in range(0,time_milliseconds):
 u = f1()
 v = f2()
 theta = f3()
 mask = f4()
 f5()
 f6()
 
 prev_u = u
 prev_theta = theta
 
 I get f6() as being the biggest bottleneck, followed by the general
 time spent in the loop (about the same), followed by f5(), f1(), and
 f3() (each about half of f6()), followed by f2() (about half of f5()).
 f4() is negligible.
 
 Masked operations are inherently slow. They mess up CPU's branch
 prediction. Worse, the use of iterators in that part of the code
 frustrates compilers' attempts to optimize that away in the case of
 contiguous arrays.
 

f6 can be sped up by more than a factor of 2 by using putmask:

In [10]: xx = np.random.rand(10)

In [11]: mask = xx > 0.5

In [12]: timeit xx[mask] += 2.34
100 loops, best of 3: 4.06 ms per loop

In [14]: timeit np.putmask(xx, mask, xx+2.34)
1000 loops, best of 3: 1.4 ms per loop

I think that
xx += 2.34*mask
will be similarly quick, but I can't get ipython timeit to work with it.
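
One workaround is the timeit module with the array and mask rebuilt in the
setup string, so each timing call starts from the same state (sizes here are
made up, so treat it as a sketch):

from timeit import Timer

setup = """
import numpy as np
xx = np.random.rand(100000)
mask = xx > 0.5
"""

print(Timer("xx[mask] += 2.34", setup).timeit(100))
print(Timer("np.putmask(xx, mask, xx + 2.34)", setup).timeit(100))
print(Timer("xx += 2.34 * mask", setup).timeit(100))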

Eric


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] svd in numpy

2008-05-19 Thread David Cournapeau
Bruce Southey wrote:
 Nripun Sredar wrote:
   
 I am running on Windows Xp, Intel Xeon CPU. I'd like to fill in a few 
 more things here. If I send 0 in the second and third argument of svd 
 then I get the singular_values, but if it's 1 then the problem
 persists. I've tried this on sparse and non-sparse matrices. This is 
 with the latest windows binaries numpy-1.0.4.win32-py2.5.msi.
 

There was a problem with the way those binaries were built, depending on 
your CPU. Hopefully, the new binary for 1.1.0 will not have this problem 
anymore. When available, please test it and report whether it is working 
or not for you,

thanks,

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Is it ok to create a tool directory in numpy svn (for building tools, etc...)

2008-05-19 Thread David Cournapeau
Hi,

To build numpy binaries, I have some pretty boring Python scripts,
and I think it would be useful to have them somewhere in the numpy trunk
(for example in tools). Does anyone have anything against it?

cheers,

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] 1.1.0rc1, Win32 Installer: please test it

2008-05-19 Thread David Cournapeau
Hi,

Sorry for the delay, but it is now ready: numpy superpack 
installers for numpy 1.1.0rc1:

http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy-1.1.0rc1-win32-superpack-python2.5.exe
http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy-1.1.0rc1-win32-superpack-python2.4.exe

(Python 2.4 binaries are not there yet). This binary should work on any
(32-bit) CPU on Windows, and in particular should solve the recurring
problem of segfaults/hangs on older CPUs with previous binary releases.

I used a fairly heavy compression scheme (lzma), because it cuts the size
by ~30%. If it is a problem, please let me know,

cheers,

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is it ok to create a tool directory in numpy svn (for building tools, etc...)

2008-05-19 Thread Charles R Harris
On Mon, May 19, 2008 at 7:54 PM, David Cournapeau 
[EMAIL PROTECTED] wrote:

 Hi,

To build numpy binaries, I have some pretty boring python scripts,
 and I think it would be useful to have them somewhere in numpy trunk
 (for example in tools). Does anyone have something against it ?


Hey, you can always delete it later. I say go for it.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is it ok to create a tool directory in numpy svn (for building tools, etc...)

2008-05-19 Thread Robert Kern
On Mon, May 19, 2008 at 8:54 PM, David Cournapeau
[EMAIL PROTECTED] wrote:
 Hi,

To build numpy binaries, I have some pretty boring python scripts,
 and I think it would be useful to have them somewhere in numpy trunk
 (for example in tools). Does anyone have something against it ?

Nope. Go for it.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py

2008-05-19 Thread Pearu Peterson
CC: numpy-discussion because of other reactions on the subject.

On Tue, May 20, 2008 1:26 am, Robert Kern wrote:
 Is this an important bugfix? If not, can you hold off until 1.1.0 is
 released?

The patch fixes a long-standing and unreported bug in f2py - I think
the bug was introduced when Python defined the min and max functions.
I learned about the bug while reading a manuscript about f2py. Such a bug
should not end up in a paper as a demonstration that f2py is unable to
process certain features when it was in fact designed to handle them. So,
I'd consider the bugfix important.

On the other hand, the patch does not affect numpy users who do not
use f2py in any way. So, it is not important for numpy users in general.

Hmm, I also thought that the trunk was open for development, even though
r5198 only fixes a bug (and I do not plan to develop f2py in numpy
further, just fix bugs and maintain it). If the release process
is going to take weeks and lock the trunk, maybe the release
candidates should live in a separate branch?

Pearu

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion