[Numpy-discussion] Moved python install, import errors
Hi all,

I built ATLAS, Python 2.5 and NumPy on the local disk of a cluster node, so that disk access would be faster than over NFS, and then moved it back. I made sure to modify all the relevant paths in __config__.py, but when importing I receive this error, which I can't make heads or tails of, since core/ does contain an __init__.py. Has anyone seen anything like this before?

Thanks,

David

In [1]: import numpy
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)

/home/dwf/<ipython console> in <module>()

/home/dwf/software/Python-2.5.4/lib/python2.5/site-packages/numpy/__init__.py in <module>()
    128         return loader(*packages, **options)
    129
--> 130     import add_newdocs
    131     __all__ = ['add_newdocs']
    132

/home/dwf/software/Python-2.5.4/lib/python2.5/site-packages/numpy/add_newdocs.py in <module>()
      7 # core/fromnumeric.py, core/defmatrix.py up-to-date.
      8
----> 9 from lib import add_newdoc
     10
     11 ###

/home/dwf/software/Python-2.5.4/lib/python2.5/site-packages/numpy/lib/__init__.py in <module>()
     11
     12 import scimath as emath
---> 13 from polynomial import *
     14 from machar import *
     15 from getlimits import *

/home/dwf/software/Python-2.5.4/lib/python2.5/site-packages/numpy/lib/polynomial.py in <module>()
      9 import re
     10 import warnings
---> 11 import numpy.core.numeric as NX
     12
     13 from numpy.core import isscalar, abs

AttributeError: 'module' object has no attribute 'core'

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
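A quick way to rule out stale-path problems after relocating an install is to check where the interpreter actually resolves numpy and whether its subpackage directories exist under that location. A minimal sketch (not specific to this report; works with any numpy install):

```python
import os
import numpy

# Where did "import numpy" actually come from? After moving an install,
# a stale entry earlier on sys.path can shadow the relocated package.
pkg_dir = os.path.dirname(numpy.__file__)
print(pkg_dir)

# The subpackage directories should exist under the resolved location;
# if they don't, the import is picking up a different (broken) copy.
print(os.path.isdir(os.path.join(pkg_dir, "lib")))
```

If the printed path is not the relocated prefix, the error above is coming from whatever copy got imported first, not from the moved tree.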
Re: [Numpy-discussion] Normalization of ifft
Thu, 26 Mar 2009 19:10:31 -0700, Lutz Maibaum wrote:
> On Thu, Mar 26, 2009 at 7:02 PM, Gideon Simpson <simp...@math.toronto.edu> wrote:
>> I thought it was the same as the MATLAB format:
>> http://www.mathworks.com/access/helpdesk/help/techdoc/ref/fft.html
>
> I believe this is true for the implementation, but I think the
> description of ifft in the NumPy User Guide might be incorrect.

Yes, the description of ifft in the Guide to NumPy book is probably incorrect:

>>> np.fft.ifft([1,0,0,0])
array([ 0.25+0.j,  0.25+0.j,  0.25+0.j,  0.25+0.j])

whereas that of the online reference guide is correct. (To avoid confusion between the different docs, it's probably best to refer to the ebook by its name.)

-- Pauli Virtanen
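The convention under discussion can be checked directly: numpy's ifft carries the 1/n factor, so the inverse transform of a unit impulse is the constant 1/n, and fft/ifft round-trip exactly. A minimal sketch:

```python
import numpy as np

# ifft includes the 1/n normalization: the inverse transform of a
# length-4 unit impulse is a constant 1/4, matching the output quoted
# in this thread.
x = np.fft.ifft([1, 0, 0, 0])
print(x)

# fft and ifft are inverses under this convention.
a = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(np.fft.fft(np.fft.ifft(a)), a))
```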
[Numpy-discussion] Is it ok to include GPL scripts in the numpy *repository* ?
Hi, To build the numpy .dmg mac os x installer, I use a script from the adium project, which uses applescript and some mac os x black magic. The script seems to be GPL, as adium itself: http://trac.adiumx.com/browser/trunk/Release For now, I keep the build scripts separately from the svn repository, but it would be more practical if everything was together. As far as I understand, the GPL does not apply to the output of some build scripts, and as such, nothing would be tainted by the GPL - is this right ? Would it be problematic to put those in the svn repo ? cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Is it ok to include GPL scripts in the numpy *repository* ?
On 3/27/2009 6:48 AM David Cournapeau apparently wrote:
> To build the numpy .dmg Mac OS X installer, I use a script from the
> Adium project, which uses AppleScript and some Mac OS X black magic.
> The script seems to be GPL, as Adium itself is:

It might be worth a query to see if the author would release just this script under the modified BSD license.
http://trac.adiumx.com/wiki/ContactUs

Alan Isaac
[Numpy-discussion] Behavior of numpy.random.exponential
Hi,

I noticed a problem with numpy.random.exponential. Apparently, the samples generated by numpy.random.exponential(scale=scale) follow the distribution f(x)=1/scale*exp(-x/scale) (and not f(x)=scale*exp(-x*scale) as stated by the docstring). The script below illustrates this.

--
import numpy as N
import pylab as pl

print N.__version__

pl.figure()
lamda = 2.
noise_modulus = N.random.exponential(scale=lamda, size=(10,))
#noise_modulus = -N.log(N.random.uniform(size=(10,)))/lamda  # this works
y_hist, x_hist = N.histogram(noise_modulus, bins=51, normed=True, new=True)
x_pl = N.linspace(0, x_hist.max())
pl.semilogy(x_hist[0:-1], y_hist, label='Empirical, lambda=%s' % lamda)
pl.semilogy(x_pl, lamda * N.exp(-x_pl*lamda), ':',
            label='exact, lambda=%s' % lamda)
pl.semilogy(x_pl, 1./lamda * N.exp(-x_pl*1./lamda), ':',
            label='exact, lambda=1/%s' % lamda)
pl.legend(loc='best')
pl.show()
--

Could this be a bug? I also checked with the latest svn version:

In [1]: import numpy; numpy.__version__
Out[1]: '1.4.0.dev6731'

Best,
YVES
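The scale convention can also be verified without a histogram: under f(x) = (1/scale) * exp(-x/scale), the sample mean converges to scale itself, not 1/scale. A minimal sketch (seed chosen arbitrarily):

```python
import numpy as np

# With scale=2.0, samples should have mean ~2.0 if scale means 1/lambda
# (i.e. f(x) = 1/scale * exp(-x/scale)), and mean ~0.5 if scale were
# the rate lambda as the old docstring claimed.
np.random.seed(0)
samples = np.random.exponential(scale=2.0, size=100000)
print(samples.mean())  # close to 2.0
```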
Re: [Numpy-discussion] Is it ok to include GPL scripts in the numpy *repository* ?
2009/3/27 Alan G Isaac <ais...@american.edu>:
> On 3/27/2009 6:48 AM David Cournapeau apparently wrote:
>> To build the numpy .dmg mac os x installer, I use a script from the
>> adium project, which uses applescript and some mac os x black magic.
>> The script seems to be GPL, as adium itself:
>
> It might be worth a query to see if the author would release just this
> script under the modified BSD license.
> http://trac.adiumx.com/wiki/ContactUs

I don't see the need. This is just a tool, of which the source code, as well as our modifications, is available. We don't link to it, we don't derive anything in NumPy from it, and we do not distribute it, so we are not in any disagreement with the GPL.

Regards
Stéfan
Re: [Numpy-discussion] Changeset 6729
Hi Chuck

2009/3/27 Charles R Harris <charlesr.har...@gmail.com>:
> Also, the test is buggy.

Could you be a bit more specific? Which test, what is the problem, what would you like to see?

Cheers
Stéfan
Re: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type
Error messages? Sure ;-)

python -c 'import numpy; numpy.test()'
Running unit tests for numpy
NumPy version 1.3.0b1
NumPy is installed in /opt/apps/lib/python2.5/site-packages/numpy
Python version 2.5.2 (r252:60911, Aug 31 2008, 15:16:34) [GCC Intel(R) C++ gcc 4.2 mode]
nose version 0.10.4
...K.FF..FF..

======================================================================
FAIL: test_cdouble (test_linalg.TestEigh)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 221, in test_cdouble
    self.do(a)
  File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 259, in do
    assert_almost_equal(ev, evalues)
  File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 23, in assert_almost_equal
    old_assert_almost_equal(a, b, decimal=decimal, **kw)
  File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_almost_equal
    return assert_array_almost_equal(actual, desired, decimal, err_msg)
  File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 321, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 302, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal

(mismatch 100.0%)
 x: array([ 4.60555128, -2.60555128])
 y: array([-2.60555128 +1.11022302e-16j,  4.60555128 -1.11022302e-16j])

======================================================================
FAIL: test_csingle (test_linalg.TestEigh)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 217, in test_csingle
    self.do(a)
  File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 259, in do
    assert_almost_equal(ev, evalues)
  File "/opt/apps/lib/python2.5/site-packages/numpy/linalg/tests/test_linalg.py", line 23, in assert_almost_equal
    old_assert_almost_equal(a, b, decimal=decimal, **kw)
  File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_almost_equal
    return assert_array_almost_equal(actual, desired, decimal, err_msg)
  File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 321, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/opt/apps/lib/python2.5/site-packages/numpy/testing/utils.py", line 302, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal

(mismatch 100.0%)
 x: array([ 4.60555124, -2.60555124], dtype=float32)
 y: array([-2.60555124 +1.11022302e-16j,  4.60555124 -1.11022302e-16j], dtype=complex64)

======================================================================
FAIL: test_cdouble
Re: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type
Hi David,

> I *guess* that the compiler command line does not work with your
> changes, and that distutils got confused, and fails somewhere later (or
> sooner, who knows). Without actually seeing the errors you got, it is
> difficult to know more - but I would make sure the command line
> arguments are ok instead of focusing on the .src error,
>
> cheers,
> David

I'm not sure if I understand... The compiler options I have changed seem to work (and installation without the build_clib --compiler=intel option to setup.py works fine with them).

To be sure, I've compiled numpy from the distribution tar file without any patches. With

   python setup.py config --compiler=intel \
                   config_fc --fcompiler=intel \
                   build_ext --compiler=intel build

everything compiles fine (and builds the internal lapack, as I haven't given the MKL paths, and have no other lapack / blas installed). With

   python setup.py config --compiler=intel \
                   config_fc --fcompiler=intel \
                   build_clib --compiler=intel \
                   build_ext --compiler=intel build

the attempt to build fails (complete output is below). The python installation I use is also built with the Intel icc compiler, so it does pick up that compiler by default.

Maybe something is going wrong in the implementation of build_clib in the numpy distutils? Where would I search for that in the code?

Many thanks,

Chris.

snip, snip

MEDEA /home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1> python setup.py config --compiler=intel config_fc --fcompiler=intel build_clib --compiler=intel build_ext --compiler=intel build
Running from numpy source directory.
non-existing path in 'numpy/distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in /opt/intel/mkl/10.0.2.018/lib/32
  NOT AVAILABLE

atlas_blas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /opt/apps/lib
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib
  NOT AVAILABLE

atlas_blas_info:
  libraries f77blas,cblas,atlas not found in /opt/apps/lib
  libraries f77blas,cblas,atlas not found in /usr/local/lib
  libraries f77blas,cblas,atlas not found in /usr/lib
  NOT AVAILABLE

/home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1/numpy/distutils/system_info.py:1383: UserWarning:
    Atlas (http://math-atlas.sourceforge.net/) libraries not found.
    Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [atlas]) or by setting
    the ATLAS environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)
blas_info:
  libraries blas not found in /opt/apps/lib
  libraries blas not found in /usr/local/lib
  libraries blas not found in /usr/lib
  NOT AVAILABLE

/home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1/numpy/distutils/system_info.py:1392: UserWarning:
    Blas (http://www.netlib.org/blas/) libraries not found.
    Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [blas]) or by setting
    the BLAS environment variable.
  warnings.warn(BlasNotFoundError.__doc__)
blas_src_info:
  NOT AVAILABLE

/home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1/numpy/distutils/system_info.py:1395: UserWarning:
    Blas (http://www.netlib.org/blas/) sources not found.
    Directories to search for the sources can be specified in the
    numpy/distutils/site.cfg file (section [blas_src]) or by setting
    the BLAS_SRC environment variable.
  warnings.warn(BlasSrcNotFoundError.__doc__)
  NOT AVAILABLE

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  libraries mkl,vml,guide not found in /opt/intel/mkl/10.0.2.018/lib/32
  NOT AVAILABLE
  NOT AVAILABLE

atlas_threads_info:
Setting PTATLAS=ATLAS
  libraries ptf77blas,ptcblas,atlas not found in /opt/apps/lib
  libraries lapack_atlas not found in /opt/apps/lib
  libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
  libraries lapack_atlas not found in /usr/local/lib
  libraries ptf77blas,ptcblas,atlas not found in /usr/lib
  libraries lapack_atlas not found in /usr/lib
numpy.distutils.system_info.atlas_threads_info
  NOT AVAILABLE

atlas_info:
  libraries f77blas,cblas,atlas not found in /opt/apps/lib
  libraries lapack_atlas not found in /opt/apps/lib
  libraries f77blas,cblas,atlas not found in /usr/local/lib
  libraries lapack_atlas not found in /usr/local/lib
  libraries f77blas,cblas,atlas not found in /usr/lib
  libraries lapack_atlas not found in /usr/lib
numpy.distutils.system_info.atlas_info
  NOT AVAILABLE

/home/marq/src/python/04_science/01_numpy/numpy-1.3.0b1/numpy/distutils/system_info.py:1290: UserWarning:
    Atlas (http://math-atlas.sourceforge.net/) libraries not found.
    Directories to search for the libraries can be specified in the
Re: [Numpy-discussion] Behavior of numpy.random.exponential
On Fri, Mar 27, 2009 at 7:49 AM, Yves Frederix <yves.frede...@gmail.com> wrote:
> Hi,
>
> I noticed a problem with numpy.random.exponential. Apparently, the
> samples generated by numpy.random.exponential(scale=scale) follow the
> distribution f(x)=1/scale*exp(-x/scale) (and not
> f(x)=scale*exp(-x*scale) as stated by the docstring). The script below
> illustrates this.
> [...]
> Could this be a bug? I also checked with the latest svn version:
>
> In [1]: import numpy; numpy.__version__
> Out[1]: '1.4.0.dev6731'
>
> Best,
> YVES

I changed this a while ago in the documentation editor, but it hasn't been merged yet to the source docstring:
http://docs.scipy.org/numpy/docs/numpy.random.mtrand.RandomState.exponential/

There is also an open ticket for this:
http://projects.scipy.org/numpy/ticket/987

Can you review the new docstring, so we can mark it as reviewed and close the ticket?

Josef
Re: [Numpy-discussion] Behavior of numpy.random.exponential
Hi,

> I changed this a while ago in the documentation editor, but it hasn't
> been merged yet to the source docstring:
> http://docs.scipy.org/numpy/docs/numpy.random.mtrand.RandomState.exponential/
>
> There is also an open ticket for this:
> http://projects.scipy.org/numpy/ticket/987
>
> Can you review the new docstring, so we can mark it as reviewed and
> close the ticket?

The new docstring looks fine to me. Please go ahead and close it.

Regards,
YVES
[Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
Hey Everyone,

I built Lapack and Atlas from source last night on a C2D running 32-bit Linux Mint 6. I ran 'make check' and 'make time' on the lapack build, and ran the dynamic LU decomp test on atlas. Both packages checked out fine.

Then, I built numpy and scipy against them using the appropriate flags in site.cfg for the parallel thread atlas libraries. This seems to have worked properly, as numpy.dot() utilizes both cores at 100% on very large arrays. I have also done id(numpy.dot) and id(numpy.core.multiarray.dot) and verified that the two ids are different. So I believe the build went properly.

The problem I am having now is that numpy.linalg.eig (and the eig functions in scipy) hang at 100% CPU and never return (no matter the array size). Numpy.test() hung as well, I'm assuming for this same reason. I have included the configurations below. Any idea what would cause this?

Thanks!

Chris

Python 2.5.2 (r252:60911, Oct 5 2008, 19:24:49)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import scipy
>>> numpy.show_config()
atlas_threads_info:
    libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/atlas/lib']
    language = f77
    include_dirs = ['/usr/local/atlas/include']
blas_opt_info:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/atlas/lib']
    define_macros = [('NO_ATLAS_INFO', 2)]
    language = c
    include_dirs = ['/usr/local/atlas/include']
atlas_blas_threads_info:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/atlas/lib']
    language = c
    include_dirs = ['/usr/local/atlas/include']
lapack_opt_info:
    libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/atlas/lib']
    define_macros = [('NO_ATLAS_INFO', 2)]
    language = f77
    include_dirs = ['/usr/local/atlas/include']
lapack_mkl_info:
  NOT AVAILABLE
blas_mkl_info:
  NOT AVAILABLE
mkl_info:
  NOT AVAILABLE

>>> scipy.show_config()
umfpack_info:
  NOT AVAILABLE
atlas_threads_info:
    libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/atlas/lib']
    language = f77
    include_dirs = ['/usr/local/atlas/include']
blas_opt_info:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/atlas/lib']
    define_macros = [('ATLAS_INFO', '"\\"3.8.3\\""')]
    language = c
    include_dirs = ['/usr/local/atlas/include']
atlas_blas_threads_info:
    libraries = ['ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/atlas/lib']
    language = c
    include_dirs = ['/usr/local/atlas/include']
lapack_opt_info:
    libraries = ['lapack', 'ptf77blas', 'ptcblas', 'atlas']
    library_dirs = ['/usr/local/atlas/lib']
    define_macros = [('NO_ATLAS_INFO', 2)]
    language = f77
    include_dirs = ['/usr/local/atlas/include']
lapack_mkl_info:
  NOT AVAILABLE
blas_mkl_info:
  NOT AVAILABLE
mkl_info:
  NOT AVAILABLE
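When a LAPACK-backed build is suspect, a tiny eigenvalue problem with a known answer makes a quicker smoke test than the full suite. A minimal sketch (the matrix is arbitrary, not from this thread):

```python
import numpy as np

# eig on a 2x2 symmetric matrix should return instantly; a hang here
# points at the BLAS/LAPACK libraries the build linked against, not
# at numpy's Python layer.
a = np.array([[1.0, 2.0], [2.0, 1.0]])
w, v = np.linalg.eig(a)
print(np.sort(w.real))  # eigenvalues of [[1,2],[2,1]] are -1 and 3
```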
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
Chris Colbert wrote:
> The problem I am having now is that numpy.linalg.eig (and the eig
> functions in scipy) hang at 100% CPU and never return (no matter the
> array size). Numpy.test() hung as well, I'm assuming for this same
> reason. I have included the configurations below. Any idea what would
> cause this?

What does numpy.test() return? This smells like a fortran runtime problem,

cheers,

David
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
numpy.test() doesn't return (after 2 hours of running at 100%, at least). I imagine it's hanging on this eig function as well.

Chris

On Fri, Mar 27, 2009 at 10:12 AM, David Cournapeau <da...@ar.media.kyoto-u.ac.jp> wrote:
> What does numpy.test() return? This smells like a fortran runtime
> problem,
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
Chris Colbert wrote:
> numpy.test() doesn't return (after 2 hours of running at 100%, at
> least). I imagine it's hanging on this eig function as well.

Can you run the following test?

    nosetests -v -s test_build.py

(in numpy/linalg). If it fails, it is almost surely a problem in the way you built numpy and/or atlas. Make sure that everything is built with the same fortran compiler (blas, lapack, atlas and numpy).

cheers,

David
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
2009/3/27 Chris Colbert <sccolb...@gmail.com>
> The problem I am having now is that numpy.linalg.eig (and the eig
> functions in scipy) hang at 100% CPU and never return (no matter the
> array size). Numpy.test() hung as well, I'm assuming for this same
> reason. I have included the configurations below. Any idea what would
> cause this?

This is a problem that used to turn up regularly and was related to the atlas build. The atlas version can matter here, but I don't know what the currently recommended atlas version is.

Chuck
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
here are the results from that test:

test_lapack (test_build.TestF77Mismatch) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.055s

OK

thanks again for the help,

Chris

On Fri, Mar 27, 2009 at 10:24 AM, David Cournapeau <da...@ar.media.kyoto-u.ac.jp> wrote:
> Can you run the following test?
>
>     nosetests -v -s test_build.py
>
> (in numpy/linalg). If it fails, it is almost surely a problem in the
> way you built numpy and/or atlas. Make sure that everything is built
> with the same fortran compiler (blas, lapack, atlas and numpy).
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
I compiled everything with gfortran. I don't even have g77 on my system.

On Fri, Mar 27, 2009 at 11:18 AM, Chris Colbert <sccolb...@gmail.com> wrote:
> here are the results from that test:
>
> test_lapack (test_build.TestF77Mismatch) ... ok
>
> ----------------------------------------------------------------------
> Ran 1 test in 0.055s
>
> OK
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
I built Atlas 3.8.3, which I assume is the newest release.

Chris

2009/3/27 Charles R Harris <charlesr.har...@gmail.com>
> This is a problem that used to turn up regularly and was related to the
> atlas build. The atlas version can matter here, but I don't know what
> the currently recommended atlas version is.
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
Atlas 3.8.3 and Lapack 3.1.1

On Fri, Mar 27, 2009 at 11:05 AM, David Cournapeau <da...@ar.media.kyoto-u.ac.jp> wrote:
> Chris Colbert wrote:
>> I compiled everything with gfortran. I dont even have g77 on my system.
>
> Ok. Which version of atlas and lapack are you using? Lapack 3.2 is
> known to cause trouble. Atlas 3.8.0 and 3.8.1 had some bugs too, I
> can't remember exactly which one.
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
Chris Colbert wrote:
> Atlas 3.8.3 and Lapack 3.1.1

Hm... I am afraid I don't see what may cause this problem. Could you rebuild numpy from scratch and give us the log?

    rm -rf build
    python setup.py build &> build.log

David
Re: [Numpy-discussion] Behavior of numpy.random.exponential
Fri, 27 Mar 2009 09:20:09 -0400, josef.pktd wrote:
[clip: numpy.random.exponential docstring]
> I changed this a while ago in the documentation editor, but it hasn't
> been merged yet to the source docstring

It is merged, but I forgot to regenerate the mtrand.c file.

-- Pauli Virtanen
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
David,

The log was too big for the list, so I sent it to your email address directly.

Chris

2009/3/27 Chris Colbert <sccolb...@gmail.com>
> David,
>
> The log is attached. Thanks for giving me the bash command. I would
> have never figured that one out.
>
> Chris
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
Chris Colbert wrote:
> The log was too big for the list, so I sent it to your email address
> directly.

Hm, never saw this one. In the build log, one can see:

...
compile options: '-c'
gcc: _configtest.c
gcc -pthread _configtest.o -L/usr/local/atlas/lib -llapack -lptf77blas -lptcblas -latlas -o _configtest
/usr/bin/ld: _configtest: hidden symbol `__powidf2' in /usr/lib/gcc/i486-linux-gnu/4.3.2/libgcc.a(_powidf2.o) is referenced by DSO
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: ld returned 1 exit status
/usr/bin/ld: _configtest: hidden symbol `__powidf2' in /usr/lib/gcc/i486-linux-gnu/4.3.2/libgcc.a(_powidf2.o) is referenced by DSO
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: ld returned 1 exit status
failure.

This does not look good. It may be a problem in your toolchain (i.e. how your distribution builds gcc and co). I am afraid there is not much we can do at this point - you should report the problem to your OS vendor, hoping someone knows about this,

cheers,

David
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
So you think it's a problem with gcc? I'm using version 4.3.1 shipped with the ubuntu 8.10 distro.

Chris

On Fri, Mar 27, 2009 at 11:56 AM, David Cournapeau <da...@ar.media.kyoto-u.ac.jp> wrote:
> This does not look good. It may be a problem in your toolchain (i.e.
> how your distribution builds gcc and co). I am afraid there is not
> much we can do at this point - you should report the problem to your
> OS vendor, hoping someone knows about this,
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
Chris Colbert wrote: So you think it's a problem with gcc? That's my guess, yes. I'm using version 4.3.1 shipped with the Ubuntu 8.10 distro. I thought you were using Mint? If you are using Ubuntu, then it is very strange, because many people build and use numpy on this platform without any trouble. Is your OS 64-bit? cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
Mint is built from like 98% Ubuntu. In this case, Mint 6 is built from Ubuntu 8.10. Most repository access is through the Ubuntu repositories. gcc falls under this... 32-bit OS. Thanks again for your patience! I'm wet behind the ears when it comes to this kind of stuff. Chris On Fri, Mar 27, 2009 at 12:05 PM, David Cournapeau da...@ar.media.kyoto-u.ac.jp wrote: Chris Colbert wrote: So you think it's a problem with gcc? That's my guess, yes. I'm using version 4.3.1 shipped with the Ubuntu 8.10 distro. I thought you were using Mint? If you are using Ubuntu, then it is very strange, because many people build and use numpy on this platform without any trouble. Is your OS 64-bit? cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
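As an aside for readers hitting the same question: a quick way to check whether the running Python (and hence the toolchain it was built with) is 32- or 64-bit is to look at the pointer size. This is a generic sketch, not tied to Mint or Ubuntu specifically.

```python
import platform
import struct

# Pointer size of the running interpreter, in bits: 32 or 64.
print(struct.calcsize("P") * 8)
# What the kernel reports for the machine, e.g. "i686" or "x86_64".
print(platform.machine())
```

Note that on a 64-bit kernel with a 32-bit Python, the two answers can legitimately differ, which is exactly the kind of mismatch that bites ATLAS builds.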
Re: [Numpy-discussion] Numpy v1.3.0b1 on Linux w/ Intel compilers - unknown file type
2009/3/27 Christian Marquardt christ...@marquardt.sc Error messages? Sure;-) python -c 'import numpy; numpy.test()' Running unit tests for numpy NumPy version 1.3.0b1 NumPy is installed in /opt/apps/lib/python2.5/site-packages/numpy Python version 2.5.2 (r252:60911, Aug 31 2008, 15:16:34) [GCC Intel(R) C++ gcc 4.2 mode] nose version 0.10.4 ...K.FF..FF.. OK, the tests should be fixed in r6773. Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Changeset 6729
2009/3/27 Stéfan van der Walt ste...@sun.ac.za Hi Chuck 2009/3/27 Charles R Harris charlesr.har...@gmail.com: Also, the test is buggy. Could you be a bit more specific? Which test, what is the problem, what would you like to see? I fixed it. You used assert_equal instead of assert_array_equal which caused the axis test to fail. Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
2009/3/28 Chris Colbert sccolb...@gmail.com: mint is built from like 98% ubuntu. Ok. The problem is that Fortran often falls into the bottom percent as far as support is concerned, since so few people care :) Note that on Ubuntu 8.10, you can just install atlas from the repositories - and a 1.3.0 deb will be provided once it is released cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
forgive my ignorance, but wouldn't installing atlas from the repositories defeat the purpose of installing atlas at all, since the build process optimizes it to your own cpu timings? Chris On Fri, Mar 27, 2009 at 12:43 PM, David Cournapeau courn...@gmail.com wrote: 2009/3/28 Chris Colbert sccolb...@gmail.com: mint is built from like 98% ubuntu. Ok. The problem is that Fortran often falls into the bottom percent as far as support is concerned, since so few people care :) Note that on Ubuntu 8.10, you can just install atlas from the repositories - and a 1.3.0 deb will be provided once it is released cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Normalization of ifft
Hi Joe, Travis has freed his original book and large parts of it (e.g., the C API docs) are now being incorporated into the actively-maintained manuals at docs.scipy.org. Please go there for the latest docs. You'll find that the fft section gives the 1/n formula when discussing ifft. Thanks for the explanation. It sounds like the ebook Guide to Numpy is no longer being updated. If that is the case, it might be useful to maintain a list of errata. I can see where Lutz got the impression that Guide to Numpy was the doc to read. The descriptions of books on both numpy.scipy.org and docs.scipy.org do give that impression. That is indeed what happened. As someone who had never used Numpy before, I figured the mature documentation, while possibly not entirely up to date, would be the best start. I haven't checked in detail but much of the rest of Guide to Numpy is now included in the Reference Guide. Would it be ok to put some words on both sites to the effect that the RG is the place to go for routine, class, and module docs, or (possibly) just the place to go, period? That is probably a good idea. However, it seems to me that the reference guide might not be the best place to start if one wants to learn Numpy from scratch. I guess the Numpy User Guide will eventually replace the Guide to Numpy in that role, but it looks rather incomplete for now. Thanks for clearing this up, Lutz ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
Chris Colbert wrote: forgive my ignorance, but wouldn't installing atlas from the repositories defeat the purpose of installing atlas at all, since the build process optimizes it to your own cpu timings? Yes and no. Yes, it will be slower than a custom-built atlas, but it will be reasonably faster than blas/lapack. Please also keep in mind that this mostly matters for linear algebra and big matrices. Thinking from another POV: how many 1000x1000 matrices could you have inverted while wasting your time on this already :) cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
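David's aside about inverting matrices can be made concrete with a rough timing sketch. The 500x500 size is arbitrary, and the absolute number depends entirely on which BLAS/LAPACK numpy was linked against - that dependence is the whole point of the discussion.

```python
import time
import numpy as np

a = np.random.rand(500, 500)
t0 = time.time()
inv = np.linalg.inv(a)
elapsed = time.time() - t0
print("inverted a 500x500 matrix in %.3f s" % elapsed)
```

With an optimized ATLAS this runs noticeably faster than with the reference BLAS/LAPACK, which is the trade-off being weighed against the time spent building ATLAS by hand.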
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
this is true. but not nearly as good of a learning experience :) I'm a mechanical engineer, so all of this computer science stuff is really new and interesting to me. So I'm trying my best to get a handle on exactly what is going on behind the scenes. Chris On Fri, Mar 27, 2009 at 12:36 PM, David Cournapeau da...@ar.media.kyoto-u.ac.jp wrote: Chris Colbert wrote: forgive my ignorance, but wouldn't installing atlas from the repositories defeat the purpose of installing atlas at all, since the build process optimizes it to your own cpu timings? Yes and no. Yes, it will be slower than a custom-built atlas, but it will be reasonably faster than blas/lapack. Please also keep in mind that this mostly matters for linear algebra and big matrices. Thinking from another POV: how many 1000x1000 matrices could you have inverted while wasting your time on this already :) cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
some other things I might mention, though I doubt they would have an effect: When I built Atlas, I had to force it to use a 32-bit pointer length (I assume this is correct for a 32-bit OS as gcc.stub_64 wasn't found on my system) in numpy's site.cfg I only linked to the pthread .so's. Should I have also linked to the single-threaded counterparts in the section above? (I assumed one would be overridden by the other) Other than those, I followed closely the instructions on scipy.org. Chris On Fri, Mar 27, 2009 at 12:57 PM, Chris Colbert sccolb...@gmail.com wrote: this is true. but not nearly as good of a learning experience :) I'm a mechanical engineer, so all of this computer science stuff is really new and interesting to me. So I'm trying my best to get a handle on exactly what is going on behind the scenes. Chris On Fri, Mar 27, 2009 at 12:36 PM, David Cournapeau da...@ar.media.kyoto-u.ac.jp wrote: Chris Colbert wrote: forgive my ignorance, but wouldn't installing atlas from the repositories defeat the purpose of installing atlas at all, since the build process optimizes it to your own cpu timings? Yes and no. Yes, it will be slower than a custom-built atlas, but it will be reasonably faster than blas/lapack. Please also keep in mind that this mostly matters for linear algebra and big matrices. Thinking from another POV: how many 1000x1000 matrices could you have inverted while wasting your time on this already :) cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] A few more questions about build doc
Hi, I spent the whole evening on automating our whole release process on supported platforms. I am almost there, but I have a few relatively minor annoyances related to doc: - Is it ok to build the pdf doc using LANG=C ? If I run sphinx-build without setting LANG=C, I got some weird latex errors at the latex-pdf stage, which I am reluctant to track down :) - I modified doc/source/conf.py such that the reported numpy version is exactly the one used to build the doc. Am I right that building the numpy documentation requires numpy to be installed (for autodoc and co), or is this a wrong assumption? I've realized this after the change, but I can of course revert it if that's a problem, cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Is it ok to include GPL scripts in the numpy *repository* ?
2009/3/27 Stéfan van der Walt ste...@sun.ac.za: 2009/3/27 Alan G Isaac ais...@american.edu: On 3/27/2009 6:48 AM David Cournapeau apparently wrote: To build the numpy .dmg mac os x installer, I use a script from the adium project, which uses applescript and some mac os x black magic. The script seems to be GPL, as adium itself: It might be worth a query to see if the author would release just this script under the modified BSD license. http://trac.adiumx.com/wiki/ContactUs I don't see the need. This is just a tool, of which the source code, as well as our modifications, are available. We don't link to it, we don't derive anything in NumPy from it and we do not distribute it, so we are not in any disagreement with the GPL. I concur. -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Is it ok to include GPL scripts in the numpy *repository* ?
On Fri, Mar 27, 2009 at 3:48 AM, David Cournapeau da...@ar.media.kyoto-u.ac.jp wrote: To build the numpy .dmg mac os x installer, I use a script from the adium project, which uses applescript and some mac os x black magic. The script seems to be GPL, as adium itself: Why do you need to use the adium project? I am just curious why the scripts I was using aren't sufficient: http://projects.scipy.org/numpy/browser/trunk/tools/osxbuild Jarrod ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Is it ok to include GPL scripts in the numpy *repository* ?
On Sat, Mar 28, 2009 at 5:25 AM, Jarrod Millman mill...@berkeley.edu wrote: On Fri, Mar 27, 2009 at 3:48 AM, David Cournapeau da...@ar.media.kyoto-u.ac.jp wrote: To build the numpy .dmg mac os x installer, I use a script from the adium project, which uses applescript and some mac os x black magic. The script seems to be GPL, as adium itself: Why do you need to use the adium project? I am just curious why the scripts I was using aren't sufficient: http://projects.scipy.org/numpy/browser/trunk/tools/osxbuild For fancy things like background images, fixing the windows size, etc... How mac os x does it is undocumented, and the only script I found to do it automatically was from adium. cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] numpy.ctypeslib.ndpointer and the restype attribute [patch]
Sturla Molden wrote: On 3/26/2009 12:41 PM, Jens Rantil wrote: Wouldn't my code, or a tweak of it, be a nice feature in numpy.ctypeslib? Is this the wrong channel for proposing things like this? If you look at http://svn.scipy.org/svn/numpy/trunk/numpy/ctypeslib.py you will see that it does almost the same. I think it would be better to work out why ndpointer fails as restype and patch that. Thomas Heller wrote: ndpointer(...), which returns an _nptr instance, does not work as restype because it is neither a subclass of one of the ctypes base types like ctypes.c_void_p, nor is it callable with one argument. There are two ways to fix this. The first one is to make the _nptr callable [...] The other way is to make _nptr a subclass of ctypes.c_void_p; the result that the foreign function call returns will then be an instance of this class. Unfortunately, ctypes will not call __new__() to create this instance, so a custom __new__() implementation cannot return a numpy array and we are left with the _nptr instance. The only way to create and access the numpy array is to construct and return one from a method call on the _nptr instance, or a property on the _nptr instance. Ok, .errcheck could call that method and return the result. Well, looking into the ctypes sources trying to invent a new protocol for the restype attribute, I found out that THERE IS ALREADY a mechanism for it, but I had totally forgotten that it exists. When the .restype attribute of a function is set to a SUBCLASS of a ctypes type (c_void_p for example), an instance of this subclass is created. After that, if this instance has a _check_retval_ method, this method is called and the result of this call is returned. So, it is indeed possible to create a class that can be assigned to .restype, and which can convert the return value of a function to whatever we like. I will prepare a patch for numpy.ctypeslib.
-- Thanks, Thomas ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
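A minimal sketch of the _check_retval_ mechanism Thomas describes. The CheckedPtr class and the use of libc's malloc are illustrative only, not the actual numpy.ctypeslib patch, and the example assumes a Unix-like libc.

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

class CheckedPtr(ctypes.c_void_p):
    # ctypes creates an instance of this subclass from the foreign
    # function's return value, then calls _check_retval_ on it; whatever
    # this method returns becomes the Python-level result of the call.
    def _check_retval_(self):
        if self.value is None:
            raise MemoryError("function returned NULL")
        return self.value  # hand back the raw address as a plain int

libc.malloc.argtypes = [ctypes.c_size_t]
libc.malloc.restype = CheckedPtr

addr = libc.malloc(16)   # a plain int, thanks to _check_retval_
libc.free(ctypes.c_void_p(addr))
```

In the patch being discussed, the conversion step would presumably build a numpy array from the returned pointer instead of handing back the bare address.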
[Numpy-discussion] array of matrices
I have a number of arrays of shape (N,4,4). I need to perform a vectorised matrix-multiplication between pairs of them I.e. matrix-multiplication rules for the last two dimensions, usual element-wise rule for the 1st dimension (of length N). (How) is this possible with numpy? thanks, BC ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] array of matrices
On Fri, Mar 27, 2009 at 17:38, Bryan Cole br...@cole.uklinux.net wrote: I have a number of arrays of shape (N,4,4). I need to perform a vectorised matrix-multiplication between pairs of them I.e. matrix-multiplication rules for the last two dimensions, usual element-wise rule for the 1st dimension (of length N). (How) is this possible with numpy? dot(a,b) was specifically designed for this use case. -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] A few more questions about build doc
Sat, 28 Mar 2009 04:00:45 +0900, David Cournapeau wrote: [clip] - Is it ok to build the pdf doc using LANG=C ? If I run sphinx-build without setting LANG=C, I got some weird latex errors at the latex-pdf stage, which I am reluctant to track down :) LANG=C should be ok. - I modified doc/source/conf.py such that the reported numpy version is exactly the one used to build the doc. Am I right that building the numpy documentation requires numpy to be installed (for autodoc and co), or is this a wrong assumption? I've realized this after the change, but I can of course revert it if that's a problem, It's the correct assumption. I thought about this too, but decided to leave it alone so that the version number reported in the docs would correspond to the major XX.YY and not the bugfix XX.YY.ZZ releases. The point was that there ought to be no API changes in the .ZZ, so we'd like docs for newer versions (possibly containing updates etc.) to be labelled as compatible with all XX.YY. versions. -- Pauli Virtanen ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] A few more questions about build doc
On Sat, Mar 28, 2009 at 7:56 AM, Pauli Virtanen p...@iki.fi wrote: Sat, 28 Mar 2009 04:00:45 +0900, David Cournapeau wrote: [clip] - Is it ok to build the pdf doc using LANG=C ? If I run sphinx-build without setting LANG=C, I got some weird latex errors at the latex-pdf stage, which I am reluctant to track down :) LANG=C should be ok. Ok - it looks like the problem may not have been caused by this, though, but by some weird import stuff (I am pretty happy with the almost 100% automation, but paver + virtualenv + setuptools interaction for imports can be mind blowing). It's the correct assumption. I thought about this too, but decided to leave it alone so that the version number reported in the docs would correspond to the major XX.YY and not the bugfix XX.YY.ZZ releases. The point was that there ought to be no API changes in the .ZZ, so we'd like docs for newer versions (possibly containing updates etc.) to be labelled as compatible with all XX.YY. versions. Ok, I broke this, then - but this can be easily fixed by generating several version numbers in the version.py cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
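What "generating several version numbers" might look like in practice. The version string below is made up for illustration; the split mirrors how Sphinx's conf.py conventionally distinguishes the short "version" (X.Y) from the full "release" string.

```python
# Hypothetical full version string, of the kind numpy.version.version reports.
release = "1.3.0.dev6773"

# Short X.Y form, stable across .ZZ bugfix releases, suitable for
# labelling the docs as Pauli describes.
version = ".".join(release.split(".")[:2])
print(version)  # -> 1.3
```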
Re: [Numpy-discussion] array of matrices
On Fri, Mar 27, 2009 at 4:43 PM, Robert Kern robert.k...@gmail.com wrote: On Fri, Mar 27, 2009 at 17:38, Bryan Cole br...@cole.uklinux.net wrote: I have a number of arrays of shape (N,4,4). I need to perform a vectorised matrix-multiplication between pairs of them I.e. matrix-multiplication rules for the last two dimensions, usual element-wise rule for the 1st dimension (of length N). (How) is this possible with numpy? dot(a,b) was specifically designed for this use case. I think maybe he wants to treat them as stacked matrices. In [13]: a = arange(8).reshape(2,2,2) In [14]: (a[:,:,:,newaxis]*a[:,newaxis,:,:]).sum(-2) Out[14]: array([[[ 2, 3], [ 6, 11]], [[46, 55], [66, 79]]]) In [15]: for i in range(2): dot(a[i],a[i]) Out[15]: array([[ 2, 3], [ 6, 11]]) Out[15]: array([[46, 55], [66, 79]]) Although it might be easier to keep them in a list. Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
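A self-contained version of Chuck's stacked-multiplication recipe, checked against an explicit loop of dot() calls. N and the random data are arbitrary.

```python
import numpy as np

N = 3
a = np.random.rand(N, 4, 4)
b = np.random.rand(N, 4, 4)

# Broadcast so that axis -2 runs over the summation index k:
# result[n, i, j] = sum_k a[n, i, k] * b[n, k, j]
stacked = (a[:, :, :, np.newaxis] * b[:, np.newaxis, :, :]).sum(-2)

# The same thing as an explicit loop of matrix products.
looped = np.array([np.dot(a[n], b[n]) for n in range(N)])

assert np.allclose(stacked, looped)
```

The broadcasting form avoids the Python-level loop at the cost of a temporary of shape (N, 4, 4, 4), which is harmless for 4x4 blocks.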
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
Ok, I'm getting the same error on an install of straight Ubuntu 8.10. The guy in this thread got the same error as me, but it's not clear how he worked it out: http://www.mail-archive.com/numpy-discussion@scipy.org/msg13565.html from googling here: http://sources.redhat.com/ml/binutils/2004-12/msg00033.html it says that the library was not built correctly. does this mean my atlas .so's (which I built via 'make ptshared') are incorrect? I suppose I could just grab atlas from the repositories, but that would be admitting defeat. Chris On Fri, Mar 27, 2009 at 1:09 PM, Chris Colbert sccolb...@gmail.com wrote: some other things I might mention, though I doubt they would have an effect: When I built Atlas, I had to force it to use a 32-bit pointer length (I assume this is correct for a 32-bit OS as gcc.stub_64 wasn't found on my system) in numpy's site.cfg I only linked to the pthread .so's. Should I have also linked to the single-threaded counterparts in the section above? (I assumed one would be overridden by the other) Other than those, I followed closely the instructions on scipy.org. Chris On Fri, Mar 27, 2009 at 12:57 PM, Chris Colbert sccolb...@gmail.com wrote: this is true. but not nearly as good of a learning experience :) I'm a mechanical engineer, so all of this computer science stuff is really new and interesting to me. So I'm trying my best to get a handle on exactly what is going on behind the scenes. Chris On Fri, Mar 27, 2009 at 12:36 PM, David Cournapeau da...@ar.media.kyoto-u.ac.jp wrote: Chris Colbert wrote: forgive my ignorance, but wouldn't installing atlas from the repositories defeat the purpose of installing atlas at all, since the build process optimizes it to your own cpu timings? Yes and no. Yes, it will be slower than a custom-built atlas, but it will be reasonably faster than blas/lapack. Please also keep in mind that this mostly matters for linear algebra and big matrices. 
Thinking from another POV: how many 1000x1000 matrices could you have inverted while wasting your time on this already :) cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Built Lapack, Atlas from source.... now numpy.linalg.eig() hangs at 100% CPU
Alright, building numpy against atlas from the repositories works, but this atlas only contains the single-threaded libraries. So I would like to get my build working completely. I think the problem has to do with how I'm making the atlas .so's from the .a files. I am simply calling the command 'make ptshared' in the atlas lib directory. The LDFLAGS of that particular makefile is set to '-melf_i386'. I have no idea what this means; the only thing I know is that LDFLAGS has something to do with linking, and from what I read on google, the error I am getting is due to improperly created .so files. I've attached both makefiles to this message, if anyone could take a look and see if something obvious is amiss. Thanks, Chris On Fri, Mar 27, 2009 at 10:32 PM, Chris Colbert sccolb...@gmail.com wrote: Ok, I'm getting the same error on an install of straight Ubuntu 8.10. The guy in this thread got the same error as me, but it's not clear how he worked it out: http://www.mail-archive.com/numpy-discussion@scipy.org/msg13565.html from googling here: http://sources.redhat.com/ml/binutils/2004-12/msg00033.html it says that the library was not built correctly. does this mean my atlas .so's (which I built via 'make ptshared') are incorrect? I suppose I could just grab atlas from the repositories, but that would be admitting defeat. Chris On Fri, Mar 27, 2009 at 1:09 PM, Chris Colbert sccolb...@gmail.com wrote: some other things I might mention, though I doubt they would have an effect: When I built Atlas, I had to force it to use a 32-bit pointer length (I assume this is correct for a 32-bit OS as gcc.stub_64 wasn't found on my system) in numpy's site.cfg I only linked to the pthread .so's. Should I have also linked to the single-threaded counterparts in the section above? (I assumed one would be overridden by the other) Other than those, I followed closely the instructions on scipy.org. Chris On Fri, Mar 27, 2009 at 12:57 PM, Chris Colbert sccolb...@gmail.com wrote: this is true. 
but not nearly as good of a learning experience :) I'm a mechanical engineer, so all of this computer science stuff is really new and interesting to me. So I'm trying my best to get a handle on exactly what is going on behind the scenes. Chris On Fri, Mar 27, 2009 at 12:36 PM, David Cournapeau da...@ar.media.kyoto-u.ac.jp wrote: Chris Colbert wrote: forgive my ignorance, but wouldn't installing atlas from the repositories defeat the purpose of installing atlas at all, since the build process optimizes it to your own cpu timings? Yes and no. Yes, it will be slower than a custom-built atlas, but it will be reasonably faster than blas/lapack. Please also keep in mind that this mostly matters for linear algebra and big matrices. Thinking from another POV: how many 1000x1000 matrices could you have inverted while wasting your time on this already :) cheers, David Makefile Description: Binary data Make.inc Description: Binary data ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
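A sketch of one diagnostic for this kind of "hidden symbol ... referenced by DSO" failure: TEXTREL entries in a shared library's dynamic section usually mean its objects were compiled without -fPIC, which is a common way for a hand-built ATLAS .so to come out broken. The library path below is hypothetical; adjust it to the actual install location.

```shell
lib=/usr/local/atlas/lib/libatlas.so   # hypothetical install path
result=""
if [ -e "$lib" ]; then
    if readelf -d "$lib" | grep -q TEXTREL; then
        result="text relocations found: likely built without -fPIC"
    else
        result="no text relocations: looks like a clean PIC build"
    fi
else
    result="library not found: $lib"
fi
echo "$result"
```

If text relocations show up, rebuilding the static ATLAS libraries with -fPIC before running 'make ptshared' is the usual remedy.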