Re: [Numpy-discussion] [help needed] associativity and precedence of '@'
In article CAPJVwBkLww7-ysZB76LMRZ+mmbyN_5T=ym_vu1pjgakrlbq...@mail.gmail.com, Nathaniel Smith n...@pobox.com wrote:

OPTION 1 FOR @:
Precedence: same as *
Associativity: left
My shorthand name for it: "same-left" (yes, very creative)
This means that if you don't use parentheses, you get:
   a @ b @ c  ->  (a @ b) @ c
   a * b @ c  ->  (a * b) @ c
   a @ b * c  ->  (a @ b) * c

OPTION 2 FOR @:
Precedence: more-weakly-binding than *
Associativity: right
My shorthand name for it: "weak-right"
This means that if you don't use parentheses, you get:
   a @ b @ c  ->  a @ (b @ c)
   a * b @ c  ->  (a * b) @ c
   a @ b * c  ->  a @ (b * c)

OPTION 3 FOR @:
Precedence: more-tightly-binding than *
Associativity: right
My shorthand name for it: "tight-right"
This means that if you don't use parentheses, you get:
   a @ b @ c  ->  a @ (b @ c)
   a * b @ c  ->  a * (b @ c)
   a @ b * c  ->  (a @ b) * c

We need to pick which of these options we think is best, based on whatever reasons we can think of, ideally more than "hmm, weak-right gives me warm fuzzy feelings" ;-). (In principle the other 2 possible options are tight-left and weak-left, but there doesn't seem to be any argument in favor of either, so we'll leave them out of the discussion.)

After seeing all the traffic on this thread, I am in favor of same-left because it is easiest to remember:
- It introduces no new rules.
- It is unambiguous. If we pick option 2 or 3 we have no strong reason to favor one over the other, leaving users to guess.

To my mind, being able to easily reason about code you are reading is more important than hoping to increase efficiency for one common case when not using parentheses. It also has the advantage that it needs the least justification.

-- Russell

___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
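[For readers coming to this thread later: PEP 465 ultimately adopted the same-left rule argued for above, so '@' in Python 3.5+ has the precedence of '*' and is left-associative. A small sketch with a toy class (not numpy arrays) that records how Python groups each expression:]

```python
# Toy operands whose __matmul__/__mul__ record the grouping Python chose;
# under same-left, '@' binds like '*' and associates to the left.
class T:
    def __init__(self, name):
        self.name = name

    def __matmul__(self, other):
        return T("(%s @ %s)" % (self.name, other.name))

    def __mul__(self, other):
        return T("(%s * %s)" % (self.name, other.name))

a, b, c = T("a"), T("b"), T("c")
print((a @ b @ c).name)  # ((a @ b) @ c)
print((a * b @ c).name)  # ((a * b) @ c)
print((a @ b * c).name)  # ((a @ b) * c)
```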
Re: [Numpy-discussion] Binary releases
In article 8e95a257-3f06-43b7-8407-95916d284...@mac.com, William Ray Wing w...@mac.com wrote:

On Sep 15, 2013, at 9:04 PM, Charles R Harris charlesr.har...@gmail.com wrote:

Hi All, Numpy 1.8 is about ready for an rc1, which brings up the question of which binary builds to put up on sourceforge. For Windows maybe [byte]. For Mac there is first the question of OS X versions, (10.5?), 10.6, 10.7, 10.8. I don't know if some builds work on more than one OS X version. The 10.5 version is a bit harder to come by than 10.6 and up. It looks like 10.9 is coming up, but it isn't out yet. I have no idea what Python version to match up with these, but assuming all of them, then:

OS X 10.6: python 2.6, 2.7, 3.2, 3.3, compiled with native compiler, linked with Accelerate.
OS X 10.7: python 2.6, 2.7, 3.2, 3.3, compiled with native compiler, linked with Accelerate.
OS X 10.8: python 2.6, 2.7, 3.2, 3.3, compiled with native compiler, linked with Accelerate.

That seems like a lot. It is fairly easy to compile from source on the mac these days; are all those binary packages really needed? I don't know what I am doing with the binary stuff, so any suggestions are welcome.

If you will forgive an observation from a Mac user and (amateur) developer: I have twice tried to build Numpy from source and both times failed. The problem was that I couldn't find a single comprehensive set of directions that started from a virgin system (nothing but Apple's python and Xcode) and proceeded to working copies of Numpy (and of course Matplotlib). Long time users know all about the differences between SourceForge, Github, and such. But bootstrapping pip, homebrew, macports, and similar was totally opaque to me. Sorry for the rant, but what I'm trying to say is that if there were such a recipe and it was clearly pointed to, then the need for a lengthy list of binaries would be pretty much moot. Thanks for listening, Bill

I sympathize. Unfortunately it changes all the time so it's hard to keep up to date.
The usual suggestion is to either install a self-contained python distribution such as Anaconda, which contains all sorts of useful packages, or use the binary installer (which requires python.org python).

For the record: binary installers offer a very important additional benefit: the resulting package can be included in an application with some assurance about what versions of MacOS X it supports. If you build from source you probably have no idea what versions of MacOS X the package will support -- which is fine for personal use, but not for distribution.

-- Russell
Re: [Numpy-discussion] Binary releases
In article cabl7cqg5vv_vnp0hbdx+ys6gt0npwqehth3mwb6j65ow9+1...@mail.gmail.com, Ralf Gommers ralf.gomm...@gmail.com wrote:

On Mon, Sep 16, 2013 at 2:45 AM, josef.p...@gmail.com wrote:

On Sun, Sep 15, 2013 at 9:04 PM, Charles R Harris charlesr.har...@gmail.com wrote: Hi All, Numpy 1.8 is about ready for an rc1, which brings up the question of which binary builds to put up on sourceforge. For Windows maybe ... OS X 10.6: python 2.6, 2.7, 3.2, 3.3, compiled with native compiler, linked with Accelerate. OS X 10.7: python 2.6, 2.7, 3.2, 3.3, compiled with native compiler, linked with Accelerate. OS X 10.8: python 2.6, 2.7, 3.2, 3.3, compiled with native compiler, linked with Accelerate. That seems like a lot. It is fairly easy to compile from source on the mac these days; are all those binary packages really needed?

That's not exactly the right list - the same installers built on 10.6 also work on 10.7 and 10.8.

I agree. I'll chime in and give my recommendations, though Ralf is the expert:

For MacOS X I suggest building binary installers for python.org's python 2.7, 3.2 and 3.3 (the 64-bit versions). The result will run on 10.6 and later. It is safest to build these on MacOS X 10.6; it may work to build on a later MacOS X, but it sure doesn't for some packages. You will have to update to the latest bdist_mpkg to build Mac binary installers for python 3. I've not tried it yet.

I don't think users expect a binary installer for Apple's python; I don't recall ever seeing these for numpy, scipy or matplotlib. But if you do want to supply one, Apple provides Python 2.5, 2.6 and 2.7 but no 3.x (at least in MacOS X 10.8).

-- Russell
Re: [Numpy-discussion] OS X binaries for releases
In article CAH6Pt5o32Otdhk2Ms5Cy5Zo=mn48h8x2wbswk92etub4mmr...@mail.gmail.com, Matthew Brett matthew.br...@gmail.com wrote:

Hi, On Thu, Aug 22, 2013 at 12:14 PM, Russell E. Owen ro...@uw.edu wrote:

In article cabl7cqjacxp2grtt8hvmayajrm0xmtn1qt71wkdnbgq7dlu...@mail.gmail.com, Ralf Gommers ralf.gomm...@gmail.com wrote:

Hi all, Building binaries for releases is currently quite complex and time-consuming. For OS X we need two different machines, because we still provide binaries for OS X 10.5 and PPC machines. I propose to not do this anymore. It doesn't mean we completely drop support for 10.5 and PPC, just that we don't produce binaries. PPC was phased out in 2006 and OS X 10.6 came out in 2009, so there can't be a lot of demand for it (and the download stats at http://sourceforge.net/projects/numpy/files/NumPy/1.7.1/ confirm this). Furthermore I propose to not provide 2.6 binaries anymore. Downloads of 2.6 OS X binaries were 5% of the 2.7 ones. We did the same with 2.4 for a long time - support it but no binaries. So what we'd have left at the moment is only the 64-bit/32-bit universal binary for 10.6 and up. What we finally need to add is 3.x OS X binaries. We can make an attempt to build these on 10.8 - since we have access to a hosted 10.8 Mac Mini it would allow all devs to easily do a release (leaving aside the Windows issue). If anyone has tried the 10.6 SDK on 10.8 and knows if it actually works, that would be helpful. Any concerns, objections?

I am in strong agreement. I'll be interested to learn how you make binary installers for python 3.x because the standard version of bdist_mpkg will not do it. I have heard of two other projects (forks or variants of bdist_mpkg) that will, but I have no idea if either is supported.

I think I'm the owner of one of the forks; I'm supporting it, but I should certainly make a release soon too.

That sounds promising. Can you suggest a non-released commit that is stable enough to try, or should we wait for a release?
Also, is there a way to combine multiple packages into one binary installer? (matplotlib used to include python-dateutil, pytz and six, but 1.3 does not).

I have been able to build packages on 10.8 using MACOSX_DEPLOYMENT_TARGET=10.6 that will run on 10.6, so it will probably work. However I have run into several odd problems over the years building a binary installer on a newer system only to find it won't work on older systems for various reasons. Thus my personal recommendation is that you build on 10.6 if you want an installer that reliably works for 10.6 and later. I keep an older computer around for this reason. In fact that is one good reason to drop support for ancient operating systems and PPC.

I'm sitting next to a 10.6 machine you are welcome to use; just let me know, I'll give you login access.

Thank you. Personally I keep an older laptop around that can run 10.6 (and even 10.4 and 10.5, which was handy when I made binaries that supported 10.3.9 and later -- no need for that these days), so I don't need it, but somebody else working on matplotlib binaries might.

-- Russell
Re: [Numpy-discussion] OS X binaries for releases
In article cabl7cqjacxp2grtt8hvmayajrm0xmtn1qt71wkdnbgq7dlu...@mail.gmail.com, Ralf Gommers ralf.gomm...@gmail.com wrote:

Hi all, Building binaries for releases is currently quite complex and time-consuming. For OS X we need two different machines, because we still provide binaries for OS X 10.5 and PPC machines. I propose to not do this anymore. It doesn't mean we completely drop support for 10.5 and PPC, just that we don't produce binaries. PPC was phased out in 2006 and OS X 10.6 came out in 2009, so there can't be a lot of demand for it (and the download stats at http://sourceforge.net/projects/numpy/files/NumPy/1.7.1/ confirm this). Furthermore I propose to not provide 2.6 binaries anymore. Downloads of 2.6 OS X binaries were 5% of the 2.7 ones. We did the same with 2.4 for a long time - support it but no binaries. So what we'd have left at the moment is only the 64-bit/32-bit universal binary for 10.6 and up. What we finally need to add is 3.x OS X binaries. We can make an attempt to build these on 10.8 - since we have access to a hosted 10.8 Mac Mini it would allow all devs to easily do a release (leaving aside the Windows issue). If anyone has tried the 10.6 SDK on 10.8 and knows if it actually works, that would be helpful. Any concerns, objections?

I am in strong agreement. I'll be interested to learn how you make binary installers for python 3.x because the standard version of bdist_mpkg will not do it. I have heard of two other projects (forks or variants of bdist_mpkg) that will, but I have no idea if either is supported.

I have been able to build packages on 10.8 using MACOSX_DEPLOYMENT_TARGET=10.6 that will run on 10.6, so it will probably work. However I have run into several odd problems over the years building a binary installer on a newer system only to find it won't work on older systems for various reasons. Thus my personal recommendation is that you build on 10.6 if you want an installer that reliably works for 10.6 and later.
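[A note for anyone retracing this: the deployment target that governs which MacOS X versions a built extension supports can be inspected from Python itself. A minimal sketch, assuming nothing beyond the stdlib; the printed labels are just illustrative:]

```python
# Show the MACOSX_DEPLOYMENT_TARGET the running Python was configured
# with, plus any environment override; distutils-built extensions inherit
# this value, which determines the oldest MacOS X they should run on.
import os
import sysconfig

configured = sysconfig.get_config_var("MACOSX_DEPLOYMENT_TARGET")
override = os.environ.get("MACOSX_DEPLOYMENT_TARGET")
print("configured:", configured or "not set (not a Mac build?)")
print("override:", override or "none")
```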
I keep an older computer around for this reason. In fact that is one good reason to drop support for ancient operating systems and PPC.

-- Russell
Re: [Numpy-discussion] ANN: matplotlib 1.3.0 released
In article 51faa3ab.6020...@stsci.edu, Michael Droettboom md...@stsci.edu wrote:

On behalf of a veritable army of super coders, I'm pleased to announce the release of matplotlib 1.3.0.

Downloads are available here: http://matplotlib.org/downloads.html as well as through pip. Check with your distro for when matplotlib 1.3.0 will become packaged for your environment. (Note: Mac .dmg installers are still forthcoming due to some issues with the new installation approach.)

Important known issues: matplotlib no longer ships with its Python dependencies, including dateutil, pytz, pyparsing and six. When installing from source or pip, pip will install these for you automatically. When installing from packages (on Linux distributions, MacPorts, homebrew etc.) these dependencies should also be handled automatically. The Windows binary installers do not include or install these dependencies.

An unofficial Mac binary is available from here: http://www.astro.washington.edu/users/rowen/python/

Known issues:
- This may break existing installations of pytz and python-dateutil (especially if those were installed by the matplotlib 1.2.1 Mac binary installer). For safety, reinstall those after installing matplotlib.
- Like the Windows binaries, it does not include pytz, python-dateutil, six or pyparsing. You will have to install those manually (e.g. with pip or easy_install).
- Much of the test code is missing, for unknown reasons. Thus I was not able to run most of its unit tests. So...use at your own risk.

At this point I have no idea if or when there will be an official Mac binary installer. I'm afraid I don't have time to track down the issues right now.

-- Russell
Re: [Numpy-discussion] install numpy 1.6.2 .dmg on macosx 10.7, check for python 2.7
In article loom.20121129t124459-...@post.gmane.org, denis denis-bz...@t-online.de wrote:

Trying to install numpy 1.6.2 on a mac osx 10.7.4 from this .dmg (9323135 numpy-1.6.2-py2.7-python.org-macosx10.3.dmg) gives "numpy 1.6.2 can't be installed on this disk. numpy requires python.org Python 2.7 to install." But python 2.7.3 *is* installed from python.org (18761950 python-2.7.3-macosx10.6.dmg) and /usr/bin/python is linked as described in http://wolfpaulus.com/journal/mac/installing_python_osx

python -c 'import sys; print sys.version'
2.7.3 (v2.7.3:70274d53c1dd, Apr 9 2012, 20:52:43) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]

These are not compatible. You have installed python.org's macosx10.6 build (a very reasonable choice). But you must install binary packages labelled 10.6, never packages labelled 10.3 (which are only for the 10.3 version of python.org's python). I'm glad you got an error message (however opaque), since if the install had succeeded the results would not have worked.

-- Russell

P.S. the difference is:
- 10.6 (which requires MacOS X 10.6 or later) is 64-bit and requires intel
- 10.3 (which requires MacOS X 10.3.9 or later) is 32-bit and includes PPC support
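[A quick way to check which flavor of python you are actually running; a small sketch, not from the original email:]

```python
# Report whether the running Python interpreter is 32- or 64-bit by
# measuring the size of a C pointer: the 10.3 installers are 32-bit
# (intel + PPC), the 10.6 installers are 64-bit intel-only.
import struct
import sys

bits = struct.calcsize("P") * 8  # size of a C pointer, in bits
print("Python %s is %d-bit" % (sys.version.split()[0], bits))
```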
Re: [Numpy-discussion] Code Freeze for NumPy 1.7
In article 1342393528.28368.3.ca...@esdceeprjpstudent1.mit.edu, Paul Natsuo Kishimoto m...@paul.kishimoto.name wrote:

On Sat, 2012-07-14 at 17:45 -0500, Travis Oliphant wrote: Hey all, We are nearing a code-freeze for NumPy 1.7. Are there any last-minute changes people are wanting to push into NumPy 1.7? We should discuss them as soon as possible. I'm proposing a code-freeze at midnight UTC on July 18th (7:00pm CDT on July 17th). This will allow the creation of beta releases of NumPy on the 18th of July. This is a few days later than originally hoped for --- largely due to unexpected travel schedules of Ondrej and I, but it does give people a few more days to get patches in. Of course, we will be able to apply bug-fixes to the 1.7.x branch once the tag is made. If you have a pull-request that is not yet applied and would like to discuss it for inclusion, the time to do it is now. Best, -Travis

Bump for: https://github.com/numpy/numpy/pull/351 As requested by njsmith, I gave a more detailed explanation and asked the list for input at: http://www.mail-archive.com/numpy-discussion@scipy.org/msg38306.html There was one qualified negative reply and nothing (yet) further. I'd appreciate if some other devs could weigh in.

My personal opinion is that the improvement is not sufficient to justify breaking backward compatibility.

-- Russell
Re: [Numpy-discussion] Proposed Roadmap Overview
In article cagy4rcxxl8pos5zcwa4thcg0dhkyesoepjso4z05sz_pqjv...@mail.gmail.com, David Cournapeau courn...@gmail.com wrote:

On Sat, Feb 18, 2012 at 10:50 PM, Sturla Molden stu...@molden.no wrote: In an ideal world, we would have a better language than C++ that can be spit out as C for portability. What about a statically typed Python? (That is, not Cython.) We just need to make the compiler :-)

There are better languages than C++ that have most of the technical benefits stated in this discussion (rust and D being the most obvious ones), but whose usage is unrealistic today for various reasons: knowledge, availability on esoteric platforms, etc... A new language is completely ridiculous.

I just want to say that C++ has come a long way. I used to hate it, but now that it has matured, using some basic features of boost (especially shared_ptr) can turn it into a really nice language. The next version will be even better, but one can write nice C++ today. shared_ptr allows objects that easily manage their own memory (basic automatic garbage collection). Generic programming seems like a really good fit to numpy's array types. I am part of a large project that codes in C++ and Python and we find it works very well for us. I can't imagine working in C anymore and doing without exception handling and namespaces. So I'm sorry to hear that C++ is not being considered for a numpy rewrite.

-- Russell
Re: [Numpy-discussion] trouble building numpy 1.6.1 on Scientific Linux 5
In article CABL7CQi_jQZgHa5rL8aSsb_PEmAPTNXyUyQutgQtz=_ljux...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote: On Tue, Dec 20, 2011 at 10:52 PM, Russell E. Owen ro...@uw.edu wrote: In article rowen-74bafa.11292712122...@news.gmane.org, Russell E. Owen ro...@uw.edu wrote: In article cabl7cqjezmtswcupj0kgfjz4xc4arrwn24bi3svzjwcc2t9...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote: On Fri, Dec 9, 2011 at 8:02 PM, Russell E. Owen ro...@uw.edu wrote: I'm trying to build numpy 1.6.1 on Scientific Linux 5 but the unit tests claim the wrong version of fortran was used. I thought I knew how to avoid that, but it's not working. ...(elided text that suggests numpy is building using g77 even though I asked for gfortran)... Any suggestions on how to fix this? I assume you have g77 installed and on your PATH. If so, try moving it off your path. Yes. I would have tried that if I had known how to do it (though I'm puzzled why it would be wanted since I told the installer to use gfortran). The problem is that g77 is in /usr/bin/ and I don't have root privs on this system. The explanation of why g77 is still picked up, and a possible solution: http://thread.gmane.org/gmane.comp.python.numeric.general/13820/focus=13826 Thank you. I assume you are referring to this answer: You mean g77? Anyways, I think I know why you are having problems. Passing --fcompiler to the config command only affects the Fortran compiler that is used during configuration phase (where we compile small C programs to determine what your platform supports, like isnan() and the like). It does not propagate to the rest of the build_ext phase where you want it. Use config_fc to set up your Fortran compiler for all of the phases: $ python setup.py config_fc --fcompiler=gnu95 build Fascinating. However, there are two things I don't understand: 1) Is my build actually broken? The ldd output for lapack_lite has no sign of libg2c.so (see quote from build instructions below). 
If it's just a false report from the unit test then I don't need to rebuild (and there are a lot of packages built against it -- a rebuild will take much of a day).

2) This advice seems to contradict the build documentation (see below). Does this indicate a bug in the docs? In setup.py? Some other issue? I don't remember ever having this problem building numpy before.

Quote from the build docs: http://docs.scipy.org/doc/numpy/user/install.html

Choosing the fortran compiler
To build with g77: python setup.py build --fcompiler=gnu
To build with gfortran: python setup.py build --fcompiler=gnu95
For more information see: python setup.py build --help-fcompiler

How to check the ABI of blas/lapack/atlas
One relatively simple and reliable way to check for the compiler used to build a library is to use ldd on the library. If libg2c.so is a dependency, this means that g77 has been used. If libgfortran.so is a a dependency, gfortran has been used. If both are dependencies, this means both have been used, which is almost always a very bad idea.

- ldd on my numpy/linalg/lapack_lite.so (I don't see libg2c.so):

-bash-3.2$ ldd /astro/users/rowen/local/lib/python/numpy/linalg/lapack_lite.so
    linux-vdso.so.1 => (0x7fff0cff)
    liblapack.so.3 => /usr/lib64/liblapack.so.3 (0x2acadd738000)
    libblas.so.3 => /usr/lib64/libblas.so.3 (0x2acadde42000)
    libgfortran.so.3 => /usr/lib64/libgfortran.so.3 (0x2acade096000)
    libm.so.6 => /lib64/libm.so.6 (0x2acade38)
    libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x2acade604000)
    libc.so.6 => /lib64/libc.so.6 (0x2acade812000)
    libgfortran.so.1 => /usr/lib64/libgfortran.so.1 (0x2acadeb6a000)
    /lib64/ld-linux-x86-64.so.2 (0x003b2ba0)
Re: [Numpy-discussion] trouble building numpy 1.6.1 on Scientific Linux 5
In article CABL7CQi_jQZgHa5rL8aSsb_PEmAPTNXyUyQutgQtz=_ljux...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote:

On Tue, Dec 20, 2011 at 10:52 PM, Russell E. Owen ro...@uw.edu wrote:

In article rowen-74bafa.11292712122...@news.gmane.org, Russell E. Owen ro...@uw.edu wrote: In article cabl7cqjezmtswcupj0kgfjz4xc4arrwn24bi3svzjwcc2t9...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote: On Fri, Dec 9, 2011 at 8:02 PM, Russell E. Owen ro...@uw.edu wrote: I'm trying to build numpy 1.6.1 on Scientific Linux 5 but the unit tests claim the wrong version of fortran was used. I thought I knew how to avoid that, but it's not working. ...(elided text that suggests numpy is building using g77 even though I asked for gfortran)... Any suggestions on how to fix this? I assume you have g77 installed and on your PATH. If so, try moving it off your path. Yes. I would have tried that if I had known how to do it (though I'm puzzled why it would be wanted since I told the installer to use gfortran). The problem is that g77 is in /usr/bin/ and I don't have root privs on this system.

The explanation of why g77 is still picked up, and a possible solution: http://thread.gmane.org/gmane.comp.python.numeric.general/13820/focus=13826

OK. I tried this:
- clear out old numpy from ~/local
- unpack fresh numpy 1.6.1 in a build directory and cd into it

$ python setup.py config_fc --fcompiler=gnu95 build
$ python setup.py install --home=~/local
$ cd
$ python
>>> import numpy
>>> numpy.__file__  # to make sure it picked up the newly built version
>>> numpy.test()

Again the unit test fails with:

FAIL: test_lapack (test_build.TestF77Mismatch)
Traceback (most recent call last):
  File /astro/users/rowen/local/lib/python/numpy/testing/decorators.py, line 146, in skipper_func
    return f(*args, **kwargs)
  File /astro/users/rowen/local/lib/python/numpy/linalg/tests/test_build.py, line 50, in test_lapack
    information.)
AssertionError: Both g77 and gfortran runtimes linked in lapack_lite !
This is likely to cause random crashes and wrong results. See numpy INSTALL.txt for more information.

-- Russell

P.S. I'm using nose 0.11.4 because the current version requires distrib. Surely that won't affect this?
Re: [Numpy-discussion] trouble building numpy 1.6.1 on Scientific Linux 5
In article cafxk4brjwx_whsh7v_b62ug+3q2ctqvewgctf-p-atfe4hq...@mail.gmail.com, Olivier Delalleau sh...@keba.be wrote:

2011/12/12 Russell E. Owen ro...@uw.edu

In article cabl7cqjezmtswcupj0kgfjz4xc4arrwn24bi3svzjwcc2t9...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote: On Fri, Dec 9, 2011 at 8:02 PM, Russell E. Owen ro...@uw.edu wrote: I'm trying to build numpy 1.6.1 on Scientific Linux 5 but the unit tests claim the wrong version of fortran was used. I thought I knew how to avoid that, but it's not working. ...(elided text that suggests numpy is building using g77 even though I asked for gfortran)... Any suggestions on how to fix this? I assume you have g77 installed and on your PATH. If so, try moving it off your path. Yes. I would have tried that if I had known how to do it (though I'm puzzled why it would be wanted since I told the installer to use gfortran). The problem is that g77 is in /usr/bin/ and I don't have root privs on this system. -- Russell

You could create a link g77 -> gfortran and make sure this link comes first in your PATH. (That's assuming command lines for g77 and gfortran are compatible -- I don't know if that's the case).

Interesting idea. I gave it a try (see P.S.), but it didn't help. I get the same error in the unit test.

-- Russell

P.S.
-bash-3.2$ which g77
~/local/bin/g77
-bash-3.2$ ls -l ~/local/bin/g77
lrwxrwxrwx 1 rowen astro 19 Dec 20 10:59 /astro/users/rowen/local/bin/g77 -> /usr/bin/gfortran44
-bash-3.2$ g77 --version
GNU Fortran (GCC) 4.4.0 20090514 (Red Hat 4.4.0-6)
Copyright (C) 2009 Free Software Foundation, Inc.
GNU Fortran comes with NO WARRANTY, to the extent permitted by law. You may redistribute copies of GNU Fortran under the terms of the GNU General Public License. For more information about these matters, see the file named COPYING
Re: [Numpy-discussion] trouble building numpy 1.6.1 on Scientific Linux 5
In article rowen-74bafa.11292712122...@news.gmane.org, Russell E. Owen ro...@uw.edu wrote: In article cabl7cqjezmtswcupj0kgfjz4xc4arrwn24bi3svzjwcc2t9...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote: On Fri, Dec 9, 2011 at 8:02 PM, Russell E. Owen ro...@uw.edu wrote: I'm trying to build numpy 1.6.1 on Scientific Linux 5 but the unit tests claim the wrong version of fortran was used. I thought I knew how to avoid that, but it's not working. ...(elided text that suggests numpy is building using g77 even though I asked for gfortran)... Any suggestions on how to fix this? I assume you have g77 installed and on your PATH. If so, try moving it off your path. Yes. I would have tried that if I had known how to do it (though I'm puzzled why it would be wanted since I told the installer to use gfortran). The problem is that g77 is in /usr/bin/ and I don't have root privs on this system. I'm starting to suspect this is a bug in the unit test, not the building of numpy. The unit test complains: Traceback (most recent call last): File /astro/users/rowen/local/lib/python/numpy/testing/decorators.py, line 146, in skipper_func return f(*args, **kwargs) File /astro/users/rowen/local/lib/python/numpy/linalg/tests/test_build.py, line 50, in test_lapack information.) AssertionError: Both g77 and gfortran runtimes linked in lapack_lite ! 
This is likely to cause random crashes and wrong results. See numpy INSTALL.txt for more information.

But when I run ldd on numpy/linalg/lapack_lite.so I get:

-bash-3.2$ ldd /astro/users/rowen/local/lib/python/numpy/linalg/lapack_lite.so
    linux-vdso.so.1 => (0x7fff0cff)
    liblapack.so.3 => /usr/lib64/liblapack.so.3 (0x2acadd738000)
    libblas.so.3 => /usr/lib64/libblas.so.3 (0x2acadde42000)
    libgfortran.so.3 => /usr/lib64/libgfortran.so.3 (0x2acade096000)
    libm.so.6 => /lib64/libm.so.6 (0x2acade38)
    libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x2acade604000)
    libc.so.6 => /lib64/libc.so.6 (0x2acade812000)
    libgfortran.so.1 => /usr/lib64/libgfortran.so.1 (0x2acadeb6a000)
    /lib64/ld-linux-x86-64.so.2 (0x003b2ba0)

The build instructions say (sic): One relatively simple and reliable way to check for the compiler used to build a library is to use ldd on the library. If libg2c.so is a dependency, this means that g77 has been used. If libgfortran.so is a a dependency, gfortran has been used. If both are dependencies, this means both have been used, which is almost always a very bad idea.

I don't see any sign of libg2c.so. Is there some other evidence that numpy/linalg/lapack_lite.so is built against both g77 and gfortran, or is the unit test result wrong or...?

-- Russell
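[The rule quoted from the build docs can be sketched mechanically. Note this parses ldd output, whereas numpy's test_lapack inspects the built extension itself, which may be why the two disagree here; the function name and the canned sample below are illustrative, not numpy code:]

```python
# Apply the build docs' rule to ldd output: flag trouble only when BOTH
# the g77 runtime (libg2c) and the gfortran runtime (libgfortran)
# appear among the dependencies.
def fortran_runtimes(ldd_output):
    deps = ldd_output.split()
    has_g77 = any(d.startswith("libg2c.so") for d in deps)
    has_gfortran = any(d.startswith("libgfortran.so") for d in deps)
    return has_g77, has_gfortran

sample = """\
liblapack.so.3 => /usr/lib64/liblapack.so.3
libgfortran.so.3 => /usr/lib64/libgfortran.so.3
libgfortran.so.1 => /usr/lib64/libgfortran.so.1
libc.so.6 => /lib64/libc.so.6
"""
g77, gfortran = fortran_runtimes(sample)
print("g77 runtime:", g77, "| gfortran runtime:", gfortran)  # False, True
if g77 and gfortran:
    print("both runtimes linked -- almost always a very bad idea")
```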
Re: [Numpy-discussion] trouble building numpy 1.6.1 on Scientific Linux 5
In article cabl7cqjezmtswcupj0kgfjz4xc4arrwn24bi3svzjwcc2t9...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote:

On Fri, Dec 9, 2011 at 8:02 PM, Russell E. Owen ro...@uw.edu wrote: I'm trying to build numpy 1.6.1 on Scientific Linux 5 but the unit tests claim the wrong version of fortran was used. I thought I knew how to avoid that, but it's not working. ...(elided text that suggests numpy is building using g77 even though I asked for gfortran)... Any suggestions on how to fix this?

I assume you have g77 installed and on your PATH. If so, try moving it off your path.

Yes. I would have tried that if I had known how to do it (though I'm puzzled why it would be wanted since I told the installer to use gfortran). The problem is that g77 is in /usr/bin/ and I don't have root privs on this system.

-- Russell
[Numpy-discussion] trouble building numpy 1.6.1 on Scientific Linux 5
I'm trying to build numpy 1.6.1 on Scientific Linux 5 but the unit tests claim the wrong version of fortran was used. I thought I knew how to avoid that, but it's not working.

I don't have atlas (this needs to run on a lot of similar-but-not-identical machines). I believe blas and lapack were built against gfortran:

-bash-3.2$ ldd /usr/lib64/libblas.so
    linux-vdso.so.1 => (0x7fff4bffd000)
    libm.so.6 => /lib64/libm.so.6 (0x2ab26a0c8000)
    libgfortran.so.1 => /usr/lib64/libgfortran.so.1 (0x2ab26a34c000)
    libc.so.6 => /lib64/libc.so.6 (0x2ab26a5e3000)
    /lib64/ld-linux-x86-64.so.2 (0x003b2ba0)
-bash-3.2$ ldd /usr/lib64/liblapack.so
    linux-vdso.so.1 => (0x7fffe97fd000)
    libblas.so.3 => /usr/lib64/libblas.so.3 (0x2b6438d75000)
    libm.so.6 => /lib64/libm.so.6 (0x2b6438fca000)
    libgfortran.so.1 => /usr/lib64/libgfortran.so.1 (0x2b643924d000)
    libc.so.6 => /lib64/libc.so.6 (0x2b64394e4000)
    /lib64/ld-linux-x86-64.so.2 (0x003b2ba0)

The sysadmins have provided a gcc 4.4.0 compiler that I access using symlinks on my $PATH:

-bash-3.2$ which gcc g++ gfortran
~/local/bin/gcc
~/local/bin/g++
~/local/bin/gfortran
-bash-3.2$ ls -l ~/local/bin
lrwxrwxrwx 1 rowen astro 14 Oct 28 2010 g++ -> /usr/bin/g++44
lrwxrwxrwx 1 rowen astro 14 Oct 28 2010 gcc -> /usr/bin/gcc44
lrwxrwxrwx 1 rowen astro 19 Dec 5 16:40 gfortran -> /usr/bin/gfortran44
-bash-3.2$ gfortran --version
GNU Fortran (GCC) 4.4.0 20090514 (Red Hat 4.4.0-6)
Copyright (C) 2009 Free Software Foundation, Inc.

For this log I used a home-built python 2.6.5 that is widely used. However, I've tried it with other builds of python that are on our system as well, with no better success (including a Python 2.7.2).

-bash-3.2$ which python
/astro/apps/pkg/python64/bin/python
-bash-3.2$ python
Python 2.6.5 (r265:79063, Aug 4 2010, 11:27:53)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
numpy seems to see gfortran when it builds:

-bash-3.2$ python setup.py build --fcompiler=gnu95
Running from numpy source directory.
non-existing path in 'numpy/distutils': 'site.cfg'
F2PY Version 2
blas_opt_info:
blas_mkl_info:
...  NOT AVAILABLE
atlas_blas_threads_info:
...  NOT AVAILABLE
atlas_blas_info:
...  NOT AVAILABLE
blas_info:
...  FOUND:
    libraries = ['blas']
    library_dirs = ['/usr/lib64']
    language = f77
  FOUND:
    libraries = ['blas']
    library_dirs = ['/usr/lib64']
    define_macros = [('NO_ATLAS_INFO', 1)]
    language = f77
lapack_opt_info:
lapack_mkl_info:
mkl_info:
...  NOT AVAILABLE
  NOT AVAILABLE
atlas_threads_info:
...  NOT AVAILABLE
atlas_info:
...  NOT AVAILABLE
/astro/users/rowen/build/numpy-1.6.1/numpy/distutils/system_info.py:1330: UserWarning:
    Atlas (http://math-atlas.sourceforge.net/) libraries not found.
    Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [atlas]) or by setting
    the ATLAS environment variable.
  warnings.warn(AtlasNotFoundError.__doc__)
lapack_info:
  libraries lapack not found in /astro/apps/lsst_w12_sl5/Linux64/external/python/2.7.2+2/lib
...  FOUND:
    libraries = ['lapack']
    library_dirs = ['/usr/lib64']
    language = f77
  FOUND:
    libraries = ['lapack', 'blas']
    library_dirs = ['/usr/lib64']
    define_macros = [('NO_ATLAS_INFO', 1)]
    language = f77
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
build_src
building py_modules sources
creating build
creating build/src.linux-x86_64-2.7
creating build/src.linux-x86_64-2.7/numpy
creating build/src.linux-x86_64-2.7/numpy/distutils
building library npymath sources
customize Gnu95FCompiler
Found executable /astro/users/rowen/local/bin/gfortran

# I install it in an out-of-the-way location just so I can test it
-bash-3.2$ python setup.py install --home=~/local
...
-bash-3.2$ cd
-bash-3.2$ python
>>> import numpy
>>> numpy.__path__
['/astro/users/rowen/local/lib/python/numpy']
>>> numpy.test()
Running unit tests for numpy
NumPy version 1.6.1
NumPy is installed in /astro/users/rowen/local/lib/python/numpy
Python version 2.6.5 (r265:79063, Aug 4 2010, 11:27:53) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)]
nose version 0.11.4
======================================================================
FAIL: test_lapack (test_build.TestF77Mismatch)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/astro/users/rowen/local/lib/python/numpy/testing/decorators.py", line 146, in skipper_func
    return f(*args, **kwargs)
  File "/astro/users/rowen/local/lib/python/numpy/linalg/tests/test_build.py", line 50, in test_lapack
    information.)
AssertionError: Both g77 and gfortran runtimes linked in lapack_lite! This is likely to cause random crashes and
Re: [Numpy-discussion] wanted: decent matplotlib alternative
In article 8739ew90ry@falma.de, Christoph Groth c...@falma.de wrote: Hello, Is it just me who thinks that matplotlib is ugly and a pain to use? So far I haven't found a decent alternative usable from within python. (I haven't tried all the packages out there.) I'm mostly interested in 2d plots. Who is happy enough with a numpy-compatible plotting package to recommend it? I know folks who like HippoDraw and use it instead of matplotlib due to its speed. Veusz sounds promising. Both use Qt as a back end. I've not used either because I need Tcl/TK as a back end for much of my work. -- Russell ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Efficient way to load a 1Gb file?
In article 781af0c6-b761-4abb-9798-938558253...@astro.physik.uni-goettingen.de, Derek Homeier de...@astro.physik.uni-goettingen.de wrote:

On 11.08.2011, at 8:50PM, Russell E. Owen wrote:
> It seems a shame that loadtxt has no argument for predicted length, which would allow preallocation and less appending/copying data. And yes... reading the whole file first to figure out how many elements it has seems sensible to me -- at least as a switchable behavior, and preferably the default. 1 Gb isn't that large in modern systems, but loadtxt is filling up all 6 Gb of RAM reading it!

1 GB is indeed not much in terms of disk space these days, but using text files for such data amounts is nonetheless very much non-state-of-the-art ;-) That said, of course there is no justification to use excessive amounts of memory where it could be avoided! Implementing the above scheme for npyio is not quite as straightforward as in the example I gave before, mainly for the following reasons: loadtxt also has to deal with more complex data like structured arrays, plus comments, empty lines etc., meaning it has to find and count the actual valid data lines. Ideally, genfromtxt, which offers yet more functionality to deal with missing data, should offer the same options, but they would certainly be more difficult to implement there. More than 6 GB is still remarkable -- from what info I found on the web, lists seem to consume ~24 bytes/element, i.e. 3 times more than a final float64 array. The text representation would typically take 10-20 chars for one float (though with 12 digits, they could usually be read as float32 without loss of precision). Thus a factor of 6 seems quite extreme, unless the file is full of (relatively) short integers... But this also means copying of the final array would still have a relatively low memory footprint compared to the buffer list, thus using some kind of mutable array type for reading should be a reasonable solution as well.
Unfortunately fromiter is not of that much use here since it only reads 1D-arrays. I haven't tried to use Chris' accumulator class yet, so for now I did go the 2x read approach with loadtxt, it turned out to add only ~10% to the read-in time. For compressed files this goes up to 30-50%, but once physical memory is exhausted it should probably actually become faster. I've made a pull request https://github.com/numpy/numpy/pull/144 implementing that option as a switch 'prescan'; could you review it in particular regarding the following: Is the option reasonably named and documented? In the case the allocated array does not match the input data (which really should never happen), right now just a warning is issued, filling any excess buffer with zeros or discarding remaining input data - should this rather raise an IndexError? No prediction if/when I might be able to provide this for genfromtxt, sorry! Cheers, Derek This looks like a great improvement to me! I think the name is well chosen and the help is very clear. A few comments: - Might you rename the variable l? It is easily confused with the digit 1. - I don't understand the l n_valid test, so this may be off base, but I'm surprised that you first massage the data and then raise an exception. Is the massaged data any use after the exception is raised? Naively I would expect you to issue a warning instead of raising an exception if you are going to handle the error by massaging the data. (It is a pity that your patch duplicates so much parsing code, but I don't see a better way to do it. Putting conditionals in the parsing loop to decide how to handle each line based on prescan would presumably slow things down too much.) Regards, -- Russell ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
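For concreteness, the two-pass idea discussed above (count the valid lines, preallocate, then fill) can be approximated in plain Python without patching numpy. This is only a sketch of the scheme, not the actual 'prescan' patch; the helper names and the fixed-column assumption are mine:

```python
import numpy as np

def count_data_lines(filename, comments="#"):
    """Count lines that would be treated as data (non-blank, non-comment)."""
    n = 0
    with open(filename) as f:
        for line in f:
            if line.split(comments, 1)[0].strip():
                n += 1
    return n

def loadtxt_prealloc(filename, ncols, dtype=float, comments="#"):
    """Read a whitespace-delimited text file into a preallocated 2-D array."""
    nrows = count_data_lines(filename, comments)   # pass 1: count
    out = np.empty((nrows, ncols), dtype=dtype)    # allocate once, no list buffer
    i = 0
    with open(filename) as f:                      # pass 2: fill
        for line in f:
            stripped = line.split(comments, 1)[0].strip()
            if not stripped:
                continue
            out[i] = [dtype(v) for v in stripped.split()]
            i += 1
    return out
```

The second pass costs extra I/O, but as Derek measured, that is typically ~10% of the total read time and it avoids the ~3x memory overhead of buffering rows in Python lists.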
Re: [Numpy-discussion] Efficient way to load a 1Gb file?
In article ca+rwobwjyy_abjijnxepkseraeom608uimywffgag-6xdgs...@mail.gmail.com, Torgil Svensson torgil.svens...@gmail.com wrote:

> Try the fromiter function, that will allow you to pass an iterator which can read the file line by line and not preload the whole file.
>
>   file_iterator = iter(open('filename.txt'))
>   line_parser = lambda x: map(float, x.split('\t'))
>   a = np.fromiter(itertools.imap(line_parser, file_iterator), dtype=float)
>
> You have also the option to iterate the file twice and pass the count argument.

Thanks. That sounds great!

-- Russell
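A note for readers on current Pythons: itertools.imap is gone in Python 3 (plain map is already lazy there), and np.fromiter traditionally accepts only scalar items, so feeding it per-line lists as in the snippet above may fail. One way to keep the single-pass, low-memory behavior for a 2-D table is to flatten the parsed fields and reshape at the end. A sketch, assuming the number of columns is known:

```python
import numpy as np

def load_table(filename, ncols, sep="\t"):
    """Stream a delimited text file into a 2-D float array via np.fromiter."""
    with open(filename) as f:
        # Lazily yield every field of every line as a float (no row buffering).
        values = (float(field) for line in f for field in line.split(sep))
        flat = np.fromiter(values, dtype=np.float64)
    return flat.reshape(-1, ncols)
```

Passing count=nrows*ncols to fromiter (per Torgil's second suggestion) would additionally let it preallocate rather than grow its buffer.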
Re: [Numpy-discussion] Efficient way to load a 1Gb file?
In article CANm_+Zqmsgo8Q+Oz_0RCya-hJv4Q7PqynDb=lzrgvbtxgy3...@mail.gmail.com, Anne Archibald aarch...@physics.mcgill.ca wrote:

> There was also some work on a semi-mutable array type that allowed appending along one axis, then 'freezing' to yield a normal numpy array (unfortunately I'm not sure how to find it in the mailing list archives). One could write such a setup by hand, using mmap() or realloc(), but I'd be inclined to simply write a filter that converted the text file to some sort of binary file on the fly, value by value. Then the file can be loaded in or mmap()ed. A 1 Gb text file is a miserable object anyway, so it might be desirable to convert to (say) HDF5 and then throw away the text file.

Thank you and the others for your help. It seems a shame that loadtxt has no argument for predicted length, which would allow preallocation and less appending/copying data. And yes... reading the whole file first to figure out how many elements it has seems sensible to me -- at least as a switchable behavior, and preferably the default. 1 Gb isn't that large in modern systems, but loadtxt is filling up all 6 Gb of RAM reading it! I'll suggest the HDF5 solution to my colleague. Meanwhile I think he's hacked around the problem by reading the file through once to figure out the array length, allocating that, and reading the data in with a Python loop. Sounds slow, but it's working.

-- Russell
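The convert-to-binary filter Anne describes can be sketched in a few lines of pure numpy: stream the text through once in chunks, append raw float64 bytes to a binary file, then memory-map the result so it never has to fit in RAM. The file names and chunk size here are illustrative, and the sketch assumes a uniform whitespace-delimited table:

```python
import numpy as np

def text_to_binary(text_path, bin_path, chunk_lines=100000):
    """Convert a whitespace-delimited text table to raw float64, chunk by chunk.

    Returns the number of columns, needed later to reshape the flat data.
    """
    ncols = None
    with open(text_path) as src, open(bin_path, "wb") as dst:
        chunk = []
        for line in src:
            row = [float(v) for v in line.split()]
            if ncols is None:
                ncols = len(row)
            chunk.append(row)
            if len(chunk) >= chunk_lines:
                np.asarray(chunk, dtype=np.float64).tofile(dst)
                chunk = []
        if chunk:
            np.asarray(chunk, dtype=np.float64).tofile(dst)
    return ncols

# Afterwards the data can be mapped without loading it all at once, e.g.:
#   ncols = text_to_binary("big.txt", "big.bin")
#   data = np.memmap("big.bin", dtype=np.float64, mode="r").reshape(-1, ncols)
```

Peak memory is then bounded by the chunk size rather than the file size; converting to HDF5 instead would add named datasets and compression on top of the same idea.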
[Numpy-discussion] Efficient way to load a 1Gb file?
A coworker is trying to load a 1Gb text data file into a numpy array using numpy.loadtxt, but he says it is using up all of his machine's 6Gb of RAM. Is there a more efficient way to read such text data files? -- Russell
Re: [Numpy-discussion] numpy.sqrt behaving differently on MacOS Lion
In article cabl7cqj4i6stf_qjndvch66fsfc5bjq9etpx3ukczaxyyuw...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote:

> On Wed, Jul 27, 2011 at 7:17 PM, Ilan Schnell ischn...@enthought.com wrote:
>> MacOS Lion:
>>   numpy.sqrt([complex(numpy.nan, numpy.inf)])
>>   array([ nan+infj])
>> All other systems:
>>   array([ inf+infj])
>> This causes a few numpy tests to fail on Lion. The numpy was not compiled using the new LLVM based gcc, it is the same numpy binary I used on other MacOS systems, which was compiled using gcc-4.0.1. However on Lion it is linked to Lion's LLVM based gcc runtime, which apparently has some different behavior when it comes to such strange complex values.
>
> These types of complex corner cases fail on several other platforms; there they are marked as skipped. I propose not to start changing this yet - the compiler change is causing problems with scipy (http://projects.scipy.org/scipy/ticket/1476) and it's not yet clear what the recommended build setup on Lion should be. Regarding binaries, it may be better to distribute separate ones for each version of OS X from numpy 1.7 / 2.0 (we already do for python 2.7). In that case this particular failure will not occur.

Please don't distribute a different numpy binary for each version of MacOS X. That makes it very difficult to distribute bundled applications. The current situation is very reasonable, in my opinion: numpy has two Mac binary distributions for Python 2.7: 32-bit 10.3-and-up and 64-bit 10.6-and-up. These match the python.org python distributions. I can't see wanting any more than one per python.org Mac binary. Note that the numpy Mac binaries are not listed next to each other on the numpy sourceforge download page, so some folks are installing the wrong one. If you add even more os-specific flavors the problem is likely to get worse.

-- Russell
Re: [Numpy-discussion] Problems with numpy binary for Python2.7 + OS X 10.6
In article 4e2dcb72.3070...@hkl.hms.harvard.edu, Ian Stokes-Rees ijsto...@hkl.hms.harvard.edu wrote: As best I can tell, I have Python 2.7.2 for my system Python: [ijstokes@moose ~]$ python -V Python 2.7.2 [ijstokes@moose ~]$ which python /Library/Frameworks/Python.framework/Versions/2.7/bin/python however when I attempt to install the recent numpy binary python-2.7.2-macosx10.6.dmg I get stopped at the first stage of the install procedure with the error: numpy 1.6.1 can't be installed on this disk. numpy requires System Python 2.7 to install. Any idea what I might be doing wrong? Is it looking for /usr/bin/python2.7? For that, I only have up to 2.6 available. (and 2.5) Cheers, Ian I believe the error message is misleading (a known bug). From the path you are probably running python.org python (though it could be ActiveState or built from source). Assuming it really is python.org, the next question is: which of the two flavors of python.org Python do you have: the 10.3 version (which is 32-bit only, but very backward compatible), or the 10.6 version (which includes 64-bit support but requires MacOS X 10.6 or later)? There is a separate numpy installer for each (and unfortunately they are not listed near each other in the file list). Maybe you got that match wrong? If in doubt you could reinstall python from python.org. -- Russell ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] ANN: NumPy 1.6.1 release candidate 2
In article cabl7cqhnnjkzk9xnrlvdarsdknwrm4ev0mxdurjsaxq73eb...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote: On Tue, Jul 5, 2011 at 11:41 PM, Russell E. Owen ro...@uw.edu wrote: In article BANLkTi=LXiTcrv1LgMtP=p9nF8eMr8=+h...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote: https://sourceforge.net/projects/numpy/files/NumPy/1.6.1rc2/ Will there be a Mac binary for 32-bit pythons (one that is compatible with older versions of MacOS X)? At present I only see a 64-bit 10.6-only version. Yes there will be for the final release (10.4-10.6 compatible). I can't create those on my own computer, so sometimes I don't make them for RCs. I'm glad they will be present for the final release. FYI: I built my own 1.6.1rc2 against Python 2.7.2 (the 32-bit Mac version from python.org). I reproduced a memory error that I've been trying to narrow down. This is ticket 1896: http://projects.scipy.org/numpy/ticket/1896 and the problem is also in 1.6.0. -- Russell ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] ANN: NumPy 1.6.1 release candidate 2
In article BANLkTi=LXiTcrv1LgMtP=p9nF8eMr8=+h...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote: https://sourceforge.net/projects/numpy/files/NumPy/1.6.1rc2/ Will there be a Mac binary for 32-bit pythons (one that is compatible with older versions of MacOS X)? At present I only see a 64-bit 10.6-only version. -- Russell
Re: [Numpy-discussion] numpy build: automatic fortran detection
In article banlktikodians0ujrdkpudffo8agpnx...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote: On Thu, Jun 9, 2011 at 11:46 PM, Russell E. Owen ro...@uw.edu wrote: What would it take to automatically detect which flavor of fortran to use to build numpy on linux? You want to figure out which compiler was used to build BLAS/LAPACK/ATLAS and check that the numpy build uses the same, right? Assuming you only care about g77/gfortran, you can try this (from http://docs.scipy.org/doc/numpy/user/install.html): How to check the ABI of blas/lapack/atlas - One relatively simple and reliable way to check for the compiler used to build a library is to use ldd on the library. If libg2c.so is a dependency, this means that g77 has been used. If libgfortran.so is a dependency, gfortran has been used. If both are dependencies, this means both have been used, which is almost always a very bad idea. You could do something similar for other compilers if needed. It would help to know exactly what problem you are trying to solve. I'm trying to automate the process of figuring out which fortran to use because we have a build system that should install numpy for users, without user interaction. We found this out the hard way by not specifying a compiler and ending up with a numpy that was mis-built (according to its helpful unit tests). However, it appears that g77 is very old, so I'm now wondering if it would make sense to switch to gfortran for the default? I think our own procedure will be to assume gfortran and complain to users if it's not right. (We can afford the time to run the test.) The unit tests are clever enough to detect a mis-build (though surprisingly that is not done as part of the build process), so surely it can be done. The test suite takes some time to run. It would be very annoying if it ran by default on every rebuild. It's easy to write a build script that builds numpy, then runs the tests, if that's what you need.
In this case the only relevant test is the test for the correct fortran compiler. That is the only test I was proposing be performed. -- Russell
[Numpy-discussion] numpy build: automatic fortran detection
What would it take to automatically detect which flavor of fortran to use to build numpy on linux? The unit tests are clever enough to detect a mis-build (though surprisingly that is not done as part of the build process), so surely it can be done. Even if there is no interest in putting this into numpy's setup.py, we have a need for it for our own system. Any suggestions on how to do this robustly would be much appreciated. -- Russell
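Following Ralf's ldd suggestion from earlier in this thread, here is a sketch of the kind of check a build script could run: call ldd on a fortran-linked shared library (e.g. numpy's lapack_lite extension) and look for the telltale runtimes: libg2c means g77, libgfortran means gfortran, both at once is the mixed-runtime build that test_lapack complains about. The function names are mine, not a numpy interface, and the parsing is split out so it can be exercised without a Linux system:

```python
import subprocess

def fortran_runtimes(ldd_output):
    """Classify the fortran runtime(s) named in `ldd` output.

    Returns a subset of {"g77", "gfortran"}; both present indicates
    a dangerously mixed build.
    """
    found = set()
    for line in ldd_output.splitlines():
        if "libg2c" in line:
            found.add("g77")
        if "libgfortran" in line:
            found.add("gfortran")
    return found

def check_library(path):
    """Run ldd on a shared library and classify its fortran runtime(s)."""
    out = subprocess.run(["ldd", path], capture_output=True, text=True).stdout
    return fortran_runtimes(out)
```

A build system could run check_library on /usr/lib64/liblapack.so before building numpy, and pick --fcompiler accordingly (or refuse to proceed if both runtimes appear).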
Re: [Numpy-discussion] ANN: Numpy 1.6.0 release candidate 2
In article BANLkTinvVxwmo7t7itxxwZRtp4UY=1e...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote: Hi, I am pleased to announce the availability of the second release candidate of NumPy 1.6.0. ... Sources and binaries can be found at http://sourceforge.net/projects/numpy/files/NumPy/1.6.0rc2/ For (preliminary) release notes see below. Thanks! It tests fine built from source on my Mac OS X 10.5.8 with python.org Python 2.6.6: Ran 3537 tests in 24.833s OK (KNOWNFAIL=3, SKIP=1) nose.result.TextTestResult run=3537 errors=0 failures=0 -- Russell ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] Question about numpy.random, especially .poisson
I stumbled across code that looks like this:

imageArr = ...  # a 2-d array of floats
noiseArr = numpy.random.poisson(imageArr)

This works fine in numpy 1.5.1 and seems to do what I would hope: return an array of random ints whose expectation of interval is set by the corresponding element of the input array. Very nice! However, I can't find any documentation supporting this usage. The standard help says:

    poisson(lam=1.0, size=None)

    Draw samples from a Poisson distribution.

    The Poisson distribution is the limit of the Binomial distribution for large N.

    Parameters
    ----------
    lam : float
        Expectation of interval, should be >= 0.
    size : int or tuple of ints, optional
        Output shape. If the given shape is, e.g., ``(m, n, k)``, then
        ``m * n * k`` samples are drawn.

which suggests that lam must be a scalar. So... is the usage of passing in an array for lam actually supported/safe to use? And is there some general rule I could have used to predict that? I'm not complaining -- quite the opposite. But I'd hate to code up something that uses an unsafe API, and I'd also like to be able to predict nifty features like this to get the most out of numpy.

-- Russell
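The behavior the code above relies on can be demonstrated directly: the distribution parameters of the numpy.random functions broadcast against the requested output, even where older docstrings describe them as scalars. A small repeatable version (seeded only so the example is deterministic):

```python
import numpy as np

# Each element of lam sets the expectation of the matching output element,
# so a 2-d lam yields a 2-d array of independent Poisson draws.
imageArr = np.array([[1.0, 10.0], [100.0, 1000.0]])
rng = np.random.RandomState(42)
noiseArr = rng.poisson(imageArr)  # same shape as imageArr, integer dtype
```

Each output element is an independent draw, so elements with large lam come back near their large expectation and small-lam elements come back small.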
Re: [Numpy-discussion] ANN: Numpy 1.6.0 beta 1
In article AANLkTi=eeg8kl7639imrtl-ihg1ncqyolddsid5tf...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote: Hi, I am pleased to announce the availability of the first beta of NumPy 1.6.0. Due to the extensive changes in the Numpy core for this release, the beta testing phase will last at least one month. Please test this beta and report any problems on the Numpy mailing list. Sources and binaries can be found at: http://sourceforge.net/projects/numpy/files/NumPy/1.6.0b1/ For (preliminary) release notes see below. Great! FYI: it works for me on MacOS X 10.5.8 with python.org python 2.6.6:

python setup.py build --fcompiler=gnu95
python setup.py install
cd ..
python -Wd -c 'import numpy; numpy.test()'

NumPy version 1.6.0b1
NumPy is installed in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy
Python version 2.6.6 (r266:84374, Aug 31 2010, 11:00:51) [GCC 4.0.1 (Apple Inc. build 5493)]
nose version 0.11.4
...
Ran 3399 tests in 25.474s
OK (KNOWNFAIL=3, SKIP=1)

-- Russell
Re: [Numpy-discussion] Request for a bit more info on structured arrays in the basics page
In article AANLkTinuGy=aof-s-lzxbguu+sikvzxctxzrccc4n...@mail.gmail.com, Skipper Seabold jsseab...@gmail.com wrote: The page http://docs.scipy.org/doc/numpy/user/basics.rec.html gives a good introduction to structured arrays. However, it says nothing about how to set a particular element (all fields at once) from a collection of data. ... I added a bit at the end here, though it is mentioned briefly above. Feel free to expand. It's a wiki. You just need edit rights. http://docs.scipy.org/numpy/docs/numpy.doc.structured_arrays/ Thank you very much. I had already tried to update the page http://docs.scipy.org/doc/numpy/user/basics.rec.html but it's auto-generated from docs and I had no idea where to find those docs. -- Russell ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] Request for a bit more info on structured arrays in the basics page
The page http://docs.scipy.org/doc/numpy/user/basics.rec.html gives a good introduction to structured arrays. However, it says nothing about how to set a particular element (all fields at once) from a collection of data. For instance:

stArr = numpy.zeros([4,5], dtype=[("pos", float, (2,)), ("rot", float)])

The question is how to set stArr[0]? From experimentation it appears that you can provide a tuple, but not a list. Hence the following works just fine (and the tuple can contain a list):

stArr[0,0] = ([1.0, 1.1], 2.0)

but the following fails:

stArr[0,0] = [[1.0, 1.1], 2.0]

with an error:

TypeError: expected a readable buffer object

This is useful information if one is trying to initialize a structured array from a collection of data, such as that returned from a database query. So this is a request to add a bit to the documentation.

-- Russell
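The tuple-vs-list distinction above is easy to reproduce; a self-contained version of the example (the exact exception raised for the list form has varied across numpy versions, so only the working tuple case is shown):

```python
import numpy as np

# A 4x5 array of records, each with a 2-vector "pos" field and a scalar "rot".
stArr = np.zeros([4, 5], dtype=[("pos", float, (2,)), ("rot", float)])

# Setting one element (all fields at once) takes a tuple, one item per field;
# the tuple may contain a list for the subarray field.
stArr[0, 0] = ([1.0, 1.1], 2.0)
```

Structured scalars are assigned from tuples precisely so that numpy can tell "one record" apart from "a sequence of values to broadcast", which is why the list form is rejected.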
Re: [Numpy-discussion] updated 1.5.1 release schedule
In article aanlktimfgckbg8cprygukcvwvqzxqycykgexvx_=8...@mail.gmail.com, Ralf Gommers ralf.gomm...@googlemail.com wrote: On Mon, Nov 8, 2010 at 5:16 AM, Vincent Davis vinc...@vincentdavis.net wrote: On Sun, Nov 7, 2010 at 1:51 AM, Ralf Gommers ralf.gomm...@googlemail.com wrote: Hi, Since we weren't able to stick to the original schedule for 1.5.1, here's a short note about the current status. There are two changes that need to go in before RC2: https://github.com/numpy/numpy/pull/11 https://github.com/numpy/numpy/pull/9 If no one has time for a review I'll commit those to 1.5.x by Tuesday and tag RC2. Final release should be one week after that, unless an RC3 is necessary. Since we will have 2 different dmgs for python2.7 (osx10.3 and osx10.5), and I don't think there is any check in the installer to make sure the right python2.7 version is present when installing (the installer only checks that python2.7 is present), I think a check should be added. Am I missing something, or are there other suggestions? I would like to get this into 1.5.1rc2. I'm not sure of the best way to make this check, but I think I can come up with a solution. It would also need a useful error message. Vincent To let the user know if there's a mismatch may be helpful, but we shouldn't prevent installation. In many cases mixing installers will just work. If you have a patch it's welcome, but I think it's not critical for this release. I am strongly in favor of such a check and refusing to install on the mismatched version of Python. I am concerned about hidden problems that emerge later -- for instance when the user bundles an application and somebody else tries to use it. That said, I don't know how to perform such a test. -- Russell
Re: [Numpy-discussion] help from OS X 10.5 users wanted
In article aanlktikgou-ics2cgnnprozcbgldxnxfd+mgbok71...@mail.gmail.com, Friedrich Romstedt friedrichromst...@gmail.com wrote: 2010/10/15 Christopher Barker chris.bar...@noaa.gov: On 10/15/10 10:54 AM, Russell E. Owen wrote: I have a 10.4 Intel machine I keep around for the sole purpose of building backward-compatible binary installers for PIL and matplotlib. If help is still wanted I'm happy to build numpy as well. I'll let Ralf and Friedrich and Vincent respond, but that sounds like a great option. Thanks for waiting for us! Here is our status: Vincent is currently setting up the machine with a crucial carbon-copy-clone, reformat, and ccc back. This is because we had only around 50 MB left in the end on the partition, and according to Vincent enlarging the partition wasn't possible for some reason we both don't know. Well, technicalities. We're having to set up py2.6 py2.7, which shouldn't be an issue, and will have to compile mpl for both of them since the build process should not change Python version while it's running, i.e., when building the py2.6 dmg, the docs will be built using py2.6, and when the py2.5 dmg, then py2.5, that's what I mean. Also LaTeX is needed, Russell, as you might know. MacTeX-2010 is about a 1.6 GB download. Interesting. I had no idea, but I can install that. Here are now the interesting facts: 1) Some tests of numpy failed in 2.0.0.dev. When the machine is running again I can send the logs. All some strange-looking typecode string tests with dtype('...') iirc. I have no idea what to make of this. 2) I noticed that the paver at some late point tried to switch from py2.5 to py2.6, which is rather strange to me. I must have a look at where precisely the build failed for this reason. py2.6 is the DEFAULT_PYTHON_VERSION (iirc) in pavement.py, and changing it to 2.5 fixed it. Strange. 3) I found no v1.5.1 tag yet (yesterday). Will the HEAD become 1.5.1?
Here is a comparison of our two systems (Russell's and ours):

* We will have 10.4 (?), 10.5, 10.6 available on the same machine with vpn access for everyone who wants a cert. Mine has 10.4 and 10.5 available (via an old two-partition external hard drive dedicated to building python packages). It has no public access and I'd prefer to keep it that way. It's also off most of the time and there are times when I will not have it at all (since I have it on long-term loan).

* But we need time to set it up properly. We're unwilling to do half-baked things, so I agree that Vincent is installing 10.6 right now (I just got the message), but time is rare this weekend.

* So my suggestion would be, Russell, if you could do the build more easily than we can, just feel free; I was hoping Vincent and I would get the credits though ;-), but first it must succeed on time. Vincent, what do you think?

* For the future, I would prefer Vincent's machine. We have dyndns and the machine can be dedicated to building this stuff. If we get a 10.4, we have 10.4 to 10.6 all together in a single place, and we could do with it whatever we feel like. ...

It sounds like you have things under control. I propose to leave it to you. It sounds like you are doing a great and very thorough job. If you need a confirming build for some reason I'm happy to do that. Are you also building matplotlib then? If you are then please install ActiveState Tcl/Tk so the matplotlib will be compatible with 3rd party Tcl/Tk (as well as Apple's built-in Tcl/Tk). The best version for python.org's Python 2.6 is 8.4.19. I'm betting the same is true of the 32-bit Python 2.7. I'm not sure what version of Tcl/Tk the 64-bit version of Python 2.7 was built against, but that's the one to match. I'm hoping to build PIL and matplotlib for Python 2.7 in the next month or so, depending on whether I can figure out how to do it. (The 32-bit version should be easy; it's the 64-bit version I'm worried about.)
-- Russell
[Numpy-discussion] numpy mac binary for Python 2.7: which version is it for?
There are two Python 2.7 installers available at python.org: a 32 bit version for MacOS X 10.3.9 and later and a 64 bit version for Mac OS X 10.5 and later. There is one numpy 1.5.0 binary installer for Mac Python 2.7. Which Mac python was it built for? (Or if it is compatible with both, how did you manage that?) -- Russell
[Numpy-discussion] numpy macosx10.5 binaries: compatible with 10.4?
All the official numpy 1.3.0 Mac binaries are labelled macosx10.5. Does anyone know if these are backwards compatible with MacOS X 10.4 or 10.3.9? -- Russell
[Numpy-discussion] record array question
Is it straightforward to generate a record array (preferably a standard numpy.ndarray, not the numpy.rec variant) where some named fields contain pairs of numbers, for example: named field pos contains pairs of floats; named field rot contains floats. Any pointers to relevant documentation would be appreciated. I found a basic intro to record arrays on the scipy web site (which was quite useful for generating simple record arrays), and the numpy book has a bit of info, but I've not found anything very comprehensive. -- Russell
Re: [Numpy-discussion] record array question
In article rowen-1ff89a.11051515072...@news.gmane.org, Russell E. Owen ro...@uw.edu wrote: Is it straightforward to generate a record array (preferably a standard numpy.ndarray, not the numpy.rec variant) where some named fields contain pairs of numbers, for example: named field pos contains pairs of floats; named field rot contains floats. Any pointers to relevant documentation would be appreciated. I found a basic intro to record arrays on the scipy web site (which was quite useful for generating simple record arrays), and the numpy book has a bit of info, but I've not found anything very comprehensive. -- Russell

Never mind. I found it today based on a posting; the kind of array I was interested in is a structured array, and using that term the docs are easy to find, e.g.: http://docs.scipy.org/doc/numpy/user/basics.rec.html In particular:

numpy.zeros(shape, dtype=[("pos", float, (2,)), ("rot", float)])

-- Russell
Re: [Numpy-discussion] Advice on converting Numarray C extension?
In article e06186140906291710s34865590p38032012f12d0...@mail.gmail.com, Charles R Harris charlesr.har...@gmail.com wrote: On Mon, Jun 29, 2009 at 4:17 PM, Russell E. Owen ro...@u.washington.eduwrote: In article e06186140906291429m3cb339e8ge298f179d811e...@mail.gmail.com, Charles R Harris charlesr.har...@gmail.com wrote: On Mon, Jun 29, 2009 at 3:03 PM, Russell E. Owen ro...@u.washington.eduwrote: I have an old Numarray C extension (or, rather, a Python package containing a C extension) that I would like to convert to numpy (in a way that is likely to be supported long-term). How big is the extension and what does it do? It basically contains 2 functions: 1: radProfile: given a masked image (2d array), a radius and a desired center: compute a new 1d array whose value at index r is the sum of all unmasked pixels at radius r. 2: radAsymm: given the same inputs as radProfile, return a (scalar) measure of radial asymmetry by computing the variance of unmasked pixels at each radius and combining the results. The original source file is about 1000 lines long, of which 1/3 to 1/2 is the basic C code and the rest is Python wrapper. It sounds small enough that you should be able to update it to the numpy interface. What functions do you need? You should also be able to attach a copy (zipped) if it is small enough, which might help us help you. It is the PyGuide package http://www.astro.washington.edu/rowen/PyGuide/files/PyGuide.zip a 525k zip file. The extension code is in the src directory. I would certainly be grateful for any pointers to how the old numarray C API functions map to the new numpy ones. I would prefer to use the new numpy API if I can figure out what to do. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] Advice on converting Numarray C extension?
I have an old Numarray C extension (or, rather, a Python package containing a C extension) that I would like to convert to numpy (in a way that is likely to be supported long-term). Options I have found include: - Use the new numpy extension. This seems likely to be fast and future-proof. But I am finding the documentation slow going. Does anyone know of a simple example (e.g. read in an array, create a new array)? - Use the Numarray compatible C API. Simple (and takes advantage of the nice Numarray tutorial example for documentation), but will this be supported in the long term? - Switch to ctypes. Simple in concept. But I'm wondering if I can get distutils to build the resulting package. - Use SWIG. I have some experience with it, but not with numpy arrays. - Use Cython to replace the C code. No idea if this is a long-term supported package. Another option is to try to rewrite in pure python. Perhaps the numpy indexing is sophisticated enough to allow an efficient solution. The C extension computes a radial profile from a 2-d masked array: radProf(r)= sum of all unmasked pixels at radius r about some specified center index I can easily generate (and cache) a 2-d array of radius index, but is it possible to use that to efficiently generate the desired sum? Any opinions? -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
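For what it's worth, the cached radius-index idea posed in this message does admit a loop-free numpy solution via np.bincount, which sums weights falling into each integer radius bin. This is a hedged sketch; the function name and signature are mine, not PyGuide's:

```python
import numpy as np

# Radial profile in pure numpy: given a 2-d data array, a boolean mask
# (True = masked out) and a center, sum the unmasked pixels at each
# integer radius. The integer radius-index array r is the cacheable part.
def rad_profile(data, mask, center):
    yy, xx = np.indices(data.shape)
    r = np.hypot(yy - center[0], xx - center[1]).astype(int)
    unmasked = ~mask
    return np.bincount(r[unmasked].ravel(), weights=data[unmasked].ravel())

img = np.ones((5, 5))
msk = np.zeros((5, 5), dtype=bool)
prof = rad_profile(img, msk, (2, 2))
print(prof[0])  # 1.0 -- the radius-0 bin holds only the center pixel
```

np.bincount replaces the explicit C loop over pixels, at the cost of the radius binning being integer truncation rather than whatever binning the C extension used.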
Re: [Numpy-discussion] Advice on converting Numarray C extension?
In article e06186140906291429m3cb339e8ge298f179d811e...@mail.gmail.com, Charles R Harris charlesr.har...@gmail.com wrote: On Mon, Jun 29, 2009 at 3:03 PM, Russell E. Owen ro...@u.washington.eduwrote: I have an old Numarray C extension (or, rather, a Python package containing a C extension) that I would like to convert to numpy (in a way that is likely to be supported long-term). Options I have found include: - Use the new numpy extension. This seems likely to be fast and future-proof. But I am finding the documentation slow going. Does anyone know of a simple example (e.g. read in an array, create a new array)? - Use the Numarray compatible C API. Simple (and takes advantage of the nice Numarray tutorial example for documentation), but will this be supported in the long term? - Switch to ctypes. Simple in concept. But I'm wondering if I can get distutils to build the resulting package. - Use SWIG. I have some experience with it, but not with numpy arrays. - Use Cython to replace the C code. No idea if this is a long-term supported package. Another option is to try to rewrite in pure python. Perhaps the numpy indexing is sophisticated enough to allow an efficient solution. The C extension computes a radial profile from a 2-d masked array: radProf(r)= sum of all unmasked pixels at radius r about some specified center index I can easily generate (and cache) a 2-d array of radius index, but is it possible to use that to efficiently generate the desired sum? Any opinions? How big is the extension and what does it do? It basically contains 2 functions: 1: radProfile: given a masked image (2d array), a radius and a desired center: compute a new 1d array whose value at index r is the sum of all unmasked pixels at radius r. 2: radAsymm: given the same inputs as radProfile, return a (scalar) measure of radial asymmetry by computing the variance of unmasked pixels at each radius and combining the results. 
The original source file is about 1000 lines long, of which 1/3 to 1/2 is the basic C code and the rest is Python wrapper. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Advice on converting Numarray C extension?
In article 4d2b04ed-4612-4244-a8b8-3ff0c8659...@stsci.edu, Perry Greenfield pe...@stsci.edu wrote: Hi Russell, Have you looked at the example in our interactive data analysis tutorial where we compute radial profiles in Python? It's not as fast as C because of the sort, but perhaps that's fast enough for your purposes. I wasn't sure if you had already seen that approach or not. (I think it is in the 3rd chapter, but I can look it up if you need me to). I have not seen this. I'll give it a look. Thanks! But I suspect the sort will add unacceptable overhead because this radial profile is computed as part of an iteration (to find the point of maximum radial symmetry). -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] Please add Mac binary for numpy 1.3.0 and Python 2.6
If you don't want to build one then you are welcome to serve one I built. Several people have tried it and reported that it works. Contact me for a URL. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] numpy Mac binary for Python 2.6
In article 49ecf2aa.8080...@noaa.gov, Christopher Barker chris.bar...@noaa.gov wrote: Russell E. Owen wrote: http://www.pymvpa.org/devguide.html The patch at the end of this document worked. Has anyone submitted these patches so they'll get into bdist_mpkg? I'm guessing Ronald Oussoren would be the person to accept them, but you can post to the MacPython list to be sure. -Chris I have not, but would be happy to -- but where? The bdist_mpkg home page (http://undefined.org/python/#bdist_mpkg) doesn't seem to list a bug tracker. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] numpy Mac binary for Python 2.6
In article 5b8d13220904161842k2f2f76c9v1dde62f4655c2...@mail.gmail.com, David Cournapeau courn...@gmail.com wrote: On Fri, Apr 17, 2009 at 4:55 AM, Russell E. Owen ro...@u.washington.edu wrote: Does anyone have a binary installer for numpy 1.3.0 and Python 2.6? I've been able to install from source and all tests passed, but I prefer official binaries because I have some confidence that there are no hidden dependencies (important for distributing self-contained apps). I tried to build a binary myself with bdist_mpkg, but it failed with the following (I appended the full traceback, this is just a summary): It is a bug of bdist_mpkg on leopard (the error message is a bit misleading - if you look at the code, you will see it calls for a command line utility which does not exist on leopard). See: http://www.pymvpa.org/devguide.html David Thank you very much for the patch. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] numpy Mac binary for Python 2.6
In article 5b8d13220904161842k2f2f76c9v1dde62f4655c2...@mail.gmail.com, David Cournapeau courn...@gmail.com wrote: It is a bug of bdist_mpkg on leopard (the error message is a bit misleading - if you look at the code, you will see it calls for a command line utility which does not exist on leopard). See: http://www.pymvpa.org/devguide.html The patch at the end of this document worked. I have a numpy 1.3.0 binary installer for Mac Python 2.6; if anyone wants to test it or serve it please contact me via email. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] numpy Mac binary for Python 2.6
Does anyone have a binary installer for numpy 1.3.0 and Python 2.6? I've been able to install from source and all tests passed, but I prefer official binaries because I have some confidence that there are no hidden dependencies (important for distributing self-contained apps). I tried to build a binary myself with bdist_mpkg, but it failed with the following (I appended the full traceback, this is just a summary):

  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/bdist_mpkg/tools.py", line 90, in get_gid
    raise ValueError('group %s not found' % (name,))
ValueError: group admin not found

This is with bdist_mpkg 0.4.3 (official release), macholib and py2app from svn trunk (but same error with the official releases), svn 1.6.1, MacOS X 10.5.6 and export MACOSX_DEPLOYMENT_TARGET=10.4. Not sure why get_gid would be failing. The admin group certainly appears to exist and is the owner of /Library/Frameworks. -- Russell

/usr/bin/mkbom build/bdist.macosx-10.3-fat/mpkg/platlib/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages dist/numpy-1.3.0-py2.6-macosx10.5.mpkg/./Contents/Packages/numpy-platlib-1.3.0-py2.6-macosx10.5.pkg/Contents/Archive.bom
sh: /usr/bin/nidump: No such file or directory
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.6/bin/bdist_mpkg", line 8, in <module>
    load_entry_point('bdist-mpkg==0.4.3', 'console_scripts', 'bdist_mpkg')()
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/bdist_mpkg/script_bdist_mpkg.py", line 27, in main
    execfile(sys.argv[0], g, g)
  File "setup.py", line 172, in <module>
    setup_package()
  File "setup.py", line 165, in setup_package
    configuration=configuration )
  File "/Users/rowen/Archives/PythonPackages/numpy-1.3.0/numpy/distutils/core.py", line 184, in setup
    return old_setup(**new_attr)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/core.py", line 152, in setup
    dist.run_commands()
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/dist.py", line 975, in run_commands
    self.run_command(cmd)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/distutils/dist.py", line 995, in run_command
    cmd_obj.run()
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/bdist_mpkg/cmd_bdist_mpkg.py", line 443, in run
    self.make_scheme_package(scheme)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/bdist_mpkg/cmd_bdist_mpkg.py", line 385, in make_scheme_package
    self.get_scheme_description(scheme),
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/bdist_mpkg/pkg.py", line 157, in make_package
    admin = tools.admin_writable(prefix)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/bdist_mpkg/tools.py", line 106, in admin_writable
    gid = get_gid('admin')
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/bdist_mpkg/tools.py", line 90, in get_gid
    raise ValueError('group %s not found' % (name,))
ValueError: group admin not found
___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] survey of freely available software for the solution of linear algebra problems
In article web-118971...@uni-stuttgart.de, Nils Wagner nwag...@iam.uni-stuttgart.de wrote: http://www.netlib.org/utk/people/JackDongarra/la-sw.html You might add Eigen: http://eigen.tuxfamily.org/index.php?title=Main_Page We are finding it to be a very nice package (though the name is unfortunate from the perspective of internet search engines). It is a pure C++ template library, which is brilliant. That makes it much easier to build a package using Eigen than one using lapack/blas/etc.. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] A basic question about swig and Numeric
In article [EMAIL PROTECTED], Michel Dupront [EMAIL PROTECTED] wrote: Hello, I am trying to use Numeric and swig but it seems that there are a few points that I don't understand. The only excuse I have is that I am new to these tools. I have a simple example that I cannot make work the way I would like. I have a c++ function that takes as argument a std::vector. From Python I want to call the c++ function with an array object. For that purpose I want to write a typemap. Would this suffice:

%include "std_vector.i"
%template(vectorF) std::vector<float>;
%template(vectorD) std::vector<double>;

This will certainly make a SWIGged function accept a list where a vector of floats or doubles is expected (and you can expand the types of course). I'm not sure it'll take a numpy or Numeric array of float or double, but if not then perhaps you can read the code in std_vector.i and see how it works. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Recommendations for using numpy ma?
In article [EMAIL PROTECTED], Pierre GM [EMAIL PROTECTED] wrote: Russell, What used to be numpy.core.ma is now numpy.oldnumeric.ma, but this latter is no longer supported and will disappear soon as well. Just use numpy.ma. If you really need support for ancient versions of numpy, just check the import:

try:
    import numpy.core.ma as ma
except ImportError:
    import numpy as ma

(I assume you mean the last line to be import numpy.ma as ma?) Thanks! I was afraid I would have to do that, but not having ready access to ancient versions of numpy I was hoping I was wrong and that numpy.ma would work for those as well. However, I plan to assume a modern numpy first, as in:

try:
    import numpy.ma as ma
except ImportError:
    import numpy.core.ma as ma

Then, you need to replace every mention of numpy.core.ma in your code by ma. Your example would then become:

unmaskedArr = numpy.array(
    ma.array(
        dataArr,
        mask = mask & self.stretchExcludeBits,
        dtype = float,
    ).compressed())

On another note: what's the problem with 'compressed'? It should return an ndarray, why/how doesn't it work? The problem is that the returned array does not support the sort method. Here's an example using numpy 1.0.4:

import numpy
z = numpy.zeros(10, dtype=float)
m = numpy.zeros(10, dtype=bool)
m[1] = 1
mzc = numpy.ma.array(z, mask=m).compressed()
mzc.sort()

The last statement fails with:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/ma.py", line 2132, in not_implemented
    raise NotImplementedError, "not yet implemented for numpy.ma arrays"
NotImplementedError: not yet implemented for numpy.ma arrays

This seems like a bug to me. The returned object is reported by repr to be a normal numpy array; there is no obvious way to tell that it is anything else. Also I didn't see any reason for compressed to return anything except an ordinary array. Oh well. 
I reported this on the mailing list a while ago when I first stumbled across it, but nobody seemed interested at the time. It wasn't clear to me whether it was a bug, so I dropped it without reporting it formally (and I've still not done so). -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
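For reference, the copy-based workaround discussed in this thread looks like the following in present-day numpy, where compressed() does return a plain ndarray, so the extra asarray is purely defensive (the mask values below are illustrative):

```python
import numpy as np

# compressed() drops the masked elements; wrapping the result in asarray
# guarantees a plain ndarray even on numpy versions where it was not one
# (asarray is a no-op copy-wise when the input is already an ndarray).
masked = np.ma.array(np.arange(10, dtype=float),
                     mask=[0, 1, 0, 1, 0, 0, 0, 0, 0, 0])
plain = np.asarray(masked.compressed())
plain.sort()  # sorting works on the plain ndarray

print(plain)  # [0. 2. 4. 5. 6. 7. 8. 9.]
```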
[Numpy-discussion] Recommendations for using numpy ma?
I have some code that does this:

# an extra array cast is used because compressed returns what *looks* like an array
# but is actually something else (I'm not sure exactly what)
unmaskedArr = numpy.array(
    numpy.core.ma.array(
        dataArr,
        mask = mask & self.stretchExcludeBits,
        dtype = float,
    ).compressed())

That was working fine in numpy 1.0.4 but I've just gotten a report that it fails in 1.1. So... is there a notation that is safer: compatible with the widest possible range of versions? If I replace numpy.core.ma with numpy.ma this seems to work in 1.0.4 (I'm not sure about 1.1). But I fear it might not work with older versions of numpy. This software is used by a wide range of users with a wide range of versions of numpy. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] numpy masked array oddity
The object returned by maskedArray.compressed() appears to be a normal numpy array (based on repr output), but in reality it has some surprising differences:

import numpy
a = numpy.arange(10, dtype=int)
b = numpy.zeros(10)
b[1] = 1
b[3] = 1
ma = numpy.core.ma.array(a, mask=b, dtype=float)
print ma
# [0.0 -- 2.0 -- 4.0 5.0 6.0 7.0 8.0 9.0]
c = ma.compressed()
print repr(c)
# array([ 0.,  2.,  4.,  5.,  6.,  7.,  8.,  9.])
c.sort()
# Traceback (most recent call last):
#   File "<stdin>", line 1, in <module>
#   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/ma.py", line 2132, in not_implemented
#     raise NotImplementedError, "not yet implemented for numpy.ma arrays"
# NotImplementedError: not yet implemented for numpy.ma arrays
d = numpy.array(c)
d.sort()  # this works fine, as expected

Why is c in the example above not just a regular numpy array? It is not a live view (based on a quick test), which seems sensible to me. I've worked around the problem by making a copy (d in the example above), but it seems most unfortunate to have to copy the data twice. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Moving away from svn ?
In article [EMAIL PROTECTED], David Cournapeau [EMAIL PROTECTED] wrote: On Jan 5, 2008 1:30 AM, Charles R Harris [EMAIL PROTECTED] wrote: I like Mercurial and use it a lot, but I'm not convinced we have enough developers and code to justify the pain of changing the VCS at this time. I don't understand the number of developers argument: on most of the projects I am working on, I am the only developer, and I much prefer bzr to svn, although for reasons which are not really relevant to a project like numpy/scipy. SVN generally works well and has good support on Windows through tortoise. That's where I don't agree: I don't think svn works really well. As long as you use it as an history backup, it works ok, but that's it. The non-functional merge makes branching almost useless, and reverting back in time is extremely cumbersome. I am a bit puzzled by the vitriol about merging with svn. svn's built-in merge is a joke, but svnmerge.py works reasonably well (especially newer versions of svnmerge.py; I use rev 26317 and the version included in the current svn 1.4.6 should be even more recent). I agree that reverting a file to an older version is clumsy using svn. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Anyone have a well-tested SWIG-based C++ STL valarray <=> numpy.array typemap to share?
In article [EMAIL PROTECTED], Bill Spotz [EMAIL PROTECTED] wrote: I have been considering adding some C++ STL support to numpy/doc/swig/numpy.i. Probably std::vector<TYPE> <=> PyArrayObject (and some std::complex<TYPE> support as well). Is this what you had in mind? That sounds very useful, but how did you get it to work? std::vectors are resizable and numpy arrays are not. However, much of the time I want std::vectors of a particular size -- in which case numpy would be a great match. (Speaking of which, do you happen to know of any good std::vector variant that has fixed length?) -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] NumPy 1.0.3 for OS-X
In article [EMAIL PROTECTED], David L Goldsmith [EMAIL PROTECTED] wrote: Hold on again, I think I did it: it works on my BSN Intel Mac and Chris is about to test it on his not-so-new PPC Mac. Assuming I built a viable product, how do I put it in the right place (i.e., @ http://pythonmac.org/packages/py25-fat/index.html)? Thanks! Put it on a server and send the link to Bob Ippolito: bob (insert at here) redivi (insert dot here) com Two other suggestions: - Include the date in your filename. That way if you have to modify the installer users can tell there's been a change. - Make it a .dmg file (e.g. by running Disk Utility and dragging it onto the icon in the dock). I've found a few users have trouble with zip files. Thank you for doing this. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] numpy endian question
In article [EMAIL PROTECTED], Christopher Hanley [EMAIL PROTECTED] wrote: Russell, This should work as a consistent test for bigendian: - isBigEndian = (obj.dtype.str[0] == '>') Also, I have ported numarray's numdisplay to numpy if you would like to directly display an array in DS9. We haven't done an official release yet (sometime soon) but I can forward you a copy if you are interested. I would very much like a copy. I've never heard of numdisplay before but am always interested in code that can talk to DS9. I'm porting RO.DS9. It works but has some rather ugly bits of code in it to deal with the many vagaries of xpa and ds9. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] numpy endian question
In article [EMAIL PROTECTED], Francesc Altet [EMAIL PROTECTED] wrote: On Thu, 26 Apr 2007 at 11:38 -0700, Russell E. Owen wrote: In converting some code from numarray to numpy I had this: isBigendian = (arr.isbyteswapped() != numarray.isBigEndian) The only numpy version I've come up with is: isBigEndian = (arr.dtype.descr[0][1][0] == '>') isBigEndian = (arr.dtype.str[0] == '>') is a little bit shorter. A more elegant approach could be: isBigEndian = arr.dtype.isnative ^ numpy.little_endian Thank you. That's just what I wanted (I had looked for the numpy version of numarray.isBigEndian but somehow missed it). Your other suggestion is also much nicer than my version, but I'll use the latter. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
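The two tests from this exchange can be placed side by side; the explicitly big-endian dtype ">f8" below is just a test fixture, everything else is exactly what the posts describe:

```python
import numpy as np

# Two equivalent big-endian tests: inspect the byte-order character of the
# dtype string, or XOR the "is native" flag with the platform's endianness.
arr = np.arange(4, dtype=">f8")  # explicitly big-endian 64-bit floats

is_big_1 = arr.dtype.str[0] == ">"
is_big_2 = arr.dtype.isnative ^ np.little_endian

print(is_big_1, is_big_2)  # True True, on any platform, for this dtype
```

The XOR form works because isnative is True exactly when the dtype's byte order matches the machine's, so flipping it by the machine's little-endianness yields "is big-endian" regardless of platform.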
[Numpy-discussion] Compact way of performing array math with specified result type?
I often find myself doing simple math on sequences of numbers (which might or might not be numpy arrays) where I want the result (and thus the inputs) coerced to a particular data type. I'd like to be able to say: numpy.divide(seq1, seq2, dtype=float) but ufuncs don't allow one to specify a result type. So I do this instead: numpy.array(seq1, dtype=float) / numpy.array(seq2, dtype=float) Is there a more compact solution (without having to create the result array first and supply it as an argument)? -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
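A compact helper in the spirit of this question (the name fdivide is mine, not from the thread); np.asarray avoids an extra copy when the input is already a float array, which numpy.array would not:

```python
import numpy as np

# Coerce both operands to float before dividing, so integer sequences do
# not fall into (old-style) truncating integer division.
def fdivide(seq1, seq2):
    return np.asarray(seq1, dtype=float) / np.asarray(seq2, dtype=float)

print(fdivide([1, 2, 3], [2, 2, 2]))  # [0.5 1.  1.5]
```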
[Numpy-discussion] numpy endian question
In converting some code from numarray to numpy I had this: isBigendian = (arr.isbyteswapped() != numarray.isBigEndian) The only numpy version I've come up with is: isBigEndian = (arr.dtype.descr[0][1][0] == '>') which is short but very obscure. Has anyone got a suggestion for a clearer test? I found lots of *almost* useful flags and methods. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Questions about converting to numpy
In article [EMAIL PROTECTED], Robert Kern [EMAIL PROTECTED] wrote: Christopher Barker wrote: I can only help with one: - Even after reading the book I'm not really clear on why one would use numpy.float_ instead of numpy.float or float. float and numpy.float are the same, and numpy.float_ is the same as numpy.float64:

>>> import numpy
>>> float is numpy.float
True
>>> numpy.float_ is numpy.float64
True

float was added to the numpy namespace so that we could write consistent code like:

a = array(object, numpy.float32)
b = array(object, numpy.float)

i.e. have it all in the same namespace. I'm not sure why float_ is an alias for float64, though I'm guessing it's possible that on some platforms they are not the same. Rather, numpy.float used to be an alias for numpy.float64; however, it overrode the builtin float() when "from numpy import *" was used at the interactive prompt. Consequently, we renamed it numpy.float_ and specifically imported the builtin float as numpy.float so that we didn't break code that had already started using numpy.float. But I still don't understand why one shouldn't just use dtype=float or numpy.float. Does that result in an array with a different type of float than numpy.float_ (float64)? Or does it just somehow speed up numpy because it doesn't have to convert the python type into a numpy dtype? Anyway, thank you all for the helpful answers! I'm glad numpy throws the standard MemoryError since I'm already testing for that. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
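To make the aliasing concrete: on all mainstream platforms the builtin float corresponds to a 64-bit IEEE double, so dtype=float and dtype=np.float64 describe the same array type. (Note the bare numpy.float alias debated above was later deprecated and removed in NumPy 2.0, so this sketch avoids it.)

```python
import numpy as np

# dtype=float and dtype=np.float64 produce identical dtypes on mainstream
# platforms: both map to a 64-bit IEEE double.
a = np.array([1, 2, 3], dtype=float)
b = np.array([1, 2, 3], dtype=np.float64)

print(a.dtype == b.dtype)        # True
print(np.dtype(float).itemsize)  # 8 -- bytes per element
```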
Re: [Numpy-discussion] Distributing prebuilt numpy and other extensions
In article [EMAIL PROTECTED], Zachary Pincus [EMAIL PROTECTED] wrote: Hello folks, I've developed some command-line tools for biologists using python/numpy and some custom C and Fortran extensions, and I'm trying to figure out how to easily distribute them... For people using linux, I figure a source distribution is no problem at all. (Right?) On the other hand, for Mac users (whose computers by default don't have the dev tools, and even then would need to get a fortran compiler elsewhere) I'd like to figure out something a bit easier. I'd like to somehow provide an installer (double-clickable or python script) that does a version check and then installs an appropriate version of prebuilt binaries for numpy and my C and Fortran extensions. Is this possible within the bounds of the python or numpy distutils? Would setuptools be a better way to go? Preferably it would be a dead easy, one-step thing... Or is this whole idea problematic, and better to stick with source distribution in all cases? As Robert Kern said, using bdist_mpkg is a nice easy way to create a double-clickable Mac installer for python code. It builds an installer package using the normal setup.py file for your stuff. Lots of packages built this way are available at: http://pythonmac.org/packages But if you want one installer that installs everything, then you have to figure out what to do if the user already has some of your python packages installed (e.g. numpy). Overwrite the existing package? Somehow install it in parallel and have the user pick which version to use? None of this is automated in bdist_mpkg. It is set up to install one python package at a time. So... For your project I suspect you would be better off using easy_install http://peak.telecommunity.com/DevCenter/EasyInstall and packaging your project as a python egg. easy_install is cross-platform, handles dependencies automatically and can install from source or precompiled binaries. 
That said, I've not actually used it except to install existing eggs, though I'd like to find some time to learn it. -- Russell ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Different results from repeated calculation
On a PPC MacOS X box I don't see an error. If I append

if __name__ == "__main__":
    run()

to your test code and then run it I get:

repeatability #1 ... ok
repeatability #2 ... ok
repeatability #3 ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.156s

OK
___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion