Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
On Jan 26, 2008 1:05 AM, David Cournapeau [EMAIL PROTECTED] wrote:

> Charles R Harris wrote:
> <snip quoted discussion of the lib64 conventions>
>> The Debian folks will tell you that they did it right and everyone else screwed up horribly O_o.
> Well, as arrogant as the Debian folks may be, they are almost always right :) I will take a look at the Debian maintainer guide, then.
>> Anyway, it looks like they put the 32-bit stuff in lib32, so if a distro has a lib32, then you can figure the 64-bit stuff is in lib. Those are the only two variations I know of. What 64-bit Windows does, I have no idea.
> Windows and 64 bits: here is something which scares me. I also have to see how Solaris does it. But anyway, since vmware can run a 64-bit OS on my 32-bit Linux, things should be easier.

ISTR that Solaris does it the Debian way. I wouldn't put money on it without checking, though.

Chuck

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
On Jan 25, 2008 11:30 PM, David Cournapeau [EMAIL PROTECTED] wrote:

> Charles R Harris wrote:
> <snip>
>> It varies with the Linux distro. The usual convention (LSB, I think) uses /usr/local/lib64, but Debian and distros derived from Debian use /usr/local/lib instead. That's how it was the last time I checked, anyway. I don't know what Gentoo and all the others do.
> Grrr, that's annoying. Do you know of any resource with a clear explanation of that? (Reading the LSB documents does not appeal much to me; I would only do that as a last resort.)

The Debian folks will tell you that they did it right and everyone else screwed up horribly O_o. Anyway, it looks like they put the 32-bit stuff in lib32, so if a distro has a lib32, then you can figure the 64-bit stuff is in lib. Those are the only two variations I know of. What 64-bit Windows does, I have no idea.

               32-bit (compat) libraries    64-bit (native) libraries
Fedora Core    /lib/, /usr/lib/             /lib64/, /usr/lib64/
Debian         /lib32/, /usr/lib32/         /lib/, /usr/lib/

Chuck
Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
Charles R Harris wrote:
> On Jan 25, 2008 11:30 PM, David Cournapeau [EMAIL PROTECTED] wrote:
>> Charles R Harris wrote:
>> <snip>
>>> It varies with the Linux distro. The usual convention (LSB, I think) uses /usr/local/lib64, but Debian and distros derived from Debian use /usr/local/lib instead. That's how it was the last time I checked, anyway. I don't know what Gentoo and all the others do.
>> Grrr, that's annoying. Do you know of any resource with a clear explanation of that? (Reading the LSB documents does not appeal much to me; I would only do that as a last resort.)
> The Debian folks will tell you that they did it right and everyone else screwed up horribly O_o.

Well, as arrogant as the Debian folks may be, they are almost always right :) I will take a look at the Debian maintainer guide, then.

> Anyway, it looks like they put the 32-bit stuff in lib32, so if a distro has a lib32, then you can figure the 64-bit stuff is in lib. Those are the only two variations I know of. What 64-bit Windows does, I have no idea.

Windows and 64 bits: here is something which scares me. I also have to see how Solaris does it. But anyway, since vmware can run a 64-bit OS on my 32-bit Linux, things should be easier.

Thanks for the info,
David
Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
Hi all,

I don't know much about what this scons is. If it's something essential (as it seems to be from the amount of mailing list traffic), why can't it just be merged into numpy, without making any additional branches?

Regards,
D.
Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
dmitrey wrote:
> Hi all, I don't know much about what this scons is. If it's something essential (as it seems to be from the amount of mailing list traffic), why can't it just be merged into numpy, without making any additional branches?

It's a very large, still experimental change to the entire build infrastructure. It requires branches for us to evaluate.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
2008/1/25, dmitrey [EMAIL PROTECTED]:
> Hi all, I don't know much about what this scons is. If it's something essential (as it seems to be from the amount of mailing list traffic), why can't it just be merged into numpy, without making any additional branches?

Scons is a build system, like distutils but much more powerful. It is not simple to partially replace distutils in the numpy build system; there are a lot of things to test during the build. So while David makes this happen, numpy can't be left in an unstable state where the SVN head can't be compiled and used. This branch is thus needed, and will be merged when both the trunk and the branch are stable.

Matthieu
--
French PhD student
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
dmitrey wrote:
> Hi all, I don't know much about what this scons is. If it's something essential (as it seems to be from the amount of mailing list traffic), why can't it just be merged into numpy, without making any additional branches?

scons is a build system, like make. The difference is that sconscripts (the scons equivalent of makefiles) are written in Python, and are well supported on Windows. The standard way to build a Python extension is to use distutils, but distutils itself is difficult to extend (the code is basically spaghetti, not maintained and not documented), and, to be fair, numpy's needs go way beyond the usual Python extension: we want to control optimization flags, we need Fortran, we depend on a lot of external libraries we need to check for, etc.

numpy uses a distutils extension (numpy.distutils) to support Fortran and other goodies. But because it is based on distutils, it has inherited some of distutils' problems; in particular, it is not possible to build dynamically loaded libraries (necessary for ctypes-based extensions), it is difficult to check for additional libraries, etc.

So instead, I plug scons into distutils, so that all C code in numpy is built through sconscripts. But the build system is arguably one of the most crucial parts of numpy, so it is necessary to make the changes incrementally, in branches (the code changes are big: several thousand lines of Python code, which is difficult to test by nature).

cheers,
David
Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
Matthew Brett wrote:
> Hi David,
>>> Basically, I'm trying to understand the library discovery and linking steps - ATLAS in particular.
>> Don't trust the included doc: it is not up-to-date, and that's the part which I totally redesigned since I wrote the initial doc.
>> - For a perflib check to succeed, I try to compile, link and run code snippets. Any failure means the Check* function will return failure.
>> - When a perflib Check* succeeds, the meta library check verifies that another code snippet can be compiled/linked/run.
> Thanks - and I agree, for all the reasons you've given in later emails, that this seems like the way to go. But in the meantime, I've got problems with the perflib stuff. Specifically, the scons build is not picking up the atlas library for blas and lapack that works for the standard build. I'm on a 64-bit system, providing an extra layer of complexity.

I have never tested the thing on 64 bits (I don't have easy access to a 64-bit machine, which should change soon, hopefully), so it is not surprising that it does not work now. I have taken this into account in the design, though (see below).

> I've attached the build logs. I noticed that, for atlas, you check for atlas_enum.c - but do you in fact need this for the build?

No. I just wanted one header specific to atlas. It looks like not all versions of ATLAS install this one, unfortunately (3.8, for example).

> numpy.distutils seemed to be satisfied with cblas.h and clapack.h in /usr/local/include/atlas. It's no big deal to copy it from sources, but was there a reason you chose that file?

No reason, but it cannot be cblas.h (it has to be atlas-specific; otherwise, it does not make sense). The list of headers to check can be empty, though.

> The test for linking to blas and lapack from atlas fails too - is this a false positive?

Hmm, if the atlas check does not work, it sounds like a right negative to me :) If ATLAS is not detected correctly, it won't be used by the blas/lapack checkers. Or do you mean something else?

> For both numpy.distutils and numpyscons, the default locations of libraries do not include the lib64 directories like /usr/local/lib64 that us 64-bit people use. Is that easy to fix?

Yes, it is easy, in the sense that nothing in the checkers code is hardcoded: all the checks internally use BuildConfig instances, which are like dictionaries with default values and a restricted set of keys (the keys are library paths, libraries, headers, etc.). Those BuildConfig instances are created from a config file (perflib.cfg), and should always be customizable from site.cfg. The options which can be customized can be found in the perflib.cfg file. For example, having:

[atlas]
htc =

in your site.cfg should tell CheckATLAS to avoid looking for atlas_enum.h.

Making 64 bits work by default is a bit more complicated. I have thought a bit about the problem: that's why the checkers do not use BuildConfig instances directly, but request them through a BuildConfigFactory. One problem is that I don't understand how 64-bit libraries work; more precisely, what is the convention for library paths? Is there a lib64 for each lib directory (/lib64, /usr/lib64, /usr/local/lib64)? Should the standard ones (/lib, /usr/lib) be checked at all, after their 64-bit counterparts?

cheers,
David
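David's description of BuildConfig as "like a dictionary with default values and a restricted set of keys" can be sketched roughly as below. The class and key names follow the email, but the implementation details are my own guess for illustration, not actual numscons code:

```python
class BuildConfig:
    """Sketch of a restricted-key configuration mapping.

    Hypothetical reimplementation for illustration; the real numscons
    class may differ.
    """

    # The restricted set of keys described in the email, with defaults.
    _defaults = {
        "library_dirs": ["/usr/lib"],
        "libraries": [],
        "headers": [],
        "frameworks": [],
    }

    def __init__(self, **overrides):
        unknown = set(overrides) - set(self._defaults)
        if unknown:
            raise KeyError("unknown BuildConfig keys: %s" % sorted(unknown))
        # Copy the default lists so instances do not share state.
        self._values = {k: list(v) for k, v in self._defaults.items()}
        self._values.update(overrides)

    def __getitem__(self, key):
        return self._values[key]

    def __setitem__(self, key, value):
        # Only the restricted key set may be assigned.
        if key not in self._defaults:
            raise KeyError(key)
        self._values[key] = value


# A BuildConfigFactory could then hand out instances with platform-specific
# defaults (e.g. lib64 directories on 64-bit systems), which is where the
# 64-bit fix would plug in.
cfg = BuildConfig(libraries=["atlas", "cblas"])
print(cfg["libraries"])  # prints ['atlas', 'cblas']
```

The point of the restricted key set is that a typo in site.cfg or perflib.cfg fails loudly instead of being silently ignored.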
Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
Hi,

>> I've attached the build logs. I noticed that, for atlas, you check for atlas_enum.c - but do you in fact need this for the build?
> No. I just wanted one header specific to atlas. It looks like not all versions of ATLAS install this one, unfortunately (3.8, for example).
>> numpy.distutils seemed to be satisfied with cblas.h and clapack.h in /usr/local/include/atlas. It's no big deal to copy it from sources, but was there a reason you chose that file?
> No reason, but it cannot be cblas.h (it has to be atlas-specific; otherwise, it does not make sense). The list of headers to check can be empty, though.

I see. You want to be sure from the include check that you have actually discovered ATLAS rather than something else? I guess that numpy.distutils solves this by the directory naming conventions - so, if it finds cblas.h in /usr/local/include/atlas as opposed to somewhere else, then it assumes it has the ATLAS version? Are these files different for ATLAS than for other libraries? If not, do we need to check that they are the ATLAS headers rather than any other?

>> The test for linking to blas and lapack from atlas fails too - is this a false positive?
> Hmm, if the atlas check does not work, it sounds like a right negative to me :) If ATLAS is not detected correctly, it won't be used by the blas/lapack checkers. Or do you mean something else?

I mean false positive in the sense that it appears that numpy can build and pass its tests with the ATLAS I have, so excluding it seems too stringent. The checks presumably should correspond to something the numpy code actually needs, rather than to parts of ATLAS it can do without.

>> For both numpy.distutils and numpyscons, the default locations of libraries do not include the lib64 directories like /usr/local/lib64 that us 64-bit people use. Is that easy to fix?
> Yes, it is easy, in the sense that nothing in the checkers code is hardcoded: all the checks internally use BuildConfig instances, which are like dictionaries with default values and a restricted set of keys.
> <snip perflib.cfg / site.cfg customization details>

Thanks ...

> Making 64 bits work by default is a bit more complicated. <snip> One problem is that I don't understand how 64-bit libraries work; more precisely, what is the convention for library paths? Is there a lib64 for each lib directory (/lib64, /usr/lib64, /usr/local/lib64)? Should the standard ones (/lib, /usr/lib) be checked at all, after their 64-bit counterparts?

Well, the build works fine; it's just the perflib discovery - but I suppose that's what you meant. I think the convention is that 64-bit libraries do indeed go in /lib64, /usr/lib64, /usr/local/lib64. My guess is that only 32-bit libraries should go in /lib, /usr/lib, but I don't think that convention is always followed; fftw, for example, installs 64-bit libraries in /usr/local/lib on my system. The compiler (at least gcc) rejects libraries that are in the wrong format, so I believe that finding 32-bit libraries will just cause a warning and a continued search.

Thanks again. Herculean work.

Matthew
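The search order Matthew describes (try the lib64 variants first on a 64-bit system, then fall back to the plain lib directories, and let the compiler reject any wrong-format libraries it finds) can be sketched as a small helper. The function name and the exact prefix list are illustrative assumptions, not numscons code:

```python
import struct

def default_lib_dirs(is_64bit=None):
    """Return candidate library directories in search order.

    On 64-bit systems, the lib64 variants (LSB-style layout) come
    first; the plain lib directories, which Debian-style distros use
    for native libraries, are kept as fallbacks. Illustrative sketch.
    """
    if is_64bit is None:
        # Pointer size of the running interpreter: 8 bytes on 64-bit.
        is_64bit = struct.calcsize("P") == 8
    prefixes = ["/usr/local", "/usr", ""]
    dirs = []
    if is_64bit:
        dirs += [p + "/lib64" for p in prefixes]
    dirs += [p + "/lib" for p in prefixes]
    return dirs

print(default_lib_dirs(is_64bit=True))
# prints ['/usr/local/lib64', '/usr/lib64', '/lib64',
#         '/usr/local/lib', '/usr/lib', '/lib']
```

Keeping the plain lib directories in the list covers cases like the fftw install Matthew mentions, where 64-bit libraries end up in /usr/local/lib anyway.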
Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
Neal Becker wrote:
> Is numscons specific to numpy/scipy, or is it for building arbitrary python extensions (replacing distutils)? I'm hoping for the latter.

Actually, another way to answer your question: I am working on a patch so that the part of numscons which takes care of building Python extensions in a cross-platform way can be included in scons itself (I have already contributed quite a few patches/fixes to scons since I started this, since I would rather not depend on in-house changes to scons).

cheers,
David
Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
Charles R Harris wrote:
> On Jan 25, 2008 3:21 AM, David Cournapeau [EMAIL PROTECTED] wrote:
> <snip earlier discussion of the ATLAS checks, BuildConfig and site.cfg customization>
>> One problem is that I don't understand how 64-bit libraries work; more precisely, what is the convention for library paths? Is there a lib64 for each lib directory (/lib64, /usr/lib64, /usr/local/lib64)? Should the standard ones (/lib, /usr/lib) be checked at all, after their 64-bit counterparts?
> It varies with the Linux distro. The usual convention (LSB, I think) uses /usr/local/lib64, but Debian and distros derived from Debian use /usr/local/lib instead. That's how it was the last time I checked, anyway. I don't know what Gentoo and all the others do.

Grrr, that's annoying. Do you know of any resource with a clear explanation of that? (Reading the LSB documents does not appeal much to me; I would only do that as a last resort.)

cheers,
David
Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
On Jan 25, 2008 3:21 AM, David Cournapeau [EMAIL PROTECTED] wrote:
> <snip earlier discussion of the ATLAS checks, BuildConfig and site.cfg customization>
> One problem is that I don't understand how 64-bit libraries work; more precisely, what is the convention for library paths? Is there a lib64 for each lib directory (/lib64, /usr/lib64, /usr/local/lib64)? Should the standard ones (/lib, /usr/lib) be checked at all, after their 64-bit counterparts?

It varies with the Linux distro. The usual convention (LSB, I think) uses /usr/local/lib64, but Debian and distros derived from Debian use /usr/local/lib instead. That's how it was the last time I checked, anyway. I don't know what Gentoo and all the others do.

Chuck
Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
Hi David,

On Thu, Jan 24, 2008 at 12:34:56AM +0900, David Cournapeau wrote:
> I've just released the 0.3.0 release of numscons, an alternative build system for numpy. The tarballs are available on launchpad: https://launchpad.net/numpy.scons.support/0.3/0.3.0 To use it, you need to get the build_with_scons numpy branch: see http://projects.scipy.org/scipy/numpy/wiki/NumpyScons for more details.

Fantastic job, thank you for investing so much time in improving the build system. I noticed that you were trying to get scipy building - how is that progressing? Scons should make building on non-*nix platforms a lot easier.

Regards,
Stéfan
Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
Matthew Brett wrote:
> Hi David,
> Thanks again for doing this. I've been playing with the compilation - should I email you directly with questions, or to the lists?

I don't think it would be inappropriate to use the current lists for that.

> Basically, I'm trying to understand the library discovery and linking steps - ATLAS in particular.

Don't trust the included doc: it is not up-to-date, and that's the part which I totally redesigned since I wrote the initial doc.

For things like FFT, BLAS, LAPACK, things are a bit complicated. Fortunately, for normal libraries, things are much easier: you use the NumpyCheckLibraryAndHeader function:
http://projects.scipy.org/scipy/numpy/wiki/NumpySconsExtExamples#Checkingforalibrary
This function can check for a library and headers, and can check for a symbol in the library. site.cfg customization is also automatically taken care of. I also intend to add the possibility of checking for libraries which use pkg-config.

For BLAS/LAPACK/FFT, the basic principles are:
- I make a distinction between API libraries (called meta libraries in numscons) and implementation libraries (called perflibs in numscons).
- Meta libraries use perflibs: FFT/BLAS/LAPACK may use Sunperf/MKL/ATLAS.
- For each meta library and each perflib, a Check function is defined: CheckMKL, CheckCBLAS, etc.
- For a perflib check to succeed, I try to compile, link and run code snippets. Any failure means the Check* function will return failure.
- When a perflib Check* succeeds, the meta library check verifies that another code snippet can be compiled/linked/run. For example, if CheckMKL succeeds, in CheckCBLAS I will compile/link/run a cblas program using the MKL configuration.
- Both perflibs and meta libraries are customizable through site.cfg (I tried to keep backward compatibility with numpy.distutils at this level).

I still have the feeling that it is over-designed, but BLAS/LAPACK libraries are really a PITA. Every perflib has slightly different conventions, and I wanted to use common code for all of them as much as possible [1]. Also, the configuration is not inside Python code anymore: libraries, frameworks and link flags are all defined in perflib.cfg.

cheers,
David

[1] This design also allows some funky stuff: for example, when using the Sun performance libraries with Sun tools, because of something which looks like a bug to me, sunperf is not linked when building a shared library. So I link a code snippet and parse the linker output to get the necessary libraries; this is totally transparent to the BLAS/LAPACK checkers.
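The perflib / meta-library cascade described above can be sketched as follows. The Check* names come from the email; the try_link helper is a stand-in for the real compile/link/run of a code snippet, which I stub out here purely for illustration:

```python
def try_link(needed_libs, config):
    """Stand-in for compiling, linking and running a code snippet with
    the given configuration. The real checker invokes the compiler and
    linker; here we just pretend the snippet builds whenever the config
    lists all the libraries it needs."""
    return all(lib in config.get("libraries", []) for lib in needed_libs)

def check_atlas(config):
    # Perflib check: any failure to build/run the snippet means failure.
    return try_link(["atlas", "cblas"], config)

def check_cblas(config):
    # Meta-library check: only once a perflib check has succeeded do we
    # try to build a cblas snippet against that perflib's configuration.
    if not check_atlas(config):
        return False
    return try_link(["cblas"], config)

good = {"libraries": ["atlas", "cblas", "lapack"]}
bad = {"libraries": ["cblas"]}  # cblas alone, no perflib detected
print(check_cblas(good), check_cblas(bad))  # prints True False
```

This mirrors the behavior discussed in the thread: if the ATLAS perflib check fails, the blas/lapack meta checks never even try to use ATLAS, whether or not the library would actually have worked.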
Re: [Numpy-discussion] [ANN] numscons 0.3.0 release
Stefan van der Walt wrote:
> Hi David,
> I noticed that you were trying to get scipy building - how is that progressing?

Not much, but there is not much left to do: I don't remember exactly, but umfpack was still problematic; except for that, scipy should be able to build with numscons once I change the imports (to cope with the split between the scons support and numpy.distutils). Scipy is much easier to build than numpy, because except for f2py/fortran-related problems, everything needed by scipy is needed by numpy. And the really hard job was to understand distutils anyway :)

> Scons should make building on non-*nix platforms a lot easier.

Not so much, because scons does not know how to build Python extensions (I had to develop my own builder, which fortunately should end up in scons at some point). Since I started this work, my perception of the advantages has changed:
- real dependency handling. I did not think it would be so interesting, but while using it, I got used to not removing the build directory when I change one configuration option.
- the ability to use different compiler options for different extensions.
- easy customization of compiler flags (interesting for debugging, or for finding the best optimization flags).
- checks are logged in config.log files.
- and of course, the ability to build shared libraries (ctypes extensions), static libraries, etc., which is one of the reasons I started this.

There are also other points which are less obvious, or for which I am too biased:
- the system is simpler: the codebase is 3 times smaller (2500 vs 8000 lines) and there is a clear distinction between configuration and code (flags are not buried in the code anymore; they are defined in separate .cfg files). Also, even if scons is 'unpythonic' in some aspects, the codebase is much better than distutils in the stdlib. Every time I touch stdlib distutils, I fear that it will break something in another, unrelated place, or that I did not call some functions in the right, platform-specific order; I don't have this problem with scons.
- the SConscripts are also simpler to read than setup.py, I think (especially for numpy.core and other modules with complicated configurations). They are longer, but there is less magic, and the intent is more obvious. This is really important, because I think I spent more than 50% of my time on understanding distutils for this. I hope that nobody will have to do that anymore.

cheers,
David
[Numpy-discussion] [ANN] numscons 0.3.0 release
Hi,

I've just released the 0.3.0 release of numscons, an alternative build system for numpy. The tarballs are available on launchpad:

https://launchpad.net/numpy.scons.support/0.3/0.3.0

To use it, you need to get the build_with_scons numpy branch: see http://projects.scipy.org/scipy/numpy/wiki/NumpyScons for more details.

This release is an important milestone:
- all regressions caused by the split from numpy are fixed.
- it can build numpy on Linux (gcc/intel), Mac OS X (gcc), Windows (mingw) and Solaris (Sun compilers).
- the mkl, sunperf, and Accelerate/vecLib frameworks and ATLAS should work on the platforms where they make sense.
- a lot of internal changes: some basic unit tests, a total revamp of the code that checks for performance libraries.
- almost all changes needed in the scons code are now included upstream, or pending review.

If you test it and have problems building numpy, please submit a bug to launchpad:
https://bugs.launchpad.net/numpy.scons.support/0.3/

Thanks,
David