Re: [Numpy-discussion] [ANN] numscons 0.3.0 release

2008-01-25 Thread dmitrey
Hi all,
I don't know much about what this scons is, but if it's something 
essential (as it seems to be from the amount of mailing-list traffic), 
why can't it just be merged into numpy, without making any additional branches?

Regards, D.



Re: [Numpy-discussion] [ANN] numscons 0.3.0 release

2008-01-25 Thread Robert Kern
dmitrey wrote:
 Hi all,
 I don't know much about what this scons is, but if it's something 
 essential (as it seems to be from the amount of mailing-list traffic), 
 why can't it just be merged into numpy, without making any additional branches?

It's a very large, still experimental change to the entire build
infrastructure. It requires branches for us to evaluate.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
  that is made terrible by our own mad attempt to interpret it as though it had
  an underlying truth.
   -- Umberto Eco


Re: [Numpy-discussion] [ANN] numscons 0.3.0 release

2008-01-25 Thread Matthieu Brucher
2008/1/25, dmitrey [EMAIL PROTECTED]:

 Hi all,
 I don't know much about what this scons is, but if it's something
 essential (as it seems to be from the amount of mailing-list traffic), why
 can't it just be merged into numpy, without making any additional branches?


Scons is a build system, like distutils but much more powerful.
It is not simple to partially replace distutils in the numpy build
system: there are a lot of things to test during the build. So while David
makes this happen, numpy can't be left in an unstable state where the SVN head
can't be compiled and used. This branch is thus needed, and will be merged
when both the trunk and the branch are stable.

Matthieu
-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] [ANN] numscons 0.3.0 release

2008-01-25 Thread David Cournapeau
dmitrey wrote:
 Hi all,
 I don't know much about what this scons is, but if it's something 
 essential (as it seems to be from the amount of mailing-list traffic), 
 why can't it just be merged into numpy, without making any additional branches?
scons is a build system, like make. The difference is that 
sconscripts (the scons equivalent of makefiles) are written in Python, 
and scons is well supported on Windows.

The standard way to build Python extensions is to use distutils, but 
distutils itself is difficult to extend (the code is basically 
spaghetti, unmaintained and undocumented), and to be fair, numpy's 
needs go way beyond the usual Python extension: we want to control 
optimization flags, we need Fortran, we depend on a lot of external 
libraries we need to check for, etc. numpy uses a distutils extension 
(numpy.distutils) to support Fortran and other goodies. But because it 
is based on distutils, it has inherited some of distutils' problems; 
in particular, it is not possible to build dynamically loaded libraries 
(necessary for ctypes-based extensions), and it is difficult to check 
for additional libraries, etc.

So instead, I plug scons into distutils, so that all C code in numpy is 
built through sconscripts. But since the build system is arguably one of 
the most crucial parts of numpy, it is necessary to make changes 
incrementally in branches (the code changes are big: several thousand 
lines of Python code, which is by nature difficult to test).
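
For readers who have never seen one, a minimal SConstruct might look
like this; a hypothetical sketch, not taken from numscons (the target
and source names are made up). It is plain Python, using the stock
scons API (Environment and SharedLibrary are provided by scons itself):

# minimal SConstruct: a plain Python description of the build
env = Environment(CCFLAGS=['-O2', '-Wall'])   # explicit optimization flags
# build a shared library from two (hypothetical) C sources
env.SharedLibrary(target='example', source=['foo.c', 'bar.c'])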

cheers,

David


Re: [Numpy-discussion] How to build on Solaris 10 (x86) using sunperf?

2008-01-25 Thread David Cournapeau
On Jan 24, 2008 3:06 AM, Peter Ward [EMAIL PROTECTED] wrote:


 The problems I was having were due to a bad site.cfg initially and then
 a problem with the python pkg from sunfreeware (ctypes version
 mismatch).

 Numpy is now happily installed.

 If you need someone to test anything new, let me know.

If you have a few minutes to spend on it, you could try numscons; I
don't have access to Solaris (I only have access to Indiana, that is,
OpenSolaris on VMware, 32 bits only). For Solaris, the advantages of
numscons are native support for sunperf, and optimization flags for
compilation (only non-arch-dependent ones for now). As I still consider
it experimental, you should not rely on numpy built this way, though.

cheers,

David


Re: [Numpy-discussion] [ANN] numscons 0.3.0 release

2008-01-25 Thread David Cournapeau
Matthew Brett wrote:
 Hi David,

 Basically, I'm trying to understand the library discovery, linking
 steps - ATLAS in particular.
 Don't trust the included doc: it is not up-to-date, and that's the part
 which I totally redesigned since I wrote the initial doc.

 - For a perflib check to succeed, I try to compile, link and run
 code snippets. Any failure means the Check* function will return failure.
 - When a perflib Check* succeeds, the meta library check checks that
 another code snippet can be compiled/linked/run.

 Thanks - and I agree, for all the reasons you've given in later emails
 that this seems like the way to go.

 But, in the mean time, I've got problems with the perflib stuff.
 Specifically, the scons build is not picking up the atlas library for
 blas and lapack that works for the standard build.  I'm on a 64 bit
 system, providing an extra layer of complexity.
I have never tested the thing on 64 bits (I don't have easy access to a 
64-bit machine, which should change soon, hopefully), so it is not 
surprising that it does not work now. I have taken this into account in 
the design, though (see below).

 I've attached the build logs.  I noticed that, for atlas, you check
 for atlas_enum.h - but do you in fact need this for the build?
No. I just wanted one header specific to ATLAS. It looks like not all 
versions of ATLAS install this one, unfortunately (3.8, for example).
 numpy.distutils seemed to be satisfied with cblas.h and clapack.h in
 /usr/local/include/atlas.  It's no big deal to copy it from sources,
 but was there a reason you chose that file?
No reason, but it cannot be cblas.h (it has to be ATLAS-specific; 
otherwise, the check does not make sense). The list of headers to check 
can be empty, though.

 The test for linking to blas and lapack from atlas fails too - is this
 a false positive?
Hmm, if the ATLAS check does not work, it sounds like a true negative to 
me :) If ATLAS is not detected correctly, it won't be used by the 
blas/lapack checkers. Or do you mean something else?

 For both numpy.distutils and numscons, default locations of
 libraries do not include the lib64 directories like /usr/local/lib64
 that we 64-bit people use.  Is that easy to fix?
Yes, it is easy, in the sense that nothing in the checkers code is 
hardcoded: all the checks internally use BuildConfig instances, which are 
like dictionaries with default values and a restricted set of keys (the 
keys are library paths, libraries, headers, etc.). Those BuildConfig 
instances are created from a config file (perflib.cfg), and should 
always be customizable from site.cfg.

The options which can be customized are listed in the perflib.cfg 
file. For example, having:

[atlas]
htc =

in your site.cfg should tell CheckATLAS not to look for atlas_enum.h.

Making 64 bits work by default is a bit more complicated. I thought a 
bit about the problem: that's why the checkers do not use BuildConfig 
instances directly, but request them through a BuildConfigFactory. One 
problem is that I don't understand how 64-bit libraries work; more 
precisely, what is the library path convention? Is there a lib64 
counterpart for each lib directory (/lib64, /usr/lib64, /usr/local/lib64)? 
Should the standard ones (/lib, /usr/lib) be checked at all, after the 
64-bit counterparts?
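
In the meantime, since the checkers are customizable from site.cfg, a 
per-library override along these lines should work. This is a 
hypothetical sketch; it assumes the path keys are named library_dirs 
and include_dirs, as in numpy.distutils' site.cfg (adjust the paths to 
your system):

# site.cfg: hypothetical override pointing the ATLAS checker at lib64
[atlas]
library_dirs = /usr/local/lib64:/usr/lib64
include_dirs = /usr/local/include/atlas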

cheers,

David


[Numpy-discussion] numscons, available as python egg

2008-01-25 Thread David Cournapeau
Hi,

Sorry for the flooding, but I finally managed to build an egg and 
put it on the web, so numscons is now available as an egg. You should 
be able to install it using easy_install; e.g.,

easy_install numscons

should work.

cheers,

David


Re: [Numpy-discussion] [ANN] numscons 0.3.0 release

2008-01-25 Thread Matthew Brett
Hi,

  I've attached the build logs.  I noticed that, for atlas, you check
  for atlas_enum.h - but do you in fact need this for the build?
 No. I just wanted one header specific to ATLAS. It looks like not all
 versions of ATLAS install this one, unfortunately (3.8, for example).
  numpy.distutils seemed to be satisfied with cblas.h and clapack.h in
  /usr/local/include/atlas.  It's no big deal to copy it from sources,
  but was there a reason you chose that file?

 No reason, but it cannot be cblas.h (it has to be ATLAS-specific;
 otherwise, the check does not make sense). The list of headers to check
 can be empty, though.

I see.  You want to be sure from the include check that you have
actually discovered ATLAS rather than something else?  I guess that
numpy.distutils solves this by directory naming convention: if it
finds cblas.h in /usr/local/include/atlas, as opposed to somewhere
else, it assumes it has the ATLAS version?  Are these files different
for ATLAS than for other libraries?  If not, do we need to check that
they are the ATLAS headers rather than any others?

  The test for linking to blas and lapack from atlas fails too - is this
  a false positive?
 Hmm, if the ATLAS check does not work, it sounds like a true negative to me
 :) If ATLAS is not detected correctly, it won't be used by the blas/lapack
 checkers. Or do you mean something else?

I mean false positive in the sense that it appears that numpy can
build and pass tests with the ATLAS I have, so excluding it seems too
stringent.  The tests presumably should correspond to something the
numpy code actually needs, rather than parts of ATLAS it can do
without.

  For both numpy.distutils and numscons, default locations of
  libraries do not include the lib64 directories like /usr/local/lib64
  that we 64-bit people use.  Is that easy to fix?
 Yes, it is easy, in the sense that nothing in the checkers code is
 hardcoded: all the checks internally use BuildConfig instances, which are
 like dictionaries with default values and a restricted set of keys (the
 keys are library paths, libraries, headers, etc.). Those BuildConfig
 instances are created from a config file (perflib.cfg), and should
 always be customizable from site.cfg.

 The options which can be customized are listed in the perflib.cfg
 file. For example, having:

 [atlas]
 htc =

 in your site.cfg should tell CheckATLAS not to look for atlas_enum.h.

Thanks ...

 Making 64 bits work by default is a bit more complicated. I thought a
 bit about the problem: that's why the checkers do not use BuildConfig
 instances directly, but request them through a BuildConfigFactory. One
 problem is that I don't understand how 64-bit libraries work; more
 precisely, what is the library path convention? Is there a lib64
 counterpart for each lib directory (/lib64, /usr/lib64, /usr/local/lib64)?
 Should the standard ones (/lib, /usr/lib) be checked at all, after the
 64-bit counterparts?

Well, the build works fine; it's just the perflib discovery - but I
suppose that's what you meant. I think the convention is that 64-bit
libraries do indeed go in /lib64, /usr/lib64, /usr/local/lib64.  My
guess is that only 32-bit libraries should go in /lib, /usr/lib, but I
don't think that convention is always followed.  fftw, for example,
installs 64-bit libraries in /usr/local/lib on my system.  The
compiler (at least gcc) rejects libraries that are in the wrong
format, so I believe that finding 32-bit libraries will just cause a
warning and a continued search.

Thanks again.  Herculean work.

Matthew


Re: [Numpy-discussion] [ANN] numscons 0.3.0 release

2008-01-25 Thread David Cournapeau
Neal Becker wrote:
 Is numscons specific to numpy/scipy, or is it for building arbitrary python
 extensions (replacing distutils)?  I'm hoping for the latter.
Actually, another way to answer your question: I am working on a patch 
so that the part of numscons which takes care of building Python 
extensions in a cross-platform way can be included in scons itself (I 
have already contributed quite a few patches/fixes to scons since I 
started this, as I would rather not depend on in-house changes to scons).

cheers,

David


Re: [Numpy-discussion] NPY_FORCECAST

2008-01-25 Thread Travis E. Oliphant
Bill Spotz wrote:
 Hi,

 I am currently using PyArray_FromObject() to convert an input
 argument to a 32-bit integer array.  On a 64-bit architecture, it is
 easy to create an integer array whose default type is 64-bit.  When
 this is sent to PyArray_FromObject(), it raises an error, saying
 "array cannot be safely cast to required type", even though all of
 its elements are representable in 32 bits.

There is no per-element check when casting. The casting is done with 
a C-level coercion which does not check for overflow.

 How hard would it be to implement a new option (NPY_ATTEMPTCAST?)
 that attempts to make the cast, but raises an OverflowError if any of
 the source data is too large for the target array?  (Or,
 alternatively, make this the default behavior if NPY_FORCECAST is
 false?)
This would be a big change because it would have to be implemented (as 
far as I can tell) with some kind of checked-casting loops at the 
C-level.  

So, it could be done, but it would be a fairly large effort (and would 
impact the size of numpy significantly, I think).
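
In the meantime, an overflow-checked cast can be emulated at the Python 
level by round-tripping the cast and comparing. A sketch (the helper 
name is made up; this is not an existing numpy API):

import numpy as np

def checked_astype(arr, dtype):
    # cast, then verify every value survives the round trip back
    # to the original dtype; raise if anything was truncated
    out = arr.astype(dtype)
    if not np.array_equal(out.astype(arr.dtype), arr):
        raise OverflowError("array values do not fit in %s" % dtype)
    return out

a = np.array([1, 2, 2**40])        # int64 by default on a 64-bit platform
checked_astype(a % 100, np.int32)  # fine: all values fit in 32 bits
# checked_astype(a, np.int32)      # raises OverflowError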

-Travis




Re: [Numpy-discussion] [ANN] numscons 0.3.0 release

2008-01-25 Thread David Cournapeau
Charles R Harris wrote:


 On Jan 25, 2008 3:21 AM, David Cournapeau [EMAIL PROTECTED] wrote:

 [earlier discussion of the ATLAS checks and 64-bit library paths snipped]
  
 It varies with the Linux distro. The usual convention (LSB, I think) 
 uses /usr/local/lib64, but Debian and distros derived from Debian use 
 /usr/local/lib instead. That's how it was the last time I checked, 
 anyway. I don't know what Gentoo and all the others do.
Grrr, that's annoying. Do you know of any resource with a clear 
explanation of that? (Reading LSB documents does not appeal much to me, 
and I would only do that as a last resort.)

cheers,

David


Re: [Numpy-discussion] [ANN] numscons 0.3.0 release

2008-01-25 Thread Charles R Harris
On Jan 25, 2008 3:21 AM, David Cournapeau [EMAIL PROTECTED]
wrote:

 [earlier discussion of the ATLAS checks and site.cfg customization snipped]

 Making 64 bits work by default is a bit more complicated. I thought a
 bit about the problem: that's why the checkers do not use BuildConfig
 instances directly, but request them through a BuildConfigFactory. One
 problem is that I don't understand how 64-bit libraries work; more
 precisely, what is the library path convention? Is there a lib64
 counterpart for each lib directory (/lib64, /usr/lib64, /usr/local/lib64)?
 Should the standard ones (/lib, /usr/lib) be checked at all, after the
 64-bit counterparts?


It varies with the Linux distro. The usual convention (LSB, I think) uses
/usr/local/lib64, but Debian and distros derived from Debian use
/usr/local/lib instead. That's how it was the last time I checked, anyway. I
don't know what Gentoo and all the others do.

Chuck


[Numpy-discussion] Doc-days are today! Come join on irc.freenode.net (#scipy)

2008-01-25 Thread Travis E. Oliphant



[Numpy-discussion] Problems with long

2008-01-25 Thread Tom Johnson
Hi, I'm having some troubles with long.

>>> from numpy import log
>>> log(8463186938969424928L)
43.5822574833
>>> log(10454852688145851272L)
<type 'exceptions.AttributeError'>: 'long' object has no attribute 'log'

Thoughts?
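
A likely cause: 8463186938969424928 still fits in a signed 64-bit 
integer, so numpy converts it to an int64 scalar and takes its log; 
10454852688145851272 is larger than 2**63 - 1, so numpy falls back to 
an object array, and the log ufunc then tries to call a .log() method 
on the Python long, which does not exist. A sketch of the workaround is 
to convert to float explicitly first:

>>> log(float(10454852688145851272L))   # ~43.79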


[Numpy-discussion] NPY_FORCECAST

2008-01-25 Thread Bill Spotz
Hi,

I am currently using PyArray_FromObject() to convert an input 
argument to a 32-bit integer array.  On a 64-bit architecture, it is 
easy to create an integer array whose default type is 64-bit.  When 
this is sent to PyArray_FromObject(), it raises an error, saying 
"array cannot be safely cast to required type", even though all of 
its elements are representable in 32 bits.

I see from the Guide to NumPy that I could call PyArray_FromAny()  
instead, providing the flag NPY_FORCECAST to force the cast to be  
made.  However, this code is within numpy.i, will be used by others,  
and could lead to unexpected behavior if I were to make the change.

How hard would it be to implement a new option (NPY_ATTEMPTCAST?)  
that attempts to make the cast, but raises an OverflowError if any of  
the source data is too large for the target array?  (Or,  
alternatively, make this the default behavior if NPY_FORCECAST is  
false?)

Thanks

** Bill Spotz  **
** Sandia National Laboratories  Voice: (505)845-0170  **
** P.O. Box 5800 Fax:   (505)284-0154  **
** Albuquerque, NM 87185-0370Email: [EMAIL PROTECTED] **






[Numpy-discussion] extending a recarray

2008-01-25 Thread Kevin Christman
I am using recarray to store experiment information that is read from a file:

self.records = numpy.rec.fromrecords(
    self.allData,
    formats='f4,S30,S30,f4,f4,f4,f4,f4,f4,f4,f4,f4,f4',
    names='MotorNum,MotorType,MotorName,MotorHP,NSync,NShaftFL,NShaft,'
          'pLoad,pLoadSlipMethod,pLoadORMEL,etaMeasured,etaORMEL,etaNameplate')

Now I would like to add additional columns to this array for my 
calculations/analysis of the data.  How is this done?  Thanks,
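
One way to do it is to build a new dtype that appends the extra 
columns, copy the old fields across, and view the result as a recarray. 
A sketch (the extra column name 'etaCalc' is made up for illustration):

import numpy

# append a float column to an existing recarray
new_dtype = self.records.dtype.descr + [('etaCalc', 'f4')]
extended = numpy.empty(self.records.shape, dtype=new_dtype)
for name in self.records.dtype.names:
    extended[name] = self.records[name]
extended['etaCalc'] = 0.0                  # fill in computed values here
self.records = extended.view(numpy.recarray)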

Kevin



[Numpy-discussion] tensordot bug?

2008-01-25 Thread Charles R Harris
Is this a bug?

In [21]: a = ones((2,2))

In [22]: b = ones((2,2,2,2))

In [23]: tensordot(a, tensordot(b, a, 2), 2)
Out[23]: array(16.0)

It seems to me that consistency with the dot product would require a scalar
result, not a 0-dim array.

Also, do we have a plain old tensor product? Outer flattens the indices.
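
One candidate that may already do what you want: the outer method of 
the multiply ufunc does not flatten its inputs; multiply.outer(a, b) 
has shape a.shape + b.shape. For the arrays above:

In [24]: multiply.outer(a, a).shape
Out[24]: (2, 2, 2, 2)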

Chuck