Re: [Numpy-discussion] is it a bug?

2009-03-12 Thread Stéfan van der Walt
2009/3/12 Robert Kern robert.k...@gmail.com:
 idx = np.array([0,1])
 e = x[0,:,idx]
 print e.shape

 # - returns (2, 3). I think the right answer should be (3, 2).
 #   Is it a bug? My numpy version is 1.2.1.

 It's certainly weird, but it's working as designed. Fancy indexing via
 arrays is a separate subsystem from indexing via slices. Basically,
 fancy indexing decides the outermost shape of the result (e.g. the
 leftmost items in the shape tuple). If there are any sliced axes, they
 are *appended* to the end of that shape tuple.

This was my understanding, but now I see:

In [31]: x = np.random.random([4,5,6,7])

In [32]: idx = np.array([1,2])

In [33]: x[:, idx, idx, :].shape
Out[33]: (4, 2, 7)

Cheers
Stéfan
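
Robert's description and Stéfan's counterexample can be checked side by side; a minimal sketch (the shape of x in the first example is not given in this excerpt, so (4, 3, 5) below is an assumed shape chosen to reproduce the reported (2, 3)):

```python
import numpy as np

idx = np.array([0, 1])

# Mixed case: the integer 0 and the array idx are both "fancy" indices,
# separated by a slice, so the broadcast fancy shape (2,) moves to the
# front and the sliced axis (length 3) is appended after it.
x = np.random.random((4, 3, 5))   # assumed shape; not stated in the thread
print(x[0, :, idx].shape)         # (2, 3)

# Adjacent fancy indices: the broadcast shape (2,) stays in place,
# between the sliced axes, which explains Stéfan's result.
y = np.random.random((4, 5, 6, 7))
print(y[:, idx, idx, :].shape)    # (4, 2, 7)
```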
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-12 Thread Francesc Alted
On Wednesday 11 March 2009, Ryan May wrote:
 Thanks.  That's actually pretty close to what I had.  I was actually
 thinking that you were using only blas_opt and lapack_opt, since
 supposedly the [mkl] style section is deprecated.  Thus far, I cannot
 get these to work with MKL.

Well, my configuration was intended to link with the VML integrated in 
the MKL, but I'd say it would be similar for blas and lapack.  
What's your configuration?  What's the error you are running into?

Cheers,

-- 
Francesc Alted


Re: [Numpy-discussion] code performanceon windows (32 and/or 64 bit) using SWIG: C++ compiler MS vs.cygwin

2009-03-12 Thread Sebastian Haase
On Thu, Mar 12, 2009 at 4:39 AM, David Cournapeau courn...@gmail.com wrote:
 On Thu, Mar 12, 2009 at 12:38 PM, David Cournapeau courn...@gmail.com wrote:
 and you can't
 cross compile easily.

 Of course, this applies to numpy/scipy - you can cross compile your
 own extensions relatively easily (at least I don't see why it would
 not be possible).

 David

Thanks for the reply.
I actually don't have easy access to the MS compiler.
David, will you be making 64bit binary versions of numpy+scipy available ?
Cross compiling  I have never done that: I suppose it's an
additional option to g++ plus having extra libraries somewhere, right?

-Sebastian


[Numpy-discussion] numpy via easy_install on windows

2009-03-12 Thread Jon Wright
Hello,

If I do:

C:\ easy_install numpy

... on a windows box, it attempts to do a source download and build, 
which typically doesn't work. If however I use:

C:\ easy_install numpy==1.0.4

... then the magic works just fine. Any chance of a more recent 
bdist_egg being made available for windows?

Thanks

Jon



Re: [Numpy-discussion] numpy via easy_install on windows

2009-03-12 Thread David Cournapeau
Hi Jon,

Jon Wright wrote:
 Hello,

 If I do:

 C:\ easy_install numpy

 ... on a windows box, it attempts to do a source download and build, 
 which typically doesn't work. If however I use:

 C:\ easy_install numpy==1.0.4

 ... then the magic works just fine. Any chance of a more recent 
 bdist_egg being made available for windows?
   

Is there a reason why you would not just use the binary installer?

cheers,

David


Re: [Numpy-discussion] image processing using numpy-scipy?

2009-03-12 Thread Mark Asbach

Hi there,

I have read the docs of PIL but there is no function for this. Can I  
use numpy-scipy for the matter?

The image size is 1K.



did you have a look at OpenCV?

http://sourceforge.net/projects/opencvlibrary

A couple of weeks ago, we implemented the numpy array interface,  
so data exchange is easy [check out from SVN].


Best, Mark

--
Mark Asbach
Institut für Nachrichtentechnik, RWTH Aachen University
http://www.ient.rwth-aachen.de/cms/team/m_asbach





Re: [Numpy-discussion] numpy via easy_install on windows

2009-03-12 Thread Jon Wright
David Cournapeau wrote:
 Hi Jon,
 
 Jon Wright wrote:
 Hello,

 If I do:

 C:\ easy_install numpy

 ... on a windows box, it attempts to do a source download and build, 
 which typically doesn't work. If however I use:

 C:\ easy_install numpy==1.0.4

 ... then the magic works just fine. Any chance of a more recent 
 bdist_egg being made available for windows?
   
 
 Is there a reason why you would not just use the binary installer ?

I'd like to have numpy as a dependency being pulled into a virtualenv 
automatically. Is that possible with the binary installer?

Thanks,

Jon



Re: [Numpy-discussion] image processing using numpy-scipy?

2009-03-12 Thread Zachary Pincus
 did you have a look at OpenCV?

 http://sourceforge.net/projects/opencvlibrary

 A couple of weeks ago, we implemented the numpy array  
 interface, so data exchange is easy [check out from SVN].

Oh fantastic! That is great news indeed.

Zach


Re: [Numpy-discussion] Implementing hashing protocol for dtypes

2009-03-12 Thread David Cournapeau
On Thu, Mar 12, 2009 at 9:13 PM, David Cournapeau courn...@gmail.com wrote:
 On Thu, Mar 12, 2009 at 1:00 PM, Robert Kern robert.k...@gmail.com wrote:


 It was an example.

 Ok, guess I will have to learn the difference between i.e. and e.g. one day.

 Anyway, here is a first shot at it:

 http://codereview.appspot.com/26052

Sorry, the link is http://codereview.appspot.com/26052/show

David


Re: [Numpy-discussion] Implementing hashing protocol for dtypes

2009-03-12 Thread David Cournapeau
On Thu, Mar 12, 2009 at 1:00 PM, Robert Kern robert.k...@gmail.com wrote:


 It was an example.

Ok, guess I will have to learn the difference between i.e. and e.g. one day.

Anyway, here is a first shot at it:

http://codereview.appspot.com/26052

I added a few tests which fail with trunk and pass with the patch (for
example, two equivalent types now hash the same); only tested on Linux
so far. I am not sure I took every case into account: I am not
familiar with the PyArray_Descr API (this patch was a good excuse to
dive into this part of the code), and I also noticed a few
discrepancies with the doc (the fields struct member never seems to be
NULL, but is set to None for builtin types).

cheers,

David


[Numpy-discussion] 1.3 release: getting rid of sourceforge ?

2009-03-12 Thread David Cournapeau
Hi,

I was wondering if there is any reason for still using sourceforge?
AFAIK, we only use it to host the files, and dealing with sourceforge
to upload files is less than optimal, to say the least. Is there any
drawback to putting the files directly on scipy.org?

cheers,

David


Re: [Numpy-discussion] Implementing hashing protocol for dtypes

2009-03-12 Thread Stéfan van der Walt
2009/3/12 David Cournapeau courn...@gmail.com:
 Anyway, here is a first shot at it:

 http://codereview.appspot.com/26052

Design question: should [('x', float), ('y', float)] and [('t',
float), ('s', float)] hash to the same value or not?

Regards
Stéfan


Re: [Numpy-discussion] Portable macro to get NAN, INF, positive and negative zero

2009-03-12 Thread Bruce Southey
David Cournapeau wrote:
 Hi,

 For the record, I have just added the following functionalities to
 numpy, which may simplify some C code:
 - NPY_NAN/NPY_INFINITY/NPY_PZERO/NPY_NZERO: macros to get nan, inf,
 positive and negative zeros. Rationale: some code use NAN, _get_nan,
 etc... NAN is a GNU C extension, INFINITY is not available on many C
 compilers. The NPY_ macros are defined from the IEEE754 format, and as
 such should be very fast (the values should be inlined).
 - we can now use inline safely in numpy C code: it is defined to
 something recognized by the compiler or nothing if inline is not
 supported. It is NOT defined publicly to avoid namespace pollution.
 - NPY_INLINE is a macro which can be used publicly, and has the same
 usage as inline.

 cheers,

 David
Hi,
I am curious how this relates to Zach's comment in the thread on 
'Infinity Definitions':
http://mail.scipy.org/pipermail/numpy-discussion/2008-July/035740.html

 If I recall correctly, one reason for the plethora of infinity  
 definitions (which had been mentioned previously on the list) was that  
 the repr for some or all float/complex types was generated by code in  
 the host OS, and not in numpy. As such, these reprs were different for  
 different platforms. As there was a desire to ensure that reprs could  
 always be evaluated, the various ways that inf and nan could be spit  
 out by the host libs were all included.

 Has this been fixed now, so that repr(inf), (etc.) looks identical on  
 all platforms?

If this is no longer a concern then we should be able to remove those 
duplicate definitions and use of uppercase.

Bruce
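
David's point about deriving the values from the IEEE 754 format can be sketched in Python (the bit patterns below are the standard IEEE 754 double layout; the actual NPY_* macros live in numpy's C headers, so this is only an illustration of the idea):

```python
import math
import struct

def bits_to_double(bits):
    # Reinterpret a 64-bit pattern as an IEEE 754 double, the same idea
    # the NPY_* macros use instead of relying on C-library support.
    return struct.unpack('<d', struct.pack('<Q', bits))[0]

INF   = bits_to_double(0x7FF0000000000000)  # exponent all ones, fraction 0
NAN   = bits_to_double(0x7FF8000000000000)  # exponent all ones, quiet bit set
PZERO = bits_to_double(0x0000000000000000)
NZERO = bits_to_double(0x8000000000000000)  # only the sign bit set

print(INF, NAN, PZERO, NZERO)  # inf nan 0.0 -0.0
assert math.isinf(INF) and math.isnan(NAN)
assert PZERO == NZERO == 0.0 and math.copysign(1.0, NZERO) == -1.0
```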



Re: [Numpy-discussion] Implementing hashing protocol for dtypes

2009-03-12 Thread David Cournapeau
Stéfan van der Walt wrote:
 2009/3/12 David Cournapeau courn...@gmail.com:
   
 Anyway, here is a first shot at it:

 http://codereview.appspot.com/26052
 

 Design question: should [('x', float), ('y', float)] and [('t',
 float), ('s', float)] hash to the same value or not?
   

According to:

http://docs.python.org/reference/datamodel.html#object.__hash__

The only constraint is that a == b => hash(a) == hash(b) (which is
currently broken in numpy, even for builtin dtypes). The main problem is
that I am not yet clear on what a == b means for dtypes (the code for
PyArray_EquivTypes goes through PyObject_Compare for compound types). In
your example, the two dtypes are not equal (and they do not hash the same).

cheers,

David
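
The constraint David describes can be checked directly; a quick sketch against current NumPy (the dtype literals are my own choices, not ones from the thread):

```python
import numpy as np

# Equivalent builtin dtypes compare equal, so they must hash the same.
a = np.dtype('i4')
b = np.dtype(np.int32)
assert a == b and hash(a) == hash(b)

# Structured dtypes with different field names are distinct: they
# compare unequal, so nothing forces their hashes to match -- which is
# the behaviour described for Stéfan's example.
s = np.dtype([('x', float), ('y', float)])
t = np.dtype([('t', float), ('s', float)])
assert s != t
```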


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-12 Thread Ryan May
On Thu, Mar 12, 2009 at 3:05 AM, Francesc Alted fal...@pytables.org wrote:

 On Wednesday 11 March 2009, Ryan May wrote:
  Thanks.  That's actually pretty close to what I had.  I was actually
  thinking that you were using only blas_opt and lapack_opt, since
  supposedly the [mkl] style section is deprecated.  Thus far, I cannot
  get these to work with MKL.

 Well, my configuration was intended to link with the VML integrated in
 the MKL, but I'd say it would be similar for blas and lapack.
 What's your configuration?  What's the error you are running into?


I can get it working now with either the [mkl] section like your config or
the following config:

[DEFAULT]
include_dirs = /opt/intel/mkl/10.0.2.018/include/
library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t/:/usr/lib

[blas]
libraries = mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5

[lapack]
libraries = mkl_lapack, mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5

It's just confusing I guess because if I change blas and lapack to blas_opt
and lapack_opt, I cannot get it to work.  The only reason I even care is
that site.cfg.example leads me to believe that the *_opt sections are the
way you're supposed to add them.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Portable macro to get NAN, INF, positive and negative zero

2009-03-12 Thread David Cournapeau
On Thu, Mar 12, 2009 at 10:19 PM, Bruce Southey bsout...@gmail.com wrote:
 David Cournapeau wrote:
 Hi,

     For the record, I have just added the following functionalities to
 numpy, which may simplify some C code:
     - NPY_NAN/NPY_INFINITY/NPY_PZERO/NPY_NZERO: macros to get nan, inf,
 positive and negative zeros. Rationale: some code use NAN, _get_nan,
 etc... NAN is a GNU C extension, INFINITY is not available on many C
 compilers. The NPY_ macros are defined from the IEEE754 format, and as
 such should be very fast (the values should be inlined).
     - we can now use inline safely in numpy C code: it is defined to
 something recognized by the compiler or nothing if inline is not
 supported. It is NOT defined publicly to avoid namespace pollution.
     - NPY_INLINE is a macro which can be used publicly, and has the same
 usage as inline.

 cheers,

 David

 Hi,
 I am curious how this relates to Zach's comment in the thread on
 'Infinity Definitions':

It does not, directly - but I implemented those macros after Pauli,
Charles (Harris) and I worked on improving formatting; the macros
replace several ad-hoc solutions throughout the numpy code base.

Concerning formatting, there is much more consistency with python 2.6
(because python itself bypasses the C runtime and does the parsing
itself), and we followed its behavior. With numpy 1.3, you should
almost never see anything other than nan/inf on any platform. There are
still some cases where it fails, and some cases we can't do anything
about (print '%s' % a, print a, and print '%f' % a all go through
different codepaths, and we can't control at least one of them; I don't
remember which one).


 If this is no longer a concern then we should be able to remove those
 duplicate definitions and use of uppercase.

Yes, we should also fix the pretty-print options, so that arrays, and
not just array scalars, print nicely:

a = np.array([np.nan, 1, 2])
print a     ->  NaN, ...
print a[0]  ->  nan

But this is much easier, as the code is in python.

cheers,

David


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-12 Thread David Cournapeau
Ryan May wrote:

 [DEFAULT]
 include_dirs = /opt/intel/mkl/10.0.2.018/include/
 library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t/:/usr/lib

 [blas]
 libraries = mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5

 [lapack]
 libraries = mkl_lapack, mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5

 It's just confusing I guess because if I change blas and lapack to
 blas_opt and lapack_opt, I cannot get it to work.


Yes, the whole thing is very confusing; trying to understand it while
trying to be compatible with it in numscons drove me crazy (the changes
to default-section handling in python 2.6 did not help). IMHO, we
should get rid of all this at some point and use something much simpler
(one file, no sections, just straight LIBPATH + LIBS + CPPPATH options),
because the current code has gone well beyond the madness point. But it
will break some configurations for sure.

cheers,

David


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-12 Thread Ryan May
On Thu, Mar 12, 2009 at 8:30 AM, David Cournapeau 
da...@ar.media.kyoto-u.ac.jp wrote:

 Ryan May wrote:
 
  [DEFAULT]
  include_dirs = /opt/intel/mkl/10.0.2.018/include/
  library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t/:/usr/lib
 
  [blas]
  libraries = mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5
 
  [lapack]
  libraries = mkl_lapack, mkl_gf_lp64, mkl_gnu_thread, mkl_core, iomp5
 
  It's just confusing I guess because if I change blas and lapack to
  blas_opt and lapack_opt, I cannot get it to work.


 Yes, the whole thing is very confusing; trying to understand it when I
 try to be compatible with it in numscons drove me crazy (the changes
 with default section handling in python 2.6 did no help). IMHO, we
 should get rid of all this at some point, and use something much simpler
 (one file, no sections, just straight LIBPATH + LIBS + CPPATH options),
 because the current code has gone much beyond the madness point. But it
 will break some configurations for sure.


Glad to hear it's not just me.  I was beginning to think I was being
thick-headed.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] Error building SciPy SVN with NumPy SVN

2009-03-12 Thread David Cournapeau
On Thu, Mar 12, 2009 at 5:25 AM, Ryan May rma...@gmail.com wrote:

 That's fine.  I just wanted to make sure I didn't do something weird while
 getting numpy built with MKL.

It should be fixed in r6650

David


Re: [Numpy-discussion] Error building SciPy SVN with NumPy SVN

2009-03-12 Thread Ryan May
On Thu, Mar 12, 2009 at 9:02 AM, David Cournapeau courn...@gmail.comwrote:

 On Thu, Mar 12, 2009 at 5:25 AM, Ryan May rma...@gmail.com wrote:

  That's fine.  I just wanted to make sure I didn't do something weird
 while
  getting numpy built with MKL.

 It should be fixed in r6650


Fixed for me.  I get a segfault running scipy.test(), but that's probably
due to MKL.

Thanks, David.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] Error building SciPy SVN with NumPy SVN

2009-03-12 Thread David Cournapeau
On Thu, Mar 12, 2009 at 11:23 PM, Ryan May rma...@gmail.com wrote:


 Fixed for me.  I get a segfault running scipy.test(), but that's probably
 due to MKL.

Yes, it is. Scipy runs the test suite fine for me.

David


Re: [Numpy-discussion] Error building SciPy SVN with NumPy SVN

2009-03-12 Thread Ryan May
On Thu, Mar 12, 2009 at 9:55 AM, David Cournapeau courn...@gmail.comwrote:

 On Thu, Mar 12, 2009 at 11:23 PM, Ryan May rma...@gmail.com wrote:

 
  Fixed for me.  I get a segfault running scipy.test(), but that's probably
  due to MKL.

 Yes, it is. Scipy run the test suite fine for me.


While scipy builds, matplotlib's basemap toolkit spits this out:

running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler
options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler
options
running build_src
building extension mpl_toolkits.basemap._proj sources
error: build/src.linux-x86_64-2.5/gfortran_vs2003_hack.c: No such file or
directory

Any ideas?

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-12 Thread Francesc Alted
On Thursday 12 March 2009, Ryan May wrote:
 I can get it working now with either the [mkl] section like your
 config or the following config:

 [DEFAULT]
 include_dirs = /opt/intel/mkl/10.0.2.018/include/
 library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t/:/usr/lib
 ^
I see that you are using a multi-directory path here.  My understanding 
was that this is not supported by numpy.distutils, but apparently it 
worked for you (?). Or does it also work if you get rid of the trailing 
':/usr/lib' part of library_dirs?


-- 
Francesc Alted


Re: [Numpy-discussion] Intel MKL on Core2 system

2009-03-12 Thread Ryan May
On Thu, Mar 12, 2009 at 10:11 AM, Francesc Alted fal...@pytables.orgwrote:

 On Thursday 12 March 2009, Ryan May wrote:
  I can get it working now with either the [mkl] section like your
  config or the following config:
 
  [DEFAULT]
  include_dirs = /opt/intel/mkl/10.0.2.018/include/
  library_dirs = /opt/intel/mkl/10.0.2.018/lib/em64t/:/usr/lib
  ^
 I see that you are using a multi-directory path here.  My understanding
 was that this is not supported by numpy.distutils, but apparently it
 worked for you (?). Or does it also work if you get rid of the trailing
 ':/usr/lib' part of library_dirs?


Well, if by multi-directory you mean the colon-separated list, this is what
is documented in site.cfg.example and used by the gentoo ebuild on my
system.  I need the /usr/lib part so that it can pick up libblas.so and
liblapack.so.  Otherwise, it won't link in MKL.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


Re: [Numpy-discussion] Error building SciPy SVN with NumPy SVN

2009-03-12 Thread David Cournapeau
On Fri, Mar 13, 2009 at 12:10 AM, Ryan May rma...@gmail.com wrote:
 On Thu, Mar 12, 2009 at 9:55 AM, David Cournapeau courn...@gmail.com
 wrote:

 On Thu, Mar 12, 2009 at 11:23 PM, Ryan May rma...@gmail.com wrote:

 
  Fixed for me.  I get a segfault running scipy.test(), but that's
  probably
  due to MKL.

 Yes, it is. Scipy run the test suite fine for me.

 While scipy builds, matplotlib's basemap toolkit spits this out:

 running install
 running build
 running config_cc
 unifing config_cc, config, build_clib, build_ext, build commands --compiler
 options
 running config_fc
 unifing config_fc, config, build_clib, build_ext, build commands --fcompiler
 options
 running build_src
 building extension mpl_toolkits.basemap._proj sources
 error: build/src.linux-x86_64-2.5/gfortran_vs2003_hack.c: No such file or
 directory

Ok, I've just backed out the changes in r6653 - let's not break everything now :)

David


Re: [Numpy-discussion] Error building SciPy SVN with NumPy SVN

2009-03-12 Thread Ryan May
On Thu, Mar 12, 2009 at 12:00 PM, David Cournapeau courn...@gmail.comwrote:

 On Fri, Mar 13, 2009 at 12:10 AM, Ryan May rma...@gmail.com wrote:
  On Thu, Mar 12, 2009 at 9:55 AM, David Cournapeau courn...@gmail.com
  wrote:
 
  On Thu, Mar 12, 2009 at 11:23 PM, Ryan May rma...@gmail.com wrote:
 
  
   Fixed for me.  I get a segfault running scipy.test(), but that's
   probably
   due to MKL.
 
  Yes, it is. Scipy run the test suite fine for me.
 
  While scipy builds, matplotlib's basemap toolkit spits this out:
 
  running install
  running build
  running config_cc
  unifing config_cc, config, build_clib, build_ext, build commands
 --compiler
  options
  running config_fc
  unifing config_fc, config, build_clib, build_ext, build commands
 --fcompiler
  options
  running build_src
  building extension mpl_toolkits.basemap._proj sources
  error: build/src.linux-x86_64-2.5/gfortran_vs2003_hack.c: No such file or
  directory

 Ok, I've just back out the changes in 6653 - let's not break everything now
 :)


Thanks, that fixed it.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Sent from: Norman Oklahoma United States.


[Numpy-discussion] Poll: Semantics for % in Cython

2009-03-12 Thread Dag Sverre Seljebotn
(First off, is it OK to continue polling the NumPy list now and then on 
Cython language decisions? Or should I expect that any interested Cython 
users follow the Cython list?)

In Python, if I write -1 % 5, I get 4. However, in C if I write -1 % 
5 I get -1. The question is, what should I get in Cython if I write (a 
% b) where a and b are cdef ints? Should I

[ ] Get 4, because it should behave just like in Python, avoiding 
surprises when adding types to existing algorithms (this will require 
extra logic and be a bit slower)

[ ] Get -1, because they're C ints, and besides one isn't using
Cython if one doesn't care about performance

Whatever we do, this also affects the division operator, so that in any 
case one will have a == (a//b)*b + a%b.

(Orthogonal to this, we can introduce compiler directives to change the 
meaning of the operator from the default in a code block, and/or make 
special functions for the semantics that are not chosen as default.)

-- 
Dag Sverre
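
The two candidate semantics can be compared directly in Python; math.fmod follows the C truncation rule, which makes the difference easy to see:

```python
import math

a, b = -1, 5

# Python: floor division; % takes the sign of the divisor.
assert a % b == 4
assert a // b == -1

# C: truncated division; % takes the sign of the dividend.
# math.fmod reproduces the C behaviour for these values.
assert math.fmod(a, b) == -1.0

# Either pairing preserves the invariant a == (a//b)*b + a%b,
# as long as division and remainder use the same convention.
assert a == (a // b) * b + (a % b)                  # floor pairing
assert a == int(a / b) * b + int(math.fmod(a, b))   # trunc pairing
```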


Re: [Numpy-discussion] Poll: Semantics for % in Cython

2009-03-12 Thread Gael Varoquaux
On Thu, Mar 12, 2009 at 07:59:48PM +0100, Dag Sverre Seljebotn wrote:
 (First off, is it OK to continue polling the NumPy list now and then on 
 Cython language decisions? Or should I expect that any interested Cython 
 users follow the Cython list?)

Yes, IMHO.

 In Python, if I write -1 % 5, I get 4. However, in C if I write -1 % 
 5 I get -1. The question is, what should I get in Cython if I write (a 
 % b) where a and b are cdef ints? Should I

 [ ] Get 4, because it should behave just like in Python, avoiding 
 surprises when adding types to existing algorithms (this will require 
 extra logic and be a bit slower)

 [ ] Get -1, because they're C ints, and besides one isn't using
 Cython if one doesn't care about performance

Behave like in Python. Cython should try to be as Python-like as
possible, IMHO. I would like to think of it as an (optionally)
statically typed Python.

My 2 cents,

Gaël


[Numpy-discussion] Numpy and Scientific Python

2009-03-12 Thread vincent . thierion
Hello,

I use numpy and Scientific Python for my work.
I installed them in a way that lets me use them on remote machines, by  
copying them and using sys.path.append. Many times it works, but  
sometimes (depending on the Python version) I receive this error:

ImportError: $MYLIBFOLDER/site-packages/
numpy/core/multiarray.so: cannot open shared object file: No such file  
or directory

Yet the file exists in the expected place.

The error occurs on Python 2.3.4 (#1, Dec 11 2007, 18:02:43) [GCC 3.4.6  
20060404 (Red Hat 3.4.6-9)].

Thank you in advance

Vincent
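
One common cause of this error is loading a compiled extension such as multiarray.so under an interpreter it was not built for. A hedged sketch of a workaround - keep one tree per Python version and append the matching one (the directory layout below is hypothetical):

```python
import sys

# Hypothetical shared-library layout: one site-packages tree per
# Python version, since compiled extensions like multiarray.so are
# only binary-compatible with the interpreter they were built for.
MYLIBFOLDER = "/shared/mylibs"
ver = "python%d.%d" % sys.version_info[:2]
sys.path.append("%s/%s/site-packages" % (MYLIBFOLDER, ver))
print(sys.path[-1])
```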






Re: [Numpy-discussion] Portable macro to get NAN, INF, positive and negative zero

2009-03-12 Thread Bruce Southey
David Cournapeau wrote:
 On Thu, Mar 12, 2009 at 10:19 PM, Bruce Southey bsout...@gmail.com wrote:
   
 David Cournapeau wrote:
 
 Hi,

 For the record, I have just added the following functionalities to
 numpy, which may simplify some C code:
 - NPY_NAN/NPY_INFINITY/NPY_PZERO/NPY_NZERO: macros to get nan, inf,
 positive and negative zeros. Rationale: some code use NAN, _get_nan,
 etc... NAN is a GNU C extension, INFINITY is not available on many C
 compilers. The NPY_ macros are defined from the IEEE754 format, and as
 such should be very fast (the values should be inlined).
 - we can now use inline safely in numpy C code: it is defined to
 something recognized by the compiler or nothing if inline is not
 supported. It is NOT defined publicly to avoid namespace pollution.
 - NPY_INLINE is a macro which can be used publicly, and has the same
 usage as inline.

 cheers,

 David

   
 Hi,
 I am curious how this relates to Zach's comment in the thread on
 'Infinity Definitions':
 

 It does not directly - but I implemented those macro after Pauli,
 Charles (Harris) and me worked on improving formatting; those macro
 replace several ad-hoc solutions through the numpy code base.

 Concerning formatting, there is much more consistency with python 2.6
 (because python itself bypasses the C runtime and does the parsing
 itself), and we followed them. With numpy 1.3, you should almost never
 see anything else than nan/inf on any platform. There are still some
 cases where it fails, and some cases we can't do anything about (print
 '%s' % a, print a, print '%f' % a all go through different codepath,
 and we can't control at least one of them, I don't remember which
 one).

   
 If this is no longer a concern then we should be able to remove those
 duplicate definitions and use of uppercase.
 

 Yes, we should also fix the pretty print options, so that arrays and
 not just scalar arrays print nicely:

 a = np.array([np.nan, 1, 2])
 print a - NaN, ...
 print a[0] - nan

 But this is much easier, as the code is in python.

 cheers,

 David
   
Okay,
I have created ticket 1051 for this change, with patches that hopefully 
address it. The patches remove these duplicate definitions and 
uppercase names, but the other usages should be deprecated (though I do 
not know how). After the changes, all tests pass on my Linux system for 
Python 2.4, 2.5 and 2.6.

Regards
Bruce




Re: [Numpy-discussion] Poll: Semantics for % in Cython

2009-03-12 Thread Stéfan van der Walt
Hi Dag

2009/3/12 Dag Sverre Seljebotn da...@student.matnat.uio.no:
 (First off, is it OK to continue polling the NumPy list now and then on
 Cython language decisions? Or should I expect that any interested Cython
 users follow the Cython list?)

Given that many of the subscribers make use of the NumPy support in
Cython, I don't think they would mind; I, for one, don't.

 In Python, if I write -1 % 5, I get 4. However, in C if I write -1 %
 5 I get -1. The question is, what should I get in Cython if I write (a
 % b) where a and b are cdef ints? Should I

 [ ] Get 4, because it should behave just like in Python, avoiding
 surprises when adding types to existing algorithms (this will require
 extra logic and be a bit slower)

I'd much prefer this option.  When students struggle to make their code
faster, my advice to them is: run it through Cython, and if you are
still not happy, start tweaking this and that.  It would be much harder
to take that route if you had to take a number of exceptional
behaviours into account.

 (Orthogonal to this, we can introduce compiler directives to change the
 meaning of the operator from the default in a code blocks, and/or make
 special functions for the semantics that are not chosen as default.)

In my experience, keeping the rules simple has a big benefit (the
programmer's brain cache can only hold a small number of items -- a
very good analogy made by Fernando Perez), so I would prefer not to
have this option.

Regards
Stéfan


Re: [Numpy-discussion] Poll: Semantics for % in Cython

2009-03-12 Thread Charles R Harris
On Thu, Mar 12, 2009 at 12:59 PM, Dag Sverre Seljebotn 
da...@student.matnat.uio.no wrote:

 (First off, is it OK to continue polling the NumPy list now and then on
 Cython language decisions? Or should I expect that any interested Cython
 users follow the Cython list?)

 In Python, if I write -1 % 5, I get 4. However, in C if I write -1 %
 5 I get -1. The question is, what should I get in Cython if I write (a
 % b) where a and b are cdef ints? Should I


I almost always want the python version, even in C, because I want the
results to lie in the interval [0,5) like a good modulus function should ;)
I suppose the question is: is '%' standing for the modulus, or is it
standing for the remainder in whatever version of division is being used?
This is similar to the difference between the trunc and floor functions; I
find using the floor function causes fewer problems, but it isn't the
default.
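
The contrast between the two conventions can be sketched in a few lines of
Python, where `math.fmod` already follows the C (truncated-division)
convention while `%` pairs with floored division:

```python
import math

# Python's % pairs with floored division: the result takes the sign
# of the divisor, so it always lies in [0, 5) for a divisor of 5.
print(-1 % 5)            # 4

# math.fmod follows the C convention (truncated division): the result
# takes the sign of the dividend, as '%' on C ints would.
print(math.fmod(-1, 5))  # -1.0

# Both satisfy a == b*q + r for their respective choice of q:
assert -1 - 5 * math.floor(-1 / 5) == 4    # floored division
assert -1 - 5 * math.trunc(-1 / 5) == -1   # truncated division
```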

That said, I think it best to leave '%' with its C default and add a special
modulus function for the python version. Changing its meaning in C-like code
is going to confuse things.

Chuck


Re: [Numpy-discussion] Poll: Semantics for % in Cython

2009-03-12 Thread Stéfan van der Walt
2009/3/13 Charles R Harris charlesr.har...@gmail.com:
 That said, I think it best to leave '%' with its C default and add a special
 modulus function for the python version. Changing its meaning in C-like code
 is going to confuse things.

This is Cython code, so I think there is an argument to be made that
it is Python-like!

Stéfan


Re: [Numpy-discussion] Poll: Semantics for % in Cython

2009-03-12 Thread Sturla Molden




 2009/3/13 Charles R Harris charlesr.har...@gmail.com:
 That said, I think it best to leave '%' with its C default and add a
 special
 modulus function for the python version. Changing its meaning in C-like
 code
 is going to confuse things.

 This is Cython code, so I think there is an argument to be made that
 it is Python-like!


I'll just repeat what I've already said on the Cython mailing list:

I think C types should behave like C types and Python objects like Python
objects. If a C long suddenly starts to return double when divided by
another C long, then that will be a major source of confusion on my part.
If I want the behaviour of Python integers, Cython lets me use Python
objects. I don't declare a variable cdef long if I want it to behave like
a Python int.

Sturla Molden


Re: [Numpy-discussion] Poll: Semantics for % in Cython

2009-03-12 Thread Robert Kern
On Thu, Mar 12, 2009 at 17:45, Sturla Molden stu...@molden.no wrote:




 2009/3/13 Charles R Harris charlesr.har...@gmail.com:
 That said, I think it best to leave '%' with its C default and add a
 special
 modulus function for the python version. Changing its meaning in C-like
 code
 is going to confuse things.

 This is Cython code, so I think there is an argument to be made that
 it is Python-like!


 I'll just repeat what I've already said on the Cython mailing list:

 I think C types should behave like C types and Python objects like Python
 objects. If a C long suddenly starts to return double when divided by
 another C long, then that will be a major source of confusion on my part.
 If I want the behaviour of Python integers, Cython lets me use Python
 objects. I don't declare a variable cdef long if I want it to behave like
 a Python int.

That may be part of the confusion. The expression -1%5 has no
variables. Perhaps Dag can clarify what he is asking about:

  # Constants?  (No one uses just constants in expressions,
  # really, but consistency with the other choices will
  # affect this.)
  -1 % 5

  # Explicitly declared C types?
  cdef long i, j, k
  i = -1
  j = 5
  k = i % j

  # Python types?
  i = -1
  j = 5
  k = i % j

  # A mixture?
  cdef long i
  i = -1
  j = 5
  k = i % j

When I do (2147483647 + 2147483647) in current Cython, to choose
another operation, does it use C types, or does it construct PyInts?
I.e., do I get C wraparound arithmetic, or do I get a PyLong?

I recommend making % behave consistently with the other operators;
i.e. if x+y uses C semantics, x%y should, too.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] is it a bug?

2009-03-12 Thread shuwj5...@163.com
 
 On Wed, Mar 11, 2009 at 19:55, shuwj5...@163.com shuwj5...@163.com wrote:
  Hi,
 
  import numpy as np
  x = np.arange(30)
  x.shape = (2,3,5)
 
  idx = np.array([0,1])
  e = x[0,idx,:]
  print e.shape
  # return (2,5). ok.
 
  idx = np.array([0,1])
  e = x[0,:,idx]
  print e.shape
 
  #-> return (2,3). I think the right answer should be (3,2). Is
  #    it a bug here? my numpy version is 1.2.1.
 
 It's certainly weird, but it's working as designed. Fancy indexing via
 arrays is a separate subsystem from indexing via slices. Basically,
 fancy indexing decides the outermost shape of the result (e.g. the
 leftmost items in the shape tuple). If there are any sliced axes, they
 are *appended* to the end of that shape tuple.
 
x = np.arange(30)
x.shape = (2,3,5)

idx = np.array([0,1,3,4])
e = x[:,:,idx]
print e.shape
#---> returns (2,3,4), just as I expected.

e = x[0,:,idx]
print e.shape
#---> returns (4,3).

e = x[:,0,idx]
print e.shape
#---> returns (2,4), not (4,2). Why do these three cases execute so
# differently?



Re: [Numpy-discussion] is it a bug?

2009-03-12 Thread Robert Kern
On Thu, Mar 12, 2009 at 01:34, Stéfan van der Walt ste...@sun.ac.za wrote:
 2009/3/12 Robert Kern robert.k...@gmail.com:
 idx = np.array([0,1])
 e = x[0,:,idx]
 print e.shape

 #-> return (2,3). I think the right answer should be (3,2). Is
 #       it a bug here? my numpy version is 1.2.1.

 It's certainly weird, but it's working as designed. Fancy indexing via
 arrays is a separate subsystem from indexing via slices. Basically,
 fancy indexing decides the outermost shape of the result (e.g. the
 leftmost items in the shape tuple). If there are any sliced axes, they
 are *appended* to the end of that shape tuple.

 This was my understanding, but now I see:

 In [31]: x = np.random.random([4,5,6,7])

 In [32]: idx = np.array([1,2])

 In [33]: x[:, idx, idx, :].shape
 Out[33]: (4, 2, 7)

Hmm. Well, your guess is as good as mine at this point.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] numpy via easy_install on windows

2009-03-12 Thread David Cournapeau
On Thu, Mar 12, 2009 at 8:08 PM, Jon Wright wri...@esrf.fr wrote:


 I'd like to have numpy as a dependency being pulled into a virtualenv
 automatically. Is that possible with the binary installer?

I don't think so - but I would think that people using virtualenv are
familiar with compiling softwares.

I now remember that numpy could not be built from sources by
easy_install, but I believe we fixed the problem. Would you mind trying
with a recent svn checkout? I would like this to be fixed if that's
still a problem.

Distributing eggs on windows would be too troublesome; I would prefer
to avoid it if we can.

David


Re: [Numpy-discussion] is it a bug?

2009-03-12 Thread Travis E. Oliphant
shuwj5...@163.com wrote:

 It's certainly weird, but it's working as designed. Fancy indexing via
 arrays is a separate subsystem from indexing via slices. Basically,
 fancy indexing decides the outermost shape of the result (e.g. the
 leftmost items in the shape tuple). If there are any sliced axes, they
 are *appended* to the end of that shape tuple.

 
 x = np.arange(30)
 x.shape = (2,3,5)

 idx = np.array([0,1,3,4])
 e = x[:,:,idx]
 print e.shape
 #---> returns (2,3,4), just as I expected.

 e = x[0,:,idx]
 print e.shape
 #---> returns (4,3).

 e = x[:,0,idx]
 print e.shape
 #---> returns (2,4), not (4,2). Why do these three cases execute so
 # differently?
   

This is probably best characterized as a wart stemming from a use-case 
oversight in the approach created to handle mixing simple indexing and 
advanced indexing.

Basically, you can understand what happens by noting that when 
scalars are used in combination with index arrays, they are treated as 
if they were part of an indexing array.  In other words, 0 is interpreted 
as [0] (and 1 is interpreted as [1]) when combined with advanced 
indexing.  This is done in part so that scalars will be broadcast to the 
shape of any indexing array, to correctly handle indexing in other 
use-cases.

Then, when advanced indexing is combined with ':' or '...' some special 
rules show up in determining the output shape that have to do with 
resolving potential ambiguities.   It is arguable that the rules for 
resolving ambiguities are a bit simplistic and therefore don't handle 
some real use-cases very well like the case you show.   On the other 
hand, simple rules are better even if the rules about combining ':' and 
'...' and advanced indexing are not well-known.

So, to be a little more clear about what is going on, define idx2 = [0] 
and then ask what should the shapes of x[idx2, :, idx] and x[:, idx2, 
idx] be?   Remember that advanced indexing will broadcast idx2 and idx 
to the same shape ( in this case (4,) but they could broadcast to any 
shape at all).   This broadcasted result shape must be somehow combined 
with  the shape resulting from performing the slice selection.

With x[:, idx2, idx] it is unambiguous to tack the broadcasted shape to 
the end of the shape resulting from the slice-selection (i.e. 
x[:,0,0].shape).   This leads to the (2,4) result.

Now, what about x[idx2, :, idx]?   The idx2 and idx are still broadcast 
to the same shape which could be any shape (in this particular case it 
is (4,)), but the slice-selection is done in the middle.   So, where 
should the shape of the slice selection (i.e. x[0,:,0].shape) be placed 
in the output shape?  At the time this is determined, there is no 
notion that idx2 came from a scalar, so it could have come from any 
array.  Therefore, when there is this kind of ambiguity, the code 
always places the broadcasted shape at the beginning.  Thus, the 
result is (4,) + (3,) -> (4,3).

Perhaps it is a bit surprising in this particular case, but it is 
working as designed.I admit that this particular asymmetry does 
create some cognitive dissonance which leaves something to be desired.  
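
The asymmetry described above can be checked directly; a short sketch using
the shapes from this thread:

```python
import numpy as np

x = np.arange(30).reshape(2, 3, 5)
idx = np.array([0, 1, 3, 4])

# Slice on the outside only: the scalar 0 is treated as [0], broadcast
# with idx to shape (4,), and that shape replaces the indexed subspace
# in place, giving (2, 4).
assert x[:, 0, idx].shape == (2, 4)

# Slice *between* the advanced indices: the placement is ambiguous, so
# the broadcast shape (4,) moves to the front and the sliced axis
# (length 3) is appended after it, giving (4, 3).
assert x[0, :, idx].shape == (4, 3)
```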

-Travis




Re: [Numpy-discussion] is it a bug?

2009-03-12 Thread Travis E. Oliphant
Robert Kern wrote:
 On Thu, Mar 12, 2009 at 01:34, Stéfan van der Walt ste...@sun.ac.za wrote:
   
 2009/3/12 Robert Kern robert.k...@gmail.com:
 
 idx = np.array([0,1])
 e = x[0,:,idx]
 print e.shape

 #-> return (2,3). I think the right answer should be (3,2). Is
 #   it a bug here? my numpy version is 1.2.1.
 
 It's certainly weird, but it's working as designed. Fancy indexing via
 arrays is a separate subsystem from indexing via slices. Basically,
 fancy indexing decides the outermost shape of the result (e.g. the
 leftmost items in the shape tuple). If there are any sliced axes, they
 are *appended* to the end of that shape tuple.
   
 This was my understanding, but now I see:

 In [31]: x = np.random.random([4,5,6,7])

 In [32]: idx = np.array([1,2])

 In [33]: x[:, idx, idx, :].shape
 Out[33]: (4, 2, 7)
 

 Hmm. Well, your guess is as good as mine at this point.

   
Referencing my previous post on this topic: in this case, it is 
unambiguous to replace dimensions 1 and 2 with the result of 
broadcasting idx and idx together.  Thus the (5,6) subspace is 
replaced by the (2,) result of indexing, leaving the outer dimensions 
intact; hence (4,2,7) is the result.

I could be persuaded that this attempt to differentiate unambiguous 
from ambiguous sub-space replacements was misguided and that we should 
have stuck with the simpler rule expressed above.  But it seemed so 
aesthetically pleasing to swap out the indexed sub-space when it was 
possible to do so.
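
A short sketch contrasting the two placements on the array from Stéfan's
example:

```python
import numpy as np

x = np.random.random((4, 5, 6, 7))
idx = np.array([1, 2])

# Adjacent advanced indices: their broadcast shape (2,) replaces the
# (5, 6) subspace in place, leaving the outer axes where they were.
assert x[:, idx, idx, :].shape == (4, 2, 7)

# Advanced indices separated by a slice: the placement is ambiguous,
# so the broadcast shape (2,) moves to the front of the result.
assert x[idx, :, idx, :].shape == (2, 5, 7)
```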

-Travis



[Numpy-discussion] A question about import in numpy and in place build

2009-03-12 Thread David Cournapeau
Hi,

While making sure in-place builds work, I got the following problem:

python setup.py build_ext -i
python -c "import numpy as np; np.test()"
-> many errors

The error are all import errors:

Traceback (most recent call last):
  File "/usr/media/src/dsp/numpy/git/numpy/tests/test_ctypeslib.py",
line 83, in test_shape
    self.assert_(p.from_param(np.array([[1,2]])))
  File "numpy/ctypeslib.py", line 150, in from_param
    return obj.ctypes
  File "numpy/core/__init__.py", line 27, in <module>
    __all__ += numeric.__all__
NameError: name 'numeric' is not defined

Now, in numpy/core/__init__.py there are some "from numeric import *"
lines, but no "import numeric", so indeed numeric is not defined. But
why does this work for 'normal' numpy builds? I want to be sure I don't
introduce some subtle issues before fixing the problem the obvious way.
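
The import behaviour itself is plain Python: a star-import binds the names
listed in a module's `__all__` into the importing namespace, but never the
module name itself. A minimal sketch with a throwaway stand-in module (the
module built here is fabricated for illustration, not NumPy's actual
numeric):

```python
import sys
import types

# Build a throwaway module standing in for numpy.core.numeric.
numeric = types.ModuleType("numeric")
numeric.__all__ = ["zeros_like"]
numeric.zeros_like = lambda a: 0
sys.modules["numeric"] = numeric

ns = {}
exec("from numeric import *", ns)

assert "zeros_like" in ns   # names *inside* the module are bound...
assert "numeric" not in ns  # ...but the module name itself is not, so
                            # "__all__ += numeric.__all__" raises NameError
```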

cheers,

David