Re: [Numpy-discussion] Addition of a dict object to all NumPy objects

2008-08-16 Thread Robert Kern
On Fri, Aug 15, 2008 at 19:30, Christian Heimes [EMAIL PROTECTED] wrote:
 Please also note that CPython uses a freelist of unused dict instances.
 The default size of the dict free list is 80 elements. The allocation
 and deallocation of dicts is cheap if you can stay below the threshold.

That's very useful information. Thanks!
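The effect is easy to see with a quick micro-benchmark (a minimal sketch;
absolute timings depend on the CPython version and build):

import timeit

# Creating and immediately discarding a dict stays within CPython's dict
# freelist (80 slots by default in this era), so no malloc/free happens
# on the fast path.
fast = timeit.Timer("d = {}; del d").timeit(number=1000000)

# Keeping 1000 dicts alive per iteration exceeds the freelist, so real
# allocations have to happen.
slow = timeit.Timer("ds = [dict() for _ in range(1000)]").timeit(number=1000)

print(fast, slow)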

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread David Cournapeau
On Fri, Aug 15, 2008 at 7:11 PM, Charles R Harris
[EMAIL PROTECTED] wrote:

 Doesn't mingw use the MSVC library?

Yes, it does. But long double is both a compiler and library issue.
sizeof(long double) is defined by the compiler, and it is different
with mingw and visual studio ATM. I don't know what the correct
behavior should be, but the fact is that they are different.
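A quick way to see what a given build ended up with is to ask NumPy at
runtime (a sketch; MSVC-style builds typically report 8 here, mingw/gcc
builds 12 or 16):

import numpy as np

# Size of the compiler's long double as baked into this NumPy build:
# 8 where long double == double, 12 or 16 where it is the x87 80-bit
# extended type plus padding.
print(np.dtype(np.longdouble).itemsize)
print(np.finfo(np.longdouble))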

 are the same type, so this seems to be a bug in mingw. I suppose numpy could
 detect this situation and define the types to be identical, but if this is
 impractical, then perhaps the best thing to do is issue an error message.

It is impractical in the short term because defining SIZEOF_LONG_DOUBLE
to 8 breaks the mingw build, and fixing it would require fixing the
whole math function configuration (the whole HAVE_LONGDOUBLE_FUNCS and
co mess). That's highly non-trivial code because of the tens of
different possible configurations, and it is fundamental to the whole of
numpy, since it affects which math functions are actually used.

 There isn't much you can do about long doubles while maintaining MSVC
 compatibility.

Does that mean you are in favor of leaving things the way they are now
for 1.2, and fixing this in 1.3/1.2.1?

cheers,

David


Re: [Numpy-discussion] [REVIEW] Update NumPy API format to support updates that don't break binary compatibility

2008-08-16 Thread Andrew Straw
Looking at the code, but not testing it -- this looks fine to me. (I 
wrote the original NPY_VERSION stuff and sent it to Travis, who modified 
and included it.)

I have added a couple of extremely minor points to the code review tool 
-- as much as a chance to play with the tool as to comment on the code.

-Andrew

Stéfan van der Walt wrote:
 The current NumPy API number, stored as NPY_VERSION in the header files, needs
 to be incremented every time the NumPy C-API changes.  The counter tells
 developers exactly which revision of the API they are dealing with.  NumPy does
 some checking to make sure that it does not run against an old version of the
 API.  Currently, we have no way of distinguishing between changes that break
 binary compatibility and those that don't.

 The proposed fix breaks the version number up into two counters -- one that
 gets increased when binary compatibility is broken, and another when the API is
 changed without breaking compatibility.

 Backward compatibility with packages such as Matplotlib is maintained by
 renaming NPY_VERSION to NPY_BINARY_VERSION.

 Please review the proposed change at http://codereview.appspot.com/2946

 Regards
 Stéfan



Re: [Numpy-discussion] [REVIEW] Update NumPy API format to support updates that don't break binary compatibility

2008-08-16 Thread Stéfan van der Walt
2008/8/16 Andrew Straw [EMAIL PROTECTED]:
 Looking at the code, but not testing it -- this looks fine to me. (I
 wrote the original NPY_VERSION stuff and sent it to Travis, who modified
 and included it.)

 I have added a couple of extremely minor points to the code review tool
 -- as much as a chance to play with the tool as to comment on the code.

Excellent, thank you!  I have updated the patch accordingly.  If
anybody else has anything to add, please do so.  Unless there are
objections, I'd like to merge the patch tomorrow.
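For anyone not following the review, the rule being proposed amounts to
something like the following (a Python sketch with made-up names; the real
check is done in C when an extension module imports the NumPy C-API):

def c_api_compatible(ext_binary, ext_feature, numpy_binary, numpy_feature):
    # ext_*   -- the version counters the extension was compiled against
    # numpy_* -- the counters of the NumPy installed at runtime
    #
    # The binary counter must match exactly: a mismatch means the
    # extension has to be rebuilt against the running NumPy.
    if ext_binary != numpy_binary:
        return False
    # The feature counter only has to be <= the runtime's: a newer NumPy
    # may add API without breaking already-compiled extensions.
    return ext_feature <= numpy_feature

# Built against an older but binary-compatible NumPy: still importable.
print(c_api_compatible(0x0100, 0x0009, 0x0100, 0x000A))   # True
# Built against a NumPy whose binary layout has since changed: rebuild.
print(c_api_compatible(0x0200, 0x000A, 0x0100, 0x000A))   # False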

Regards
Stéfan


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Jon Wright
Travis, Stéfan,

I missed Travis' mail previously. Are you *really* sure you want to force 
all C code which uses numpy arrays to be recompiled? If you mean that 
all your matplotlib/PIL/pyopengl/etc users are going to have to make a 
co-ordinated upgrade, then this seems to be a grave mistake. Does 
Stéfan's patch fix this problem to avoid all C code being recompiled? A 
co-ordinated recompile of *all* C code using numpy is certainly not a 
minimal effect!

I really hope you can find a way around the recompiling; it is a real 
problem for anyone trying to distribute code as a Python module for Windows.

Jon



Travis E. Oliphant wrote:
 Hi all,
 
 The 1.2 version of NumPy is going to be tagged.  There is at least one 
 change I'd like to add:   The hasobject member of the PyArray_Descr 
 structure should be renamed to flags and converted to a 32-bit 
 integer.   
 
 What does everybody think about this change?  It should have minimal 
 effect except to require a re-compile of extension modules using NumPy.  
 The only people requiring code changes would be those making intimate 
 use of the PyArray_Descr structure instead of using the macros. 
 
 It's a simple change if there is no major opposition.
 
 -Travis
 


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Robert Kern
On Sat, Aug 16, 2008 at 04:34, Jon Wright [EMAIL PROTECTED] wrote:
 Travis, Stéfan,

 I missed Travis mail previously. Are you *really* sure you want force
 all C code which uses numpy arrays to be recompiled? If you mean that
 all your matplotlib/PIL/pyopengl/etc users are going to have to make a
 co-ordinated upgrade, then this seems to be a grave mistake.

FWIW, neither PIL nor PyOpenGL have C code which uses numpy arrays, so
they are entirely unaffected. And this does not require an *upgrade*
of the actually affected packages, just a rebuild of the binary.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Andrew Straw
Robert Kern wrote:
 On Sat, Aug 16, 2008 at 04:34, Jon Wright [EMAIL PROTECTED] wrote:
   
 Travis, Stéfan,

 I missed Travis mail previously. Are you *really* sure you want force
 all C code which uses numpy arrays to be recompiled? If you mean that
 all your matplotlib/PIL/pyopengl/etc users are going to have to make a
 co-ordinated upgrade, then this seems to be a grave mistake.
 

 FWIW, neither PIL nor PyOpenGL have C code which uses numpy arrays, so
 they are entirely unaffected. And this does not require an *upgrade*
 of the actually affected packages, just a rebuild of the binary.

   
I'll also point out that PEP 3118 will make this unnecessary in the 
future for many applications. http://www.python.org/dev/peps/pep-3118/

From what I can tell ( http://svn.python.org/view?rev=61491&view=rev ), 
this is already in Python 2.6.
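As a rough illustration of why that helps: a consumer that goes through
the new buffer protocol never touches NumPy's C struct layout at all (a
sketch; it needs Python 2.6+ and a NumPy that exports PEP 3118 buffers,
which was still to come when this was written):

import numpy as np

a = np.arange(12, dtype=np.float64).reshape(3, 4)

# memoryview talks to the array through the PEP 3118 buffer protocol,
# so shape, strides and element format are visible without including
# numpy/arrayobject.h or caring about the C-API version.
m = memoryview(a)
print(m.format, m.itemsize, m.shape, m.strides)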

-Andrew


Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 12:47 AM, David Cournapeau [EMAIL PROTECTED]wrote:

 On Fri, Aug 15, 2008 at 7:11 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
 
  Doesn't mingw use the MSVC library?

 Yes, it does. But long double is both a compiler and library issue.
 sizeof(long double) is defined by the compiler, and it is different
 with mingw and visual studio ATM. I don't know what the correct
 behavior should be, but the fact is that they are different.

  are the same type, so this seems to be a bug in mingw. I suppose numpy
 could
  detect this situation and define the types to be identical, but if this
 is
  impractical, then perhaps the best thing to do is issue an error message.

 It is impractical in the short term because defining SIZE_LONG_DOUBLE
 to 8 breaks the mingw build, and fixing it would require fixing the
 whole math function configuration (the whole HAVE_LONGDOUBLE_FUNCS and
 co mess). That's highly non trivial code because of the tens of
 different possible configurations, and is fundamental to the whole
 numpy, since it affects the math functions effectively used.

  There's isn't much you can do about long doubles while maintaining MSVC
  compatibility.

 Does that mean you are in favor of letting things the way they are now
 for 1.2, and fix this on 1.3/1.2.1 ?

 cheers,

 David


Re: [Numpy-discussion] C99 on windows

2008-08-16 Thread Pauli Virtanen
Hi,

Sat, 16 Aug 2008 03:25:11 +0200, Christian Heimes wrote:
 David Cournapeau wrote:
 The current trunk has 14 failures on windows (with mingw). 12 of them
 are related to C99 (see ticket 869). Can the people involved in recent
 changes to complex functions take a look at it ? I think this is high
 priority for 1.2.0
 
 I'm asking just out of curiosity. Why is NumPy using C99 and what
 features of C99 are used? The Microsoft compilers don't support C99
 and probably never will. I don't know if the Intel CC supports
 C99. Even GCC doesn't implement C99 to its full extent. Are you planning
 to restrict yourself to MinGW32?

To clarify this again: *no* features of C99 were used. The C99 specs were 
only used as a guideline to what behavior we want of complex math 
functions, and I wrote tests for this, and marked failing ones as skipped.

However, it turned out that different tests fail on different platforms, 
which means that the inf/nan handling of our complex-valued functions is 
effectively undefined. Eventually, most of the tests had to be marked as 
skipped, and so it made more sense to remove them altogether.

-- 
Pauli Virtanen



Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 12:47 AM, David Cournapeau [EMAIL PROTECTED]wrote:

 On Fri, Aug 15, 2008 at 7:11 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
 
  Doesn't mingw use the MSVC library?

 Yes, it does. But long double is both a compiler and library issue.
 sizeof(long double) is defined by the compiler, and it is different
 with mingw and visual studio ATM. I don't know what the correct
 behavior should be, but the fact is that they are different.

  are the same type, so this seems to be a bug in mingw. I suppose numpy
 could
  detect this situation and define the types to be identical, but if this
 is
  impractical, then perhaps the best thing to do is issue an error message.

 It is impractical in the short term because defining SIZE_LONG_DOUBLE
 to 8 breaks the mingw build, and fixing it would require fixing the
 whole math function configuration (the whole HAVE_LONGDOUBLE_FUNCS and
 co mess). That's highly non trivial code because of the tens of
 different possible configurations, and is fundamental to the whole
 numpy, since it affects the math functions effectively used.

  There's isn't much you can do about long doubles while maintaining MSVC
  compatibility.

 Does that mean you are in favor of letting things the way they are now
 for 1.2, and fix this on 1.3/1.2.1 ?


Yes. I don't think MS will support true long doubles any time soon and
this affects printing and the math functions. I'm not sure what the best
solution is, there are various possibilities.

1) We could define the numpy longdouble type to be double, which makes us
compatible with MS and is effectively what numpy compiled with MSVC does
since MSVC long doubles are doubles. Perhaps this could be done by adding a
-DNOLONGDOUBLE flag to the compiler flags and then defining longdouble to
double. But this means auditing the code to make sure the long double type
is never explicitly used so that the mingw compiler does the right thing. I
don't think this should be a problem otherwise except for the loss of true
long doubles and their extra bit of precision.

2) We can keep the mingw true long doubles and avoid any reliance on the
MS library. This may already be done for the math functions by effectively
computing the long double functions as doubles, but I will have to check on
that. In any case, some of the usefulness of long doubles will still be
lost.

3) We can write our own library functions. That seems like a lot of work but
perhaps there is something in the BSD library we could use. I think BSD uses
the gnu compiler but has its own libraries.

In any case, these solutions all look to be more than two weeks down the
road, so in the short term we probably have to go with the current
behaviour.
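For what it's worth, the precision that would be lost is easy to probe
from Python (a sketch; on a build where longdouble really is extended
precision the last line prints True, on an MSVC-style build it prints
False):

import numpy as np

# Mantissa bits: 52 for double, 63 for x87 extended precision.
print(np.finfo(np.float64).nmant, np.finfo(np.longdouble).nmant)

# 1 + eps(double)/4 is representable in extended precision but rounds
# back to exactly 1.0 in plain double.
print(np.longdouble(1) + np.finfo(np.float64).eps / 4 > 1)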

Chuck


Re: [Numpy-discussion] C99 on windows

2008-08-16 Thread Christian Heimes
Pauli Virtanen wrote:
 To clarify this again: *no* features of C99 were used. The C99 specs were 
 only used as a guideline to what behavior we want of complex math 
 functions, and I wrote tests for this, and marked failing ones as skipped.

Got it.

 However, it turned out that different tests fail on different platforms, 
 which means that the inf/nan handling of our complex-valued functions is 
 effectively undefined. Eventually, most of the tests had to be marked as 
 skipped, and so it made more sense to remove them altogether.

We hit the same issue during our work for Python 2.6 and 3.0. We came to 
the conclusion that we can't rely on the platform's math functions 
(especially trigonometric and hyperbolic functions) for special values. 
So Mark came up with the idea of lookup tables for special values. Read 
my other mail for more information.

Christian



Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 3:43 AM, Robert Kern [EMAIL PROTECTED] wrote:

 On Sat, Aug 16, 2008 at 04:34, Jon Wright [EMAIL PROTECTED] wrote:
  Travis, Stéfan,
 
  I missed Travis mail previously. Are you *really* sure you want force
  all C code which uses numpy arrays to be recompiled? If you mean that
  all your matplotlib/PIL/pyopengl/etc users are going to have to make a
  co-ordinated upgrade, then this seems to be a grave mistake.

 FWIW, neither PIL nor PyOpenGL have C code which uses numpy arrays, so
 they are entirely unaffected. And this does not require an *upgrade*
 of the actually affected packages, just a rebuild of the binary.


What about SAGE, MPL, PyTables, and friends? I don't really know what all
depends on Numpy at this point, but I think Jon's point about Windows
packages is a good one. On Linux I just compile all these from svn anyway,
but I suspect windows users mostly depend on precompiled packages. And this
probably also affects packagers for Linux binary distros like ubuntu and
fedora, as they have to recompile all those packages and keep the
dependencies straight. True, most of the packages trail numpy development by
some time, but that is also an argument against the urge to push things in
early.

Chuck


Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread David Cournapeau
On Sat, Aug 16, 2008 at 9:11 AM, Charles R Harris
[EMAIL PROTECTED] wrote:

 1) We could define the numpy longdouble type to be double, which makes us
 compatible with MS and is effectively what numpy compiled with MSVC does
 since MSVC long doubles are doubles. Perhaps this could be done by adding a
 -DNOLONGDOUBLE flag to the compiler flags and then defining longdouble to
 double. But this means auditing the code to make sure the long double type
 is never explicitly used so that the mingw compiler does the right thing. I
 don't think this should be a problem otherwise except for the loss of true
 long doubles and their extra bit of precision.

According to Travis, we never use long double, always npy_longdouble.
But as I mentioned above, forcing a mode with long double being
effectively double breaks the configuration.

This is the problem, because fixing it (which I will do once the trunk
is open for 1.3) requires some non-trivial changes in the
configuration, namely: we look for some math functions (say acoshf)
and assume that finding one guarantees that all of the same kind (float
versions of the hyperbolic functions) exist, which works on some
platforms but not on others.

The solution is not complicated (testing for every function +
bisecting to avoid a too long configuration stage), but will need
thorough testing.
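A rough sketch of what testing for every function could look like, using
plain distutils' compiler wrapper (have_libm_func is a made-up helper;
numpy.distutils does the real job, and a production version would cache
results so the configuration stage stays fast):

import os
import tempfile
from distutils.ccompiler import new_compiler
from distutils.errors import CCompilerError, DistutilsExecError

def have_libm_func(name, tmpdir):
    # One tiny translation unit per candidate: declare a dummy prototype,
    # call the function, and see whether it compiles and links against -lm.
    src = os.path.join(tmpdir, "check_%s.c" % name)
    f = open(src, "w")
    f.write("double %s(double);\n"
            "int main(void) { return (int) %s(1.0); }\n" % (name, name))
    f.close()
    cc = new_compiler()
    try:
        objs = cc.compile([src], output_dir=tmpdir)
        cc.link_executable(objs, "check_%s" % name,
                           output_dir=tmpdir, libraries=["m"])
        return True
    except (CCompilerError, DistutilsExecError):
        return False

tmpdir = tempfile.mkdtemp()
for func in ["acoshf", "atanhf", "expl", "sinl", "hypotl"]:
    print(func, have_libm_func(func, tmpdir))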

 2) We can keep the mingw true long doubles and avoid any reliance on the
 MS library. This may already be done for the math functions by effectively
 computing the long double functions as doubles, but I will have to check on
 that. In any case, some of the usefulness of long doubles will still be
 lost.

The problem is not only the computation, but all the support around it:
printf, etc. (the current test failures are in those functions). And
anyway, since python is linked against the MS C runtime, I think it is
kind of pointless to do so.

 3) We can write our own library functions. That seems like a lot of work but
 perhaps there is something in the BSD library we could use. I think BSD uses
 the gnu compiler but has it's own libraries.

Yes, the C library in BSD* is certainly not the glibc. But see above
for the limited usefulness of this. We have to use long double as
double in the configuration; the question is not if, but how (using VS
for binaries, updating mingw to fix its broken configuration when
sizeof(npy_double) == sizeof(npy_longdouble)) and when.

cheers,

David


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Travis E. Oliphant
Jon Wright wrote:
 Travis, Stéfan,

 I missed Travis mail previously. Are you *really* sure you want force 
 all C code which uses numpy arrays to be recompiled? 
Re-compilation is necessary at some point.  We have not required 
recompilation for a long time now.  Yes, it is a pain for 
distribution, but those who don't want to re-compile can point people to 
1.1.1 which will still work until they make a new release compiled 
against the newer NumPy.

I would encourage people to make use of things like Python(x,y), EPD, 
and SAGE for their distribution needs.

-Travis




Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread Travis E. Oliphant
Charles R Harris wrote:


 Yes. I don't think MS will support true long doubles any time soon 
 and this affects printing and the math functions. I'm not sure what 
 the best solution is, there are various possibilities.

 1) We could define the numpy longdouble type to be double, which makes 
 us compatible with MS and is effectively what numpy compiled with MSVC 
 does since MSVC long doubles are doubles. Perhaps this could be done 
 by adding a -DNOLONGDOUBLE flag to the compiler flags and then 
 defining longdouble to double. But this means auditing the code to 
 make sure the long double type is never explicitly used so that the 
 mingw compiler does the right thing. I don't think this should be a 
 problem otherwise except for the loss of true long doubles and their 
 extra bit of precision.
All code in numpy uses npy_longdouble which is typedef'd to double if 
SIZEOF_LONGDOUBLE is the same as SIZEOF_DOUBLE.   I don't understand why 
it's a problem to just define SIZEOF_LONGDOUBLE = 8 for mingw which is 
what it is for MSVC.  This makes longdouble 8 bytes.   

-Travis



Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 9:50 AM, Travis E. Oliphant
[EMAIL PROTECTED]wrote:

 Charles R Harris wrote:
 
 
  Yes. I don't think MS will support true long doubles any time soon
  and this affects printing and the math functions. I'm not sure what
  the best solution is, there are various possibilities.
 
  1) We could define the numpy longdouble type to be double, which makes
  us compatible with MS and is effectively what numpy compiled with MSVC
  does since MSVC long doubles are doubles. Perhaps this could be done
  by adding a -DNOLONGDOUBLE flag to the compiler flags and then
  defining longdouble to double. But this means auditing the code to
  make sure the long double type is never explicitly used so that the
  mingw compiler does the right thing. I don't think this should be a
  problem otherwise except for the loss of true long doubles and their
  extra bit of precision.
 All code in numpy uses npy_longdouble which is typedef'd to double if
 SIZEOF_LONGDOUBLE is the same as SIZEOF_DOUBLE.   I don't understand why
 it's a problem to just define SIZEOF_LONGDOUBLE = 8 for mingw which is
 what it is for MSVC.This makes longdouble 8 bytes.


That's my opinion also; I just thought that -DNOLONGDOUBLE was an easy way
to force that choice. David thinks that the function detection in the ufunc
module will be a problem. I need to take a look; perhaps there is a
dependency on include files that differ between the two compilers.

Chuck


Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread David Cournapeau
On Sat, Aug 16, 2008 at 11:15 AM, Charles R Harris
[EMAIL PROTECTED] wrote:


 That's my opinion also, I just thought that -DNOLONGDOUBLE was an easy way
 to force that choice. David thinks that the function detection in the ufunc
 module will be a problem.

Forget what I said, I think I used a broken mingw toolchain, which
caused the problems I got. In any case, I cannot reproduce the problem
anymore.

But forcing SIZEOF_LONG_DOUBLE to 8 at configuration stage caused test
failures too (at different points, though):

http://scipy.org/scipy/numpy/ticket/890

The first one looks strange.


Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread David Cournapeau
On Sat, Aug 16, 2008 at 12:15 PM, Charles R Harris
[EMAIL PROTECTED] wrote:

 I was just going to look at that; it's nice to have the ticket mailing list
 working again. Is there an easy way to force the SIZEOF_LONG_DOUBLE to 8 so
 I can test this on linux?

Changing this line in numpy\core\setup.py:

-  ('SIZEOF_LONG_DOUBLE', 'long double'),
+  ('SIZEOF_LONG_DOUBLE', 'double'),

is what I did to get the result on windows. But it only works
because I know the C runtime really has long double of 8 bytes. On
platforms where it is not true, it is likely to break things.

cheers,

David


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread David Cournapeau
On Sat, Aug 16, 2008 at 10:47 AM, Travis E. Oliphant
[EMAIL PROTECTED] wrote:

 Re-compilation is necessary at some point.  We have not required
 recompilation for a long time now.Yes, it is a pain for
 distribution, but those who don't want to re-compile can point people to
 1.1.1 which will still work until they make a new release compiled
 against the newer NumPy.

Does that mean we will continue breaking the ABI from time to time
during the 1.* cycle ?

cheers,

David


Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 11:24 AM, David Cournapeau [EMAIL PROTECTED]wrote:

 On Sat, Aug 16, 2008 at 12:15 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
 
  I was just going to look at that; it's nice to have the ticket mailing
 list
  working again. Is there an easy way to force the SIZEOF_LONG_DOUBLE to 8
 so
  I can test this on linux?

 Changing this line in numpy\core\setup.py:

 -  ('SIZEOF_LONG_DOUBLE', 'long double'),
 +  ('SIZEOF_LONG_DOUBLE', 'double'),

 is what I did to get the result on windows. But it only works
 because I know the C runtime really has long double of 8 bytes. On
 platforms where it is not true, it is likely to break things.


Hmm. ISTM that numpy should be set up so that the change works on all
platforms. However, making it so might be something else.

Chuck


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Jon Wright
David Cournapeau wrote:
 Does that mean we will continue breaking the ABI from time to time
 during the 1.* cycle ?


Can someone help me to understand what the compelling reason for 
this change is? If it only means everyone recompiles, it is hard to see 
what we, as users, are gaining by doing that.

Thanks,

Jon



Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 11:44 AM, Jon Wright [EMAIL PROTECTED] wrote:

 David Cournapeau wrote:
  Does that mean we will continue breaking the ABI from time to time
  during the 1.* cycle ?


 Can someone help me to understand me what is the compelling reason for
 this change? If it only means everyone recompiles, it is hard to see
 what we, as users, are gaining by doing that.


Turns out that ipython needs to be recompiled also because of the newly
added version checking.

Chuck


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 11:56 AM, Charles R Harris 
[EMAIL PROTECTED] wrote:



 On Sat, Aug 16, 2008 at 11:44 AM, Jon Wright [EMAIL PROTECTED] wrote:

 David Cournapeau wrote:
  Does that mean we will continue breaking the ABI from time to time
  during the 1.* cycle ?


 Can someone help me to understand me what is the compelling reason for
 this change? If it only means everyone recompiles, it is hard to see
 what we, as users, are gaining by doing that.


 Turns out that ipython needs to be recompiled also because of the newly
 added version checking.


Looks like there is a bug in the new API version tracking:

>>> import numpy as np
RuntimeError: module compiled against version 109 of C-API but this
version of numpy is 10a
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/site-packages/numpy/__init__.py", line 132, in
<module>
    import add_newdocs
  File "/usr/lib/python2.5/site-packages/numpy/add_newdocs.py", line 9, in
<module>
    from lib import add_newdoc
  File "/usr/lib/python2.5/site-packages/numpy/lib/__init__.py", line 4, in
<module>
    from type_check import *
  File "/usr/lib/python2.5/site-packages/numpy/lib/type_check.py", line 8,
in <module>
    import numpy.core.numeric as _nx
  File "/usr/lib/python2.5/site-packages/numpy/core/__init__.py", line 19,
in <module>
    import scalarmath
ImportError: numpy.core.multiarray failed to import

Chuck


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 12:05 PM, Charles R Harris 
[EMAIL PROTECTED] wrote:



 On Sat, Aug 16, 2008 at 11:56 AM, Charles R Harris 
 [EMAIL PROTECTED] wrote:



 On Sat, Aug 16, 2008 at 11:44 AM, Jon Wright [EMAIL PROTECTED] wrote:

 David Cournapeau wrote:
  Does that mean we will continue breaking the ABI from time to time
  during the 1.* cycle ?


 Can someone help me to understand me what is the compelling reason for
 this change? If it only means everyone recompiles, it is hard to see
 what we, as users, are gaining by doing that.


 Turns out that ipython needs to be recompiled also because of the newly
 added version checking.


 Looks like there is a bug in the new API version tracking:

  import numpy as np
 RuntimeError: module compiled against version 109 of C-API but this
 version of numpy is 10a
 Traceback (most recent call last):
   File stdin, line 1, in module
   File /usr/lib/python2.5/site-packages/numpy/__init__.py, line 132, in
 module
 import add_newdocs
   File /usr/lib/python2.5/site-packages/numpy/add_newdocs.py, line 9, in
 module
 from lib import add_newdoc
   File /usr/lib/python2.5/site-packages/numpy/lib/__init__.py, line 4, in
 module
 from type_check import *
   File /usr/lib/python2.5/site-packages/numpy/lib/type_check.py, line 8,
 in module
 import numpy.core.numeric as _nx
   File /usr/lib/python2.5/site-packages/numpy/core/__init__.py, line 19,
 in module
 import scalarmath
 ImportError: numpy.core.multiarray failed to import


False alarm. I needed to remove the build directory first.

Chuck


Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 11:39 AM, Charles R Harris 
[EMAIL PROTECTED] wrote:



 On Sat, Aug 16, 2008 at 11:24 AM, David Cournapeau [EMAIL PROTECTED]wrote:

 On Sat, Aug 16, 2008 at 12:15 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
 
  I was just going to look at that; it's nice to have the ticket mailing
 list
  working again. Is there an easy way to force the SIZEOF_LONG_DOUBLE to 8
 so
  I can test this on linux?

  Changing this line in numpy\core\setup.py:

 -  ('SIZEOF_LONG_DOUBLE', 'long double'),
 +  ('SIZEOF_LONG_DOUBLE', 'double'),

 is what I did to get the result on windows. But it only works
 because I know the C runtime really has long double of 8 bytes. On
 platforms where it is not true, it is likely to break things.


 Hmm. ISTM that numpy should be set up so that the change works on all
 platforms. However, making it so might be something else.


Almost works, I get the same two failures as you plus a failure in
test_precisions_consistent.

ERROR: Test generic loops.
--
Traceback (most recent call last):
  File /usr/lib/python2.5/site-packages/numpy/core/tests/test_ufunc.py,
line 79, in test_generic_loops
assert_almost_equal(fone(x), fone_val, err_msg=msg)
  File /usr/lib/python2.5/site-packages/numpy/testing/utils.py, line 205,
in assert_almost_equal
return assert_array_almost_equal(actual, desired, decimal, err_msg)
  File /usr/lib/python2.5/site-packages/numpy/testing/utils.py, line 304,
in assert_array_almost_equal
header='Arrays are not almost equal')
  File /usr/lib/python2.5/site-packages/numpy/testing/utils.py, line 272,
in assert_array_compare
val = comparison(x[~xnanid], y[~ynanid])
IndexError: 0-d arrays can't be indexed

==
FAIL: test_large_types (test_scalarmath.TestPower)
--
Traceback (most recent call last):
  File
/usr/lib/python2.5/site-packages/numpy/core/tests/test_scalarmath.py, line
54, in test_large_types
assert_almost_equal(b, 6765201, err_msg=msg)
  File /usr/lib/python2.5/site-packages/numpy/testing/utils.py, line 207,
in assert_almost_equal
assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal: error with type 'numpy.float64': got inf
 ACTUAL: inf
 DESIRED: 6765201

==
FAIL: test_umath.TestComplexFunctions.test_precisions_consistent
--
Traceback (most recent call last):
  File /usr/lib/python2.5/site-packages/nose/case.py, line 203, in runTest
self.test(*self.arg)
  File /usr/lib/python2.5/site-packages/numpy/core/tests/test_umath.py,
line 206, in test_precisions_consistent
assert_almost_equal(fcl, fcd, decimal=15, err_msg='fch-fcl %s'%f)
  File /usr/lib/python2.5/site-packages/numpy/testing/utils.py, line 207,
in assert_almost_equal
assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal: fch-fcl ufunc 'arcsin'
 ACTUAL: (0.66623943249251527+1.0612750619050355j)
 DESIRED: (0.66623943249251527+1.0612750619050355j)

==
SKIP: test_umath.TestComplexFunctions.test_branch_cuts_failing
--
Traceback (most recent call last):
  File /usr/lib/python2.5/site-packages/nose/case.py, line 203, in runTest
self.test(*self.arg)
  File /usr/lib/python2.5/site-packages/numpy/testing/decorators.py, line
93, in skipper
raise nose.SkipTest, 'This test is known to fail'
SkipTest: This test is known to fail


There seems to be a problem in defining the functions called for the
different types.

In [3]: x = np.zeros(10, dtype=np.longdouble)[0::2]

In [4]: x.dtype.itemsize
Out[4]: 8

In [5]: x
Out[5]: array([0.0, 0.0, 0.0, 0.0, 0.0], dtype=float64)

In [6]: np.exp(x)
Out[6]: array([NaN, NaN, NaN, NaN, NaN], dtype=float64)

In [7]: np.sin(x)
Out[7]: array([NaN, NaN, NaN, NaN, NaN], dtype=float64)

In [8]: x.dtype.char
Out[8]: 'g'


If I force the function to the double version this bit works fine. The odd
error message is a bug in numpy.testing where assert_array_compare fails for
arrays that contain only nan's. They all get masked out.

 Chuck


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Fernando Perez
On Sat, Aug 16, 2008 at 10:56 AM, Charles R Harris
[EMAIL PROTECTED] wrote:

 Turns out that ipython needs to be recompiled also because of the newly
 added version checking.

I'm sorry, can you clarify this?  ipython has no C code at all, so I'm
not sure what you mean here.

Cheers,

f


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 1:26 PM, Fernando Perez [EMAIL PROTECTED]wrote:

 On Sat, Aug 16, 2008 at 10:56 AM, Charles R Harris
 [EMAIL PROTECTED] wrote:

  Turns out that ipython needs to be recompiled also because of the newly
  added version checking.

 I'm sorry, can you clarify this?  ipython has no C code at all, so I'm
 not sure what you mean here.


It was a false alarm. I was calling ipython with pylab and MPL needs to be
recompiled.

Chuck


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread David Cournapeau
On Sat, Aug 16, 2008 at 12:44 PM, Jon Wright [EMAIL PROTECTED] wrote:
 David Cournapeau wrote:
 Does that mean we will continue breaking the ABI from time to time
 during the 1.* cycle ?


 Can someone help me to understand me what is the compelling reason for
 this change? If it only means everyone recompiles, it is hard to see
 what we, as users, are gaining by doing that.

Breaking the ABI forces users to recompile; that's pretty much the
definition of an ABI break.

Unfortunately, up to now, we did not have a mechanism to distinguish
between extending the API (which can be done without
breaking the ABI if done carefully) and breaking the API (which
induces ABI breakage). Hopefully, once this is in place, we won't have
to do it anymore.

cheers,

David


Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread David Cournapeau
On Sat, Aug 16, 2008 at 2:07 PM, Charles R Harris
[EMAIL PROTECTED] wrote:


 There is seems to be a problem in defining the functions called for the
 different types.

I don't know enough about this part of the code to be sure about the
whole function call stack, but I would guess this is not surprising
and even expected: you force long double to be double, but you still
call the long double function if you do np.exp, so this cannot work.

On windows, it should, because the runtime (e.g. the C function expl)
actually expects a double (expl expects an 8-byte value). But on
Mac OS X, it is not necessarily the case (although it could be, I don't
know the status of long double on that platform).

cheers,

David


Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 1:07 PM, Charles R Harris [EMAIL PROTECTED]
 wrote:



 On Sat, Aug 16, 2008 at 11:39 AM, Charles R Harris 
 [EMAIL PROTECTED] wrote:



 On Sat, Aug 16, 2008 at 11:24 AM, David Cournapeau [EMAIL PROTECTED]wrote:

 On Sat, Aug 16, 2008 at 12:15 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
 
  I was just going to look at that; it's nice to have the ticket mailing
 list
  working again. Is there an easy way to force the SIZEOF_LONG_DOUBLE to
 8 so
  I can test this on linux?

  Changing this line in numpy\core\setup.py:

 -  ('SIZEOF_LONG_DOUBLE', 'long double'),
 +  ('SIZEOF_LONG_DOUBLE', 'double'),

 is what I did to get the result on windows. But it only works
 because I know the C runtime really has long double of 8 bytes. On
 platforms where it is not true, it is likely to break things.


 Hmm. ISTM that numpy should be set up so that the change works on all
 platforms. However, making it so might be something else.


 Almost works, I get the same two failures as you plus a failure in
 test_precisions_consistent.


snip

Simply undefining HAVE_LONGDOUBLE_FUNCS doesn't work because the ufuncs are
defined the usual way with the l/f suffixes and there is a conflict with the
math.h include file. We could fix that by defining our own functions, i.e.,

#define npy_sinl sin

etc.

Chuck


Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 2:46 PM, David Cournapeau [EMAIL PROTECTED]wrote:

 On Sat, Aug 16, 2008 at 2:07 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
 
 
  There is seems to be a problem in defining the functions called for the
  different types.

 I don't know enough about this part of the code to be sure about the
 whole function calls stack, but I would guess this is not surprising
 and even expected: you force long double to be double, but you still
 call the long double function if you do np.exp, so this cannot work.


Part of the problem is that the loops/functions called depend on the
typecode, i.e., dtype.char, which stays as 'g' even when the underlying type
is float64. We can't really fix that because the C-API still allows folks to
create arrays using the typecode and we would have to fudge things so the
right wrong descr was returned. Ugly, and a leftover from Numeric, which
defined types to match the C types instead of the precision. Maybe we should
rewrite in FORTRAN ;) Anyway, I think the easiest solution might be to use
npy_func internally and add a -DNOLONGDOUBLE flag to override the values
returned by the distutils test code. Whether we should try such a big
modification before 1.2 is another question.

Chuck


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Jon Wright
Robert Kern wrote:

 FWIW, neither PIL nor PyOpenGL have C code which uses numpy arrays, so
 they are entirely unaffected.

OK, so here are some projects which might notice a 1.2 installation, in 
as much as they turn up on a google code search for:

  #include <numpy/arrayobject.h> -scipy -enthought

... so one might imagine their programs will suddenly stop working. 
Based on the first 40 of 200 hits, these projects seem to offer binary 
windows downloads and have a C source which includes numpy/arrayobject.h :

PyQwt3d
ScientificPython
RPy
PyTables
pygsl
VPython
bayes-blocks http://forge.pascal-network.org/projects/bblocks/
fdopython http://fdo.osgeo.org

For these ones I only found source, so they'd be daft to complain that 
a program that previously worked just fine has stopped working:

neuron (www.neuron.yale.edu)
pysl- bridge between python and S-lang
BGL - Biggles Graphics Library
www.eos.ubc.ca/research/clouds/software.html
astronomyworks.googlecode.com
pyrap.google.com
pyroms.googlecode.com
pyamg.googlecode.com
mdanalysis.googlecode.com
pythoncall - python to matlab interface
code.astraw.com/projects/motmot

(It is impressive that there are so many users out there, and it turns 
out that this is a great way to find interesting code.)

Even if there turn out to be a lot of duplicates in the 200 hits, 
already most of those projects have notices saying things like "be sure 
to get numpy, not Numeric or numarray". Do you want all of them to be 
delivering a matrix of binaries for different python versions multiplied 
by numpy 1.1.x versus 1.2.x ? What happens when someone wants to use 
*two* modules at the same time, but one is distributing 1.1 binaries and the 
other has 1.2? The main reason I changed to numpy was that you stopped 
doing the Numeric binaries for python2.5. There was no way to distribute 
my own code for 2.5 without shipping Numeric too, which I didn't want to 
do, given that you were trying to get everyone to switch.

What is the cool new feature that 1.2 has gained which is making all 
this worthwhile? Are you really 100% certain you need that new feature 
enough to make all those strangers need to do all that work? Can you 
give a concrete example of something which is gained by:

 The hasobject member of the PyArray_Descr structure should be renamed to 
 flags and converted to a 32-bit integer.

Try to look 12 months into the future and ask yourselves if it was 
really a good idea to break the ABI.

Jon


[Numpy-discussion] ANNOUNCE: ETS 3.0.0 released!

2008-08-16 Thread Dave Peterson
Hello,

I'm pleased to announce that ETS 3.0.0 has just been tagged and
released! Source distributions have been pushed to PyPi and over the
next couple hours, Win32 and OSX binaries will also be uploaded to PyPi.
This means you can install ETS, assuming you have the prereq software
installed, via the simple command:
easy_install ETS[nonets]

Please see the Install page on our wiki for more detailed installation
instructions:
https://svn.enthought.com/enthought/wiki/Install


Developers of ETS will find that the projects' trunks have already been
bumped up to the next version numbers, so a simple "ets up" (or "svn up")
should bring you up to date. Others may wish to grab a complete new
checkout via an "ets co ETS". The release branches that had been created
are now removed. The next release is currently expected to be ETS 3.0.1.


-- Dave

Enthought Tool Suite
---

The Enthought Tool Suite (ETS) is a collection of components developed
by Enthought and open source participants, which we use every day to
construct custom scientific applications. It includes a wide variety of
components, including:

* an extensible application framework
* application building blocks
* 2-D and 3-D graphics libraries
* scientific and math libraries
* developer tools

The cornerstone on which these tools rest is the Traits package, which
provides explicit type declarations in Python; its features include
initialization, validation, delegation, notification, and visualization
of typed attributes. More information is available for all the packages
within ETS from the Enthought Tool Suite development home page at
http://code.enthought.com/projects/tool-suite.php.


Testimonials


I set out to rebuild an application in one week that had been developed
over the last seven years (in C by generations of post-docs). Pyface and
Traits were my cornerstones and I knew nothing about Pyface or Wx. It
has been a hectic week. But here ... sits in front of me a nice
application that does most of what it should. I think this has been a
huge success. ... Thanks to the tools Enthought built, and thanks to the
friendly support from people on the [enthought-dev] list, I have been
able to build what I think is the best application so far. I have built
similar applications (controlling cameras for imaging Bose-Einstein
condensate) in C+MFC, Matlab, and C+labWindows, each time it has taken
me at least four times longer to get to a result I regard as inferior.
So I just wanted to say a big thank you. Thank you to Enthought for
providing this great software open-source. Thank you for everybody on
the list for your replies.
— Gaël Varoquaux, Laboratoire Charles Fabry, Institut d’Optique,
Palaiseau, France

I'm currently writing a realtime data acquisition/display application …
I'm using Enthought Tool Suite and Traits, and Chaco for display. IMHO,
I think that in five years ETS/Traits will be the most commonly used
framework for scientific applications.
— Gary Pajer, Department of Chemistry, Biochemistry and Physics, Rider
University, Lawrenceville NJ





Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 3:02 PM, Charles R Harris [EMAIL PROTECTED]
 wrote:



 On Sat, Aug 16, 2008 at 2:46 PM, David Cournapeau [EMAIL PROTECTED]wrote:

 On Sat, Aug 16, 2008 at 2:07 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
 
 
  There is seems to be a problem in defining the functions called for the
  different types.

 I don't know enough about this part of the code to be sure about the
 whole function calls stack, but I would guess this is not surprising
 and even expected: you force long double to be double, but you still
 call the long double function if you do np.exp, so this cannot work.


 Part of the problem is that the loops/functions called depend on the
 typecode, i.e., dtype.char, which stays as 'g' even when the underlying type
 is float64. We can't really fix that because the C-API still allows folks to
 create arrays using the typecode and we would have to fudge things so the
 right wrong descr was returned. Ugly, and a left over from numeric which
 defined types to match the c types instead of the precision. Maybe we should
 rewrite in FORTRAN ;) Anyway, I think the easiest solution might be to use
 npy_func internally and add a -DNOLONGDOUBLE flag to override the values
 returned by the distutils test code. Whether we should try such a big
 modification before 1.2 is another question.


Another place this could be fixed for the ufuncs is in the ufunc code
generator, i.e., in __umath_generated.c lines like

exp_functions[2] = PyUFunc_g_g;
exp_data[2] = (void *) expl;

could have expl replaced by exp. But there are likely other problems that
will need fixing.

Too bad there isn't a flag in gcc to automatically compile long doubles as
doubles. There *is* one to compile floats as doubles.

Chuck


Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread David Cournapeau
On Sat, Aug 16, 2008 at 5:18 PM, Charles R Harris
[EMAIL PROTECTED] wrote:


 could have expl replaced by exp. But there are likely other problems that
 will need fixing.

I think this is a red herring. Does it really make sense to force
configuring long double as double if the C runtime and compiler do
support long double as a different type than double ? The problem is
only a concern on windows because of compiler bugs (or more exactly a
compiler/runtime mismatch): on windows, long doubles are doubles, and so
long double functions are given 8-byte doubles, which is why it should
work.

My only worry is related to ticket 891
(http://projects.scipy.org/scipy/numpy/ticket/891), which confused me
around this "long double is double" thing.

Anyway, that will have to wait for 1.3.

cheers,

David


Re: [Numpy-discussion] long double woes on win32

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 5:46 PM, David Cournapeau [EMAIL PROTECTED]wrote:

 On Sat, Aug 16, 2008 at 5:18 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
 

  could have expl replaced by exp. But there are likely other problems that
  will need fixing.

 I think this is red-herring. Does it really make sense to force
 configuring long double as double if the C runtime and compiler do
 support long double as a different type than double ? The problem is
 only a concern on windows because of compilers bugs (or more exactly
 compiler/runtime mismatch): on windows, long double are double, and so
 long double functions are given 8 bytes double, which is why it should
 work.


Yeah, I think that is right. Although it would be nice if things were set up
so a few simple defines would make everything work. There is really no
reason it couldn't work, just that numpy wasn't designed that way.

Anyway, where does mingw get its math.h include file? I suspect it might be
a problem if it doesn't match the windows version because the test code might
not correctly detect the existence, or lack thereof, of various functions.

Chuck


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 3:02 PM, Jon Wright [EMAIL PROTECTED] wrote:
snip



 Try to look 12 months into the future and ask yourselves if it was
 really a good idea to break the ABI.


I'm slowly coming to the conviction that there should be no C-ABI changes in
1.2. And maybe not in 1.3 either; instead we should work on fixing bugs and
beating the code into shape. The selling point of 1.1.x is python 2.3
compatibility; the selling point of 1.2.x should be bug fixes and cleanups,
including testing and documentation. ABI changes should be scheduled for
2.0. At some point down the road we can start thinking of Python 3.0, but I
don't expect Python 3.0 adoption to be quick: it is a big step and a lot of
projects will have to make the transition before it becomes compelling.

Chuck


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread David Cournapeau
On Sat, Aug 16, 2008 at 11:16 PM, Charles R Harris
[EMAIL PROTECTED] wrote:

 I'm slowly coming to the conviction that there should be no C-ABI changes in
 1.2.

It does not make sense to revert those changes anymore, but we keep
having those discussions, and I still don't understand whether there
is a consensus here. Breaking a bit or breaking a lot is the same: we
broke API in 1.1, we are breaking in 1.2, will we break in 1.3 ? If we
we will always break when we need to, at least I would like to be
clear on that.

I personally think it is wrong to break API and ABI. I don't care
about the version where we break; I think the focus on the version is
a bit misplaced, because in python (and numpy), the version is
different from what is the norm in other big open source projects
anyway (in any open source project I know, breaking between N.x and
N.x+1 is a bug).

The focus should be the period of time you can rely on a stable API
(and ABI). Changing the API every three months sounds really wrong to
me, whatever the version is. Providing that kind of stability is something
that most if not all successful open source projects that others build on
(gtk, qt, kde, etc.) manage to do. I don't see what's so different about
numpy that we can afford to do something those other projects can't.

cheers,

David


[Numpy-discussion] Possible new multiplication operators for Python

2008-08-16 Thread Fernando Perez
Hi all,

[ please keep all replies to this only on the numpy list.  I'm cc'ing
the scipy ones to make others aware of the topic, but do NOT reply on
those lists so we can have an organized thread for future reference]

In the Python-dev mailing lists, there were recently two threads
regarding the possibility of adding to the language new multiplication
operators (amongst others).  This would allow one to define things
like an element-wise and a matrix product for numpy arrays, for
example:

http://mail.python.org/pipermail/python-dev/2008-July/081508.html
http://mail.python.org/pipermail/python-dev/2008-July/081551.html

It turns out that there's an old pep on this issue:

http://www.python.org/dev/peps/pep-0225/

which hasn't been ruled out, simply postponed.  At this point it seems
that there is room for some discussion, and obviously the input of the
numpy/scipy crowd would be very welcome.  I volunteered to host a BOF
next week at scipy so we could collect feedback from those present,
but it's important that those NOT present at the conference can
equally voice their ideas/opinions.

So I wanted to open this thread here to collect feedback.  We'll then
try to have the bof next week at the conference, and I'll summarize
everything for python-dev.  Obviously this doesn't mean that we'll get
any changes in, but at least there's interest in discussing a topic
that has been dear to everyone here.
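For anyone who has not followed the python-dev threads, the distinction at
stake is the one numpy users already juggle today (a small sketch; the
proposed infix operators themselves do not exist in Python):

import numpy as np

a = np.arange(6.0).reshape(2, 3)
b = np.arange(6.0).reshape(3, 2)

elementwise = a * a      # '*' is elementwise for ndarrays
matrix = np.dot(a, b)    # the matrix product needs a function call today

# PEP 225 floated dedicated operator pairs (e.g. a ~* b) so that both
# products could be written as infix operators.
print(elementwise.shape, matrix.shape)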

Cheers,

f


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread David Cournapeau
On Sat, Aug 16, 2008 at 11:59 PM, David Cournapeau [EMAIL PROTECTED] wrote:
 On Sat, Aug 16, 2008 at 11:16 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:

 I'm slowly coming to the conviction that there should be no C-ABI changes in
 1.2.

 It does not make sense to revert those changes anymore,

Actually, I did not follow the discussion when this change happened,
but it does not look difficult to change the code such that we do not
break the ABI. Instead of replacing the flag, we can put it at the
end, and deprecate (but not remove) the old one.

Would anyone be strongly against that ?


Re: [Numpy-discussion] Possible new multiplication operators for Python

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 11:03 PM, Fernando Perez [EMAIL PROTECTED]wrote:

 Hi all,

 [ please keep all replies to this only on the numpy list.  I'm cc'ing
 the scipy ones to make others aware of the topic, but do NOT reply on
 those lists so we can have an organized thread for future reference]

 In the Python-dev mailing lists, there were recently two threads
 regarding the possibility of adding to the language new multiplication
 operators (amongst others).  This would allow one to define things
 like an element-wise and a matrix product for numpy arrays, for
 example:

 http://mail.python.org/pipermail/python-dev/2008-July/081508.html
 http://mail.python.org/pipermail/python-dev/2008-July/081551.html

 It turns out that there's an old pep on this issue:

 http://www.python.org/dev/peps/pep-0225/

 which hasn't been ruled out, simply postponed.  At this point it seems
 that there is room for some discussion, and obviously the input of the
 numpy/scipy crowd would be very welcome.  I volunteered to host a BOF
 next week at scipy so we could collect feedback from those present,
 but it's important that those NOT present at the conference can
 equally voice their ideas/opinions.

 So I wanted to open this thread here to collect feedback.  We'll then
 try to have the bof next week at the conference, and I'll summarize
 everything for python-dev.  Obviously this doesn't mean that we'll get
 any changes in, but at least there's interest in discussing a topic
 that has been dear to everyone here.


Geez, some of those folks in those threads are downright rude.

Chuck


Re: [Numpy-discussion] Possible new multiplication operators for Python

2008-08-16 Thread Fernando Perez
On Sat, Aug 16, 2008 at 10:22 PM, Charles R Harris
[EMAIL PROTECTED] wrote:

 Geez, some of those folks in those threads are downright rude.

Python-dev is nowhere near as civil as these lists, which I consider
to be an asset of ours which we should always strive to protect.  In
this list even strong disagreement is mostly very well handled, and
hopefully we'll keep this tradition as we grow.

Python-dev may not be as bad as the infamous LKML, but it can
certainly be unpleasant at times (this is one more reason to praise
Travis O for all the work he's pushed over there, he certainly had to
take quite a few gang-like beatings in the process).

No worries though: I'll collect all the feedback and report back up there.

Cheers,

f


Re: [Numpy-discussion] C-API change for 1.2

2008-08-16 Thread Charles R Harris
On Sat, Aug 16, 2008 at 11:21 PM, David Cournapeau [EMAIL PROTECTED]wrote:

 On Sat, Aug 16, 2008 at 11:59 PM, David Cournapeau [EMAIL PROTECTED]
 wrote:
  On Sat, Aug 16, 2008 at 11:16 PM, Charles R Harris
  [EMAIL PROTECTED] wrote:
 
  I'm slowly coming to the conviction that there should be no C-ABI
 changes in
  1.2.
 
  It does not make sense to revert those changes anymore,

 Actually, I did not follow the discussion when this change happened,
 but it does not look difficult to change the code such as we do not
 break the ABI. Instead of replacing the flag, we can put it at the
 end, and deprecate (but not remove) the old one.

 Would anyone be strongly against that ?


I have nothing against extensions when they can be made to serve. If a
dictionary gets added to ndarrays I hope it is done that way, likewise for
generalized ufuncs. In the present case I think Travis wants to preserve the
functionality while changing the name and type, and that doesn't really fit
the extension model. But I might be wrong about that.

Chuck