[Numpy-discussion] change default integer from int32 to int64 on win64?

2014-07-23 Thread Julian Taylor
hi,
it recently came to my attention that the default integer type in numpy
on windows 64 bit is a 32 bit integer [0].
This seems like a quite serious problem as it means you can't use any
integers created from python integers > 32 bit to index arrays larger
than 2GB.
For example np.product(array.shape), which will never overflow on linux
and mac, can overflow on win64.
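
A minimal sketch of the failure mode, forcing dtype=np.int32 to simulate
the win64 default on any platform (on an actual win64 build the plain
np.product call takes this path):

import numpy as np

# Simulate win64's 32 bit default integer by forcing the accumulator
# dtype; on win64 itself np.product(shape) behaves like the first call.
shape = (50000, 50000)                 # 2.5e9 elements, more than 2**31 - 1
print(np.prod(shape, dtype=np.int32))  # -1794967296, silently wrapped
print(np.prod(shape, dtype=np.int64))  # 2500000000, correct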

I think this is a very dangerous platform difference and a quite large
inconvenience for win64 users, so it would be good to fix it.
This would be a very large change of API and probably also ABI.
But as we also never officially released win64 binaries, we could change
it for from-source compilations and give win64 binary distributors the
option to keep the old ABI/API at their discretion.

Any thoughts on this from win64 users?

Cheers,
Julian Taylor

[0] https://github.com/astropy/astropy/pull/2697


Re: [Numpy-discussion] __numpy_ufunc__ and 1.9 release

2014-07-23 Thread Julian Taylor
On 15.07.2014 20:06, Julian Taylor wrote:
 hi,
 as you may know we want to release numpy 1.9 soon. We should have solved
 most indexing regressions the first beta showed.
 
 The remaining blockers are finishing the new __numpy_ufunc__ feature.
 This feature should provide an alternative method for overriding the
 behavior of ufuncs from subclasses.
 It is described here:
 https://github.com/numpy/numpy/blob/master/doc/neps/ufunc-overrides.rst
 
 The current blocker issues are:
 https://github.com/numpy/numpy/issues/4753
 https://github.com/numpy/numpy/pull/4815
 
 I'm not too familiar with all the complications of subclassing, so I
 can't really say how hard this is to solve.
 My issue is that there still seems to be debate on how to handle
 operator overriding correctly, and I am opposed to releasing a numpy with
 yet another experimental feature that may or may not be finished
 sometime later. Having datetime in an infinite experimental state is bad
 enough.
 I think nobody is served well if we release 1.9 with the feature
 prematurely, based on an unrepresentative set of users, only to find
 later, after more users have shown up, that we have to change its behavior.
 
 So I'm wondering: should we delay the introduction of this feature to
 1.10, or is it important enough to wait until there is a consensus on
 the remaining issues?
 

So it's been a week and we got a few answers and new issues.
To summarize:
- to my knowledge no progress was made on the issues
- scipy already has a released version using the current implementation
- no very loud objections to delaying the feature to 1.10
- I am still unfamiliar with the complications of subclassing, but don't
want to release something new which has unsolved issues.

That scipy already uses it in a released version (0.14) is very
problematic. Can someone maybe give some insight into whether the
potential changes to resolve the remaining issues would break scipy?
If so we have the following choices:
- declare what we have as final and close the remaining issues as 'won't
fix'. Any changes would then have to use a new name, __numpy_ufunc2__,
or a somehow versioned interface
- delay the introduction, potentially breaking scipy 0.14 when numpy
1.10 is released.

I would like to get the next (and last) numpy 1.9 beta out soon, so I
would propose to make a decision by this Saturday, 26.07.2014, however
misinformed it may be.

Please note that the numpy 1.10 release cycle is likely going to be a
very long one as we are currently planning to change a bunch of default
behaviours that currently raise deprecation warnings and possibly will
try to fix string types, text IO and datetime. Please see the notes on
future changes in the current 1.9.x release notes.
If we delay numpy_ufunc it is not unlikely that it will take a year
until we release 1.10, though we could still put it into an earlier 1.9.1.

Cheers,
Julian


Re: [Numpy-discussion] change default integer from int32 to int64 on win64?

2014-07-23 Thread Robert Kern
On Wed, Jul 23, 2014 at 6:19 PM, Julian Taylor
jtaylor.deb...@googlemail.com wrote:
 hi,
 it recently came to my attention that the default integer type in numpy
 on windows 64 bit is a 32 bit integer [0].
 This seems like a quite serious problem as it means you can't use any
 integers created from python integers > 32 bit to index arrays larger
 than 2GB.
 For example np.product(array.shape), which will never overflow on linux
 and mac, can overflow on win64.

Currently, on win64, we use Python long integer objects for `.shape`
and related attributes. I wonder if we could return numpy int64
scalars instead. Then np.product() (or anything else that consumes
these via np.asarray()) would infer the correct dtype for the result.
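
A minimal sketch of what that would buy, emulating the proposal by hand
(the tuple of np.intp scalars below is a hypothetical stand-in for a
changed `.shape`):

import numpy as np

# Hand-built stand-in for the proposed .shape: np.intp scalars instead
# of plain Python ints/longs.
shape_proposed = tuple(np.intp(n) for n in (50000, 50000))

# asarray infers the dtype from the scalars, so reductions get a 64 bit
# accumulator on every 64-bit platform, including win64.
print(np.asarray(shape_proposed).dtype)  # intp, i.e. int64 on 64-bit builds
print(np.prod(shape_proposed))           # 2500000000, no wraparound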

 I think this is a very dangerous platform difference and a quite large
 inconvenience for win64 users so I think it would be good to fix this.
 This would be a very large change of API and probably also ABI.

Yes. Not only would it be a very large change from the status quo, I
think it introduces *much greater* platform difference than what we
have currently. The assumption that the default integer object
corresponds to the platform C long, whatever that is, is pretty
heavily ingrained.

 But as we also never officially released win64 binaries we could change
 it for from source compilations and give win64 binary distributors the
 option to keep the old ABI/API at their discretion.

That option would make the problem worse, not better.

-- 
Robert Kern


Re: [Numpy-discussion] change default integer from int32 to int64 on win64?

2014-07-23 Thread Julian Taylor
On 23.07.2014 20:54, Robert Kern wrote:
 On Wed, Jul 23, 2014 at 6:19 PM, Julian Taylor
 jtaylor.deb...@googlemail.com wrote:
 hi,
 it recently came to my attention that the default integer type in numpy
 on windows 64 bit is a 32 bit integer [0].
 This seems like a quite serious problem as it means you can't use any
 integers created from python integers > 32 bit to index arrays larger
 than 2GB.
 For example np.product(array.shape), which will never overflow on linux
 and mac, can overflow on win64.
 
 Currently, on win64, we use Python long integer objects for `.shape`
 and related attributes. I wonder if we could return numpy int64
 scalars instead. Then np.product() (or anything else that consumes
 these via np.asarray()) would infer the correct dtype for the result.

this might be a less invasive alternative that could solve a lot of the
incompatibilities, but it would probably also change np.arange(5) and
similar functions to int64, which might change the dtype of a lot of
arrays. The difference from just changing it everywhere might not be so
large anymore.

 
 I think this is a very dangerous platform difference and a quite large
 inconvenience for win64 users so I think it would be good to fix this.
 This would be a very large change of API and probably also ABI.
 
 Yes. Not only would it be a very large change from the status quo, I
 think it introduces *much greater* platform difference than what we
 have currently. The assumption that the default integer object
 corresponds to the platform C long, whatever that is, is pretty
 heavily ingrained.

This should only be a concern for the ABI, which can be solved by simply
recompiling.
By comparison, the API being different on win64 from all other platforms
is something that needs source-level changes.

 
 But as we also never officially released win64 binaries we could change
 it for from source compilations and give win64 binary distributors the
 option to keep the old ABI/API at their discretion.
 
 That option would make the problem worse, not better.
 

maybe, I'm not familiar with the numpy win64 distribution landscape.
Is it not like linux, where you have one distributor per workstation
setup that can update all its packages to a new ABI in one go?


Re: [Numpy-discussion] change default integer from int32 to int64 on win64?

2014-07-23 Thread Robert Kern
On Wed, Jul 23, 2014 at 8:50 PM, Julian Taylor
jtaylor.deb...@googlemail.com wrote:
 On 23.07.2014 20:54, Robert Kern wrote:
 On Wed, Jul 23, 2014 at 6:19 PM, Julian Taylor
 jtaylor.deb...@googlemail.com wrote:
 hi,
 it recently came to my attention that the default integer type in numpy
 on windows 64 bit is a 32 bit integer [0].
 This seems like a quite serious problem as it means you can't use any
 integers created from python integers > 32 bit to index arrays larger
 than 2GB.
 For example np.product(array.shape), which will never overflow on linux
 and mac, can overflow on win64.

 Currently, on win64, we use Python long integer objects for `.shape`
 and related attributes. I wonder if we could return numpy int64
 scalars instead. Then np.product() (or anything else that consumes
 these via np.asarray()) would infer the correct dtype for the result.

 this might be a less invasive alternative that might solve a lot of the
 incompatibilities, but it would probably also change np.arange(5) and
 similar functions to int64 which might change the dtype of a lot of
 arrays. The difference to just changing it everywhere might not be so
 large anymore.

No, np.arange(5) would not change behavior given my suggestion, only
the type of the integer objects in ndarray.shape and related tuples.

 I think this is a very dangerous platform difference and a quite large
 inconvenience for win64 users so I think it would be good to fix this.
 This would be a very large change of API and probably also ABI.

 Yes. Not only would it be a very large change from the status quo, I
 think it introduces *much greater* platform difference than what we
 have currently. The assumption that the default integer object
 corresponds to the platform C long, whatever that is, is pretty
 heavily ingrained.

 This should be only a concern for the ABI which can be solved by simply
 recompiling.
 In comparison that the API is different on win64 compared to all other
 platforms is something that needs source level changes.

No, the API is no different on win64 than other platforms. Why do you
think it is? The win64 platform is a weird platform in this respect,
having made a choice that other 64-bit platforms didn't, but numpy's
API treats it consistently. When we say that something is a C long,
it's a C long on all platforms.
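
That consistency is easy to check from Python (a sketch; np.int_ maps to
the platform C long throughout the numpy 1.x series):

import numpy as np
import ctypes

# np.int_ is the platform C long, so its size matches ctypes.c_long
# everywhere: 4 bytes on win64, 8 bytes on 64-bit linux and mac.
assert np.dtype(np.int_).itemsize == ctypes.sizeof(ctypes.c_long)
print(np.dtype(np.int_).itemsize)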

 But as we also never officially released win64 binaries we could change
 it for from source compilations and give win64 binary distributors the
 option to keep the old ABI/API at their discretion.

 That option would make the problem worse, not better.

 maybe, I'm not familiar with the numpy win64 distribution landscape.
 Is it not like linux where you have one distributor per workstation
 setup that can update all its packages to a new ABI on one go?

No. There tend to be multiple providers.

-- 
Robert Kern


Re: [Numpy-discussion] change default integer from int32 to int64 on win64?

2014-07-23 Thread Sebastian Berg
On Wed, 2014-07-23 at 21:50 +0200, Julian Taylor wrote:
 On 23.07.2014 20:54, Robert Kern wrote:
  On Wed, Jul 23, 2014 at 6:19 PM, Julian Taylor
  jtaylor.deb...@googlemail.com wrote:
  hi,
  it recently came to my attention that the default integer type in numpy
  on windows 64 bit is a 32 bit integer [0].
  This seems like a quite serious problem as it means you can't use any
  integers created from python integers > 32 bit to index arrays larger
  than 2GB.
  For example np.product(array.shape), which will never overflow on linux
  and mac, can overflow on win64.
  
  Currently, on win64, we use Python long integer objects for `.shape`
  and related attributes. I wonder if we could return numpy int64
  scalars instead. Then np.product() (or anything else that consumes
  these via np.asarray()) would infer the correct dtype for the result.
 
 this might be a less invasive alternative that might solve a lot of the
 incompatibilities, but it would probably also change np.arange(5) and
 similar functions to int64 which might change the dtype of a lot of
 arrays. The difference to just changing it everywhere might not be so
 large anymore.
 

Aren't most such functions already using intp? Just guessing, but:

In [16]: np.arange(30, dtype=np.long).dtype.num
Out[16]: 9

In [17]: np.arange(30, dtype=np.intp).dtype.num
Out[17]: 7

In [18]: np.arange(30).dtype.num
Out[18]: 7

frankly, I am not sure what needs to change at all, except the normal
array creation and the sum promotion rule. I am probably naive here, but
what is the ABI change that is necessary for that?

I guess the problem you see is breaking code doing np.array([1,2,3]) and
then assuming in C that it is a long array?
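
Stated in Python terms, the hazard is roughly the following (a sketch;
the real cases are C extensions requesting NPY_LONG buffers without
checking the dtype first):

import numpy as np

# The ingrained assumption: the default integer dtype is a C long.
# A C extension doing e.g. PyArray_FROM_OTF(obj, NPY_LONG, ...) would
# start getting silent casts/copies if the default became int64 while
# the C long stays 32 bit on win64.
a = np.array([1, 2, 3])
print(a.dtype == np.dtype(np.int_))  # True today on every platform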

- Sebastian

  
  I think this is a very dangerous platform difference and a quite large
  inconvenience for win64 users so I think it would be good to fix this.
  This would be a very large change of API and probably also ABI.
  
  Yes. Not only would it be a very large change from the status quo, I
  think it introduces *much greater* platform difference than what we
  have currently. The assumption that the default integer object
  corresponds to the platform C long, whatever that is, is pretty
  heavily ingrained.
 
 This should be only a concern for the ABI which can be solved by simply
 recompiling.
 In comparison that the API is different on win64 compared to all other
 platforms is something that needs source level changes.
 
  
  But as we also never officially released win64 binaries we could change
  it for from source compilations and give win64 binary distributors the
  option to keep the old ABI/API at their discretion.
  
  That option would make the problem worse, not better.
  
 
 maybe, I'm not familiar with the numpy win64 distribution landscape.
 Is it not like linux where you have one distributor per workstation
 setup that can update all its packages to a new ABI on one go?


Re: [Numpy-discussion] change default integer from int32 to int64 on win64?

2014-07-23 Thread Sebastian Berg
On Wed, 2014-07-23 at 22:06 +0200, Sebastian Berg wrote:
 On Wed, 2014-07-23 at 21:50 +0200, Julian Taylor wrote:
  On 23.07.2014 20:54, Robert Kern wrote:
   On Wed, Jul 23, 2014 at 6:19 PM, Julian Taylor
   jtaylor.deb...@googlemail.com wrote:
   hi,
   it recently came to my attention that the default integer type in numpy
   on windows 64 bit is a 32 bit integer [0].
   This seems like a quite serious problem as it means you can't use any
   integers created from python integers > 32 bit to index arrays larger
   than 2GB.
   For example np.product(array.shape), which will never overflow on linux
   and mac, can overflow on win64.
   
   Currently, on win64, we use Python long integer objects for `.shape`
   and related attributes. I wonder if we could return numpy int64
   scalars instead. Then np.product() (or anything else that consumes
   these via np.asarray()) would infer the correct dtype for the result.
  
  this might be a less invasive alternative that might solve a lot of the
  incompatibilities, but it would probably also change np.arange(5) and
  similar functions to int64 which might change the dtype of a lot of
  arrays. The difference to just changing it everywhere might not be so
  large anymore.
  
 
 Aren't most such functions already using intp? Just guessing, but:
 
 In [16]: np.arange(30, dtype=np.long).dtype.num
 Out[16]: 9
 
 In [17]: np.arange(30, dtype=np.intp).dtype.num
 Out[17]: 7
 
 In [18]: np.arange(30).dtype.num
 Out[18]: 7
 

Oops, never mind that stuff, probably not... np.int_ is 7 too; this is
just the way intp is chosen.

 frankly, I am not sure what needs to change at all, except the normal
 array creation and the sum promotion rule. I am probably naive here, but
 what is the ABI change that is necessary for that?
 
 I guess the problem you see is breaking code doing np.array([1,2,3]) and
 then assuming in C that it is a long array?
 
 - Sebastian
 
   
   I think this is a very dangerous platform difference and a quite large
   inconvenience for win64 users so I think it would be good to fix this.
   This would be a very large change of API and probably also ABI.
   
   Yes. Not only would it be a very large change from the status quo, I
   think it introduces *much greater* platform difference than what we
   have currently. The assumption that the default integer object
   corresponds to the platform C long, whatever that is, is pretty
   heavily ingrained.
  
  This should be only a concern for the ABI which can be solved by simply
  recompiling.
  In comparison that the API is different on win64 compared to all other
  platforms is something that needs source level changes.
  
   
   But as we also never officially released win64 binaries we could change
   it for from source compilations and give win64 binary distributors the
   option to keep the old ABI/API at their discretion.
   
   That option would make the problem worse, not better.
   
  
  maybe, I'm not familiar with the numpy win64 distribution landscape.
  Is it not like linux where you have one distributor per workstation
  setup that can update all its packages to a new ABI on one go?


Re: [Numpy-discussion] change default integer from int32 to int64 on win64?

2014-07-23 Thread Julian Taylor
On 23.07.2014 22:04, Robert Kern wrote:
 On Wed, Jul 23, 2014 at 8:50 PM, Julian Taylor
 jtaylor.deb...@googlemail.com wrote:
 On 23.07.2014 20:54, Robert Kern wrote:
 On Wed, Jul 23, 2014 at 6:19 PM, Julian Taylor
 jtaylor.deb...@googlemail.com wrote:
 hi,
 it recently came to my attention that the default integer type in numpy
 on windows 64 bit is a 32 bit integer [0].
 This seems like a quite serious problem as it means you can't use any
 integers created from python integers > 32 bit to index arrays larger
 than 2GB.
 For example np.product(array.shape), which will never overflow on linux
 and mac, can overflow on win64.

 Currently, on win64, we use Python long integer objects for `.shape`
 and related attributes. I wonder if we could return numpy int64
 scalars instead. Then np.product() (or anything else that consumes
 these via np.asarray()) would infer the correct dtype for the result.

 this might be a less invasive alternative that might solve a lot of the
 incompatibilities, but it would probably also change np.arange(5) and
 similar functions to int64 which might change the dtype of a lot of
 arrays. The difference to just changing it everywhere might not be so
 large anymore.
 
 No, np.arange(5) would not change behavior given my suggestion, only
 the type of the integer objects in ndarray.shape and related tuples.

The elements of ndarray.shape are not numpy scalars but python objects,
so they would always be converted back to 32 bit integers when given
back to numpy.

 
 I think this is a very dangerous platform difference and a quite large
 inconvenience for win64 users so I think it would be good to fix this.
 This would be a very large change of API and probably also ABI.

 Yes. Not only would it be a very large change from the status quo, I
 think it introduces *much greater* platform difference than what we
 have currently. The assumption that the default integer object
 corresponds to the platform C long, whatever that is, is pretty
 heavily ingrained.

 This should be only a concern for the ABI which can be solved by simply
 recompiling.
 In comparison that the API is different on win64 compared to all other
 platforms is something that needs source level changes.
 
 No, the API is no different on win64 than other platforms. Why do you
 think it is? The win64 platform is a weird platform in this respect,
 having made a choice that other 64-bit platforms didn't, but numpy's
 API treats it consistently. When we say that something is a C long,
 it's a C long on all platforms.

The API is different if you consider it from a python perspective.
The default integer dtype should be sufficiently large to index into any
numpy array, that's what I call an API here. win64 behaves differently:
you have to explicitly upcast your index to be able to index all memory.
But API or ABI is just semantics here; what I actually mean is the
difference between source changes and recompiling to deal with the issue.
Of course there might be C code that needs more than recompiling, but it
should not be that much; it would have to be already somewhat
broken/restrictive code that uses numpy buffers without first checking
which type it has.

There can also be python code that might need source changes, e.g. code
memory mapping a win32 binary with np.int_, assuming np.int_ is also 32
bit on win64; but such code would already be broken on linux and mac now.
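
A small sketch of that binary-I/O point (the file name and contents are
made up for illustration):

import numpy as np

# Hypothetical file written by a win32 program: four 32 bit integers.
np.arange(4, dtype=np.int32).tofile('records.bin')

print(np.fromfile('records.bin', dtype=np.int32))  # [0 1 2 3] everywhere
print(np.fromfile('records.bin', dtype=np.int_))   # misread wherever
# np.int_ is 64 bit (linux/mac today): pairs of int32 values are fused
# into two bogus int64 values, so such code is already non-portable.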

 But as we also never officially released win64 binaries we could change
 it for from source compilations and give win64 binary distributors the
 option to keep the old ABI/API at their discretion.

 That option would make the problem worse, not better.

 maybe, I'm not familiar with the numpy win64 distribution landscape.
 Is it not like linux where you have one distributor per workstation
 setup that can update all its packages to a new ABI on one go?
 
 No. There tend to be multiple providers.
 



Re: [Numpy-discussion] change default integer from int32 to int64 on win64?

2014-07-23 Thread Robert Kern
On Wed, Jul 23, 2014 at 9:34 PM, Julian Taylor
jtaylor.deb...@googlemail.com wrote:
 On 23.07.2014 22:04, Robert Kern wrote:
 On Wed, Jul 23, 2014 at 8:50 PM, Julian Taylor
 jtaylor.deb...@googlemail.com wrote:
 On 23.07.2014 20:54, Robert Kern wrote:
 On Wed, Jul 23, 2014 at 6:19 PM, Julian Taylor
 jtaylor.deb...@googlemail.com wrote:
 hi,
 it recently came to my attention that the default integer type in numpy
 on windows 64 bit is a 32 bit integer [0].
 This seems like a quite serious problem as it means you can't use any
 integers created from python integers > 32 bit to index arrays larger
 than 2GB.
 For example np.product(array.shape), which will never overflow on linux
 and mac, can overflow on win64.

 Currently, on win64, we use Python long integer objects for `.shape`
 and related attributes. I wonder if we could return numpy int64
 scalars instead. Then np.product() (or anything else that consumes
 these via np.asarray()) would infer the correct dtype for the result.

 this might be a less invasive alternative that might solve a lot of the
 incompatibilities, but it would probably also change np.arange(5) and
 similar functions to int64 which might change the dtype of a lot of
 arrays. The difference to just changing it everywhere might not be so
 large anymore.

 No, np.arange(5) would not change behavior given my suggestion, only
 the type of the integer objects in ndarray.shape and related tuples.

 ndarray.shape are not numpy scalars but python objects, so they would
 always be converted back to 32 bit integers when given back to numpy.

That's what I'm suggesting that we change: make
`type(ndarray.shape[i])` be `np.intp` instead of `long`.

However, I'm not sure that this is an issue with numpy 1.8.0 at least.
I can't reproduce the reported problem on Win64:

In [12]: import numpy as np

In [13]: from numpy.lib import stride_tricks

In [14]: import sys

In [15]: b = stride_tricks.as_strided(np.zeros(1), shape=(10,
20, 40), strides=(0, 0, 0))

In [16]: b.shape
Out[16]: (10L, 20L, 40L)

In [17]: np.product(b.shape)
Out[17]: 8000

In [18]: np.product(b.shape).dtype
Out[18]: dtype('int64')

In [19]: sys.maxint
Out[19]: 2147483647

In [20]: np.__version__
Out[20]: '1.8.0'

In [21]: np.array(b.shape)
Out[21]: array([10, 20, 40], dtype=int64)


This is on Python 2.7, so maybe something got weird in the Python 3
version that Chris Gohlke tested?

 I think this is a very dangerous platform difference and a quite large
 inconvenience for win64 users so I think it would be good to fix this.
 This would be a very large change of API and probably also ABI.

 Yes. Not only would it be a very large change from the status quo, I
 think it introduces *much greater* platform difference than what we
 have currently. The assumption that the default integer object
 corresponds to the platform C long, whatever that is, is pretty
 heavily ingrained.

 This should be only a concern for the ABI which can be solved by simply
 recompiling.
 In comparison that the API is different on win64 compared to all other
 platforms is something that needs source level changes.

 No, the API is no different on win64 than other platforms. Why do you
 think it is? The win64 platform is a weird platform in this respect,
 having made a choice that other 64-bit platforms didn't, but numpy's
 API treats it consistently. When we say that something is a C long,
 it's a C long on all platforms.

 The API is different if you consider it from a python perspective.
 The default integer dtype should be sufficiently large to index into any
 numpy array, that's what I call an API here.

That's perhaps what you want, but numpy has never claimed to do this.
The numpy project deliberately chose (and is so documented) to make
its default integer type a C long, not a C size_t, to match Python's
default.

 win64 behaves different, you
 have to explicitly upcast your index to be able to index all memory.
 But API or ABI is just semantics here, what I actually mean is the
 difference of source changes vs recompiling to deal with the issue.
 Of course there might be C code that needs more than recompiling, but it
 should not be that much, it would have to be already somewhat
 broken/restrictive code that uses numpy buffers without first checking
 which type it has.

 There can also be python code that might need source changes e.g.
 np.int_ memory mapping a binary from win32 assuming np.int_ is also 32
 bit on win64, but this would be broken on linux and mac already now.

Anything that assumes that np.int_ is any particular fixed size is
always broken, naturally.

-- 
Robert Kern


Re: [Numpy-discussion] change default integer from int32 to int64 on win64?

2014-07-23 Thread Robert Kern
On Wed, Jul 23, 2014 at 9:57 PM, Robert Kern robert.k...@gmail.com wrote:

 That's what I'm suggesting that we change: make
 `type(ndarray.shape[i])` be `np.intp` instead of `long`.

 However, I'm not sure that this is an issue with numpy 1.8.0 at least.
 I can't reproduce the reported problem on Win64:

 In [12]: import numpy as np

 In [13]: from numpy.lib import stride_tricks

 In [14]: import sys

 In [15]: b = stride_tricks.as_strided(np.zeros(1), shape=(10,
 20, 40), strides=(0, 0, 0))

 In [16]: b.shape
 Out[16]: (10L, 20L, 40L)

 In [17]: np.product(b.shape)
 Out[17]: 8000

 In [18]: np.product(b.shape).dtype
 Out[18]: dtype('int64')

 In [19]: sys.maxint
 Out[19]: 2147483647

 In [20]: np.__version__
 Out[20]: '1.8.0'

 In [21]: np.array(b.shape)
 Out[21]: array([10, 20, 40], dtype=int64)


 This is on Python 2.7, so maybe something got weird in the Python 3
 version that Chris Gohlke tested?

Ah yes, naturally. Because there is no separate `long` type in Python
3, np.asarray() can't use the Python type to decide which dtype to
build the array with. Returning np.intp objects in the tuple would
resolve the problem in much the same way it is currently resolved in
Python 2. This would also have the effect of unifying the API on all
platforms: currently, win64 is the only platform where the `.shape`
tuple and related attributes return Python longs instead of Python
ints.

-- 
Robert Kern


Re: [Numpy-discussion] change default integer from int32 to int64 on win64?

2014-07-23 Thread Nathaniel Smith
On Wed, Jul 23, 2014 at 9:57 PM, Robert Kern robert.k...@gmail.com wrote:
 That's perhaps what you want, but numpy has never claimed to do this.
 The numpy project deliberately chose (and is so documented) to make
 its default integer type a C long, not a C size_t, to match Python's
 default.

This is true, but it's not very compelling on its own -- "as big as a
pointer" is a much more useful property than "as big as a long". The
only real reason this made sense in the first place is the equivalence
between Python int and C long, but even that is gone now with Python
3. IMO at this point backcompat is really the only serious reason for
keeping int32 as the default integer type on win64. But of course this
is a pretty serious concern...

Julian: making the change experimentally and checking how badly scipy
and some similar libraries break might be a way to focus the
backcompat discussion more.

-- 
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org


Re: [Numpy-discussion] __numpy_ufunc__ and 1.9 release

2014-07-23 Thread Pauli Virtanen
23.07.2014, 20:37, Julian Taylor wrote:
[clip: __numpy_ufunc__]
 So it's been a week and we got a few answers and new issues. To
 summarize:
 - to my knowledge no progress was made on the issues
 - scipy already has a released version using the current implementation
 - no very loud objections to delaying the feature to 1.10
 - I am still unfamiliar with the complications of subclassing, but
   don't want to release something new which has unsolved issues.
 
 That scipy already uses it in a released version (0.14) is very 
 problematic. Can maybe someone give some insight if the potential 
 changes to resolve the remaining issues would break scipy?
 
 If so we have the following choices:
 
 - declare what we have as final and close the remaining issues as
   'won't fix'. Any changes would then have to use a new name,
   __numpy_ufunc2__, or a somehow versioned interface
 - delay the introduction, potentially breaking scipy 0.14 when numpy
   1.10 is released.
 
 I would like to get the next (and last) numpy 1.9 beta out soon, so
 I would propose to make a decision by this Saturday, 26.07.2014,
 however misinformed it may be.

It seems fairly unlikely to me that the `__numpy_ufunc__` interface
itself requires any changes. I believe the definition of the interface
is quite safe to consider as fixed --- it is a fairly straightforward
hook for Numpy ufuncs. (There are also no essential changes in it
since last year.)

For the binary operator overriding, Scipy sets the constraint that

ndarray * spmatrix

MUST call spmatrix.__rmul__ even if spmatrix.__numpy_ufunc__ is
defined. spmatrices are not ndarray subclasses, and various
subclassing problems do not enter here.

Note that this binop discussion is somewhat separate from the
__numpy_ufunc__ interface itself. The only information available about
it at the binop stage is `hasattr(other, '__numpy_ufunc__')`.

   ***

Regarding the blockers:

(1) https://github.com/numpy/numpy/issues/4753

This is a bug in the argument normalization --- output arguments are
not checked for the presence of __numpy_ufunc__ if they are passed
as keyword arguments (as a positional argument it works). It's a bug
in the implementation, but I don't think it is really a blocker.

Scipy sparse matrices will in practice seldom be used as output args
for ufuncs.

   ***

(2) https://github.com/numpy/numpy/pull/4815

The open question concerns the semantics of `__numpy_ufunc__` versus
Python operator overrides. When should ndarray.__mul__(other) return
NotImplemented?

Scipy sparse matrices are not subclasses of ndarray, so the code in
question in Numpy gets to run only for

ndarray * spmatrix

This provides a constraint to what solution we can choose in Numpy to
deal with the issue:

ndarray.__mul__(spmatrix)  MUST  continue to return NotImplemented

This is the current behavior, and cannot be changed: it is not
possible to defer this to __numpy_ufunc__(ufunc=np.multiply), because
sparse matrices define `*` as the matrix multiply, and not the
elementwise multiply. (This settles one line of discussion in the
issues --- ndarray should defer.)

How Numpy currently determines whether to return NotImplemented in
this case or to call np.multiply(self, other) is by comparing
`__array_priority__` attributes of `self` and `other`. Scipy sparse
matrices define an `__array_priority__` larger than ndarray's, which
then makes NotImplemented be returned.
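
A minimal sketch of this mechanism with a stand-in class (not actual
scipy code; scipy's spmatrix behaves analogously):

import numpy as np

class FakeSpMatrix(object):
    # Higher than ndarray's default priority, so ndarray binops return
    # NotImplemented and Python dispatches to our __rmul__ instead.
    __array_priority__ = 10.1

    def __rmul__(self, other):
        return 'matrix product'  # stand-in for matrix-multiply semantics

a = np.arange(3)
print(a * FakeSpMatrix())  # -> 'matrix product', not an elementwise multiply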

The idea in the __numpy_ufunc__ NEP was to replace this with
`hasattr(other, '__numpy_ufunc__') and hasattr(other, '__rmul__')`.
However, when both self and other are ndarray subclasses in a certain
configuration, both end up returning NotImplemented, and Python raises
TypeError.
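
That failure mode is plain Python binop semantics, sketched here without
numpy:

class Left(object):
    def __mul__(self, other):
        return NotImplemented   # defers to the other operand

class Right(object):
    def __rmul__(self, other):
        return NotImplemented   # also defers

try:
    Left() * Right()
except TypeError as e:
    print(e)  # unsupported operand type(s) for *: 'Left' and 'Right'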

The `__array_priority__` mechanism is also broken in some of the
subclassing cases: https://github.com/numpy/numpy/issues/4766

As far as I see, the backward compatibility requirement from Scipy
only rules out the option that ndarray.__mul__(other) should
unconditionally call `np.multiply(self, other)`.

We have some freedom in how to solve the binop vs. subclass issues. It's
possible to e.g. retain the __array_priority__ stuff as a backward
compatibility measure as we do currently.

-- 
Pauli Virtanen




Re: [Numpy-discussion] change default integer from int32 to int64 on win64?

2014-07-23 Thread Sturla Molden
Julian Taylor jtaylor.deb...@googlemail.com wrote:

 The default integer dtype should be sufficiently large to index into any
 numpy array, that's what I call an API here. win64 behaves differently:
 you have to explicitly upcast your index to be able to index all memory.

No, you don't have to manually upcast Python int to Python long.

Python 2 will automatically create a Python long if you overflow a Python
int.

On Python 3 the Python int does not have a size limit.
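
A hypothetical Python 2 session illustrating the promotion (there is
nothing to show on Python 3, where int is unbounded):

>>> import sys
>>> type(sys.maxint)
<type 'int'>
>>> type(sys.maxint + 1)   # overflowing arithmetic promotes to long
<type 'long'>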


Sturla
