[Numpy-discussion] NumPy 1.12.1 released

2017-03-18 Thread Charles R Harris
Hi All,

I'm pleased to announce the release of NumPy 1.12.1. NumPy 1.12.1 supports
Python 2.7 and 3.4 - 3.6 and fixes bugs and regressions found in NumPy
1.12.0.
In particular, the regression in f2py constant parsing is fixed.

Wheels for Linux, Windows, and OSX can be found on PyPI. Archives can be
downloaded from GitHub.


*Contributors*

A total of 10 people contributed to this release. People with a "+" by
their names contributed a patch for the first time.

* Charles Harris
* Eric Wieser
* Greg Young
* Joerg Behrmann +
* John Kirkham
* Julian Taylor
* Marten van Kerkwijk
* Matthew Brett
* Shota Kawabuchi
* Jean Utke +

*Fixes Backported*

* #8483: BUG: Fix wrong future nat warning and equiv type logic error...
* #8489: BUG: Fix wrong masked median for some special cases
* #8490: DOC: Place np.average in inline code
* #8491: TST: Work around isfinite inconsistency on i386
* #8494: BUG: Guard against replacing constants without `'_'` spec in f2py.
* #8524: BUG: Fix mean for float 16 non-array inputs for 1.12
* #8571: BUG: Fix calling python api with error set and minor leaks for...
* #8602: BUG: Make iscomplexobj compatible with custom dtypes again
* #8618: BUG: Fix undefined behaviour induced by bad `__array_wrap__`
* #8648: BUG: Fix `MaskedArray.__setitem__`
* #8659: BUG: PPC64el machines are POWER for Fortran in f2py
* #8665: BUG: Look up methods on MaskedArray in `_frommethod`
* #8674: BUG: Remove extra digit in `binary_repr` at limit
* #8704: BUG: Fix deepcopy regression for empty arrays.
* #8707: BUG: Fix ma.median for empty ndarrays
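Of the fixes above, the `binary_repr` one (#8674) is easy to check interactively. A small sketch, assuming NumPy 1.12.1 or later:

```python
import numpy as np

# #8674 fixed an off-by-one at the width limit: the two's-complement
# representation of the most negative value should fit exactly in `width` bits.
print(np.binary_repr(-128, width=8))  # '10000000'
print(np.binary_repr(127, width=8))   # '01111111'
```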

Cheers,

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ANN: xtensor 0.7.1 numpy-style syntax in C++ with bindings to numpy arrays

2017-03-17 Thread Charles R Harris
On Fri, Mar 17, 2017 at 7:18 AM, Sylvain Corlay 
wrote:

> Hi All,
>
> On behalf of the xtensor development team, I am pleased to announce the
> releases of
>
>   - xtensor 0.7.1   https://github.com/QuantStack/xtensor/
>   - xtensor-python 0.6.0   https://github.com/QuantStack/xtensor-python/
>
>
>
That's cool stuff!

Chuck


[Numpy-discussion] NumPy pre-release 1.12.1rc1

2017-03-06 Thread Charles R Harris
Hi All,

I'm pleased to announce the release of NumPy 1.12.1rc1. NumPy 1.12.1rc1
supports
Python 2.7 and 3.4 - 3.6 and fixes bugs and regressions found in NumPy
1.12.0.
In particular, the regression in f2py constant parsing is fixed.

Wheels for Linux, Windows, and OSX can be found on PyPI. Archives can be
downloaded from GitHub.


*Contributors*

A total of 10 people contributed to this release. People with a "+" by
their names contributed a patch for the first time.

* Charles Harris
* Eric Wieser
* Greg Young
* Joerg Behrmann +
* John Kirkham
* Julian Taylor
* Marten van Kerkwijk
* Matthew Brett
* Shota Kawabuchi
* Jean Utke +

*Fixes Backported*

* #8483: BUG: Fix wrong future nat warning and equiv type logic error...
* #8489: BUG: Fix wrong masked median for some special cases
* #8490: DOC: Place np.average in inline code
* #8491: TST: Work around isfinite inconsistency on i386
* #8494: BUG: Guard against replacing constants without `'_'` spec in f2py.
* #8524: BUG: Fix mean for float 16 non-array inputs for 1.12
* #8571: BUG: Fix calling python api with error set and minor leaks for...
* #8602: BUG: Make iscomplexobj compatible with custom dtypes again
* #8618: BUG: Fix undefined behaviour induced by bad `__array_wrap__`
* #8648: BUG: Fix `MaskedArray.__setitem__`
* #8659: BUG: PPC64el machines are POWER for Fortran in f2py
* #8665: BUG: Look up methods on MaskedArray in `_frommethod`
* #8674: BUG: Remove extra digit in `binary_repr` at limit
* #8704: BUG: Fix deepcopy regression for empty arrays.
* #8707: BUG: Fix ma.median for empty ndarrays

Cheers,

Chuck


Re: [Numpy-discussion] automatically avoiding temporary arrays

2017-02-27 Thread Charles R Harris
On Mon, Feb 27, 2017 at 11:43 AM, Benjamin Root 
wrote:

> What's the timeline for the next release? I have the perfect usecase for
> this (Haversine calculation on large arrays that takes up ~33% of one of my
> processing scripts). However, to test it out, I have a huge dependency mess
> to wade through first, and there are no resources devoted to that project
> for at least a few weeks. I want to make sure I get feedback to y'all.
>

I'd like to branch 1.13.x at the end of March. The planned features that
still need to go in are the `__array_ufunc__` work and the `lapack_lite`
update. The first RC should not take much longer. I believe Matthew is
building wheels for testing on the fly but I don't know where you can find
them.

Chuck


Re: [Numpy-discussion] Removal of some numpy files

2017-02-25 Thread Charles R Harris
On Sat, Feb 25, 2017 at 2:34 PM, Matthew Brett 
wrote:

> On Sat, Feb 25, 2017 at 7:48 AM, David Cournapeau 
> wrote:
> > tools/win32build is used to build the so-called superpack installers,
> which
> > we don't build anymore AFAIK
> >
> > tools/numpy-macosx-installer is used to build the .dmg for numpy (also
> not
> > used anymore AFAIK).
>
> No, we aren't using the .dmg script anymore, dmg installers have been
> fully replaced by wheels.
>

I've put up a PR, #8695, to do this.

Chuck


[Numpy-discussion] Removal of some numpy files

2017-02-25 Thread Charles R Harris
Hi All,

While looking through the numpy tools directory I noticed some scripts that
look outdated that might be candidates for removal:

   1. tools/numpy-macosx-installer/
   2. tools/win32build/

Does anyone know if either of those is still relevant?

Cheers,

Chuck


Re: [Numpy-discussion] Could we simplify backporting?

2017-02-24 Thread Charles R Harris
On Fri, Feb 24, 2017 at 8:00 AM, Evgeni Burovski  wrote:

> > I really don't like the double work and the large amount of noise coming
> > from backporting every other PR to NumPy very quickly. For SciPy the
> policy
> > is:
> >   - anyone can set the "backport-candidate" label
> >   - the release manager backports, usually a bunch in one go
> >   - only important fixes get backported (involves some judging, but
> things
> > like silencing warnings, doc fixes, etc. are not important enough)
> >
> > This works well, and I'd hope that we can make the NumPy approach
> similar.
>
>
> Just to add to what Ralf is saying:
>
> * people sometimes send PRs against maintenance branches instead of
> master. In scipy we just label these as backport-candidate, and then
> the RM sorts them out: which ones to forward port and which ones to
> backport. This works OK on scipy scale (I had just trawled though a
> half dozen or so). If numpy needs more backport activity, it might
> make sense to have separate labels for backport-candidate and
> needs-forward-port.
>
> * A while ago Julian was advocating for some git magic of basing PRs
> on the common merge base for master and maintenance branches, so that
> a commit can be merged directly without a cherry-pick (I think). This
> seems to be beyond a common git-fu (beyond mine for sure!). What I did
> in scipy, I just edited the commit messages after cherry-picking to
> add a reference of the original PR a commit was cherry-picked from.
>

Cherry-picking is easier, especially when there are only a few backports
without conflicts.

Chuck


[Numpy-discussion] Eric Wieser added to NumPy team.

2017-02-18 Thread Charles R Harris
Hi All,

I'm pleased to welcome Eric to the NumPy team. There is a pile of pending
PRs that grows every day, and we are counting on Eric to help us keep it
in check ;)

Chuck


[Numpy-discussion] Marten van Kerkwijk added to numpy team.

2017-02-13 Thread Charles R Harris
Hi All,

I'm pleased to welcome Marten to the numpy team. His reviews of PRs have
been very useful in the past and I am happy that he has accepted our
invitation to join the team.

Cheers,

Chuck


Re: [Numpy-discussion] question about long doubles on ppc64el

2017-01-16 Thread Charles R Harris
On Sun, Jan 15, 2017 at 11:00 PM, Thomas Caswell  wrote:

> Folks,
>
> Over at h5py we are trying to get a release out and have discovered (via
> debian) that on ppc64el there is an apparent disagreement between the size
> of a native long double according to hdf5 and numpy.
>
> For all of the gory details see: https://github.com/h5py/h5py/issues/817.
>
> In short, `np.longdouble` seems to be `np.float128` and according to the
> docs should map to the native 'long double'.  However, hdf5 provides a
> `H5T_NATIVE_LDOUBLE` which should also refer to the native 'long double',
> but seems to be a 64 bit float.
>
> Anyone on this list have a ppc64el machine (or experience with) that can
> provide some guidance here?
>

I believe the ppc64 long double is IBM double double, i.e., two doubles for
128 bits. It isn't IEEE compliant and probably not very portable. It is
possible that different compilers could treat it differently or it may be
flagged to be treated in some specific way.
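For anyone wanting to check what their platform's long double actually is, a quick inspection with `np.finfo` (the exact values printed depend on the platform):

```python
import numpy as np

# How NumPy maps the platform's C long double. On x86 Linux this is
# typically 80-bit extended precision stored in 16 bytes (np.float128);
# on ppc64 it may be IBM double-double; on Windows/MSVC it is a plain
# 64-bit double.
info = np.finfo(np.longdouble)
print(np.dtype(np.longdouble).itemsize)  # storage size in bytes
print(info.nmant)                        # mantissa bits (52 if just a double)
print(info.eps)                          # machine epsilon
```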

Chuck


[Numpy-discussion] NumPy 1.12.0 release

2017-01-15 Thread Charles R Harris
Hi All,

I'm pleased to announce the NumPy 1.12.0 release. This release supports
Python 2.7 and 3.4-3.6. Wheels for all supported Python versions may be
downloaded from PyPI, and the tarball and zip files may be downloaded from
GitHub. The release notes and file hashes may also be found at GitHub.

NumPy 1.12.0 is the result of 418 pull requests submitted by 139
contributors and comprises a large number of fixes and improvements. Among
the many improvements it is difficult to pick out just a few as standing
above the others, but the following may be of particular interest or
indicate areas likely to have future consequences.

* Order of operations in ``np.einsum`` can now be optimized for large speed
improvements.
* New ``signature`` argument to ``np.vectorize`` for vectorizing with core
dimensions.
* The ``keepdims`` argument was added to many functions.
* New context manager for testing warnings
* Support for BLIS in numpy.distutils
* Much improved support for PyPy (not yet finished)
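As an illustration of the first highlight, a minimal sketch of the new `optimize` flag to `np.einsum` (the shapes here are arbitrary):

```python
import numpy as np

rng = np.random.RandomState(0)
a = rng.rand(10, 20)
b = rng.rand(20, 30)
c = rng.rand(30, 5)

# optimize=True lets einsum choose a better contraction order for chained
# operands, which can give large speedups; the result is unchanged.
fast = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)
plain = np.einsum('ij,jk,kl->il', a, b, c)
print(np.allclose(fast, plain))  # True
```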

Enjoy,

Chuck


Re: [Numpy-discussion] Deprecating matrices.

2017-01-07 Thread Charles R Harris
On Sat, Jan 7, 2017 at 5:31 PM, CJ Carey  wrote:

> I agree with Ralf; coupling these changes to sparse is a bad idea.
>
> I think that scipy.sparse will be an important consideration during the
> deprecation process, though, perhaps as an indicator of how painful the
> transition might be for third party code.
>
> I'm +1 for splitting matrices out into a standalone package.
>

Decoupled or not, sparse still needs to be dealt with. What is the plan?



Chuck


Re: [Numpy-discussion] Deprecating matrices.

2017-01-07 Thread Charles R Harris
On Sat, Jan 7, 2017 at 4:51 PM, Ralf Gommers <ralf.gomm...@gmail.com> wrote:

>
>
> On Sun, Jan 8, 2017 at 12:42 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Sat, Jan 7, 2017 at 4:35 PM, Ralf Gommers <ralf.gomm...@gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Sun, Jan 8, 2017 at 12:26 PM, Charles R Harris <
>>> charlesr.har...@gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Sat, Jan 7, 2017 at 2:29 PM, Ralf Gommers <ralf.gomm...@gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>> It looks to me like we're getting a bit off track here. The sparse
>>>>> matrices in scipy are heavily used, and despite rough edges pretty good at
>>>>> what they do. Deprecating them is not a goal.
>>>>>
>>>>> The actual goal for the exercise that started this thread (at least as
>>>>> I see it) is to remove np.matrix from numpy itself so users (that don't
>>>>> know the difference) will only use ndarrays. And the few users that prefer
>>>>> np.matrix for teaching can now switch because of @, so their preference
>>>>> should have disappeared.
>>>>>
>>>>> To reach that goal, no deprecation or backwards incompatible changes
>>>>> to scipy.sparse are needed.
>>>>>
>>>>
>>>> What is the way forward with sparse? That looks like the biggest
>>>> blocker on the road to a matrix free NumPy. I don't see moving the matrix
>>>> package elsewhere as a solution for that.
>>>>
>>>
>>> Why not?
>>>
>>>
>> Because it doesn't get rid of matrices in SciPy, nor does one gain a
>> scalar multiplication operator for sparse.
>>
>
> That's a different goal though. You can reach the "get matrix out of
> numpy" goal fairly easily (docs and packaging work), but if you insist on
> coupling it to major changes to scipy.sparse (a lot more work + backwards
> compat break), then what will likely happen is: nothing.
>

Could always remove matrix from the top level namespace and make it
private. It still needs to reside someplace as long as sparse uses it.
Fixing sparse is more work, but we have three years and it won't be getting
any easier as time goes on.

Chuck


Re: [Numpy-discussion] Deprecating matrices.

2017-01-07 Thread Charles R Harris
On Sat, Jan 7, 2017 at 4:35 PM, Ralf Gommers <ralf.gomm...@gmail.com> wrote:

>
>
> On Sun, Jan 8, 2017 at 12:26 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Sat, Jan 7, 2017 at 2:29 PM, Ralf Gommers <ralf.gomm...@gmail.com>
>> wrote:
>>
>>>
>>> It looks to me like we're getting a bit off track here. The sparse
>>> matrices in scipy are heavily used, and despite rough edges pretty good at
>>> what they do. Deprecating them is not a goal.
>>>
>>> The actual goal for the exercise that started this thread (at least as I
>>> see it) is to remove np.matrix from numpy itself so users (that don't know
>>> the difference) will only use ndarrays. And the few users that prefer
>>> np.matrix for teaching can now switch because of @, so their preference
>>> should have disappeared.
>>>
>>> To reach that goal, no deprecation or backwards incompatible changes to
>>> scipy.sparse are needed.
>>>
>>
>> What is the way forward with sparse? That looks like the biggest blocker
>> on the road to a matrix free NumPy. I don't see moving the matrix package
>> elsewhere as a solution for that.
>>
>
> Why not?
>
>
Because it doesn't get rid of matrices in SciPy, nor does one gain a scalar
multiplication operator for sparse.

Chuck


Re: [Numpy-discussion] Deprecating matrices.

2017-01-07 Thread Charles R Harris
On Sat, Jan 7, 2017 at 2:29 PM, Ralf Gommers  wrote:

>
>
> On Sun, Jan 8, 2017 at 9:31 AM, Todd  wrote:
>
>>
>>
>> On Jan 6, 2017 20:28, "Ralf Gommers"  wrote:
>>
>>
>>
>> On Sat, Jan 7, 2017 at 2:21 PM, CJ Carey 
>> wrote:
>>
>>>
>>> On Fri, Jan 6, 2017 at 6:19 PM, Ralf Gommers 
>>> wrote:
>>>
 This sounds like a reasonable idea. Timeline could be something like:

 1. Now: create new package, deprecate np.matrix in docs.
 2. In say 1.5 years: start issuing visible deprecation warnings in numpy
 3. After 2020: remove matrix from numpy.

 Ralf

>>>
>>> I think this sounds reasonable, and reminds me of the deliberate
>>> deprecation process taken for scipy.weave. I guess we'll see how successful
>>> it was when 0.19 is released.
>>>
>>> The major problem I have with removing numpy matrices is the effect on
>>> scipy.sparse, which mostly-consistently mimics numpy.matrix semantics and
>>> often produces numpy.matrix results when densifying. The two are coupled
>>> tightly enough that if numpy matrices go away, all of the existing sparse
>>> matrix classes will have to go at the same time.
>>>
>>> I don't think that would be the end of the world,
>>>
>>
>> Not the end of the world literally, but the impact would be pretty major.
>> I think we're stuck with scipy.sparse, and may at some point will add a new
>> sparse *array* implementation next to it. For scipy we will have to add a
>> dependency on the new npmatrix package or vendor it.
>>
>> Ralf
>>
>>
>>
>>> but it's definitely something that should happen while scipy is still
>>> pre-1.0, if it's ever going to happen.
>>>
>>
>>
>> So what about this:
>>
>> 1. Create a sparse array class
>> 2. (optional) Refactor the sparse matrix class to be based on the sparse
>> array class (may not be feasible)
>> 3. Copy the spare matrix class into the matrix package
>> 4. Deprecate the scipy sparse matrix class
>> 5. Remove the scipy sparse matrix class when the numpy matrix class is
>> removed
>>
>
> It looks to me like we're getting a bit off track here. The sparse
> matrices in scipy are heavily used, and despite rough edges pretty good at
> what they do. Deprecating them is not a goal.
>
> The actual goal for the exercise that started this thread (at least as I
> see it) is to remove np.matrix from numpy itself so users (that don't know
> the difference) will only use ndarrays. And the few users that prefer
> np.matrix for teaching can now switch because of @, so their preference
> should have disappeared.
>
> To reach that goal, no deprecation or backwards incompatible changes to
> scipy.sparse are needed.
>

What is the way forward with sparse? That looks like the biggest blocker on
the road to a matrix free NumPy. I don't see moving the matrix package
elsewhere as a solution for that.

Chuck


Re: [Numpy-discussion] Deprecating matrices.

2017-01-06 Thread Charles R Harris
On Fri, Jan 6, 2017 at 6:37 PM,  wrote:

>
>
>
> On Fri, Jan 6, 2017 at 8:28 PM, Ralf Gommers 
> wrote:
>
>>
>>
>> On Sat, Jan 7, 2017 at 2:21 PM, CJ Carey 
>> wrote:
>>
>>>
>>> On Fri, Jan 6, 2017 at 6:19 PM, Ralf Gommers 
>>> wrote:
>>>
 This sounds like a reasonable idea. Timeline could be something like:

 1. Now: create new package, deprecate np.matrix in docs.
 2. In say 1.5 years: start issuing visible deprecation warnings in numpy
 3. After 2020: remove matrix from numpy.

 Ralf

>>>
>>> I think this sounds reasonable, and reminds me of the deliberate
>>> deprecation process taken for scipy.weave. I guess we'll see how successful
>>> it was when 0.19 is released.
>>>
>>> The major problem I have with removing numpy matrices is the effect on
>>> scipy.sparse, which mostly-consistently mimics numpy.matrix semantics and
>>> often produces numpy.matrix results when densifying. The two are coupled
>>> tightly enough that if numpy matrices go away, all of the existing sparse
>>> matrix classes will have to go at the same time.
>>>
>>> I don't think that would be the end of the world,
>>>
>>
>> Not the end of the world literally, but the impact would be pretty major.
>> I think we're stuck with scipy.sparse, and may at some point will add a new
>> sparse *array* implementation next to it. For scipy we will have to add a
>> dependency on the new npmatrix package or vendor it.
>>
>
> That sounds to me like moving maintenance of numpy.matrix from numpy to
> scipy, if scipy.sparse is one of the main users and still depends on it.
>

What I was thinking was encouraging folks to use `arr.dot(...)` or `@`
instead of `*` for matrix multiplication, keeping `*` for scalar
multiplication. If those operations were defined for matrices, then at some
point sparse could go to arrays and it would not be noticeable except for
the treatment of 1-D arrays -- which admittedly might be a bit tricky.
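A minimal sketch of the distinction being discussed (the `@` operator needs Python 3.5+):

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])
a = np.array([[1, 2], [3, 4]])

# For np.matrix, `*` means matrix multiplication; for ndarray it is
# elementwise, which is why code written against one breaks on the other.
print((m * m)[0, 0])  # 7   (matrix product)
print((a * a)[0, 0])  # 1   (elementwise)

# `@` is matrix multiplication for both, and `*` with a scalar scales both,
# so code using `@` plus scalar `*` is insensitive to the array/matrix choice.
print((m @ m)[0, 0])  # 7
print((a @ a)[0, 0])  # 7
print((2 * a)[0, 1])  # 4
```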

Chuck


Re: [Numpy-discussion] Default type for functions that accumulate integers

2017-01-03 Thread Charles R Harris
On Tue, Jan 3, 2017 at 10:08 AM, Sebastian Berg <sebast...@sipsolutions.net>
wrote:

> On Mo, 2017-01-02 at 18:46 -0800, Nathaniel Smith wrote:
> > On Mon, Jan 2, 2017 at 6:27 PM, Charles R Harris
> > <charlesr.har...@gmail.com> wrote:
> > >
> > > Hi All,
> > >
> > > Currently functions like trace use the C long type as the default
> > > accumulator for integer types of lesser precision:
> > >
>
> 
>
> >
> > Things we'd need to know more about before making a decision:
> > - compatibility: if we flip this switch, how much code breaks? In
> > general correct numpy-using code has to be prepared to handle
> > np.dtype(int) being 64-bits, and in fact there might be more code
> > that
> > accidentally assumes that np.dtype(int) is always 64-bits than there
> > is code that assumes it is always 32-bits. But that's theory; to know
> > how bad this is we would need to try actually running some projects
> > test suites and see whether they break or not.
> > - speed: there's probably some cost to using 64-bit integers on 32-
> > bit
> > systems; how big is the penalty in practice?
> >
>
> I agree with trying to switch the default in general first, I don't
> like the idea of having two different "defaults".
>
> There are two issues, one is the change on Python 2 (no inheritance of
> Python int by default numpy type) and any issues due to increased
> precision (more RAM usage, code actually expects lower precision
> somehow, etc.).
> Cannot say I know for sure, but I would be extremely surprised if there
> is a speed difference between 32bit vs. 64bit architectures, except the
> general slowdown you get due to bus speeds, etc. when going to higher
> bit width.
>
> If the inheritance for some reason is a bigger issue, we might limit
> the change to Python 3. For other possible problems, I think we may
> have difficulties assessing how much is affected. The problem is, that
> the most affected thing should be projects only being used on windows,
> or so. Bigger projects should work fine already (they are more likely
> to get better due to not being tested as well on 32bit long platforms,
> especially 64bit windows).
>
> Of course limiting the change to python 3, could have the advantage of
> not affecting older projects which are possibly more likely to be
> specifically using the current behaviour.
>
> So, I would be open to trying the change, I think the idea of at least
> changing it in python 3 has been brought up a couple of times,
> including by Julian, so maybe it is time to give it a shot.
>
> It would be interesting to see if anyone knows projects that may be
> affected (for example because they are designed to only run on windows
> or limited hardware), and if avoiding to change anything in python 2
> might mitigate problems here as well (additionally to avoiding the
> inheritance change)?
>

There have been a number of reports of problems due to the inheritance
stemming both from the changing precision and, IIRC, from differences in
print format or some such. So I don't expect that there will be no
problems, but they will probably not be difficult to fix.

Chuck


Re: [Numpy-discussion] Deprecating matrices.

2017-01-02 Thread Charles R Harris
On Mon, Jan 2, 2017 at 8:29 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Mon, Jan 2, 2017 at 7:12 PM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
> >
> >
> > On Mon, Jan 2, 2017 at 7:26 PM, <josef.p...@gmail.com> wrote:
> [...]
> >> How about dropping python 2 support at the same time, then we can all be
> >> in a @ world.
> >>
> >
> > The "@" operator works with matrices already, what causes problems is the
> > combination of matrices with 1-D arrays. That can be fixed, I think. The
> > big problem is probably the lack of "@" in Python 2.7. I wonder if there
> is
> > any chance of getting it backported to 2.7 before support is dropped in
> > 2020? I expect it would be a fight, but I also suspect it would not be
> > difficult to do if the proposal was accepted. Then at some future date
> > sparse could simply start returning arrays.
>
> Unfortunately the chance of Python 2.7 adding support for "@" is best
> expressed as a denormal.
>

That's what I figured ;) Hmm, matrices would work fine with the current
combination of '*' (works for scalar multiplication) and '@' (works for
matrices). So for Python 3, code currently written for matrices can be
reworked to be array compatible. But '@' for Python 2.7 would sure help...

Chuck


Re: [Numpy-discussion] Deprecating matrices.

2017-01-02 Thread Charles R Harris
On Mon, Jan 2, 2017 at 8:12 PM, Charles R Harris <charlesr.har...@gmail.com>
wrote:

>
>
> On Mon, Jan 2, 2017 at 7:26 PM, <josef.p...@gmail.com> wrote:
>
>>
>>
>> On Mon, Jan 2, 2017 at 9:00 PM, Ralf Gommers <ralf.gomm...@gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Tue, Jan 3, 2017 at 2:36 PM, Charles R Harris <
>>> charlesr.har...@gmail.com> wrote:
>>>
>>>> Hi All,
>>>>
>>>> Just throwing this click bait out for discussion. Now that the `@`
>>>> operator is available and things seem to be moving towards Python 3,
>>>> especially in the classroom, we should consider the real possibility of
>>>> deprecating the matrix type and later removing it. No doubt there are old
>>>> scripts that require them, but older versions of numpy are available for
>>>> those who need to run old scripts.
>>>>
>>>> Thoughts?
>>>>
>>>
>>> Clearly deprecate in the docs now, and warn only later imho. We can't
>>> warn before we have a good solution for scipy.sparse matrices, which have
>>> matrix semantics and return matrix instances.
>>>
>>> Ralf
>>>
>>
>> How about dropping python 2 support at the same time, then we can all be
>> in a @ world.
>>
>>
> The "@" operator works with matrices already, what causes problems is the
> combination of matrices with 1-D arrays. That can be fixed, I think. The
> big problem is probably the lack of "@" in Python 2.7. I wonder if there is
> any chance of getting it backported to 2.7 before support is dropped in
> 2020? I expect it would be a fight, but I also suspect it would not be
> difficult to do if the proposal was accepted. Then at some future date
> sparse could simply start returning arrays.
>

Hmm, matrix-scalar multiplication will be a problem.

Chuck


Re: [Numpy-discussion] Deprecating matrices.

2017-01-02 Thread Charles R Harris
On Mon, Jan 2, 2017 at 7:26 PM, <josef.p...@gmail.com> wrote:

>
>
> On Mon, Jan 2, 2017 at 9:00 PM, Ralf Gommers <ralf.gomm...@gmail.com>
> wrote:
>
>>
>>
>> On Tue, Jan 3, 2017 at 2:36 PM, Charles R Harris <
>> charlesr.har...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> Just throwing this click bait out for discussion. Now that the `@`
>>> operator is available and things seem to be moving towards Python 3,
>>> especially in the classroom, we should consider the real possibility of
>>> deprecating the matrix type and later removing it. No doubt there are old
>>> scripts that require them, but older versions of numpy are available for
>>> those who need to run old scripts.
>>>
>>> Thoughts?
>>>
>>
>> Clearly deprecate in the docs now, and warn only later imho. We can't
>> warn before we have a good solution for scipy.sparse matrices, which have
>> matrix semantics and return matrix instances.
>>
>> Ralf
>>
>
> How about dropping python 2 support at the same time, then we can all be
> in a @ world.
>
>
The "@" operator works with matrices already, what causes problems is the
combination of matrices with 1-D arrays. That can be fixed, I think. The
big problem is probably the lack of "@" in Python 2.7. I wonder if there is
any chance of getting it backported to 2.7 before support is dropped in
2020? I expect it would be a fight, but I also suspect it would not be
difficult to do if the proposal was accepted. Then at some future date
sparse could simply start returning arrays.

Chuck


[Numpy-discussion] Default type for functions that accumulate integers

2017-01-02 Thread Charles R Harris
Hi All,

Currently functions like trace use the C long type as the default
accumulator for integer types of lesser precision:

dtype : dtype, optional
> Determines the data-type of the returned array and of the accumulator
> where the elements are summed. If dtype has the value None and `a` is
> of integer type of precision less than the default integer
> precision, then the default integer precision is used. Otherwise,
> the precision is the same as that of `a`.
>

The problem with this is that the precision of long varies with the
platform so that the result varies; see gh-8433 for a complaint about this.
There are two possible alternatives that seem reasonable to me:


   1. Use 32 bit accumulators on 32 bit platforms and 64 bit accumulators
   on 64 bit platforms.
   2. Always use 64 bit accumulators.
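A sketch of the behaviour described above (as of NumPy 1.12; the first result's dtype depends on the platform's C long):

```python
import numpy as np

a = np.ones((3, 3), dtype=np.int8)

# With no dtype given, trace accumulates in the default integer type,
# which follows the C long: 32 bits on 32-bit platforms and on 64-bit
# Windows, 64 bits on most other 64-bit systems.
print(np.trace(a).dtype)

# An explicit dtype removes the platform dependence.
print(np.trace(a, dtype=np.int64).dtype)  # int64
print(np.trace(a, dtype=np.int64))        # 3
```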

Thoughts?

Chuck


[Numpy-discussion] Deprecating matrices.

2017-01-02 Thread Charles R Harris
Hi All,

Just throwing this click bait out for discussion. Now that the `@` operator
is available and things seem to be moving towards Python 3, especially in
the classroom, we should consider the real possibility of deprecating the
matrix type and later removing it. No doubt there are old scripts that
require them, but older versions of numpy are available for those who need
to run old scripts.

Thoughts?

Chuck


[Numpy-discussion] NumPy 1.12.0rc2 release.

2017-01-01 Thread Charles R Harris
Hi All,

I'm pleased to announce the NumPy 1.12.0rc2 New Year's release. This
release supports Python 2.7 and 3.4-3.6. Wheels for all supported Python
versions may be downloaded from PyPI, and the tarball and zip files may be
downloaded from GitHub. The release notes and file hashes may also be found
at GitHub.

NumPy 1.12.0rc2 is the result of 413 pull requests submitted by 139
contributors and comprises a large number of fixes and improvements. Among
the many improvements it is difficult to pick out just a few as standing
above the others, but the following may be of particular interest or
indicate areas likely to have future consequences.

* Order of operations in ``np.einsum`` can now be optimized for large speed
improvements.
* New ``signature`` argument to ``np.vectorize`` for vectorizing with core
dimensions.
* The ``keepdims`` argument was added to many functions.
* New context manager for testing warnings
* Support for BLIS in numpy.distutils
* Much improved support for PyPy (not yet finished)
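As a quick illustration of the first item, a sketch using the documented ``optimize`` keyword new in this release (the shapes are arbitrary):

```python
import numpy as np

a = np.random.rand(10, 20)
b = np.random.rand(20, 30)
c = np.random.rand(30, 5)

# optimize=True lets einsum choose a cheaper contraction order for chains
# of operands instead of evaluating strictly left to right.
r = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)
print(r.shape)  # (10, 5)
```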

Enjoy,

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] NumPy 1.12.0rc1

2016-12-19 Thread Charles R Harris
Hi All,

I am pleased to announce the release of NumPy 1.12.0rc1.  This release
supports  Python 2.7 and 3.4 - 3.6 and is the result of 406 pull requests
submitted by 139 contributors and comprises a large number of fixes and
improvements. Among the many improvements it is difficult to pick out just
a few as standing above the others, but the following may be of particular
interest or indicate areas likely to have future consequences.

* Order of operations in ``np.einsum`` can now be optimized for large speed
improvements.
* New ``signature`` argument to ``np.vectorize`` for vectorizing with core
dimensions.
* The ``keepdims`` argument was added to many functions.
* New context manager for testing warnings
* Support for BLIS in numpy.distutils
* Much improved support for PyPy (not yet finished)

The release notes are quite sizable, and rather than put them inline I've
attached them as a file. They may also be viewed at Github, where zip files
and tarballs may be found as well. Wheels and a zip archive are available
from PyPI, which is the recommended method of installation.

Cheers,

Charles Harris
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

==========================
NumPy 1.12.0 Release Notes
==========================

This release supports Python 2.7 and 3.4 - 3.6.

Highlights
==========
The NumPy 1.12.0 release contains a large number of fixes and improvements, but
few that stand out above all others. That makes picking out the highlights
somewhat arbitrary but the following may be of particular interest or indicate
areas likely to have future consequences.

* Order of operations in ``np.einsum`` can now be optimized for large speed 
improvements.
* New ``signature`` argument to ``np.vectorize`` for vectorizing with core 
dimensions.
* The ``keepdims`` argument was added to many functions.
* New context manager for testing warnings
* Support for BLIS in numpy.distutils
* Much improved support for PyPy (not yet finished)

Dropped Support
===============

* Support for Python 2.6, 3.2, and 3.3 has been dropped.


Added Support
=============

* Support for PyPy 2.7 v5.6.0 has been added. While not complete (nditer
  ``updateifcopy`` is not supported yet), this is a milestone for PyPy's
  C-API compatibility layer.


Build System Changes
====================

* Library order is preserved, instead of being reordered to match that of
  the directories.


Deprecations
============

Assignment of ndarray object's ``data`` attribute
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Assigning the 'data' attribute is an inherently unsafe operation as pointed
out in gh-7083. Such a capability will be removed in the future.

Unsafe int casting of the num attribute in ``linspace``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``np.linspace`` now raises DeprecationWarning when num cannot be safely
interpreted as an integer.
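A short sketch of the boundary being drawn here; in 1.12 the float case only warns, while later releases reject it outright:

```python
import numpy as np

pts = np.linspace(0, 1, 5)      # an integral num is fine
print(pts)                      # 5 evenly spaced points from 0 to 1

# num=5.0 drew a DeprecationWarning in 1.12 and errors in later releases,
# so pass an explicit int if num comes from a division.
n = int(10 / 2)
print(np.linspace(0, 1, n).size)  # 5
```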

Insufficient bit width parameter to ``binary_repr``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a 'width' parameter is passed into ``binary_repr`` that is insufficient to
represent the number in base 2 (positive) or 2's complement (negative) form,
the function used to silently ignore the parameter and return a representation
using the minimal number of bits needed for the form in question. Such behavior
is now considered unsafe from a user perspective and will raise an error in the
future.
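For contrast, the sufficient-width behavior, which is unchanged; a sketch:

```python
import numpy as np

print(np.binary_repr(10, width=4))   # '1010'
print(np.binary_repr(-2, width=3))   # '110', 3-bit two's complement
print(np.binary_repr(3))             # '11', minimal width when unspecified

# np.binary_repr(10, width=2) is the insufficient-width case described
# above: it used to silently widen, and will instead raise in the future.
```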


Future Changes
==============

* In 1.13 NAT will always compare False except for ``NAT != NAT``,
  which will be True.  In short, NAT will behave like NaN
* In 1.13 np.average will preserve subclasses, to match the behavior of most
  other numpy functions such as np.mean. In particular, this means calls which
  returned a scalar may return a 0-d subclass object instead.
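Once the 1.13 change lands, the promised NaT semantics can be checked directly; a sketch of the end state:

```python
import numpy as np

nat = np.datetime64('NaT')

# Like NaN: every comparison is False except !=, which is True.
print(nat == nat)  # False
print(nat != nat)  # True
print(nat < nat)   # False
```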

Multiple-field manipulation of structured arrays
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In 1.13 the behavior of structured arrays involving multiple fields will change
in two ways:

First, indexing a structured array with multiple fields (eg,
``arr[['f1', 'f3']]``) will return a view into the original array in 1.13,
instead of a copy. Note the returned view will have extra padding bytes
corresponding to intervening fields in the original array, unlike the copy in
1.12, which will affect code such as ``arr[['f1', 'f3']].view(newdtype)``.
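As a sketch of the view semantics described above (runnable on any NumPy release that has the multi-field view behavior; the field names are illustrative):

```python
import numpy as np

arr = np.zeros(3, dtype=[('f1', 'i4'), ('f2', 'i4'), ('f3', 'i4')])
sub = arr[['f1', 'f3']]   # a copy through 1.12, a view afterwards

# With view semantics, writes through the original are visible in sub
# (and sub carries padding bytes where 'f2' sits in the original).
arr['f1'] = 7
print(sub['f1'])  # [7 7 7]
```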

Second, for numpy versions 1.6 to 1.12 assignment between structured arrays
occurs "by field name": Fields in the destination array are set to the
identically-named field in the source array or to 0 if the source does not have
a field::

>>> a = np.array([(1,2),(3,4)], dtype=[('x', 'i4'), ('y', 'i4')])
>>> b = np.ones(2, dtype=[('z', 'i4'), ('y', 'i4'), ('x', 'i4')])
>>> b[:] = a
>>> b
array([(0, 2, 1), (0, 4, 3)],
      dtype=[('z', '<i4'), ('y', '<i4'), ('x', '<i4')])

[Numpy-discussion] NumPy 1.11.3 release.

2016-12-18 Thread Charles R Harris
Hi All,

I'm pleased to announce the release of NumPy 1.11.3. This is a single
bug fix release to take care of a bug that could corrupt large files opened
in append mode and then used as an argument to ndarray.tofile. Thanks to
Pavel Potocek for the fix.

Cheers,

Chuck

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

==========================
NumPy 1.11.3 Release Notes
==========================

Numpy 1.11.3 fixes a bug that leads to file corruption when very large files
opened in append mode are used in ``ndarray.tofile``. It supports Python
versions 2.6 - 2.7 and 3.2 - 3.5. Wheels for Linux, Windows, and OS X can be
found on PyPI.


Contributors to maintenance/1.11.3
==================================

A total of 2 people contributed to this release.  People with a "+" by their
names contributed a patch for the first time.

- - Charles Harris
- - Pavel Potocek +

Pull Requests Merged
====================

- - `#8341 `__: BUG: Fix
ndarray.tofile large file corruption in append mode.
- - `#8346 `__: TST: Fix tests in
PR #8341 for NumPy 1.11.x


Checksums
=========

MD5
~~~

f36503c6665701e1ca0fd2953b6419dd
numpy-1.11.3-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
ada01f12b747c0669be00be843fde6dd
numpy-1.11.3-cp27-cp27m-manylinux1_i686.whl
e3f454dc204b90015e4d8991b12069fb
numpy-1.11.3-cp27-cp27m-manylinux1_x86_64.whl
cccfb3f765fa2eb4759590467a5f3fb1
numpy-1.11.3-cp27-cp27mu-manylinux1_i686.whl
479c0c8b50ab0ed4acca0a66887fe74c
numpy-1.11.3-cp27-cp27mu-manylinux1_x86_64.whl
110b93cc26ca556b075316bee81f8652  numpy-1.11.3-cp27-none-win32.whl
33bfb4c5f5608d3966a6600fa3d7623c  numpy-1.11.3-cp27-none-win_amd64.whl
81df8e91c06595572583cd67fcb7d68f
numpy-1.11.3-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
194d8903cb3fd3b17af4093089b1a154
numpy-1.11.3-cp34-cp34m-manylinux1_i686.whl
837d9d7c911d4589172d19d0d8fb4eaf
numpy-1.11.3-cp34-cp34m-manylinux1_x86_64.whl
f6b24305ab3edba245106b49b97fd9d7  numpy-1.11.3-cp34-none-win32.whl
2f3fdd08d9ad43304d67c16182ff92de  numpy-1.11.3-cp34-none-win_amd64.whl
f90839ad86e3ccda9a409ce93ca1
numpy-1.11.3-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
3b2268154e405f895402cbd4cbcaad7a
numpy-1.11.3-cp35-cp35m-manylinux1_i686.whl
3d6754274af48c1c19154dd370ddb569
numpy-1.11.3-cp35-cp35m-manylinux1_x86_64.whl
f8b64f46cc0e9a3fc877f24efd5e3b7c  numpy-1.11.3-cp35-none-win32.whl
b1a53851dde805a233e6c4eafe116e82  numpy-1.11.3-cp35-none-win_amd64.whl
b8a9dec6901c046edaea706bad1448b1  numpy-1.11.3.tar.gz
aa70cd5bba81b78382694d654ed10036  numpy-1.11.3.zip

SHA256
~~~~~~

5941d3dbd0afed1ecd3746c0371b2a8b79977d084004cc320c2a4cf9d88589d8
numpy-1.11.3-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
ca37b5bebcc4ebde39dfbff0bda69fdc28785a8ff21155fd7adacf473c7b40dd
numpy-1.11.3-cp27-cp27m-manylinux1_i686.whl
276cbb35b69eb2f0d5f264b7c71bdc1f4e91ecd3125d32cd1839873268239892
numpy-1.11.3-cp27-cp27m-manylinux1_x86_64.whl
1226e259d796207e8ef36762dce139e7da1cc0bb78f5d54e739252acd07834e5
numpy-1.11.3-cp27-cp27mu-manylinux1_i686.whl
674d0c1318890357f27ce3a8939e643eaf55140cfb8e84730aeee1dd769b0c21
numpy-1.11.3-cp27-cp27mu-manylinux1_x86_64.whl
f8b30c76e0f805da7ea641f52c3f6bade55d50a0767f9c89c50e4c42b2a1b34c
numpy-1.11.3-cp27-none-win32.whl
8cd184b0341e1db3a5619c85f875ce511ef0eb7ec01ec320116959a3de77f1b8
numpy-1.11.3-cp27-none-win_amd64.whl
f0824beb03aff58d4062508b1dd4f737f08f5d2369f25a73c2350fe081beab2c
numpy-1.11.3-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
9e4228ac322743dea101a90305ee6d54b4bf82f15d6499e55d1d9cef17bccdbb
numpy-1.11.3-cp34-cp34m-manylinux1_i686.whl
195604fc19a9333f3342fcad93094b6a21bc6e6b28d7bfec14d120cb4391a032
numpy-1.11.3-cp34-cp34m-manylinux1_x86_64.whl
71a6aa8b8c9f666b541208d38b30c84df1666e4cc02fb33b59086aaea10affad
numpy-1.11.3-cp34-none-win32.whl
135586ce1966dbecd9494ba30cb9beca93fad323ef9264c21efc2a0b59e449d2
numpy-1.11.3-cp34-none-win_amd64.whl
cca8af884cbf220656ca2f8f9120a634e5cfb5fdcb0a21fd83ec279cc4f46654
numpy-1.11.3-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
ab810c942ead3f5988a7bef95dc6e85b586b6e814b83d571dfbca879e245bd45
numpy-1.11.3-cp35-cp35m-manylinux1_i686.whl
7c6eb737dc3d53977c558d57625dfbecd9900a5807ff17edd6842a102cb95c3b
numpy-1.11.3-cp35-cp35m-manylinux1_x86_64.whl
ab2af03dabecb97de27badfa944c56d799774a1fa975d52083197bb81858b742
numpy-1.11.3-cp35-none-win32.whl
dd1800ec19192fd853bc255917eb3ecb34de268551b9c561f36d089023883807
numpy-1.11.3-cp35-none-win_amd64.whl
6e89f41217028452977cddb2a6c614e2210214bf3efb8494e7a9137b26985d41
numpy-1.11.3.tar.gz
2e0fc5248246a64628656fe14fcab0a959741a2820e003bd15538226501b82f7

Re: [Numpy-discussion] PyPI source files.

2016-12-18 Thread Charles R Harris
On Sun, Dec 18, 2016 at 6:39 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Sun, Dec 18, 2016 at 5:21 PM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
> > Hi All,
> >
> > It seems that PyPI will only accept one source file at this time, e.g.,
> > numpy-1.11.3.zip and numpy-1.11.3.tar.gz are considered duplicates. Does
> > anyone know if this is intentional or a bug on the PyPI end? It makes
> sense
> > in a screwy sort of way.
>
> It's intentional: see PEP 527 and in particular:
>https://www.python.org/dev/peps/pep-0527/#limiting-
> number-of-sdists-per-release


Thanks for the info Nathaniel ;)

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] PyPI source files.

2016-12-18 Thread Charles R Harris
Hi All,

It seems that PyPI will only accept one source file at this time, e.g.,
numpy-1.11.3.zip and numpy-1.11.3.tar.gz are considered duplicates. Does
anyone know if this is intentional or a bug on the PyPI end? It makes sense
in a screwy sort of way.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] NumPy 1.12.0b1 released.

2016-11-16 Thread Charles R Harris
Hi All,

I'm pleased to announce the release of NumPy 1.12.0b1. This release
supports Python 2.7 and 3.4 - 3.6 and is the result of 388 pull requests
submitted by 133 contributors. It is quite sizeable, and rather than put
the release notes inline I've attached them as a file; they may also be
viewed at Github. Zip files and tarballs may also be found at the Github
link. Wheels and source archives may be downloaded from PyPI, which is the
recommended method.

This release is a large collection of fixes, enhancements, and improvements
and it is difficult to select just a few as highlights. However, the
following enhancements may be of particular interest

   - Order of operations in ``np.einsum`` now can be optimized for large
   speed improvements.
   - New ``signature`` argument to ``np.vectorize`` for vectorizing with
   core dimensions.
   - The ``keepdims`` argument was added to many functions.
   - Support for PyPy 2.7 v5.6.0 has been added. While not complete, this
   is a milestone for PyPy's C-API compatibility layer.

Thanks to all,

Chuck
NumPy 1.12.0 Release Notes
**************************

This release supports Python 2.7 and 3.4 - 3.6.

Highlights
==========

* Order of operations in ``np.einsum`` now can be optimized for large speed improvements.
* New ``signature`` argument to ``np.vectorize`` for vectorizing with core dimensions.
* The ``keepdims`` argument was added to many functions.

Dropped Support
===============

* Support for Python 2.6, 3.2, and 3.3 has been dropped.


Added Support
=============

* Support for PyPy 2.7 v5.6.0 has been added. While not complete (nditer
  ``updateifcopy`` is not supported yet), this is a milestone for PyPy's
  C-API compatibility layer.


Build System Changes
====================

* Library order is preserved, instead of being reordered to match that of
  the directories.


Deprecations
============

Assignment of ndarray object's ``data`` attribute
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Assigning the 'data' attribute is an inherently unsafe operation as pointed
out in gh-7083. Such a capability will be removed in the future.

Unsafe int casting of the num attribute in ``linspace``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``np.linspace`` now raises DeprecationWarning when num cannot be safely
interpreted as an integer.

Insufficient bit width parameter to ``binary_repr``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a 'width' parameter is passed into ``binary_repr`` that is insufficient to
represent the number in base 2 (positive) or 2's complement (negative) form,
the function used to silently ignore the parameter and return a representation
using the minimal number of bits needed for the form in question. Such behavior
is now considered unsafe from a user perspective and will raise an error in the
future.


Future Changes
==============

* In 1.13 NAT will always compare False except for ``NAT != NAT``,
  which will be True.  In short, NAT will behave like NaN
* In 1.13 np.average will preserve subclasses, to match the behavior of most
  other numpy functions such as np.mean. In particular, this means calls which
  returned a scalar may return a 0-d subclass object instead.

Multiple-field manipulation of structured arrays
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In 1.13 the behavior of structured arrays involving multiple fields will change
in two ways:

First, indexing a structured array with multiple fields (eg,
``arr[['f1', 'f3']]``) will return a view into the original array in 1.13,
instead of a copy. Note the returned view will have extra padding bytes
corresponding to intervening fields in the original array, unlike the copy in
1.12, which will affect code such as ``arr[['f1', 'f3']].view(newdtype)``.

Second, for numpy versions 1.6 to 1.12 assignment between structured arrays
occurs "by field name": Fields in the destination array are set to the
identically-named field in the source array or to 0 if the source does not have
a field::

>>> a = np.array([(1,2),(3,4)], dtype=[('x', 'i4'), ('y', 'i4')])
>>> b = np.ones(2, dtype=[('z', 'i4'), ('y', 'i4'), ('x', 'i4')])
>>> b[:] = a
>>> b
array([(0, 2, 1), (0, 4, 3)],
      dtype=[('z', '<i4'), ('y', '<i4'), ('x', '<i4')])

Re: [Numpy-discussion] Numpy 1.12.x branched

2016-11-10 Thread Charles R Harris
On Thu, Nov 10, 2016 at 9:06 AM, Frédéric Bastien <
frederic.bast...@gmail.com> wrote:

> My changes about numpy.mean in float16 aren't in the doc.
>
> Should I make a PR again numpy master or maintenance/1.12.x?
>

Make it against master. I may cut and paste the content into a bigger PR I
will merge before the first beta.



Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy 1.12.x branched

2016-11-07 Thread Charles R Harris
On Mon, Nov 7, 2016 at 11:32 AM, Matti Picus <matti.pi...@gmail.com> wrote:

> On 07/11/16 10:19, numpy-discussion-requ...@scipy.org wrote:
>
>> Date: Sun, 06 Nov 2016 17:56:12 +0100
>> From: Sebastian Berg<sebast...@sipsolutions.net>
>> To:numpy-discussion@scipy.org
>> Subject: Re: [Numpy-discussion] Numpy 1.12.x branched
>> Message-ID:<1478451372.3875.5.ca...@sipsolutions.net>
>> Content-Type: text/plain; charset="utf-8"
>>
>> On Sa, 2016-11-05 at 17:04 -0600, Charles R Harris wrote:
>>
>>> >Hi All,
>>> >
>>> >Numpy 1.12.x has been branched and the 1.13 development branch is
>>> >open. It would be helpful if folks could review the release notes as
>>> >it is likely I've missed something.? I'd like to make the first beta
>>> >release in a couple of days.
>>> >
>>>
>> Very cool, thanks for all the hard work!
>>
>> - Sebastian
>>
>>
>> >Chuck
>>>
>> Thanks for managing this. I don't know where, but it would be nice if the
> release notes could mention the PyPy support - we are down to only a few
> failures on the test suite, the only real outstanding issue is nditer using
> UPDATEIFCOPY which depends on refcounting semantics to trigger the copy.
> Other than that PyPy + NumPy 1.12 is a working thing, we (PyPy devs) will
> soon try to make it work faster :).
>

A PR updating the release notes would be welcome. This might be one of the
highlights for those interested in PyPy.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] __numpy_ufunc__

2016-11-07 Thread Charles R Harris
On Mon, Nov 7, 2016 at 1:08 AM, Ralf Gommers <ralf.gomm...@gmail.com> wrote:

>
>
> On Mon, Nov 7, 2016 at 9:10 AM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Sun, Nov 6, 2016 at 11:44 AM, Charles R Harris <
>> charlesr.har...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> For those interested in continuing the __numpy_ufunc__ saga, there is a pull
>>> request enabling it <https://github.com/numpy/numpy/pull/8247>. Likely
>>> we will want to make some changes up front before merging that, so some
>>> discussion is in order.
>>>
>>>
>> As a first order of business, let's decide whether to remove the index
>> and rename `__numpy_ufunc__`. The motivation for this is discussed in issue
>> #5986. <https://github.com/numpy/numpy/issues/5986>
>> If we decide positive on that (I'm in favor),
>>
>
> It seems like everyone on that issue is in favor or at least +0. So +1
> from me too.
>
>
>> I would be happy with the proposed name `__array_ufunc__`, although
>> something more descriptive like `__handle_ufunc__` might be better.
>>
>
>
>
>> This is a wonderful opportunity for bike shedding for those so inclined ;)
>>
>
Let me try to break things down a bit.


*Uncontroversial*


   - Rename __numpy_ufunc__ to __array_ufunc__
   - Remove index
   - Out is always a tuple


I think this much is useful even if nothing else is done.

*Goals*


   - Deprecate __array_priority__
   - Ufuncs should succeed or error, never return NotImplemented
   - Add __array_ufunc__ stub to ndarray.


I don't think these are controversial either, but they are longer term
except possibly the last. Note that never returning NotImplemented
disentangles ufuncs from ndarray binops, which I think is a good thing.

*Binops*


Here we come to the crux of the last arguments. The functions used for
binops can currently be set dynamically; that mechanism is used to set them
when the ufunc module is loaded. I think we want to do away with that at
some point and fix the set of ufuncs with which they are implemented. This
allows folks to override the binop behavior using __array_ufunc__. I think
that is mostly of interest to people who are subclassing ndarray, and with
that restriction it doesn't bother me except that it entangles ufuncs with
binops. However, what I'd like to see as well is an opt out for objects
that don't care about ufuncs but want to override the python numeric
operators, something simple like `__me_me_me__`, or, more seriously,
`__array_opt_out__`, that will only come into play if the defining object
is on the right hand side of an instance of ndarray. In that case the binop
would return NotImplemented so as to defer to the Python machinery. Note
that __array_priority__ is currently (ab)used for this.
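For a sense of what such an opt out looks like in practice, here is a sketch using the ``__array_ufunc__ = None`` opt-out that later NumPy releases (1.13 and up) adopted for exactly this purpose; the ``Deferring`` class is purely illustrative:

```python
import numpy as np

class Deferring:
    # Opting out: ndarray binops see this and return NotImplemented,
    # so Python falls back to our reflected method.
    __array_ufunc__ = None

    def __rmul__(self, other):
        return "Deferring.__rmul__ called"

print(np.arange(3) * Deferring())  # Deferring.__rmul__ called
```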

*Numpy scalars*

Numpy scalars default to the corresponding PyArray_Type or
PyGenericArrType_Type unless both arguments can be converted to the same c
type as the calling scalar, so I don't think there is a problem there. Note
that they do check __array_priority__ before trying to convert unknown
objects to array scalars in a fallback case.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] __numpy_ufunc__

2016-11-06 Thread Charles R Harris
On Sun, Nov 6, 2016 at 11:44 AM, Charles R Harris <charlesr.har...@gmail.com
> wrote:

> Hi All,
>
> For those interested in continuing the __numpy_ufunc__ saga, there is a pull
> request enabling it <https://github.com/numpy/numpy/pull/8247>. Likely we
> will want to make some changes up front before merging that, so some
> discussion is in order.
>
>
As a first order of business, let's decide whether to remove the index and
rename `__numpy_ufunc__`. The motivation for this is discussed in issue
#5986. <https://github.com/numpy/numpy/issues/5986>
If we decide positive on that (I'm in favor), I would be happy with the
proposed name `__array_ufunc__`, although something more descriptive like
`__handle_ufunc__` might be better. This is a wonderful opportunity for
bike shedding for those so inclined ;)

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] __numpy_ufunc__

2016-11-06 Thread Charles R Harris
Hi All,

For those interested in continuing the __numpy_ufunc__ saga, there is a pull
request enabling it. Likely we will want to make some changes up front
before merging that, so some
discussion is in order.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Numpy 1.12.x branched

2016-11-05 Thread Charles R Harris
Hi All,

Numpy 1.12.x has been branched and the 1.13 development branch is open. It
would be helpful if folks could review the release notes as it is likely
I've missed something.  I'd like to make the first beta release in a couple
of days.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Branching NumPy 1.12.x

2016-11-03 Thread Charles R Harris
Hi All,

I'm thinking that it is time to branch NumPy 1.12.x. I haven't got
everything in it that I would have liked, in particular __numpy_ufunc__,
but I think there is plenty of material and not branching is holding up
some of the more risky stuff. My current thinking on __numpy_ufunc__ is
that it would be best to work it out over the 1.13.0 release cycle,
starting with enabling it again right after the branch. Julian's work on
avoiding temporary copies and Pauli's overlap handling PR are two other
changes I've been putting off but don't want to delay further. There are
some other smaller things that I had scheduled for 1.12.0, but would like
to spend more time looking at. If there are some things that you think just
have to be in 1.12.0, please mention them, but I'd rather aim at getting
1.13.0 out in a timely manner.

Thoughts?

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Using numpy.rint() with scalars

2016-11-03 Thread Charles R Harris
On Thu, Nov 3, 2016 at 9:17 AM, Yuri Sukhov  wrote:

> Hi all,
>
> According to the documentation for numpy.rint() (
> https://docs.scipy.org/doc/numpy/reference/generated/numpy.rint.html),
> it's a ufunc that accepts an array-like object as an input.
>
> But it also works with scalar inputs. Could anyone clarify if such use
> case is considered to be common and acceptable? Is it just the
> documentation that does not cover one of the possible scenarios, or is it a
> side effect and such use case should be avoided?
>
> I want to use rint() with scalars as I have not found an alternative with
> the same behavior in cPython, but if it's not how it should be used, I
> don't want to rewrite the app when that behavior will change.
>
>
Scalars are array_like; they can be converted to arrays.
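A quick check of the scalar behavior being asked about; note that ``rint`` rounds half to even, not half up:

```python
import numpy as np

# Scalars are array_like, so a scalar in gives a NumPy scalar out.
r = np.rint(1.5)
print(r, type(r))    # 2.0 <class 'numpy.float64'>
print(np.rint(2.5))  # 2.0  (round half to even)
print(np.rint(3.7))  # 4.0
```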

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] missing from contributor list?

2016-11-02 Thread Charles R Harris
On Wed, Nov 2, 2016 at 4:38 PM, Sturla Molden 
wrote:

> Why am I missing from the contributor list here?
>
> https://github.com/numpy/numpy/blob/master/numpy/_
> build_utils/src/apple_sgemv_fix.c
>
>
>
You still show up in the commit log if you follow the file
```
git log --follow numpy/_build_utils/apple_accelerate.py
```

So I have to agree with others that the problem is on the github end.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] __numpy_ufunc__

2016-10-31 Thread Charles R Harris
On Mon, Oct 31, 2016 at 11:39 AM, Stephan Hoyer  wrote:

> Recall that I think we wanted to rename this to __array_ufunc__, so we
> could change the function signature: https://github.com/numpy/
> numpy/issues/5986
>
> I'm still a little nervous about this. Chuck -- what is your proposal for
> resolving the outstanding issues from https://github.com/numpy/
> numpy/issues/5844?
>

We were pretty close. IIRC, the outstanding issue was some sort of
override. At the developer meeting at scipy 2015 it was agreed that it
would be easy to finish things up under the rubric "make Pauli happy". But
that wasn't happening which is why I asked Nathaniel to disable it for
1.10.0. It is now a year later, things have cooled, and, IMHO, it is time
to take another shot at it.



Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] __numpy_ufunc__

2016-10-31 Thread Charles R Harris
On Mon, Oct 31, 2016 at 11:08 AM, Marten van Kerkwijk <
m.h.vankerkw...@gmail.com> wrote:

> Hi Chuck,
>
> I've revived my Quantity PRs that use __numpy_ufunc__ but is it
> correct that at present in *dev, one cannot use it?
>

It's not enabled yet.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] __numpy_ufunc__

2016-10-29 Thread Charles R Harris
On Sat, Oct 29, 2016 at 7:03 PM, Stephan Hoyer  wrote:

> I'm happy to revisit the __numpy_ufunc__ discussion (I still want to see
> it happen!), but I don't recall scalars being a point of contention.
>

The __numpy_ufunc__ functionality is the last bit I want for 1.12.0, the
rest of the remaining changes I can kick forward to 1.13.0. I will start
taking a look tomorrow, probably starting with Nathaniel's work.


>
> The obvious thing to do with scalars would be to treat them the same as
> 0-dimensional arrays, though I might be missing some nuance...
>

That's my thought. Currently they just look at __array_priority__ and call
the corresponding array method if needed, so that maybe needs some
improvement and a formal statement of intent.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] __numpy_ufunc__

2016-10-29 Thread Charles R Harris
Hi All,

Does anyone remember discussion of numpy scalars apropos __numpy_ufunc__?

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Numpy scalar integers to negative scalar integer powers.

2016-10-28 Thread Charles R Harris
Hi All,

I've put up a PR to deal with the numpy scalar integer powers at
https://github.com/numpy/numpy/pull/8221. Note that for now everything goes
through the np.power function.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy integers to integer powers again again

2016-10-26 Thread Charles R Harris
On Wed, Oct 26, 2016 at 1:39 PM, <josef.p...@gmail.com> wrote:

>
>
> On Wed, Oct 26, 2016 at 3:23 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Tue, Oct 25, 2016 at 10:14 AM, Stephan Hoyer <sho...@gmail.com> wrote:
>>
>>> I am also concerned about adding more special cases for NumPy scalars vs
>>> arrays. These cases are already confusing (e.g., making no distinction
>>> between 0d arrays and scalars) and poorly documented.
>>>
>>> On Mon, Oct 24, 2016 at 4:30 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>>
>>>> On Mon, Oct 24, 2016 at 3:41 PM, Charles R Harris
>>>> <charlesr.har...@gmail.com> wrote:
>>>> > Hi All,
>>>> >
>>>> > I've been thinking about this some (a lot) more and have an alternate
>>>> > proposal for the behavior of the `**` operator
>>>> >
>>>> > if both base and power are numpy/python scalar integers, convert to
>>>> python
>>>> > integers and call the `**` operator. That would solve both the
>>>> precision and
>>>> > compatibility problems and I think is the option of least surprise.
>>>> For
>>>> > those who need type preservation and modular arithmetic, the np.power
>>>> > function remains, although the type conversions can be surprising as
>>>> it
>>>> > seems that the base and power should  play different roles in
>>>> determining
>>>> > the type, at least to me.
>>>> > Array, 0-d or not, are treated differently from scalars and integers
>>>> raised
>>>> > to negative integer powers always raise an error.
>>>> >
>>>> > I think this solves most problems and would not be difficult to
>>>> implement.
>>>> >
>>>> > Thoughts?
>>>>
>>>> My main concern about this is that it adds more special cases to numpy
>>>> scalars, and a new behavioral deviation between 0d arrays and scalars,
>>>> when ideally we should be trying to reduce the
>>>> duplication/discrepancies between these. It's also inconsistent with
>>>> how other operations on integer scalars work, e.g. regular addition
>>>> overflows rather than promoting to Python int:
>>>>
>>>> In [8]: np.int64(2 ** 63 - 1) + 1
>>>> /home/njs/.user-python3.5-64bit/bin/ipython:1: RuntimeWarning:
>>>> overflow encountered in long_scalars
>>>>   #!/home/njs/.user-python3.5-64bit/bin/python3.5
>>>> Out[8]: -9223372036854775808
>>>>
>>>> So I'm inclined to try and keep it simple, like in your previous
>>>> proposal... theoretically of course it would be nice to have the
>>>> perfect solution here, but at this point it feels like we might be
>>>> overthinking this trying to get that last 1% of improvement. The thing
>>>> where 2 ** -1 returns 0 is just broken and bites people so we should
>>>> definitely fix it, but beyond that I'm not sure it really matters
>>>> *that* much what we do, and "special cases aren't special enough to
>>>> break the rules" and all that.
>>>>
>>>>
>> What I have been concerned about are the following combinations that
>> currently return floats:
>>
>> [the angle-bracketed type names in this listing were stripped from the
>> archive; the original enumerated the (num, exp) integer scalar type
>> pairs whose ``**`` results are currently ``numpy.float32`` for the
>> first three entries and ``numpy.float64`` for the rest]
>>
>> The other combinations of signed and unsigned integers to signed powers
>> currently raise ValueError due to the change to the power ufunc. The
>> exceptions that aren't covered by uint64 + signed (which won't change) seem
>> to occur when the exponent can be safely cast to the base type. I suspect
>> that people have already come to depend on that, especially as python
>&g

Re: [Numpy-discussion] Numpy integers to integer powers again again

2016-10-26 Thread Charles R Harris
On Wed, Oct 26, 2016 at 1:39 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Wed, Oct 26, 2016 at 12:23 PM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
> [...]
> > What I have been concerned about are the follow combinations that
> currently
> > return floats
> >
> > [sixteen num/exp/res rows elided: the dtype names inside <class '...'>
> > were stripped by the archive's HTML filter; the first three results were
> > numpy.float32, the remaining thirteen numpy.float64]
>
> What's this referring to? For both arrays and scalars I get:
>
> In [8]: (np.array(2, dtype=np.int8) ** np.array(2, dtype=np.int8)).dtype
> Out[8]: dtype('int8')
>
> In [9]: (np.int8(2) ** np.int8(2)).dtype
> Out[9]: dtype('int8')
>
>
You need a negative exponent to see the effect.
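For instance, a minimal probe (the outcome depends on the installed NumPy: pre-1.12 releases upcast to a float type, while 1.12+ raises):

```python
import numpy as np

# Probe what an integer scalar raised to a negative integer power does in
# the installed NumPy: older releases upcast to a float type, while the
# change discussed in this thread makes 1.12+ raise ValueError instead.
try:
    res = np.int8(2) ** np.int8(-2)
    print("result:", res, "dtype:", res.dtype)
except ValueError as err:
    print("raises ValueError:", err)
```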

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy integers to integer powers again again

2016-10-26 Thread Charles R Harris
On Tue, Oct 25, 2016 at 10:14 AM, Stephan Hoyer <sho...@gmail.com> wrote:

> I am also concerned about adding more special cases for NumPy scalars vs
> arrays. These cases are already confusing (e.g., making no distinction
> between 0d arrays and scalars) and poorly documented.
>
> On Mon, Oct 24, 2016 at 4:30 PM, Nathaniel Smith <n...@pobox.com> wrote:
>
>> On Mon, Oct 24, 2016 at 3:41 PM, Charles R Harris
>> <charlesr.har...@gmail.com> wrote:
>> > Hi All,
>> >
>> > I've been thinking about this some (a lot) more and have an alternate
>> > proposal for the behavior of the `**` operator
>> >
>> > if both base and power are numpy/python scalar integers, convert to
>> python
>> > integers and call the `**` operator. That would solve both the
>> precision and
>> > compatibility problems and I think is the option of least surprise. For
>> > those who need type preservation and modular arithmetic, the np.power
>> > function remains, although the type conversions can be surprising as it
>> > seems that the base and power should play different roles in
>> determining
>> > the type, at least to me.
>> > Arrays, 0-d or not, are treated differently from scalars, and integers
>> > raised to negative integer powers always raise an error.
>> >
>> > I think this solves most problems and would not be difficult to
>> implement.
>> >
>> > Thoughts?
>>
>> My main concern about this is that it adds more special cases to numpy
>> scalars, and a new behavioral deviation between 0d arrays and scalars,
>> when ideally we should be trying to reduce the
>> duplication/discrepancies between these. It's also inconsistent with
>> how other operations on integer scalars work, e.g. regular addition
>> overflows rather than promoting to Python int:
>>
>> In [8]: np.int64(2 ** 63 - 1) + 1
>> /home/njs/.user-python3.5-64bit/bin/ipython:1: RuntimeWarning:
>> overflow encountered in long_scalars
>>   #!/home/njs/.user-python3.5-64bit/bin/python3.5
>> Out[8]: -9223372036854775808
>>
>> So I'm inclined to try and keep it simple, like in your previous
>> proposal... theoretically of course it would be nice to have the
>> perfect solution here, but at this point it feels like we might be
>> overthinking this trying to get that last 1% of improvement. The thing
>> where 2 ** -1 returns 0 is just broken and bites people so we should
>> definitely fix it, but beyond that I'm not sure it really matters
>> *that* much what we do, and "special cases aren't special enough to
>> break the rules" and all that.
>>
>>
What I have been concerned about are the follow combinations that currently
return floats

[sixteen num/exp/res rows elided: the dtype names inside <class '...'> were
stripped by the archive's HTML filter; three results were numpy.float32,
thirteen numpy.float64]

The other combinations of signed and unsigned integers to signed powers
currently raise ValueError due to the change to the power ufunc. The
exceptions that aren't covered by uint64 + signed (which won't change) seem
to occur when the exponent can be safely cast to the base type. I suspect
that people have already come to depend on that, especially as python
integers on 64 bit linux convert to int64. So in those cases we should
perhaps raise a FutureWarning instead of an error.
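The "safely cast" condition can be checked with `np.can_cast` (a small illustration of the rule, not the ufunc's actual code path):

```python
import numpy as np

# An exponent dtype that casts safely to the base dtype is the
# exceptional case described above:
print(np.can_cast(np.int32, np.int64))  # True: int32 exponent, int64 base
print(np.can_cast(np.int64, np.int32))  # False: no safe cast this way
```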

Chuck


Re: [Numpy-discussion] Intel random number package

2016-10-25 Thread Charles R Harris
On Tue, Oct 25, 2016 at 10:41 PM, Robert Kern <robert.k...@gmail.com> wrote:

> On Tue, Oct 25, 2016 at 9:34 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
> >
> > Hi All,
> >
> > There is a proposed random number package PR now up on github:
> https://github.com/numpy/numpy/pull/8209. It is from
> > oleksandr-pavlyk and implements the numpy random number package using
> MKL for increased speed. I think we are definitely interested in the
> improved speed, but I'm not sure numpy is the best place to put the
> package. I'd welcome any comments on the PR itself, as well as any thoughts
> on the best way to organize or use this work. Maybe scikit-random?
>
> This is what ng-numpy-randomstate is for.
>
> https://github.com/bashtage/ng-numpy-randomstate
>

Interesting, despite the old-fashioned original ziggurat implementation of
the normal and GNU C style... Does that project seek to preserve all the
bytestreams, or is it still in flux?

Chuck


[Numpy-discussion] Intel random number package

2016-10-25 Thread Charles R Harris
Hi All,

There is a proposed random number package PR now up on github:
https://github.com/numpy/numpy/pull/8209. It is from
oleksandr-pavlyk and implements the
numpy random number package using MKL for increased speed. I think we are
definitely interested in the improved speed, but I'm not sure numpy is the
best place to put the package. I'd welcome any comments on the PR itself,
as well as any thoughts on the best way to organize or use this work. Maybe
scikit-random?

Chuck


Re: [Numpy-discussion] fpower ufunc

2016-10-21 Thread Charles R Harris
On Fri, Oct 21, 2016 at 1:45 AM, Sebastian Berg <sebast...@sipsolutions.net>
wrote:

> On Do, 2016-10-20 at 21:38 -0600, Charles R Harris wrote:
> >
> >
> > On Thu, Oct 20, 2016 at 9:11 PM, Nathaniel Smith <n...@pobox.com>
> > wrote:
> > > On Thu, Oct 20, 2016 at 7:58 PM, Charles R Harris
> > > <charlesr.har...@gmail.com> wrote:
> > > > Hi All,
> > > >
> > > > I've put up a preliminary PR for the proposed fpower ufunc. Apart
> > > from
> > > > adding more tests and documentation, I'd like to settle a few
> > > other things.
> > > > The first is the name, two names have been proposed and we should
> > > settle on
> > > > one
> > > >
> > > > fpower (short)
> > > > float_power (obvious)
> > >
> > > +0.6 for float_power
> > >
> > > > The second thing is the minimum precision. In the preliminary
> > > version I have
> > > > used float32, but perhaps it makes more sense for the intended
> > > use to make
> > > > the minimum precision float64 instead.
> > >
> > > Can you elaborate on what you're thinking? I guess this is because
> > > float32 has limited range compared to float64, so is more likely to
> > > see overflow? float32 still goes up to 10**38 which is <
> > > int64_max**2,
> > > FWIW. Or maybe there's some subtlety with the int->float casting
> > > here?
> > logical, (u)int8, (u)int16, and float16 get converted to float32,
> > which is probably sufficient to avoid overflow and such. My thought
> > was that float32 is something of a "specialized" type these days,
> > while float64 is the standard floating point precision for everyday
> > computation.
> >
>
>
> Isn't that the behaviour we already have (e.g. for mean)?
>
> ints -> float64
> inexacts do not get upcast?
>
>
Hmm... The best way to do that would be to put the function in
`fromnumeric` and do it in Python rather than as a ufunc, then for integer
types call power with `dtype=float64`. I like that idea better than the
current implementation, my mind was stuck in the ufunc universe.
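A rough sketch of that idea (a hypothetical Python-level wrapper, not the code that was merged; the `float_power` ufunc NumPy eventually shipped promotes integer inputs the same way):

```python
import numpy as np

def float_power_sketch(x1, x2):
    """Hypothetical float_power: np.power with a float64 precision floor.

    Integers (and half/single precision floats) are promoted so the
    result has at least float64 precision.
    """
    dt = np.result_type(x1, x2, np.float64)  # never below float64
    return np.power(np.asarray(x1, dtype=dt), np.asarray(x2, dtype=dt))

print(float_power_sketch(2, -2))            # 0.25, as float64
print(float_power_sketch(np.arange(3), 2))  # [0. 1. 4.]
```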

Chuck


Re: [Numpy-discussion] fpower ufunc

2016-10-20 Thread Charles R Harris
On Thu, Oct 20, 2016 at 9:11 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Thu, Oct 20, 2016 at 7:58 PM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
> > Hi All,
> >
> > I've put up a preliminary PR for the proposed fpower ufunc. Apart from
> > adding more tests and documentation, I'd like to settle a few other
> things.
> > The first is the name, two names have been proposed and we should settle
> on
> > one
> >
> > fpower (short)
> > float_power (obvious)
>
> +0.6 for float_power
>
> > The second thing is the minimum precision. In the preliminary version I
> have
> > used float32, but perhaps it makes more sense for the intended use to
> make
> > the minimum precision float64 instead.
>
> Can you elaborate on what you're thinking? I guess this is because
> float32 has limited range compared to float64, so is more likely to
> see overflow? float32 still goes up to 10**38 which is < int64_max**2,
> FWIW. Or maybe there's some subtlety with the int->float casting here?
>

logical, (u)int8, (u)int16, and float16 get converted to float32, which is
probably sufficient to avoid overflow and such. My thought was that float32
is something of a "specialized" type these days, while float64 is the
standard floating point precision for everyday computation.

Chuck


[Numpy-discussion] fpower ufunc

2016-10-20 Thread Charles R Harris
Hi All,

I've put up a preliminary PR for
the proposed fpower ufunc. Apart from adding more tests and documentation,
I'd like to settle a few other things. The first is the name, two names
have been proposed and we should settle on one

   - fpower (short)
   - float_power (obvious)

The second thing is the minimum precision. In the preliminary version I
have used float32, but perhaps it makes more sense for the intended use to
make the minimum precision float64 instead.

Thoughts?

Chuck


[Numpy-discussion] assert_allclose equal_nan default value.

2016-10-20 Thread Charles R Harris
Hi All,

Just a heads up that there is a PR changing the default value of
`equal_nan` to `True` in the `assert_allclose` test function. The
`equal_nan` argument was previously ineffective due to a bug that has
recently been fixed. The current default value of `False` is not backward
compatible and causes test failures in scipy. See the extended argument at
https://github.com/numpy/numpy/pull/8184. I think this change is the right
thing to do but want to make sure everyone is aware of it.
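A minimal illustration of the difference (passing `equal_nan` explicitly, so it behaves the same on either default):

```python
import numpy as np
from numpy.testing import assert_allclose

a = np.array([1.0, np.nan])

# equal_nan=True (the new default): NaNs in matching positions compare equal.
assert_allclose(a, a, equal_nan=True)

# equal_nan=False: the same comparison fails.
try:
    assert_allclose(a, a, equal_nan=False)
except AssertionError:
    print("NaN != NaN when equal_nan=False")
```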

Chuck


Re: [Numpy-discussion] Integers to negative integer powers, time for a decision.

2016-10-08 Thread Charles R Harris
On Sat, Oct 8, 2016 at 1:31 PM, Krisztián Horváth 
wrote:

> Hello,
>
> I think it should be consistent with Python3. So, it should give back a
> float.
>
> Best regards,
> Krisztian
>
>
Can't do that and also return integers for positive powers. It isn't
possible to have behavior completely compatible with Python for arrays: we
can't have mixed-type returns or arbitrary-precision integers.

Chuck


Re: [Numpy-discussion] Integers to negative integer powers, time for a decision.

2016-10-08 Thread Charles R Harris
On Sat, Oct 8, 2016 at 9:12 AM, Nathaniel Smith <n...@pobox.com> wrote:

> On Sat, Oct 8, 2016 at 6:59 AM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
> >
> >
> > On Sat, Oct 8, 2016 at 4:40 AM, Nathaniel Smith <n...@pobox.com> wrote:
> >>
> >> On Fri, Oct 7, 2016 at 6:12 PM, Charles R Harris
> >> <charlesr.har...@gmail.com> wrote:
> >> > Hi All,
> >> >
> >> > The time for NumPy 1.12.0 approaches and I'd like to have a final
> >> > decision on
> >> > the treatment of integers to negative integer powers with the `**`
> >> > operator.
> >> > The two alternatives looked to be
> >> >
> >> > Raise an error for arrays and numpy scalars, including 1 and -1 to
> >> > negative
> >> > powers.
> >> >
> >> > Pluses
> >> >
> >> > Backward compatible
> >> > Allows common powers to be integer, e.g., arange(3)**2
> >> > Consistent with inplace operators
> >> > Fixes current wrong behavior.
> >> > Preserves type
> >> >
> >> >
> >> > Minuses
> >> >
> >> > Integer overflow
> >> > Computational inconvenience
> >> > Inconsistent with Python integers
> >> >
> >> >
> >> > Always return a float
> >> >
> >> > Pluses
> >> >
> >> > Computational convenience
> >> >
> >> >
> >> > Minuses
> >> >
> >> > Loss of type
> >> > Possible backward incompatibilities
> >> > Not applicable to inplace operators
> >>
> >> I guess I could be wrong, but I think the backwards incompatibilities
> >> are going to be *way* too severe to make option 2 possible in
> >> practice.
> >>
> >
> > Backwards compatibility is also a major concern for me.  Here are my
> current
> > thoughts
> >
> > Add an fpow ufunc that always converts to float, it would not accept
> object
> > arrays.
>
> Maybe call it `fpower` or even `float_power`, for consistency with `power`?
>
> > Raise errors in current power ufunc (**), for ints to negative ints.
> >
> > The power ufunc will change in the following ways
> >
> > +1, -1 to negative ints will error, currently they work
> > n > 1 ints to negative ints will error, currently warn and return zero
> > 0 to negative ints will error, they currently return the minimum integer
> >
> > The `**` operator currently calls the power ufunc, leave that as is for
> > backward almost compatibility. The remaining question is numpy scalars,
> > which we can make either compatible with Python, or with NumPy arrays.
> I'm
> > leaning towards NumPy array compatibility mostly on account of type
> > preservation and the close relationship between zero dimensional arrays
> and
> > scalars.
>
> Sounds good to me. I agree that we should prioritize within-numpy
> consistency over consistency with Python.
>
> > The fpow function could be backported to NumPy 1.11 if that would be
> helpful
> > going forward.
>
> I'm not a big fan of this kind of backport. Violating the
> "bug-fixes-only" rule makes it hard for people to understand our
> release versions. And it creates the situation where people can write
> code that they think requires numpy 1.11 (because it works with their
> numpy 1.11!), but then breaks on other people's computers (because
> those users have 1.11.(x-1)). And if there's some reason why people
> aren't willing to upgrade to 1.12 for new features, then probably
> better to spend energy addressing those instead of on putting together
> 1.11-and-a-half releases.
>

The power ufunc is updated in https://github.com/numpy/numpy/pull/8127.


Re: [Numpy-discussion] Integers to negative integer powers, time for a decision.

2016-10-08 Thread Charles R Harris
On Sat, Oct 8, 2016 at 4:40 AM, Nathaniel Smith <n...@pobox.com> wrote:

> On Fri, Oct 7, 2016 at 6:12 PM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
> > Hi All,
> >
> > The time for NumPy 1.12.0 approaches and I'd like to have a final decision
> on
> > the treatment of integers to negative integer powers with the `**`
> operator.
> > The two alternatives looked to be
> >
> > Raise an error for arrays and numpy scalars, including 1 and -1 to
> negative
> > powers.
> >
> > Pluses
> >
> > Backward compatible
> > Allows common powers to be integer, e.g., arange(3)**2
> > Consistent with inplace operators
> > Fixes current wrong behavior.
> > Preserves type
> >
> >
> > Minuses
> >
> > Integer overflow
> > Computational inconvenience
> > Inconsistent with Python integers
> >
> >
> > Always return a float
> >
> > Pluses
> >
> > Computational convenience
> >
> >
> > Minuses
> >
> > Loss of type
> > Possible backward incompatibilities
> > Not applicable to inplace operators
>
> I guess I could be wrong, but I think the backwards incompatibilities
> are going to be *way* too severe to make option 2 possible in
> practice.
>
>
Backwards compatibility is also a major concern for me.  Here are my
current thoughts


   - Add an fpow ufunc that always converts to float; it would not accept
   object arrays.
   - Raise errors in current power ufunc (**), for ints to negative ints.

The power ufunc will change in the following ways


   - +1, -1 to negative ints will error, currently they work
   - n > 1 ints to negative ints will error, currently warn and return zero
   - 0 to negative ints will error, they currently return the minimum
   integer

The `**` operator currently calls the power ufunc, leave that as is for
backward almost compatibility. The remaining question is numpy scalars,
which we can make either compatible with Python, or with NumPy arrays. I'm
leaning towards NumPy array compatibility mostly on account of type
preservation and the close relationship between zero dimensional arrays
and scalars.


The fpow function could be backported to NumPy 1.11 if that would be
helpful going forward.
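On NumPy 1.12+ the three listed cases all raise (a quick check; on 1.11 some of them still worked, warned, or returned surprising values):

```python
import numpy as np

# +1/-1, n > 1, and 0 raised to a negative integer power all raise now,
# instead of variously working, warning, or returning the minimum integer:
for base in (np.array([1]), np.array([2]), np.array([0])):
    try:
        base ** -1
    except ValueError as err:
        print(base, "->", err)
```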

Chuck


[Numpy-discussion] Integers to negative integer powers, time for a decision.

2016-10-07 Thread Charles R Harris
Hi All,

The time for NumPy 1.12.0 approaches and I'd like to have a final decision on
the treatment of integers to negative integer powers with the `**`
operator. The two alternatives looked to be


*Raise an error for arrays and numpy scalars, including 1 and -1 to
negative powers.*
*Pluses*

   - Backward compatible
   - Allows common powers to be integer, e.g., arange(3)**2
   - Consistent with inplace operators
   - Fixes current wrong behavior.
   - Preserves type


*Minuses*

   - Integer overflow
   - Computational inconvenience
   - Inconsistent with Python integers


*Always return a float *

*Pluses*

   - Computational convenience


*Minuses*

   - Loss of type
   - Possible backward incompatibilities
   - Not applicable to inplace operators



Thoughts?

Chuck


Re: [Numpy-discussion] Dropping sourceforge for releases.

2016-10-03 Thread Charles R Harris
On Sun, Oct 2, 2016 at 5:53 PM, Vincent Davis <vinc...@vincentdavis.net>
wrote:

> +1, I am very skeptical of anything on SourceForge, it negatively impacts
> my opinion of any project that requires me to download from sourceforge.
>
>
> On Saturday, October 1, 2016, Charles R Harris <charlesr.har...@gmail.com>
> wrote:
>
>> Hi All,
>>
>> Ralf has suggested dropping sourceforge as a NumPy release site. There
>> was discussion of doing that some time back but we have not yet done it.
>> Now that we put wheels up on PyPI for all supported architectures,
>> SourceForge is not needed. I note that there are still some 15,000 downloads a
>> week from the site, so it is still used.
>>
>> Thoughts?
>>
>> Chuck
>>
>
I've uploaded the NumPy 1.11.2 release to sourceforge and made a note on
the summary page that that will be the last release to be found there.

Chuck


[Numpy-discussion] Dropping sourceforge for releases.

2016-10-01 Thread Charles R Harris
Hi All,

Ralf has suggested dropping sourceforge as a NumPy release site. There was
discussion of doing that some time back but we have not yet done it. Now
that we put wheels up on PyPI for all supported architectures, SourceForge
is not needed. I note that there are still some 15,000 downloads a week
from the site, so it is still used.

Thoughts?

Chuck


Re: [Numpy-discussion] Vendorize tempita

2016-09-30 Thread Charles R Harris
On Fri, Sep 30, 2016 at 10:36 AM, Charles R Harris <
charlesr.har...@gmail.com> wrote:

>
>
> On Fri, Sep 30, 2016 at 10:10 AM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Fri, Sep 30, 2016 at 9:48 AM, Evgeni Burovski <
>> evgeny.burovs...@gmail.com> wrote:
>>
>>> On Fri, Sep 30, 2016 at 6:29 PM, Charles R Harris
>>> <charlesr.har...@gmail.com> wrote:
>>> >
>>> >
>>> > On Fri, Sep 30, 2016 at 9:21 AM, Benjamin Root <ben.v.r...@gmail.com>
>>> wrote:
>>> >>
>>> >> This is the first I am hearing of tempita (looks to be a templating
>>> >> language). How is it a dependency of numpy? Do I now need tempita in
>>> order
>>> >> to use numpy, or is it a build-time-only dependency?
>>> >
>>> >
>>> > Build time only. The virtue of tempita is that it can be used to
>>> generate
>>> > cython sources. We could adapt one of our current templating scripts
>>> to do
>>> > that also, but that would seem to be more work. Note that tempita is
>>> > currently included in cython, but the cython folks consider that an
>>> > implementation detail that should not be depended upon.
>>> >
>>> > 
>>> >
>>> > Chuck
>>> >
>>> >
>>>
>>>
>>> Ideally, it's packaged in such a way that it's usable for scipy too --
>>> at the moment it's used in scipy.sparse via Cython.Tempita + a
>>> fallback to system installed tempita if Cython.Tempita is not
>>> available (however I'm not sure that fallback is ever exercised).
>>> Since scipy needs to support numpy down to 1.8.2, a vendorized copy
>>> will not be usable for scipy for quite a while.
>>>
>>> So, it'd be great to handle it like numpydoc: to have npy_tempita as a
>>> small self-contained package with the repo under the numpy
>>> organization and include it via a git submodule. Chuck, do you think
>>> tempita would need much in terms of maintenance?
>>>
>>> To put some money where my mouth is, I can offer to do some legwork
>>> for packaging it up.
>>>
>>>
>> It might be better to keep tempita and cythonize together so that the
>> search path works out right. It is also possible that other scripts might
>> be wanted as cythonize is currently restricted to cython files (*.pyx.in,
>> *.pxi.in). There are two other templating scripts in numpy/distutils,
>> and I think f2py has a dependency on one of those.
>>
>> If there is a set of tools that would be common to both scipy and numpy,
>> having them included as a submodule would be a good idea.
>>
>>
> Hmm, I suppose it just depends on where submodule is, so a npy_tempita
> alone would work fine.  There isn't much maintenance needed if you resist
> the urge to refactor the code. I removed a six dependency, but that is now
> upstream as well.
>

There don't seem to be any objections, so I will put the current
vendorization in. Evgeni, if you think it a good idea to make a repo for
this and use submodules, go ahead with that. I have left out the testing
infrastructure at https://github.com/gjhiggins/tempita which runs a sparse
set of doctests.

Chuck


Re: [Numpy-discussion] Vendorize tempita

2016-09-30 Thread Charles R Harris
On Fri, Sep 30, 2016 at 10:10 AM, Charles R Harris <
charlesr.har...@gmail.com> wrote:

>
>
> On Fri, Sep 30, 2016 at 9:48 AM, Evgeni Burovski <
> evgeny.burovs...@gmail.com> wrote:
>
>> On Fri, Sep 30, 2016 at 6:29 PM, Charles R Harris
>> <charlesr.har...@gmail.com> wrote:
>> >
>> >
>> > On Fri, Sep 30, 2016 at 9:21 AM, Benjamin Root <ben.v.r...@gmail.com>
>> wrote:
>> >>
>> >> This is the first I am hearing of tempita (looks to be a templating
>> >> language). How is it a dependency of numpy? Do I now need tempita in
>> order
>> >> to use numpy, or is it a build-time-only dependency?
>> >
>> >
>> > Build time only. The virtue of tempita is that it can be used to
>> generate
>> > cython sources. We could adapt one of our current templating scripts to
>> do
>> > that also, but that would seem to be more work. Note that tempita is
>> > currently included in cython, but the cython folks consider that an
>> > implementation detail that should not be depended upon.
>> >
>> > 
>> >
>> > Chuck
>> >
>> >
>>
>>
>> Ideally, it's packaged in such a way that it's usable for scipy too --
>> at the moment it's used in scipy.sparse via Cython.Tempita + a
>> fallback to system installed tempita if Cython.Tempita is not
>> available (however I'm not sure that fallback is ever exercised).
>> Since scipy needs to support numpy down to 1.8.2, a vendorized copy
>> will not be usable for scipy for quite a while.
>>
>> So, it'd be great to handle it like numpydoc: to have npy_tempita as a
>> small self-contained package with the repo under the numpy
>> organization and include it via a git submodule. Chuck, do you think
>> tempita would need much in terms of maintenance?
>>
>> To put some money where my mouth is, I can offer to do some legwork
>> for packaging it up.
>>
>>
> It might be better to keep tempita and cythonize together so that the
> search path works out right. It is also possible that other scripts might
> be wanted as cythonize is currently restricted to cython files (*.pyx.in,
> *.pxi.in). There are two other templating scripts in numpy/distutils, and
> I think f2py has a dependency on one of those.
>
> If there is a set of tools that would be common to both scipy and numpy,
> having them included as a submodule would be a good idea.
>
>
Hmm, I suppose it just depends on where submodule is, so a npy_tempita
alone would work fine.  There isn't much maintenance needed if you resist
the urge to refactor the code. I removed a six dependency, but that is now
upstream as well.

Chuck


Re: [Numpy-discussion] Vendorize tempita

2016-09-30 Thread Charles R Harris
On Fri, Sep 30, 2016 at 9:48 AM, Evgeni Burovski <evgeny.burovs...@gmail.com
> wrote:

> On Fri, Sep 30, 2016 at 6:29 PM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
> >
> >
> > On Fri, Sep 30, 2016 at 9:21 AM, Benjamin Root <ben.v.r...@gmail.com>
> wrote:
> >>
> >> This is the first I am hearing of tempita (looks to be a templating
> >> language). How is it a dependency of numpy? Do I now need tempita in
> order
> >> to use numpy, or is it a build-time-only dependency?
> >
> >
> > Build time only. The virtue of tempita is that it can be used to generate
> > cython sources. We could adapt one of our current templating scripts to
> do
> > that also, but that would seem to be more work. Note that tempita is
> > currently included in cython, but the cython folks consider that an
> > implementation detail that should not be depended upon.
> >
> > 
> >
> > Chuck
> >
> >
>
>
> Ideally, it's packaged in such a way that it's usable for scipy too --
> at the moment it's used in scipy.sparse via Cython.Tempita + a
> fallback to system installed tempita if Cython.Tempita is not
> available (however I'm not sure that fallback is ever exercised).
> Since scipy needs to support numpy down to 1.8.2, a vendorized copy
> will not be usable for scipy for quite a while.
>
> So, it'd be great to handle it like numpydoc: to have npy_tempita as a
> small self-contained package with the repo under the numpy
> organization and include it via a git submodule. Chuck, do you think
> tempita would need much in terms of maintenance?
>
> To put some money where my mouth is, I can offer to do some legwork
> for packaging it up.
>
>
It might be better to keep tempita and cythonize together so that the
search path works out right. It is also possible that other scripts might
be wanted as cythonize is currently restricted to cython files (*.pyx.in,
*.pxi.in). There are two other templating scripts in numpy/distutils, and I
think f2py has a dependency on one of those.

If there is a set of tools that would be common to both scipy and numpy,
having them included as a submodule would be a good idea.

Chuck


Re: [Numpy-discussion] Vendorize tempita

2016-09-30 Thread Charles R Harris
On Fri, Sep 30, 2016 at 9:21 AM, Benjamin Root  wrote:

> This is the first I am hearing of tempita (looks to be a templating
> language). How is it a dependency of numpy? Do I now need tempita in order
> to use numpy, or is it a build-time-only dependency?
>

Build time only. The virtue of tempita is that it can be used to generate
cython sources. We could adapt one of our current templating scripts to do
that also, but that would seem to be more work. Note that tempita is
currently included in cython, but the cython folks consider that an
implementation detail that should not be depended upon.
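For flavor, this is the kind of repetitive source generation templating buys (a hypothetical sketch: stdlib `string.Template` stands in for tempita's richer `{{...}}` syntax so it runs without tempita installed, and the stub names are made up):

```python
from string import Template

# Generate one C declaration per dtype from a single template, the way the
# build expands .pyx.in/.pxi.in sources (names here are hypothetical).
stub = Template("static void add_$name($ctype *out, $ctype *a, $ctype *b);\n")
pairs = [("int8", "npy_int8"), ("int16", "npy_int16"), ("int32", "npy_int32")]
src = "".join(stub.substitute(name=n, ctype=c) for n, c in pairs)
print(src)
```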



Chuck


Re: [Numpy-discussion] Vendorize tempita

2016-09-30 Thread Charles R Harris
On Fri, Sep 30, 2016 at 9:13 AM, Stephan Hoyer  wrote:

> One way to do this is to move to vendorized dependencies into an submodule
> of numpy itself (e.g., sklearn.externals.joblib, though maybe even a little
> more indirection than that would be valuable to make it clear that it isn't
> part of NumPy public API). This would avoid further enlarging the set of
> namespaces we use.
>
> In any case, I'm perfectly OK with using something like npy_tempita
> internally, too, as long as we can be sure that we're using NumPy's
> vendorized version, not whatever version is installed locally. We're not
> planning to actually install "npy_tempita" when installing numpy (even for
> dev installs), right?
>
>
>
The only thing in the tools directory included in a source distribution is
the swig directory. Tempita is only currently used by the cythonize script
also in the tools directory. The search path for the cythonize script is 1)
installed modules, 2) modules in the same directory, which is why it might be
good to rename the module `npy_tempita` so that it is always the one used.



Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Vendorize tempita

2016-09-30 Thread Charles R Harris
Hi All,

There is a PR to vendorize
tempita. This removes tempita as a dependency and simplifies some things.
Feedback on this step is welcome. One question is whether the package
should be renamed to something like `npy_tempita`, as otherwise an installed
tempita, if any, has priority.

Thoughts?

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] testing

2016-09-26 Thread Charles R Harris
Testing if this gets posted... Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] NumPy 1.11.2rc1

2016-09-12 Thread Charles R Harris
Hi All,

I'm pleased to announce the release of Numpy 1.11.2rc1. This release
supports Python 2.6 - 2.7, and 3.2 - 3.5 and fixes bugs and regressions
found in Numpy 1.11.1.  Wheels for Linux, Windows, and OSX can be found on
PyPI. Sources are available on both PyPI and Sourceforge.

Thanks to all who were involved in this release.

The following pull requests have been merged. PRs overridden by later
merges and trivial release notes updates have been omitted.


   -  #7736 BUG: Many functions silently drop 'keepdims' kwarg.
   -  #7738 ENH: Add extra kwargs and update doc of many MA methods.
   -  #7778 DOC: Update Numpy 1.11.1 release notes.
   -  #7793 BUG: MaskedArray.count treats negative axes incorrectly.
   -  #7816 BUG: Fix array too big error for wide dtypes.
   -  #7821 BUG: Make sure npy_mul_with_overflow_ detects overflow.
   -  #7824 MAINT: Allocate fewer bytes for empty arrays.
   -  #7847 MAINT,DOC: Fix some imp module uses and update f2py.compile
   docstring.
   -  #7849 MAINT: Fix remaining uses of deprecated Python imp module.
   -  #7851 BLD: Fix ATLAS version detection.
   -  #7896 BUG: Construct ma.array from np.array which contains padding.
   -  #7904 BUG: Fix float16 type not being called due to wrong ordering.
   -  #7917 BUG: Production install of numpy should not require nose.
   -  #7919 BLD: Fixed MKL detection for recent versions of this library.
   -  #7920 BUG: Fix for issue #7835 (ma.median of 1d).
   -  #7932 BUG: Monkey-patch _msvccompile.gen_lib_option like other
   compilers.
   -  #7939 BUG: Check for HAVE_LDOUBLE_DOUBLE_DOUBLE_LE in
   npy_math_complex.
   -  #7953 BUG: Guard against buggy comparisons in generic quicksort.
   -  #7954 BUG: Use keyword arguments to initialize Extension base class.
   -  #7955 BUG: Make sure numpy globals keep identity after reload.
   -  #7972 BUG: MSVCCompiler grows 'lib' & 'include' env strings
   exponentially.
   -  #8005 BLD: Remove __NUMPY_SETUP__ from builtins at end of setup.py.
   -  #8010 MAINT: Remove leftover imp module imports.
   -  #8020 BUG: Fix return of np.ma.count if keepdims is True and axis is
   None.
   -  #8024 BUG: Fix numpy.ma.median.
   -  #8031 BUG: Fix np.ma.median with only one non-masked value.
   -  #8044 BUG: Fix bug in NpyIter buffering with discontinuous arrays.

The following people contributed to this release. The '+' marks first time
contributors.

   - Allan Haldane
   - Bertrand Lefebvre
   - Charles Harris
   - Julian Taylor
   - Loïc Estève
   - Marshall Bockrath-Vandegrift+
   - Michael Seifert+
   - Pauli Virtanen
   - Ralf Gommers
   - Sebastian Berg
   - Shota Kawabuchi+
   - Thomas A Caswell
   - Valentin Valls+
   - Xavier Abellan Ecija+

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] gmane

2016-09-09 Thread Charles R Harris
Hi All,

Looks like gmane is going down. Does anyone
know of an alternative for searching and referencing the NumPy mail
archives?

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy 1.11.2

2016-09-09 Thread Charles R Harris
On Fri, Sep 9, 2016 at 1:20 AM, Sandro Tosi  wrote:

> what is the status for this? i checked on GH and
> https://github.com/numpy/numpy/milestone/43 seems to report no issue
> pending. the reason i'm asking is that i still have to package 1.11.1
> for debian, but i dont want to do all the work and then the next day
> you release a new version (oh, dear Murphy :) )
>

I'm planning on putting out 1.11.2rc1 this weekend, then 1-2 weeks to the
final.



Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Correct error of invalid axis arguments.

2016-09-05 Thread Charles R Harris
Hi All,

At the moment there are two error types raised when invalid axis arguments
are encountered: IndexError and ValueError. I prefer ValueError for
arguments; IndexError seems more appropriate when the bad axis value is
used as an index. In any case, having mixed error types is inconvenient,
but also inconvenient to change. Should we worry about that? If so, what
should the error be? Note that some of the mixup arises because the axis
values are not checked before use, in which case IndexError is raised.
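One way to get a single, consistent error type is to validate the axis before it is ever used as an index. A sketch of such a helper (hypothetical; the choice of IndexError here is arbitrary for illustration):

```python
def normalize_axis(axis, ndim):
    # Validate before use so every caller raises the same error type.
    # Checking up front avoids the mixed IndexError/ValueError behavior
    # described above, where the error depends on when the axis is consumed.
    if not -ndim <= axis < ndim:
        raise IndexError(
            "axis %d is out of bounds for array of dimension %d" % (axis, ndim)
        )
    return axis + ndim if axis < 0 else axis

print(normalize_axis(-1, 3))  # negative axes are mapped into range: 2
```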

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.distutils issue

2016-08-24 Thread Charles R Harris
On Wed, Aug 24, 2016 at 5:05 PM, Charles R Harris <charlesr.har...@gmail.com
> wrote:

>
>
> On Wed, Aug 24, 2016 at 1:41 PM, Pavlyk, Oleksandr <
> oleksandr.pav...@intel.com> wrote:
>
>> Hi All,
>>
>>
>>
>> According to the documentation page:
>>
>>
>>
>>http://docs.scipy.org/doc/numpy/reference/distutils.html
>>
>>
>>
>> Function add_library allows the following keywords:
>>
>>   extra_f77_compiler_args
>>
>>   extra_f90_compiler_args
>>
>>
>>
>> however setting them seems to have no effect for my extension. Digging
>> deeper, I discovered,
>>
>> the documentation is inconsistent with the implementation, as per
>>
>>
>>
>> https://github.com/numpy/numpy/blob/v1.11.0/numpy/distutils/
>> fcompiler/__init__.py#L569
>>
>>
>>
>> https://github.com/numpy/numpy/blob/v1.11.0/numpy/distutils/
>> fcompiler/__init__.py#L583
>>
>>
>>
>> And indeed, setting extra_f77_compile_arg has the effect I was looking
>> for.
>>
>> Fixing it is easy, but I am less certain whether we should fix the docs,
>> or the code.
>>
>>
>>
>> Given that add_extension lists extra_compile_args,
>> extra_f77_compile_args, etc, I would think it
>>
>> is the documentation that needs to change.
>>
>>
>>
>> Please confirm, and I will open up a pull request for this.
>>
>
> That's rather unfortunate, "compiler" would be better than "compile", but
> it is best to document the actual behavior. If we later settle on changing
> the argument we can do that, but it is a more involved process.
>
>
Although I suppose we could allow either in the future.
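For reference, a minimal setup fragment (module and source names are hypothetical placeholders) using the spelling the implementation actually honors:

```python
# setup.py sketch -- not built here; "fib" and "fib.f" are placeholders.
from numpy.distutils.core import Extension, setup

ext = Extension(
    name="fib",
    sources=["fib.f"],
    # The implementation reads extra_f77_compile_args / extra_f90_compile_args;
    # the *_compiler_args spelling shown in the docs is silently ignored.
    extra_f77_compile_args=["-O3"],
)

if __name__ == "__main__":
    setup(name="fib", ext_modules=[ext])
```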

Chuck

>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.distutils issue

2016-08-24 Thread Charles R Harris
On Wed, Aug 24, 2016 at 1:41 PM, Pavlyk, Oleksandr <
oleksandr.pav...@intel.com> wrote:

> Hi All,
>
>
>
> According to the documentation page:
>
>
>
>http://docs.scipy.org/doc/numpy/reference/distutils.html
>
>
>
> Function add_library allows the following keywords:
>
>   extra_f77_compiler_args
>
>   extra_f90_compiler_args
>
>
>
> however setting them seems to have no effect for my extension. Digging
> deeper, I discovered,
>
> the documentation is inconsistent with the implementation, as per
>
>
>
> https://github.com/numpy/numpy/blob/v1.11.0/numpy/
> distutils/fcompiler/__init__.py#L569
>
>
>
> https://github.com/numpy/numpy/blob/v1.11.0/numpy/
> distutils/fcompiler/__init__.py#L583
>
>
>
> And indeed, setting extra_f77_compile_arg has the effect I was looking
> for.
>
> Fixing it is easy, but I am less certain whether we should fix the docs,
> or the code.
>
>
>
> Given that add_extension lists extra_compile_args, extra_f77_compile_args,
> etc, I would think it
>
> is the documentation that needs to change.
>
>
>
> Please confirm, and I will open up a pull request for this.
>

That's rather unfortunate, "compiler" would be better than "compile", but
it is best to document the actual behavior. If we later settle on changing
the argument we can do that, but it is a more involved process.

Chuck

>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy 1.11.2

2016-08-14 Thread Charles R Harris
"Events, dear boy, events" ;) There were a couple of bugs that turned up at
the last moment that needed fixing. At the moment there are two, possibly
three, bugs that need finishing off.

   - A fix for compilation on PPC running RHEL 7.2 (done, but not verified)
   - Roll back Numpy reload error: more than one project was reloading.
   - Maybe fix crash for quicksort of object arrays with bogus comparison.

Chuck


On Sun, Aug 14, 2016 at 11:11 AM, Sandro Tosi <mo...@debian.org> wrote:

> hey there, what happened here? do you still plan to release a 1.11.2rc1
> soon?
>
> On Wed, Aug 3, 2016 at 9:09 PM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
> > Hi All,
> >
> > I would like to release Numpy 1.11.2rc1 this weekend. It will contain a
> few
> > small fixes and enhancements for windows and the last Scipy release. If
> > there are any pending PRs that you think should go in or be backported
> for
> > this release, please speak up.
> >
> > Chuck
> >
> > ___
> > NumPy-Discussion mailing list
> > NumPy-Discussion@scipy.org
> > https://mail.scipy.org/mailman/listinfo/numpy-discussion
> >
>
>
>
> --
> Sandro "morph" Tosi
> My website: http://sandrotosi.me/
> Me at Debian: http://wiki.debian.org/SandroTosi
> G+: https://plus.google.com/u/0/+SandroTosi
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Numpy 1.11.2

2016-08-03 Thread Charles R Harris
Hi All,

I would like to release Numpy 1.11.2rc1 this weekend. It will contain a few
small fixes and enhancements for windows and the last Scipy release. If
there are any pending PRs that you think should go in or be backported for
this release, please speak up.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] StackOverflow documentation

2016-07-21 Thread Charles R Harris
On Thu, Jul 21, 2016 at 5:47 AM, Jaime Fernández del Río <
jaime.f...@gmail.com> wrote:

> StackOverflow now also has documentation, and there already is a NumPy tag:
>
> http://stackoverflow.com/documentation/numpy
>
> Not sure what, if anything, we want to do with this, nor how to handle
> having two different sources with the same information. Any thoughts?
>
>
That's interesting. Not sure what to do there, maybe upload some of our
documentation? I'm a bit worried as numpy documentation changes with every
release.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Custom Dtype/Units discussion

2016-07-13 Thread Charles R Harris
Evening would work for me. Dinner?
On Jul 13, 2016 2:43 PM, "Ryan May"  wrote:

> On Mon, Jul 11, 2016 at 12:39 PM, Chris Barker 
> wrote:
>
>>
>>
>> On Sun, Jul 10, 2016 at 8:12 PM, Nathan Goldbaum 
>> wrote:
>>
>>>
>>> Maybe this can be an informal BOF session?
>>>
>>
>> or  maybe a formal BoF? after all, how formal do they get?
>>
>> Anyway, it was my understanding that we really needed to do some
>> significant refactoring of how numpy deals with dtypes in order to do this
>> kind of thing cleanly -- so where has that gone since last year?
>>
>> Maybe this conversation should be about how to build a more flexible
>> dtype system generally, rather than specifically about unit support.
>> (though unit support is a great use-case to focus on)
>>
>>
> So Thursday's options seem to be in the standard BOF slot (up against the
> Numfocus BOF), or doing something that evening, which would overlap at
> least part of multiple happy hour events. I lean towards evening. Thoughts?
>
> Ryan
>
> --
> Ryan May
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Custom Dtype/Units discussion

2016-07-11 Thread Charles R Harris
On Mon, Jul 11, 2016 at 11:39 AM, Chris Barker 
wrote:

>
>
> On Sun, Jul 10, 2016 at 8:12 PM, Nathan Goldbaum 
> wrote:
>
>>
>> Maybe this can be an informal BOF session?
>>
>
> or  maybe a formal BoF? after all, how formal do they get?
>
> Anyway, it was my understanding that we really needed to do some
> significant refactoring of how numpy deals with dtypes in order to do this
> kind of thing cleanly -- so where has that gone since last year?
>
> Maybe this conversation should be about how to build a more flexible dtype
> system generally, rather than specifically about unit support. (though unit
> support is a great use-case to focus on)
>

Note that Mark Wiebe will also be giving a talk Friday, so he may be
around. As the last person to add a type to Numpy and the designer of DyND
he might have some useful input. DyND development is pretty active and I'm
always curious how we can somehow move in that direction.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Custom Dtype/Units discussion

2016-07-10 Thread Charles R Harris
On Sun, Jul 10, 2016 at 12:20 AM, Nathaniel Smith  wrote:

> Hi Ryan,
>
> I'll be and SciPy and I'd love to talk about this :-). Things are a
> bit hectic for me on Mon/Tue/Wed between the Python Compilers Workshop
> and my talk, but do you want to meet up Thursday maybe?
>
>
I'll be at scipy also and Thursday sounds fine.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Accelerate or OpenBLAS for numpy / scipy wheels?

2016-06-28 Thread Charles R Harris
On Mon, Jun 27, 2016 at 9:46 PM, Matthew Brett 
wrote:

> Hi,
>
> I just succeeded in getting an automated dual arch build of numpy and
> scipy, using OpenBLAS.  See the last three build jobs in these two
> build matrices:
>
> https://travis-ci.org/matthew-brett/numpy-wheels/builds/140388119
> https://travis-ci.org/matthew-brett/scipy-wheels/builds/140684673
>
> Tests are passing on 32 and 64-bit.
>
> I didn't upload these to the usual Rackspace container at
> wheels.scipy.org to avoid confusion.
>
> So, I guess the question now is - should we switch to shipping
> OpenBLAS wheels for the next release of numpy and scipy?  Or should we
> stick with the Accelerate framework that comes with OSX?
>
> In favor of the Accelerate build : faster to build, it's what we've
> been doing thus far.
>
> In favor of OpenBLAS build : allows us to commit to one BLAS / LAPACK
> library cross platform, when we have the Windows builds working.
> Faster to fix bugs with good support from main developer.  No
> multiprocessing crashes for Python 2.7.
>

I'm still a bit nervous about OpenBLAS, see
https://github.com/scipy/scipy/issues/6286. That was with version 0.2.18,
which is pretty recent.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Numpy 1.11.1 release

2016-06-26 Thread Charles R Harris
Hi All,

I'm pleased to announce the release of Numpy 1.11.1. This release supports
Python 2.6 - 2.7, and 3.2 - 3.5 and fixes bugs and regressions found in
Numpy 1.11.0 as well as making several build related improvements.  Wheels
for Linux, Windows, and OSX can be found on PyPI. Sources are available on
both PyPI and Sourceforge.

Thanks to all who were involved in this release, and a special thanks to
Matthew Brett for his work on the Linux and Windows wheel infrastructure.

The following pull requests have been merged:


   - 7506 BUG: Make sure numpy imports on python 2.6 when nose is
   unavailable.
   - 7530 BUG: Floating exception with invalid axis in np.lexsort.
   - 7535 BUG: Extend glibc complex trig functions blacklist to glibc <
   2.18.
   - 7551 BUG: Allow graceful recovery for no compiler.
   - 7558 BUG: Constant padding expected wrong type in constant_values.
   - 7578 BUG: Fix OverflowError in Python 3.x. in swig interface.
   - 7590 BLD: Fix configparser.InterpolationSyntaxError.
   - 7597 BUG: Make np.ma.take work on scalars.
   - 7608 BUG: linalg.norm(): Don't convert object arrays to float.
   - 7638 BLD: Correct C compiler customization in system_info.py.
   - 7654 BUG: ma.median of 1d array should return a scalar.
   - 7656 BLD: Remove hardcoded Intel compiler flag -xSSE4.2.
   - 7660 BUG: Temporary fix for str(mvoid) for object field types.
   - 7665 BUG: Fix incorrect printing of 1D masked arrays.
   - 7670 BUG: Correct initial index estimate in histogram.
   - 7671 BUG: Boolean assignment no GIL release when transfer needs API.
   - 7676 BUG: Fix handling of right edge of final histogram bin.
   - 7680 BUG: Fix np.clip bug NaN handling for Visual Studio 2015.
   - 7724 BUG: Fix segfaults in np.random.shuffle.
   - 7731 MAINT: Change mkl_info.dir_env_var from MKL to MKLROOT.
   - 7737 BUG: Fix issue on OS X with Python 3.x, npymath.ini not installed.

The following developers contributed to this release, developers marked
with a '+' are first time contributors.

   - Allan Haldane
   - Amit Aronovitch+
   - Andrei Kucharavy+
   - Charles Harris
   - Eric Wieser+
   - Evgeni Burovski
   - Loïc Estève+
   - Mathieu Lamarre+
   - Matthew Brett
   - Matthias Geier
   - Nathaniel J. Smith
   - Nikola Forró+
   - Ralf Gommers
   - Ray Donnelly+
   - Robert Kern
   - Sebastian Berg
   - Simon Conseil
   - Simon Gibbons
   - Sorin Sbarnea+

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] block function

2016-06-21 Thread Charles R Harris
Hi All,

I've updated Stefan Otte's block function enhancement at
https://github.com/numpy/numpy/pull/7768. Could folks interested in that
function review the proposed grammar for the creation of blocked arrays.
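For readers finding this later: the proposed grammar composes nested lists of arrays, where inner lists are rows of blocks; the function was eventually released as `np.block`, so in recent NumPy the intended usage looks like:

```python
import numpy as np

# Nested lists describe the block layout: inner lists are rows of blocks.
A = np.eye(2)
Z = np.zeros((2, 2))
M = np.block([[A, Z],
              [Z, A]])
assert M.shape == (4, 4)
print(M)
```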

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Design feedback solicitation

2016-06-17 Thread Charles R Harris
On Fri, Jun 17, 2016 at 9:22 AM, Robert Kern  wrote:

> On Fri, Jun 17, 2016 at 4:08 PM, Pavlyk, Oleksandr <
> oleksandr.pav...@intel.com> wrote:
> >
> > Hi,
> >
> > I am new to this list, so I will start with an introduction. My name is
> Oleksandr Pavlyk. I now work at Intel Corp. on the Intel Distribution for
> Python, and previously worked at Wolfram Research for 12 years. My latest
> project was to write a mirror to numpy.random, named numpy.random_intel.
> The module uses MKL to sample from different distributions for efficiency.
> It provides support for different underlying algorithms for basic
> pseudo-random number generation, i.e. in addition to MT19937, it also
> provides SFMT19937, MT2203, etc.
> >
> > I recently published a blog about it:
> >
> >
> https://software.intel.com/en-us/blogs/2016/06/15/faster-random-number-generation-in-intel-distribution-for-python
> >
> > I originally attempted to simply replace numpy.random in the Intel
> Distribution for Python with the new module, but due to fixed seed
> backwards incompatibility this results in numerous test failures in numpy,
> scipy, pandas and other modules.
> >
> > Unlike numpy.random, the new module generates a vector of random numbers
> at a time, which can be done faster than repeatedly generating the same
> number of variates one at a time.
> >
> > The source code for the new module is not upstreamed yet, and this email
> is meant to solicit early community feedback to allow for faster acceptance
> of the proposed changes.
>
> Cool! You can find pertinent discussion here:
>
>   https://github.com/numpy/numpy/issues/6967
>
> And the current effort for adding new core PRNGs here:
>
>   https://github.com/bashtage/ng-numpy-randomstate
>

I wonder if the easiest thing to do at this point might be to implement a
new redesigned random module and keep the old one around for backward
compatibility? Not that that would make everything easy, but at least folks
could choose to use the new functions for speed and versatility if they
needed them. The current random module is pretty stable so maintenance
should not be too onerous.
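The fixed-seed constraint mentioned above is easy to state in code: downstream test suites assert exact streams, so any replacement generator behind the same seeded API breaks them. A small sketch:

```python
import numpy as np

# Same seed, same algorithm => identical stream. Downstream tests rely on
# this bit-for-bit, which is why swapping MT19937 for a different basic
# generator behind the existing API fails their assertions.
a = np.random.RandomState(1234).standard_normal(8)
b = np.random.RandomState(1234).standard_normal(8)
assert (a == b).all()
print(a[:3])
```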

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Integers to integer powers, let's make a decision

2016-06-06 Thread Charles R Harris
On Mon, Jun 6, 2016 at 2:11 PM, Marten van Kerkwijk <
m.h.vankerkw...@gmail.com> wrote:

> Hi Chuck,
>
> I consider either proposal an improvement, but among the two I favour
> returning float for `**`, because, like for `/`, it ensures one gets
> closest to the (mathematically) true answer in most cases, and makes
> duck-typing that much easier -- I'd like to be able to do x** y without
> having to worry whether x and y are python scalars or numpy arrays of
> certain type.
>
> I do agree with Nathaniel that it would be good to check what actually
> breaks. Certainly, if anybody is up to making a PR that implements either
> suggestion, I'd gladly check whether it breaks anything in astropy.
>
> I  should add that I have no idea how to assuage the fear that new code
> would break with old versions of numpy, but on the other hand, I don't know
> its vailidity either, as it seems one either develops larger projects  for
> multiple versions and tests, or writes more scripty things for whatever the
> current versions are. Certainly, by this argument I better not start using
> the new `@` operator!
>
> I do think the argument that for division it was easier because there was
> `//` already available is a red herring: here one can use `np.power(a, b,
> dtype=...)` if one really needs to.
>

It looks to me like users want floats, while developers want the easy path
of raising an error. Darn those users, they just make life sooo difficult...

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ENH: compute many inner products quickly

2016-06-05 Thread Charles R Harris
On Sun, Jun 5, 2016 at 6:41 PM, Stephan Hoyer  wrote:

> If possible, I'd love to add new functions for "generalized ufunc" linear
> algebra, and then deprecate (or at least discourage) using the older
> versions with inferior broadcasting rules. Adding a new keyword arg means
> we'll be stuck with an awkward API for a long time to come.
>
> There are three types of matrix/vector products for which ufuncs would be
> nice:
> 1. matrix-matrix product (covered by matmul)
> 2. matrix-vector product
> 3. vector-vector (inner) product
>
> It's straightforward to implement either of the latter two options by inserting
> dummy dimensions and then calling matmul, but that's a pretty awkward API,
> especially for inner products. Unfortunately, we already use the two most
> obvious one word names for vector inner products (inner and dot). But on
> the other hand, one word names are not very descriptive, and the short name
> "dot" probably mostly exists because of the lack of an infix operator.
>
> So I'll start by throwing out some potential new names:
>
> For matrix-vector products:
> matvecmul (if it's worth making a new operator)
>
> For inner products:
> vecmul (similar to matmul, but probably too ambiguous)
> dot_product
> inner_prod
> inner_product
>

I was using mulmatvec, mulvecmat, mulvecvec back when I was looking at
this. I suppose the mul could also go in the middle, or maybe change it to
x and put it in the middle: matxvec, vecxmat, vecxvec.
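The dummy-dimension workaround quoted above, spelled out for a stack of matrix-vector products:

```python
import numpy as np

A = np.arange(24.0).reshape(2, 3, 4)   # stack of two 3x4 matrices
v = np.ones((2, 4))                    # stack of two length-4 vectors

# matmul needs a trailing matrix axis, so promote v to a column vector,
# multiply, then squeeze the dummy axis back out -- workable, but awkward
# enough as an API that a dedicated matvec ufunc has appeal.
mv = (A @ v[..., None])[..., 0]
assert mv.shape == (2, 3)
```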

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Integers to integer powers, let's make a decision

2016-06-04 Thread Charles R Harris
On Sat, Jun 4, 2016 at 9:26 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Jun 4, 2016 7:23 PM, "Charles R Harris" <charlesr.har...@gmail.com>
> wrote:
> >
> [...]
> > We could always try the float option and see what breaks, but I expect
> there is a fair amount of code using small exponents like 2 or 3 where it
> is expected that the result is still integer. I would like more input from
> users than we have seen so far...
>
> Just to highlight this, if anyone wants to strengthen the argument for
> switching to float then this is something you can literally do: tweak a
> local checkout of numpy to return float from int**int and
> array-of-int**array-of-int, and then try running the test suites of
> projects like scikit-learn, astropy, nipy, scikit-image, ...
>
> (The reason I'm phrasing this as something that people who like the float
> idea should do is that generally when proposing a risky
> compatibility-breaking change, the onus is on the ones proposing it to
> demonstrate that the risk is ok.)
>

I was tempted for a bit, but I think the biggest compatibility problem is
not current usage, but the fact that code written assuming float results
will not work for earlier versions of numpy, and that would be a nasty
situation. Given that integers raised to negative integer powers are already
pretty much broken, making folks write around an exception will result in
code compatible with previous numpy versions.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Integers to integer powers, let's make a decision

2016-06-04 Thread Charles R Harris
On Sat, Jun 4, 2016 at 7:54 PM, <josef.p...@gmail.com> wrote:

>
>
> On Sat, Jun 4, 2016 at 9:16 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Sat, Jun 4, 2016 at 6:17 PM, <josef.p...@gmail.com> wrote:
>>
>>>
>>>
>>> On Sat, Jun 4, 2016 at 8:07 PM, Charles R Harris <
>>> charlesr.har...@gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Sat, Jun 4, 2016 at 5:27 PM, <josef.p...@gmail.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Sat, Jun 4, 2016 at 6:10 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>>>>
>>>>>> On Sat, Jun 4, 2016 at 2:07 PM, V. Armando Sole <s...@esrf.fr> wrote:
>>>>>> > Also in favor of 2. Always return a float for '**'
>>>>>>
>>>>>> Even if we did want to switch to this, it's such a major
>>>>>> backwards-incompatible change that I'm not sure how we could actually
>>>>>> make the transition without first making it an error for a while.
>>>>>>
>>>>>
>>>>> AFAIU, only the dtype for int**int would change. So, what would be the
>>>>> problem with FutureWarnings as with other dtype changes that were done in
>>>>> recent releases.
>>>>>
>>>>>
>>>> The main problem I see with that is that numpy integers would behave
>>>> differently than Python integers, and the difference would be silent. With
>>>> option 1 it is possible to write code that behaves the same up to overflow
>>>> and the error message would supply a warning when the exponent should be
>>>> float. One could argue that numpy scalar integer types could be made to
>>>> behave like python integers, but then their behavior would differ from
>>>> numpy arrays and numpy scalar arrays.
>>>>
>>>
>>> I'm not sure I understand.
>>>
>>> Do you mean
>>>
>>> np.arange(5)**2 would behave differently than np.arange(5)**np.int_(2)
>>>
>>> or 2**2 would behave differently than np.int_(2)**np.int(2)
>>>
>>
>> The second case. Python returns ints for non-negative integer powers of
>> ints.
>>
>>
>>>
>>> ?
>>>
>>>
>>> AFAICS, there are many cases where numpy scalars don't behave like
>>> python scalars. Also, does different behavior mean different type/dtype or
>>> different numbers.  (The first I can live with, the second requires human
>>> memory usage, which is a scarce resource.)
>>>
>>> >>> 2**(-2)
>>> 0.25
>>>
>>>
>> But we can't mix types in np.arrays and we can't depend on the element
>> values of arrays in the exponent, but only on their type, so 2 ** array([1,
>> -1]) must contain a single type and making that type float would surely
>> break code.  Scalar arrays, which are arrays, have the same problem. We
>> can't do what Python does with ndarrays and numpy scalars, and it would be
>> best to be consistent. Division was a simpler problem to deal with, as
>> there were two operators, `//` and `/`. If there were two exponential
>> operators life would be simpler.
>>
>
> What bothers me with the entire argument is that you are putting higher
> priority on returning a dtype than on returning the correct numbers.
>

Overflow in integer powers would be correct in modular arithmetic, at least
for unsigned. Signed is a bit trickier. But overflow is a known property of
numpy integer types. If we raise an exception for the negative exponents we
at least aren't returning incorrect numbers.
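The modular behavior referred to, sketched in pure Python for the unsigned 64-bit case:

```python
# Unsigned fixed-width integers wrap modulo 2**64, so an overflowing power
# is still "correct" in modular arithmetic -- it is just not the value most
# users expect.
def wrap_u64(x):
    return x & ((1 << 64) - 1)

print(wrap_u64(2 ** 64))       # wraps to 0
print(wrap_u64(3 * 2 ** 63))   # wraps past the top of the range
```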


>
> Reverse the argument: Because we cannot make the return type value
> dependent we **have** to return float, in order to get the correct number.
> (It's an argument not what we really have to do.)
>

From my point of view, backwards compatibility is the main reason for
choosing 1, otherwise I'd pick 2. If it weren't so easy to get floating
point by using floating exponents I'd probably choose differently.


>
>
> Which code really breaks, code that gets a float instead of an int, and
> with some advance warning users that really need to watch their memory can
> use np.power.
>
> My argument before was that I think a simple operator like `**` should
> work for 90+% of the users and match their expectation, and the users that
> need to watch dtypes can as well use the function.
>
> (I can also live with the exception from case 1., but I really think this
> is like the python 2 integer division "surprise")
>

Well, that is why we would raise an exception, making it less surprising ;)

We could always try the float option and see what breaks, but I expect
there is a fair amount of code using small exponents like 2 or 3 where it
is expected that the result is still integer. I would like more input from
users than we have seen so far...

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Integers to integer powers, let's make a decision

2016-06-04 Thread Charles R Harris
On Sat, Jun 4, 2016 at 6:17 PM, <josef.p...@gmail.com> wrote:

>
>
> On Sat, Jun 4, 2016 at 8:07 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Sat, Jun 4, 2016 at 5:27 PM, <josef.p...@gmail.com> wrote:
>>
>>>
>>>
>>> On Sat, Jun 4, 2016 at 6:10 PM, Nathaniel Smith <n...@pobox.com> wrote:
>>>
>>>> On Sat, Jun 4, 2016 at 2:07 PM, V. Armando Sole <s...@esrf.fr> wrote:
>>>> > Also in favor of 2. Always return a float for '**'
>>>>
>>>> Even if we did want to switch to this, it's such a major
>>>> backwards-incompatible change that I'm not sure how we could actually
>>>> make the transition without first making it an error for a while.
>>>>
>>>
>>> AFAIU, only the dtype for int**int would change. So, what would be the
>>> problem with FutureWarnings as with other dtype changes that were done in
>>> recent releases.
>>>
>>>
>> The main problem I see with that is that numpy integers would behave
>> differently than Python integers, and the difference would be silent. With
>> option 1 it is possible to write code that behaves the same up to overflow
>> and the error message would supply a warning when the exponent should be
>> float. One could argue that numpy scalar integer types could be made to
>> behave like python integers, but then their behavior would differ from
>> numpy arrays and numpy scalar arrays.
>>
>
> I'm not sure I understand.
>
> Do you mean
>
> np.arange(5)**2 would behave differently than np.arange(5)**np.int_(2)
>
> or 2**2 would behave differently than np.int_(2)**np.int(2)
>

The second case. Python returns ints for non-negative integer powers of
ints.


>
> ?
>
>
> AFAICS, there are many cases where numpy scalars don't behave like python
> scalars. Also, does different behavior mean different type/dtype or
> different numbers.  (The first I can live with, the second requires human
> memory usage, which is a scarce resource.)
>
> >>> 2**(-2)
> 0.25
>
>
But we can't mix types in np.arrays, and we can't depend on the element
values of arrays in the exponent, only on their type, so the result of
2 ** array([1, -1]) must have a single dtype, and making that dtype float
would surely break code. Scalar arrays, which are arrays, have the same problem. We
can't do what Python does with ndarrays and numpy scalars, and it would be
best to be consistent. Division was a simpler problem to deal with, as
there were two operators, `//` and `/`. If there were two exponential
operators life would be simpler.
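The type-not-value constraint described above can be seen directly (a minimal illustration; negative integer exponents are avoided here, since their treatment is exactly the question at issue):

```python
import numpy as np

# A ufunc's result dtype is a function of the operand *types* only:
a = np.arange(5, dtype=np.int16)
print((a ** 2).dtype)        # int16, regardless of the element values

# Python scalars, by contrast, pick the result type per *value*:
print(type(2 ** 2))          # int
print(type(2 ** -2))         # float
```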

Chuck


Re: [Numpy-discussion] Integers to integer powers, let's make a decision

2016-06-04 Thread Charles R Harris
On Sat, Jun 4, 2016 at 5:27 PM,  wrote:

>
>
> On Sat, Jun 4, 2016 at 6:10 PM, Nathaniel Smith  wrote:
>
>> On Sat, Jun 4, 2016 at 2:07 PM, V. Armando Sole  wrote:
>> > Also in favor of 2. Always return a float for '**'
>>
>> Even if we did want to switch to this, it's such a major
>> backwards-incompatible change that I'm not sure how we could actually
>> make the transition without first making it an error for a while.
>>
>
> AFAIU, only the dtype for int**int would change. So, what would be the
> problem with FutureWarnings as with other dtype changes that were done in
> recent releases.
>
>
The main problem I see with that is that numpy integers would behave
differently than Python integers, and the difference would be silent. With
option 1 it is possible to write code that behaves the same up to overflow
and the error message would supply a warning when the exponent should be
float. One could argue that numpy scalar integer types could be made to
behave like python integers, but then their behavior would differ from
numpy arrays and numpy scalar arrays.

Chuck


Re: [Numpy-discussion] Integers to integer powers, let's make a decision

2016-06-04 Thread Charles R Harris
On Sat, Jun 4, 2016 at 11:22 AM, Charles R Harris <charlesr.har...@gmail.com
> wrote:

> Hi All,
>
> I've made a new post so that we can make an explicit decision. AFAICT, the
> two proposals are
>
>
>1. Integers to negative integer powers raise an error.
>2. Integers to integer powers always results in floats.
>
> My own sense is that 1. would be closest to current behavior and using a
> float exponential when a float is wanted is an explicit way to indicate
> that desire. OTOH, 2. would be the most convenient default for everyday
> numerical computation, but I think would more likely break current code. I
> am going to come down on the side of 1., which I don't think should cause
> too many problems if we start with a {Future, Deprecation}Warning
> explaining the workaround.
>

Note that current behavior in 1.11 is such a mess
```
In [5]: array([0], dtype=int64) ** -1
Out[5]: array([-9223372036854775808])

In [6]: array([0], dtype=uint64) ** -1
Out[6]: array([ inf])
```
that the simplest approach might be to start by raising an error rather
than by trying to maintain current behavior while issuing a warning.

Chuck


[Numpy-discussion] Integers to integer powers, let's make a decision

2016-06-04 Thread Charles R Harris
Hi All,

I've made a new post so that we can make an explicit decision. AFAICT, the
two proposals are


   1. Integers to negative integer powers raise an error.
   2. Integers to integer powers always results in floats.

My own sense is that 1. would be closest to current behavior and using a
float exponential when a float is wanted is an explicit way to indicate
that desire. OTOH, 2. would be the most convenient default for everyday
numerical computation, but I think would more likely break current code. I
am going to come down on the side of 1., which I don't think should cause
too many problems if we start with a {Future, Deprecation}Warning
explaining the workaround.
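For everyday use under option 1, the explicit-float workaround would look like this (a minimal sketch):

```python
import numpy as np

a = np.arange(1, 5)

# State the float request explicitly, via a float exponent or a float base:
print(a ** -1.0)              # float result, no error
print(a.astype(float) ** -1)  # same values
```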

Chuck


Re: [Numpy-discussion] Integers to integer powers

2016-06-04 Thread Charles R Harris
On Tue, May 24, 2016 at 2:33 PM, R Schumacher  wrote:

> At 01:15 PM 5/24/2016, you wrote:
>
> On 5/24/2016 3:57 PM, Eric Moore wrote:
>
> Changing np.arange(10)**3 to have a non-integer dtype seems like a big
> change.
>
>
>
> What about np.arange(100)**5?
>
>
> Interesting, one warning per instantiation (Py2.7):
>
> >>> import numpy
> >>> a=numpy.arange(100)**5
> :1: RuntimeWarning: invalid value encountered in power
> >>> a=numpy.arange(100)**5.
> >>> b=numpy.arange(100.)**5
> >>> a==b
> array([ True,  True,  True,  True,  True,  True,  True,  True,  True,
> True,  True,  True,  True,  True,  True,  True,  True,  True,
> True,  True,  True,  True,  True,  True,  True,  True,  True,
> True,  True,  True,  True,  True,  True,  True,  True,  True,
> True,  True,  True,  True,  True,  True,  True,  True,  True,
> True,  True,  True,  True,  True,  True,  True,  True,  True,
> True,  True,  True,  True,  True,  True,  True,  True,  True,
> True,  True,  True,  True,  True,  True,  True,  True,  True,
> True,  True,  True,  True,  True,  True,  True,  True,  True,
> True,  True,  True,  True,  True,  True,  True,  True,  True,
> True,  True,  True,  True,  True,  True,  True,  True,  True,
> True], dtype=bool)
> >>> numpy.arange(100)**5
> array([          0,           1,          32,         243,        1024,
>               3125,        7776,       16807,       32768,       59049,
>             100000,      161051,      248832,      371293,      537824,
>             759375,     1048576,     1419857,     1889568,     2476099,
>            3200000,     4084101,     5153632,     6436343,     7962624,
>            9765625,    11881376,    14348907,    17210368,    20511149,
>           24300000,    28629151,    33554432,    39135393,    45435424,
>           52521875,    60466176,    69343957,    79235168,    90224199,
>          102400000,   115856201,   130691232,   147008443,   164916224,
>          184528125,   205962976,   229345007,   254803968,   282475249,
>          312500000,   345025251,   380204032,   418195493,   459165024,
>          503284375,   550731776,   601692057,   656356768,   714924299,
>          777600000,   844596301,   916132832,   992436543,  1073741824,
>         1160290625,  1252332576,  1350125107,  1453933568,  1564031349,
>         1680700000,  1804229351,  1934917632,  2073071593, -2147483648,
>        -2147483648, -2147483648, -2147483648, -2147483648, -2147483648,
>        -2147483648, -2147483648, -2147483648, -2147483648, -2147483648,
>        -2147483648, -2147483648, -2147483648, -2147483648, -2147483648,
>        -2147483648, -2147483648, -2147483648, -2147483648, -2147483648,
>        -2147483648, -2147483648, -2147483648, -2147483648, -2147483648])
> >>>
> >>> numpy.arange(100, dtype=numpy.int64)**5
> array([         0,          1,         32,        243,       1024,
>             3125,       7776,      16807,      32768,      59049,
>           100000,     161051,     248832,     371293,     537824,
>           759375,    1048576,    1419857,    1889568,    2476099,
>          3200000,    4084101,    5153632,    6436343,    7962624,
>          9765625,   11881376,   14348907,   17210368,   20511149,
>         24300000,   28629151,   33554432,   39135393,   45435424,
>         52521875,   60466176,   69343957,   79235168,   90224199,
>        102400000,  115856201,  130691232,  147008443,  164916224,
>        184528125,  205962976,  229345007,  254803968,  282475249,
>        312500000,  345025251,  380204032,  418195493,  459165024,
>        503284375,  550731776,  601692057,  656356768,  714924299,
>        777600000,  844596301,  916132832,  992436543, 1073741824,
>       1160290625, 1252332576, 1350125107, 1453933568, 1564031349,
>       1680700000, 1804229351, 1934917632, 2073071593, 2219006624,
>       2373046875, 2535525376, 2706784157, 2887174368, 3077056399,
>       3276800000, 3486784401, 3707398432, 3939040643, 4182119424,
>       4437053125, 4704270176, 4984209207, 5277319168, 5584059449,
>       5904900000, 6240321451, 6590815232, 6956883693, 7339040224,
>       7737809375, 8153726976, 8587340257, 9039207968, 9509900499],
> dtype=int64)
>

That is the Python default. To always see warnings do
`warnings.simplefilter('always')` before running.
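The filter behavior described here can be demonstrated with the stdlib `warnings` module alone (a minimal sketch):

```python
import warnings

def noisy():
    # Same warning, same source location, every call:
    warnings.warn("invalid value encountered in power", RuntimeWarning)

with warnings.catch_warnings(record=True) as once:
    warnings.simplefilter('default')   # report each location only once
    noisy(); noisy()

with warnings.catch_warnings(record=True) as always:
    warnings.simplefilter('always')    # report every occurrence
    noisy(); noisy()

print(len(once), len(always))
```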

Chuck


[Numpy-discussion] Numpy 1.11.1rc1 release

2016-05-26 Thread Charles R Harris
Hi All,

I am pleased to announce the release of Numpy 1.11.1rc1. The sources may be
found on sourceforge, and wheels
for OS X, Windows, and Linux will be available on pypi sometime in the next
few days. The pypi release is delayed due to the decision that the wheels
should go up before the sources in order that people not get a source
install when what they want are wheels. The Python versions supported are
2.6-2.7 and 3.2-3.5.

This release has mostly small fixes and build enhancements and should be
good out of the starting gate, but prudence requires a release candidate as
there are a few bits not tested in master. The following fixes have been
applied:

   - #7506 BUG: Make sure numpy imports on python 2.6 when nose is
   unavailable.
   - #7530 BUG: Floating exception with invalid axis in np.lexsort.
   - #7535 BUG: Extend glibc complex trig functions blacklist to glibc <
   2.18.
   - #7551 BUG: Allow graceful recovery for no compiler.
   - #7558 BUG: Constant padding expected wrong type in constant_values.
   - #7578 BUG: Fix OverflowError in Python 3.x. in swig interface.
   - #7590 BLD: Fix configparser.InterpolationSyntaxError.
   - #7597 BUG: Make np.ma.take work on scalars.
   - #7608 BUG: linalg.norm(): Don't convert object arrays to float.
   - #7638 BLD: Correct C compiler customization in system_info.py.
   - #7654 BUG: ma.median of 1d array should return a scalar.
   - #7656 BLD: Remove hardcoded Intel compiler flag -xSSE4.2.
   - #7660 BUG: Temporary fix for str(mvoid) for object field types.
   - #7665 BUG: Fix incorrect printing of 1D masked arrays.
   - #7670 BUG: Correct initial index estimate in histogram.
   - #7671 BUG: Boolean assignment no GIL release when transfer needs API.
   - #7676 BUG: Fix handling of right edge of final histogram bin.
   - #7680 BUG: Fix np.clip bug NaN handling for Visual Studio 2015.


The following people have contributed to this release


   - Allan Haldane
   - Amit Aronovitch
   - Charles Harris
   - Eric Wieser
   - Evgeni Burovski
   - Loïc Estève
   - Mathieu Lamarre
   - Matthew Brett
   - Matthias Geier
   - Nathaniel J. Smith
   - Nikola Forró
   - Ralf Gommers
   - Robert Kern
   - Sebastian Berg
   - Simon Conseil
   - Simon Gibbons
   - Sorin Sbarnea
   - chiffa

Chuck


Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread Charles R Harris
On Fri, May 20, 2016 at 1:15 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Fri, May 20, 2016 at 11:35 AM, Charles R Harris
> <charlesr.har...@gmail.com> wrote:
> >
> >
> > On Thu, May 19, 2016 at 9:30 PM, Nathaniel Smith <n...@pobox.com> wrote:
> >>
> >> So I guess what makes this tricky is that:
> >>
> >> - We want the behavior to the same for multiple-element arrays,
> >> single-element arrays, zero-dimensional arrays, and scalars -- the
> >> shape of the data shouldn't affect the semantics of **
> >>
> >> - We also want the numpy scalar behavior to match the Python scalar
> >> behavior
> >>
> >> - For Python scalars, int ** (positive int) returns an int, but int **
> >> (negative int) returns a float.
> >>
> >> - For arrays, int ** (positive int) and int ** (negative int) _have_
> >> to return the same type, because in general output types are always a
> >> function of the input types and *can't* look at the specific values
> >> involved, and in specific because if you do array([2, 3]) ** array([2,
> >> -2]) you can't return an array where the first element is int and the
> >> second is float.
> >>
> >> Given these immutable and contradictory constraints, the last bad
> >> option IMHO would be that we make int ** (negative int) an error in
> >> all cases, and the error message can suggest that instead of writing
> >>
> >> np.array(2) ** -2
> >>
> >> they should instead write
> >>
> >> np.array(2) ** -2.0
> >>
> >> (And similarly for np.int64(2) ** -2 versus np.int64(2) ** -2.0.)
> >>
> >> Definitely annoying, but all the other options seem even more
> >> inconsistent and confusing, and likely to encourage the writing of
> >> subtly buggy code...
> >>
> >> (I especially have in mind numpy's habit of silently switching between
> >> scalars and zero-dimensional arrays -- so it's easy to write code that
> >> you think handles arbitrary array dimensions, and it even passes all
> >> your tests, but then it fails when someone passes in a different shape
> >> data and triggers some scalar/array inconsistency. E.g. if we make **
> >> -2 work for scalars but not arrays, then this code:
> >>
> >> def f(arr):
> >> return np.sum(arr, axis=0) ** -2
> >>
> >> works as expected for 1-d input, tests pass, everyone's happy... but
> >> as soon as you try to pass in higher dimensional integer input it will
> >> fail.)
> >>
> >
> > Hmm, the Alexandrian solution. The main difficulty with this solution is
> > that it will likely break working code. We could try it, or take the safe
> > route of raising a (Visible)DeprecationWarning.
>
> Right, sorry, I was talking about the end goal -- there's a separate
> question of how we get there. Pretty much any solution is going to
> require some sort of deprecation cycle though I guess, and at least
> the deprecate -> error transition is a lot easier than the working ->
> working different transition.
>
> > The other option is to
> > simply treat the negative power case uniformly as floor division and
> raise
> > an error on zero division, but the difference from Python power would be
> > highly confusing. I think I would vote for the second option with a
> > DeprecationWarning.
>
> So "floor division" here would mean that k ** -n == 0 for all k and n
> except for k == 1, right? In addition to the consistency issue, that
> doesn't seem like a behavior that's very useful to anyone...
>

And -1 as well. The virtue is consistency while deprecating. Or we could
just back out the current changes in master and throw in deprecation
warnings. That has the virtue of simplicity and not introducing possible
code breaks.
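Spelled out, the floor-division reading would behave like this (a hypothetical helper, not an existing NumPy function):

```python
def int_pow_floor(k, n):
    """Hypothetical k ** -n under floor-division semantics (n > 0, k != 0)."""
    return 1 // k ** n

# |k| >= 2 collapses to zero; only k = 1 (and k = -1, with sign) survives:
print([int_pow_floor(k, 3) for k in (1, -1, 2, 3, 10)])   # [1, -1, 0, 0, 0]
```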

Chuck


Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread Charles R Harris
On Fri, May 20, 2016 at 12:35 PM, Charles R Harris <
charlesr.har...@gmail.com> wrote:

>
>
> On Thu, May 19, 2016 at 9:30 PM, Nathaniel Smith <n...@pobox.com> wrote:
>
>> So I guess what makes this tricky is that:
>>
>> - We want the behavior to the same for multiple-element arrays,
>> single-element arrays, zero-dimensional arrays, and scalars -- the
>> shape of the data shouldn't affect the semantics of **
>>
>> - We also want the numpy scalar behavior to match the Python scalar
>> behavior
>>
>> - For Python scalars, int ** (positive int) returns an int, but int **
>> (negative int) returns a float.
>>
>> - For arrays, int ** (positive int) and int ** (negative int) _have_
>> to return the same type, because in general output types are always a
>> function of the input types and *can't* look at the specific values
>> involved, and in specific because if you do array([2, 3]) ** array([2,
>> -2]) you can't return an array where the first element is int and the
>> second is float.
>>
>> Given these immutable and contradictory constraints, the last bad
>> option IMHO would be that we make int ** (negative int) an error in
>> all cases, and the error message can suggest that instead of writing
>>
>> np.array(2) ** -2
>>
>> they should instead write
>>
>> np.array(2) ** -2.0
>>
>> (And similarly for np.int64(2) ** -2 versus np.int64(2) ** -2.0.)
>>
>> Definitely annoying, but all the other options seem even more
>> inconsistent and confusing, and likely to encourage the writing of
>> subtly buggy code...
>>
>> (I especially have in mind numpy's habit of silently switching between
>> scalars and zero-dimensional arrays -- so it's easy to write code that
>> you think handles arbitrary array dimensions, and it even passes all
>> your tests, but then it fails when someone passes in a different shape
>> data and triggers some scalar/array inconsistency. E.g. if we make **
>> -2 work for scalars but not arrays, then this code:
>>
>> def f(arr):
>> return np.sum(arr, axis=0) ** -2
>>
>> works as expected for 1-d input, tests pass, everyone's happy... but
>> as soon as you try to pass in higher dimensional integer input it will
>> fail.)
>>
>>
> Hmm, the Alexandrian solution. The main difficulty with this solution is
> that it will likely break working code. We could try it, or take the safe
> route of raising a (Visible)DeprecationWarning. The other option is to
> simply treat the negative power case uniformly as floor division and raise
> an error on zero division, but the difference from Python power would be
> highly confusing. I think I would vote for the second option with a
> DeprecationWarning.
>
>
I suspect that the different behavior of int64 on my system is due to
inheritance from Python 2.7 int

In [1]: isinstance(int64(1), int)
Out[1]: True

That different behavior is also carried over for Python 3.

Chuck


Re: [Numpy-discussion] Integers to integer powers

2016-05-20 Thread Charles R Harris
On Thu, May 19, 2016 at 9:30 PM, Nathaniel Smith  wrote:

> So I guess what makes this tricky is that:
>
> - We want the behavior to the same for multiple-element arrays,
> single-element arrays, zero-dimensional arrays, and scalars -- the
> shape of the data shouldn't affect the semantics of **
>
> - We also want the numpy scalar behavior to match the Python scalar
> behavior
>
> - For Python scalars, int ** (positive int) returns an int, but int **
> (negative int) returns a float.
>
> - For arrays, int ** (positive int) and int ** (negative int) _have_
> to return the same type, because in general output types are always a
> function of the input types and *can't* look at the specific values
> involved, and in specific because if you do array([2, 3]) ** array([2,
> -2]) you can't return an array where the first element is int and the
> second is float.
>
> Given these immutable and contradictory constraints, the last bad
> option IMHO would be that we make int ** (negative int) an error in
> all cases, and the error message can suggest that instead of writing
>
> np.array(2) ** -2
>
> they should instead write
>
> np.array(2) ** -2.0
>
> (And similarly for np.int64(2) ** -2 versus np.int64(2) ** -2.0.)
>
> Definitely annoying, but all the other options seem even more
> inconsistent and confusing, and likely to encourage the writing of
> subtly buggy code...
>
> (I especially have in mind numpy's habit of silently switching between
> scalars and zero-dimensional arrays -- so it's easy to write code that
> you think handles arbitrary array dimensions, and it even passes all
> your tests, but then it fails when someone passes in a different shape
> data and triggers some scalar/array inconsistency. E.g. if we make **
> -2 work for scalars but not arrays, then this code:
>
> def f(arr):
> return np.sum(arr, axis=0) ** -2
>
> works as expected for 1-d input, tests pass, everyone's happy... but
> as soon as you try to pass in higher dimensional integer input it will
> fail.)
>
>
Hmm, the Alexandrian solution. The main difficulty with this solution is
that it will likely break working code. We could try it, or take the safe
route of raising a (Visible)DeprecationWarning. The other option is to
simply treat the negative power case uniformly as floor division and raise
an error on zero division, but the difference from Python power would be
highly confusing. I think I would vote for the second option with a
DeprecationWarning.



Chuck


[Numpy-discussion] Integers to integer powers

2016-05-19 Thread Charles R Harris
Hi All,

There are currently several pull requests apropos integer arrays/scalars to
integer powers and, because the area is messy and involves tradeoffs, I'd
like to see some discussion here on the list before proceeding.

*Scalars in 1.10*

In [1]: 1 ** -1
Out[1]: 1.0

In [2]: int16(1) ** -1
Out[2]: 1

In [3]: int32(1) ** -1
Out[3]: 1

In [4]: int64(1) ** -1
Out[4]: 1.0

In [5]: 2 ** -1
Out[5]: 0.5

In [6]: int16(2) ** -1
Out[6]: 0

In [7]: int32(2) ** -1
Out[7]: 0

In [8]: int64(2) ** -1
Out[8]: 0.5

In [9]: 0 ** -1
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 0 ** -1

ZeroDivisionError: 0.0 cannot be raised to a negative power

In [10]: int16(0) ** -1
/home/charris/.local/bin/ipython:1: RuntimeWarning: divide by zero
encountered in power
  #!/usr/bin/python
/home/charris/.local/bin/ipython:1: RuntimeWarning: invalid value
encountered in power
  #!/usr/bin/python
Out[10]: -9223372036854775808

In [11]: int32(0) ** -1
Out[11]: -9223372036854775808

In [12]: int64(0) ** -1
/home/charris/.local/bin/ipython:1: RuntimeWarning: divide by zero
encountered in long_scalars
  #!/usr/bin/python
Out[12]: inf

Proposed

   - for non-zero numbers the return type should be float.
   - for zero numbers a zero division error should be raised.




*Scalar Arrays in 1.10*
In [1]: array(1, dtype=int16) ** -1
Out[1]: 1

In [2]: array(1, dtype=int32) ** -1
Out[2]: 1

In [3]: array(1, dtype=int64) ** -1
Out[3]: 1

In [4]: array(2, dtype=int16) ** -1
Out[4]: 0

In [5]: array(2, dtype=int32) ** -1
Out[5]: 0

In [6]: array(2, dtype=int64) ** -1
Out[6]: 0

In [7]: array(0, dtype=int16) ** -1
/home/charris/.local/bin/ipython:1: RuntimeWarning: divide by zero
encountered in power
  #!/usr/bin/python
/home/charris/.local/bin/ipython:1: RuntimeWarning: invalid value
encountered in power
  #!/usr/bin/python
Out[7]: -9223372036854775808

In [8]: array(0, dtype=int32) ** -1
Out[8]: -9223372036854775808

In [9]: array(0, dtype=int64) ** -1
Out[9]: -9223372036854775808

In [10]: type(array(1, dtype=int64) ** -1)
Out[10]: numpy.int64

In [11]: type(array(1, dtype=int32) ** -1)
Out[11]: numpy.int64

In [12]: type(array(1, dtype=int16) ** -1)
Out[12]: numpy.int64

Note that the return type is always int64 in all these cases. However, type
is preserved in non-scalar arrays, although the value of int16 is not
compatible with int32 and int64 for zero division.

In [22]: array([0]*2, dtype=int16) ** -1
Out[22]: array([0, 0], dtype=int16)

In [23]: array([0]*2, dtype=int32) ** -1
Out[23]: array([-2147483648, -2147483648], dtype=int32)

In [24]: array([0]*2, dtype=int64) ** -1
Out[24]: array([-9223372036854775808, -9223372036854775808])

Proposed:

   - Raise an ZeroDivisionError for zero division, that is, in the ufunc.
   - Scalar arrays to return scalar arrays


Thoughts?
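A sketch of what the proposed scalar rule would look like as a Python-level wrapper (`int_power` is a hypothetical helper, not NumPy API; NumPy would implement this inside the ufunc):

```python
import numpy as np

def int_power(base, exp):
    """Proposed rule: int ** negative-int returns float,
    0 ** negative-int raises ZeroDivisionError."""
    base, exp = np.asarray(base), np.asarray(exp)
    if (np.issubdtype(base.dtype, np.integer)
            and np.issubdtype(exp.dtype, np.integer)
            and np.any(exp < 0)):
        if np.any((base == 0) & (exp < 0)):
            raise ZeroDivisionError("0 cannot be raised to a negative power")
        return np.power(base.astype(np.float64), exp)
    return np.power(base, exp)

print(int_power(2, -1))    # float result
print(int_power(2, 3))     # integer result, unchanged
```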

Chuck


[Numpy-discussion] Scipy 2016 attending

2016-05-18 Thread Charles R Harris
Hi All,

Out of curiosity, who all here intends to be at Scipy 2016?

Chuck


Re: [Numpy-discussion] Calling C code that assumes SIMD aligned data.

2016-05-05 Thread Charles R Harris
On Thu, May 5, 2016 at 2:10 PM, Øystein Schønning-Johansen <
oyste...@gmail.com> wrote:

> Thanks for your answer, Francesc. Knowing that there is no numpy solution
> saves the work of searching for this. I've not tried the solution described
> at SO, but it looks like a real performance killer. I'll rather try to
> override malloc with glibs malloc_hooks or LD_PRELOAD tricks. Do you think
> that will do it? I'll try it and report back.
>
> Thanks,
> -Øystein
>

Might take a look at how numpy handles this in
`numpy/core/src/umath/simd.inc.src`.
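For reference, the user-level workaround usually suggested (the over-allocate-and-slice trick alluded to in the SO answer mentioned above) can be sketched as follows; `aligned_empty` is a hypothetical helper, not NumPy API:

```python
import numpy as np

def aligned_empty(shape, dtype, align=32):
    """Allocate an uninitialized array whose data pointer is `align`-byte aligned."""
    dtype = np.dtype(dtype)
    n = int(np.prod(shape))
    # Over-allocate by `align` bytes so an aligned offset always exists:
    buf = np.empty(n * dtype.itemsize + align, dtype=np.uint8)
    offset = (-buf.ctypes.data) % align
    # Slice to the aligned offset, then reinterpret as the target dtype:
    return buf[offset:offset + n * dtype.itemsize].view(dtype).reshape(shape)

a = aligned_empty((4, 4), np.float64, align=32)
assert a.ctypes.data % 32 == 0
```

Note that the slice keeps the oversized byte buffer alive via the `base` attribute, so no copy is made.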



Chuck


[Numpy-discussion] Preparing for 1.12 branch

2016-04-16 Thread Charles R Harris
Hi All,

This is just a request that numpy reviewers tag PRs that they think merit
inclusion in 1.12 with `1.12.0 release`. The tag doesn't mean that the PR
need be in 1.12, but it will help prioritize the review process.

Chuck


Re: [Numpy-discussion] linux wheels coming soon

2016-04-13 Thread Charles R Harris
On Wed, Apr 13, 2016 at 1:15 PM, Matthew Brett 
wrote:

> On Tue, Apr 12, 2016 at 7:15 PM, Matthew Brett 
> wrote:
> > Hi,
> >
> > On Sat, Apr 2, 2016 at 6:11 PM, Matthew Brett 
> wrote:
> >> On Fri, Mar 25, 2016 at 6:39 AM, Peter Cock 
> wrote:
> >>> On Fri, Mar 25, 2016 at 3:02 AM, Robert T. McGibbon <
> rmcgi...@gmail.com> wrote:
>  I suspect that many of the maintainers of major scipy-ecosystem
> projects are
>  aware of these (or other similar) travis wheel caches, but would
> guess that
>  the pool of travis-ci python users who weren't aware of these wheel
> caches
>  is much much larger. So there will still be a lot of travis-ci clock
> cycles
>  saved by manylinux wheels.
> 
>  -Robert
> >>>
> >>> Yes exactly. Availability of NumPy Linux wheels on PyPI is definitely
> something
> >>> I would suggest adding to the release notes. Hopefully this will help
> trigger
> >>> a general availability of wheels in the numpy-ecosystem :)
> >>>
> >>> In the case of Travis CI, their VM images for Python already have a
> version
> >>> of NumPy installed, but having the latest version of NumPy and SciPy
> etc
> >>> available as Linux wheels would be very nice.
> >>
> >> We're very nearly there now.
> >>
> >> The latest versions of numpy, scipy, scikit-image, pandas, numexpr,
> >> statsmodels wheels for testing at
> >>
> http://ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com/
> >>
> >> Please do test with:
> >>
> >> python -m pip install --upgrade pip
> >>
> >> pip install --trusted-host=
> ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com
> >> --find-links=
> http://ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com
> >> numpy scipy scikit-learn numexpr
> >>
> >> python -c 'import numpy; numpy.test("full")'
> >> python -c 'import scipy; scipy.test("full")'
> >>
> >> We would love to get any feedback as to whether these work on your
> machines.
> >
> > I've just rebuilt these wheels with the just-released OpenBLAS 0.2.18.
> >
> > OpenBLAS is now passing all its own tests and tests on numpy / scipy /
> > scikit-learn at http://build.openblas.net/builders
> >
> > Our tests of the wheels look good too:
> >
> > http://nipy.bic.berkeley.edu/builders/manylinux-2.7-debian
> > http://nipy.bic.berkeley.edu/builders/manylinux-2.7-debian
> > https://travis-ci.org/matthew-brett/manylinux-testing
> >
> > So I think these are ready to go.  I propose uploading these wheels
> > for numpy and scipy to pypi tomorrow unless anyone has an objection.
>
> Done.  If y'all are on linux, and you have pip >= 8.11,  you should
> now see this kind of thing:
>
> $ pip install numpy scipy
> Collecting numpy
>   Downloading numpy-1.11.0-cp27-cp27mu-manylinux1_x86_64.whl (15.3MB)
> 100% || 15.3MB 61kB/s
> Collecting scipy
>   Downloading scipy-0.17.0-cp27-cp27mu-manylinux1_x86_64.whl (39.5MB)
> 100% || 39.5MB 24kB/s
> Installing collected packages: numpy, scipy
> Successfully installed numpy-1.11.0 scipy-0.17.0
>

Great work. It is nice that we are finally getting the Windows thing
squared away after all these years.

Chuck


Re: [Numpy-discussion] Floor divison on int returns float

2016-04-13 Thread Charles R Harris
On Wed, Apr 13, 2016 at 2:48 PM, Nathaniel Smith  wrote:

> On Apr 13, 2016 9:08 AM, "Robert Kern"  wrote:
> >
> > On Wed, Apr 13, 2016 at 3:17 AM, Antony Lee 
> wrote:
> > >
> > > This kind of issue (see also
> https://github.com/numpy/numpy/issues/3511) has become more annoying now
> that indexing requires integers (indexing with a float raises a
> VisibleDeprecationWarning).  The argument "dividing an uint by an int may
> give a result that does not fit in an uint nor in an int" does not sound
> very convincing to me,
> >
> > It shouldn't because that's not the rule that numpy follows. The range
> of the result is never considered. Both *inputs* are cast to the same type
> that can represent the full range of either input type (for that matter,
> the actual *values* of the inputs are also never considered). In the case
> of uint64 and int64, there is no really good common type (the integer
> hierarchy has to top out somewhere), but float64 merely loses resolution
> rather than cutting off half of the range of uint64.
>
> Let me play devil's advocate for a moment, since I've just been
> playing out this debate in my own mind and you've done a good job of
> articulating the case for that side :-).
>
> The counter argument is: it doesn't really matter about having a
> common type or not; what matters is whether the operation can be
> defined sensibly. For uint64 <op> int64, this is actually not a
> problem: we provide 2s complement signed ints, so uint64 and int64 are
> both integers-mod-2**64, just choosing different representatives for
> the equivalence classes in the upper half of the ring. In particular,
> the uint64 and int64 ranges are isomorphic to each other.
>
> or with less jargon: casting between uint64 and int64 commutes with
> all arithmetic operations, so you actually get the same result
> performing the operation in infinite precision and then casting to
> uint64 or int64, or casting both operations to uint64 or int64 and
> then casting the result to uint64 or int64. Basically the operations
> are totally well-defined even if we stick within integers, and the
> casting is just another form of integer wraparound; we're already
> happy to tolerate wraparound for int64 <op> int64 or uint64 <op>
> uint64, so it's not entirely clear why we go all the way to float to
> avoid it for uint64 <op> int64.
>
> [On second thought... I'm actually not 100% sure that the
> all-operations-commute-with-casting thing is true in the case of //'s
> rounding behavior. I would have to squint a lot to figure that out. I
> guess comparison operations are another exception -- a < b !=
> np.uint64(a) < np.uint64(b) in general.]
>

I looked this up once: `C` returns unsigned in the scalar case when both
operands have the same width. See *Usual Arithmetic Conversions* in the C
standard. I think that is not a bad choice, but there is the back
compatibility problem, plus it is a bit exceptional.
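The promotion NumPy actually performs for the mixed pair can be checked directly (a minimal illustration):

```python
import numpy as np

# uint64 and int64 have no lossless common integer type, so NumPy
# promotes the pair to float64, trading resolution for range:
print(np.result_type(np.uint64, np.int64))   # float64
print((np.uint64(2) + np.int64(3)).dtype)    # float64
```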

Chuck


[Numpy-discussion] Preliminary schedule for 1.12

2016-04-09 Thread Charles R Harris
Hi All,

As we are trying out an accelerated release schedule this year it is time
to start thinking of the 1.12 release. My current plan is to release a
Numpy 1.11.1 at the end of the month with a few small fixups.  Numpy 1.11.0
looks to have been one of our more successful releases and there are
currently only three fixups in 1.11.x, none of which are major, so I think
we can just release with no betas or release candidates unless something
terrible turns up. After that release I'm looking to branch 1.12.x in early
to mid May aiming at a final sometime in late July or early August.

The main thing I think we must have in 1.12 is `__numpy_ufunc__`, so unless
someone else wants to resurrect that topic I will do so myself starting
sometime next week. I don't think a lot of work is needed to finish things
up, Nathaniel's PR #6001 is a
good start and with the addition of some opt out code that adheres to the
Python convention should provide a solution we can all live with. Others
may disagree, which is why we are still discussing the topic at this late
date, but I'm hopeful.

If there are other PRs or issues that folks feel need to be in 1.12.x,
please reply to this post.


Chuck

