Re: [Numpy-discussion] ufunc reduceat behavior on empty slices

2016-07-29 Thread Stephan Hoyer
Jaime brought up the same issue recently, along with some other issues for
ufunc.reduceat:
https://mail.scipy.org/pipermail/numpy-discussion/2016-March/075199.html

I completely agree with both of you that the current behavior for empty
slices is very strange and should be changed to remove the special case.
Nathaniel Smith voiced the same opinion on the GitHub issue [1].

I think the path forward here (as Nathaniel writes) is pretty clear:
1. Start issuing a FutureWarning about a future change.
2. Fix this in a release or two.
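
To make step 1 concrete, here is a minimal sketch of the kind of warning
that could be emitted (the message text and the placement inside the
reduceat machinery are hypothetical, not an actual patch):

import warnings

def _warn_reduceat_empty_slice():
    # hypothetical helper, called whenever reduceat encounters an empty
    # slice, i.e. indices[i] >= indices[i+1]
    warnings.warn(
        "reduceat returns a[indices[i]] for empty slices; this will "
        "change to ufunc.identity in a future release",
        FutureWarning, stacklevel=3)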

[1] https://github.com/numpy/numpy/issues/834

On Fri, Jul 29, 2016 at 11:42 AM, Erik Brinkman wrote:

> Hi,
>
> The behavior of a ufunc's reduceat on empty slices seems a little strange,
> and I wonder if there's a reason behind it / if there's a route to
> potentially changing it. First, I'll go into a little background.
>
> I've been making a lot of use of ufunc's reduceat functionality on
> staggered arrays. In general, I'll have "n" arrays, the i-th with size
> "s[i]", and I'll store them in one array "x", such that "s.sum() == x.size".
> reduceat is great because I use
>
> ufunc.reduceat(x, np.insert(s[:-1].cumsum(), 0, 0))
>
> to get some summary information about each array. However, reduceat seems
> to behave strangely for empty slices. To make things concrete, let's assume:
>
> import numpy as np
> s = np.array([3, 0, 2])
> x = np.arange(s.sum())
> inds = np.insert(s[:-1].cumsum(), 0, 0)
> # [0, 3, 3]
> np.add.reduceat(x, inds)
> # [3, 3, 7] not [3, 0, 7]
> # This is distinct from
> np.fromiter(map(np.add.reduce, np.array_split(x, inds[1:])), x.dtype,
> s.size)
> # [3, 0, 7] what I wanted
>
> The current documentation on reduceat first states:
>
> For i in range(len(indices)), reduceat computes
> ufunc.reduce(a[indices[i]:indices[i+1]])
>
> That would suggest the outcome that I expected. However, the examples
> section contradicts that statement and instead suggests that the actual
> algorithm is more akin to:
>
> ufunc.reduce(a[indices[i]:indices[i+1]]) if indices[i+1] > indices[i] else
> a[indices[i]]
>
> Looking at the source, it seems like it's copying a[indices[i]], and then,
> while there are more elements to process, it keeps reducing, resulting in
> this unexpected behavior. It seems like the proper thing to do would be to
> start with ufunc.identity and then reduce. This is slightly less performant
> than what's implemented, but more "correct." There could, of course, just
> be a switch to copy the identity only when the slice is empty.
>
> Is there a reason it's implemented like this? Is it just for performance,
> or is this strange behavior *useful* somewhere? It seems like "fixing" this
> would be bad because you'll be changing somewhat documented functionality
> in a backwards-incompatible way. What would the best approach to "fixing"
> this be? Add another function, "reduceat_"? Add a flag to reduceat to do
> the proper thing for empty slices?
>
> Finally, is there a good way to work around this? I think for now I'm just
> going to mask out the empty slices and use insert to add them back in, but
> if I'm missing an obvious solution, I'll look at that too. I need to mask
> them out because np.add.reduceat(x, 5) would ideally return 0, but instead
> it throws an error since 5 is out of range...
>
> Thanks for indulging my curiosity,
> Erik
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] ufunc reduceat behavior on empty slices

2016-07-29 Thread Erik Brinkman
Hi,

The behavior of a ufunc's reduceat on empty slices seems a little strange,
and I wonder if there's a reason behind it / if there's a route to
potentially changing it. First, I'll go into a little background.

I've been making a lot of use of ufunc's reduceat functionality on staggered
arrays. In general, I'll have "n" arrays, the i-th with size "s[i]", and I'll
store them in one array "x", such that "s.sum() == x.size". reduceat is
great because I use

ufunc.reduceat(x, np.insert(s[:-1].cumsum(), 0, 0))

to get some summary information about each array. However, reduceat seems
to behave strangely for empty slices. To make things concrete, let's assume:

import numpy as np
s = np.array([3, 0, 2])
x = np.arange(s.sum())
inds = np.insert(s[:-1].cumsum(), 0, 0)
# [0, 3, 3]
np.add.reduceat(x, inds)
# [3, 3, 7] not [3, 0, 7]
# This is distinct from
np.fromiter(map(np.add.reduce, np.array_split(x, inds[1:])), x.dtype,
s.size)
# [3, 0, 7] what I wanted

The current documentation on reduceat first states:

For i in range(len(indices)), reduceat computes
ufunc.reduce(a[indices[i]:indices[i+1]])

That would suggest the outcome that I expected. However, the examples
section contradicts that statement and instead suggests that the actual
algorithm is more akin to:

ufunc.reduce(a[indices[i]:indices[i+1]]) if indices[i+1] > indices[i] else
a[indices[i]]
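
To make the two readings concrete, here is a small pure-Python model (a
sketch for illustration, not the actual C implementation; it matches the
examples above for np.add):

import numpy as np

def reduceat_current(ufunc, a, indices):
    # mirrors the observed behavior: an empty slice yields a[indices[i]]
    ends = list(indices[1:]) + [len(a)]
    return np.array([ufunc.reduce(a[s:e]) if e > s else a[s]
                     for s, e in zip(indices, ends)])

def reduceat_documented(ufunc, a, indices):
    # what the docstring's formula implies: an empty slice reduces to
    # ufunc.identity (0 for np.add)
    ends = list(indices[1:]) + [len(a)]
    return np.array([ufunc.reduce(a[s:e]) for s, e in zip(indices, ends)])

# reduceat_current(np.add, x, inds)     -> [3, 3, 7]
# reduceat_documented(np.add, x, inds)  -> [3, 0, 7]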

Looking at the source, it seems like it's copying a[indices[i]], and then,
while there are more elements to process, it keeps reducing, resulting in
this unexpected behavior. It seems like the proper thing to do would be to
start with ufunc.identity and then reduce. This is slightly less performant
than what's implemented, but more "correct." There could, of course, just be
a switch to copy the identity only when the slice is empty.

Is there a reason it's implemented like this? Is it just for performance,
or is this strange behavior *useful* somewhere? It seems like "fixing" this
would be bad because you'll be changing somewhat documented functionality
in a backwards-incompatible way. What would the best approach to "fixing"
this be? Add another function, "reduceat_"? Add a flag to reduceat to do
the proper thing for empty slices?

Finally, is there a good way to work around this? I think for now I'm just
going to mask out the empty slices and use insert to add them back in, but
if I'm missing an obvious solution, I'll look at that too. I need to mask
them out because np.add.reduceat(x, 5) would ideally return 0, but instead
it throws an error since 5 is out of range...
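
For reference, a sketch of that masking workaround (assuming the "s", "x",
and "inds" from the example above, and np.add, whose identity is 0):

nonempty = s > 0
partial = np.add.reduceat(x, inds[nonempty])
# positions within "partial" where the empty slices' identities belong
empty_pos = np.flatnonzero(~nonempty) - np.arange(np.count_nonzero(~nonempty))
result = np.insert(partial, empty_pos, 0)
# result -> [3, 0, 7]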

Thanks for indulging my curiosity,
Erik
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is there any official position on PEP484/mypy?

2016-07-29 Thread Stephan Hoyer
I'm a big fan of type annotations and would support moving your repo over
to the official typeshed repo or the NumPy GitHub organization to indicate
its official status. This is excellent work -- thank you for putting in
the effort!

Like Ben, I have also wished for type annotation support for dimension
shapes/sizes. Someone recently suggested using datashape as potential
syntax for this on the Blaze mailing list [1]. I have no idea how hard it
would be to actually implement type inference for shape. Possibly an
interesting research project? I know it's out of scope for mypy / PEP 484
for now.

[1]
https://groups.google.com/a/continuum.io/forum/#!topic/blaze-dev/0vNo4f-tNSk

On Fri, Jul 29, 2016 at 9:31 AM, Daniel Moisset wrote:

> I don't think a tool like mypy or PEP 484 can talk about specific sizes
> (like the MxN and NxP for a matrix multiplication), but probably there are
> things that can be done at least about dimensionality (saying "a and b are
> 2-d matrices, v is a 1-d vector"). But that's much farther down the road.
> For now you'll be able to detect simpler errors like treating an ndarray as
> a python list, method name misspells, or wrong counts/order of method
> arguments.
>
> Best,
> D.
>
> On Fri, Jul 29, 2016 at 2:31 PM, Benjamin Root wrote:
>
>> One thing that I have always wished for from a project like mypy is the
>> ability to annotate what the expected shape should be. Too often, I get a
>> piece of code from a coworker and it comes with no docstring explaining the
>> expected dimensions of the input arrays and what the output array is going
>> to be. What would be really awesome is the ability to do something like
>> annotate that "a" is MxN, and "b" is NxP, and that "c" is Px3. Even if the
>> linter can't really check to make sure that the shapes would be respected,
>> it would still be nice to have a common way of expressing the expected
>> shapes in this annotation format.
>>
>> As for matplotlib, we would need to express much more complicated
>> annotations, because our API is so flexible. It would be useful to keep an
>> eye out for those needs as well.
>>
>> Cheers!
>> Ben Root
>>
>>
>> On Fri, Jul 29, 2016 at 5:33 AM, Daniel Moisset wrote:
>>
>>> Hi Sebastian, thanks for your reply
>>>
>>> I'm glad to hear that you see value in having type annotations. Just to
>>> clarify, my question was aimed at surveying if there was interest in
>>> accepting the work we're already doing if we contribute it and if it has
>>> value for the numpy project. I'm aware there's effort involved; some
>>> colleagues and me are already involved doing that at
>>> https://github.com/machinalis/mypy-data because it's valuable for
>>> ourselves, so the volunteers are already here. You of course are invited to
>>> comment on the existing code and try it :) (or joining the effort, goes
>>> without saying)
>>>
>>> Running the checker on the test suite is probably the best way to
>>> validate the annotations (the normal way would be checking the annotations
>>> against the code, but that doesn't work with C extensions like numpy).
>>> That's something we haven't been doing yet but it's an obvious next step
>>> now that some simple examples are working.
>>> WRT "I wonder if all or most of numpy can be easily put into it.",
>>> we've covered ndarray (and matrix soon), which are the core types; things
>>> built upon that shouldn't be too hard. We found some snags along the way
>>> [1] [2], but none of it is a showstopper and I'm quite sure we'll fix those
>>> in time. But of course, if someone wants to try it out, it will be a better
>>> validation than my optimism to see if this makes sense :)
>>>
>>> Thanks again and I'd be happy to hear more opinions from other numpy
>>> devs!
>>>
>>> Best,
>>> D.
>>>
>>> [1] http://www.machinalis.com/blog/writing-type-stubs-for-numpy/
>>> [2] https://github.com/machinalis/mypy-data/issues
>>>
>>>
>>> On 29 Jul 2016 08:31, "Sebastian Berg" wrote:
>>>
 On Wed, 2016-07-27 at 20:07 +0100, Daniel Moisset wrote:
 >
 > Hi,
 >
 > I work at Machinalis where we use a lot of numpy (and the pydata stack
 > in general). Recently we've also been getting involved with mypy,
 > which is a tool to type check (not at runtime, think of it as a
 > linter) annotated python code (the way of annotating python types has
 > been recently standardized in PEP 484).
 >
 > As part of that involvement we've started creating type annotations
 > for the Python libraries we use most, which include numpy. Mypy
 > provides a way to specify types with annotations in separate files in
 > case you don't have control over a library, so we have created an
 > initial proof of concept at [1], and we are actively improving it.
 > You can find some additional information about it and some problems
 > we've found on the way at this blogpost [2].
 >
 > What I wanted to ask is if the people involved on the numpy project
 > are aware of PEP484 and if you have some interest in starting using them.

Re: [Numpy-discussion] Is there any official position on PEP484/mypy?

2016-07-29 Thread Daniel Moisset
I don't think a tool like mypy or PEP 484 can talk about specific sizes
(like the MxN and NxP for a matrix multiplication), but probably there are
things that can be done at least about dimensionality (saying "a and b are
2-d matrices, v is a 1-d vector"). But that's much farther down the road.
For now you'll be able to detect simpler errors like treating an ndarray as
a python list, method name misspells, or wrong counts/order of method
arguments.
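
For instance, with the stubs on mypy's search path, code like the following
would be flagged (a hedged illustration; the exact messages depend on the
stubs' coverage):

import numpy as np

a = np.arange(9)
a.append(1)              # treating an ndarray like a python list
a.resape((3, 3))         # misspelled method name ("reshape")
a.reshape((3, 3), (1,))  # wrong count/order of arguments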

Best,
   D.

On Fri, Jul 29, 2016 at 2:31 PM, Benjamin Root wrote:

> One thing that I have always wished for from a project like mypy is the
> ability to annotate what the expected shape should be. Too often, I get a
> piece of code from a coworker and it comes with no docstring explaining the
> expected dimensions of the input arrays and what the output array is going
> to be. What would be really awesome is the ability to do something like
> annotate that "a" is MxN, and "b" is NxP, and that "c" is Px3. Even if the
> linter can't really check to make sure that the shapes would be respected,
> it would still be nice to have a common way of expressing the expected
> shapes in this annotation format.
>
> As for matplotlib, we would need to express much more complicated
> annotations, because our API is so flexible. It would be useful to keep an
> eye out for those needs as well.
>
> Cheers!
> Ben Root
>
>
> On Fri, Jul 29, 2016 at 5:33 AM, Daniel Moisset wrote:
>
>> Hi Sebastian, thanks for your reply
>>
>> I'm glad to hear that you see value in having type annotations. Just to
>> clarify, my question was aimed at surveying if there was interest in
>> accepting the work we're already doing if we contribute it and if it has
>> value for the numpy project. I'm aware there's effort involved; some
>> colleagues and I are already involved in doing that at
>> https://github.com/machinalis/mypy-data because it's valuable for
>> ourselves, so the volunteers are already here. You of course are invited to
>> comment on the existing code and try it :) (or join the effort, that goes
>> without saying)
>>
>> Running the checker on the test suite is probably the best way to
>> validate the annotations (the normal way would be checking the annotations
>> against the code, but that doesn't work with C extensions like numpy).
>> That's something we haven't been doing yet but it's an obvious next step
>> now that some simple examples are working.
>> WRT "I wonder if all or most of numpy can be easily put into it.", we've
>> covered ndarray (and matrix soon) which are the core types, things built
>> upon that shouldn't be too hard. We found some snags along the way [1] [2],
>> but no of it is a showstopper and I'm quite sure we'll fix those in time.
>> But of course, if someone wants to try it out it will be a better
>> validation than my optimism to see if this makes sense :)
>>
>> Thanks again and I'd be happy to hear more opinions from other numpy devs!
>>
>> Best,
>> D.
>>
>> [1] http://www.machinalis.com/blog/writing-type-stubs-for-numpy/
>> [2] https://github.com/machinalis/mypy-data/issues
>>
>>
>> On 29 Jul 2016 08:31, "Sebastian Berg" wrote:
>>
>>> On Wed, 2016-07-27 at 20:07 +0100, Daniel Moisset wrote:
>>> >
>>> > Hi,
>>> >
>>> > I work at Machinalis where we use a lot of numpy (and the pydata stack
>>> > in general). Recently we've also been getting involved with mypy,
>>> > which is a tool to type check (not at runtime, think of it as a
>>> > linter) annotated python code (the way of annotating python types has
>>> > been recently standardized in PEP 484).
>>> >
>>> > As part of that involvement we've started creating type annotations
>>> > for the Python libraries we use most, which include numpy. Mypy
>>> > provides a way to specify types with annotations in separate files in
>>> > case you don't have control over a library, so we have created an
>>> > initial proof of concept at [1], and we are actively improving it.
>>> > You can find some additional information about it and some problems
>>> > we've found on the way at this blogpost [2].
>>> >
>>> > What I wanted to ask is if the people involved on the numpy project
>>> > are aware of PEP484 and if you have some interest in starting using
>>> > them. The main benefit is that annotations serve as clear (and
>>> > automatically testable) documentation for users, and a secondary
>>> > benefit is that users discover bugs more quickly and that some IDEs
>>> > (like pycharm) are starting to use this information for smart editor
>>> > features (autocompletion, online checking, refactoring tools);
>>> > eventually tools like jupyter could take advantage of these
>>> > annotations in the future. And the cost of writing and including
>>> > these is relatively low.
>>> >
>>>
>>> There is currently no plan to do it as far as I know, but with these
>>> things it is often more of a problem that someone volunteers to
>>> maintain it than to convince everyone that it is a good idea.
>>> If there is enough interest we could talk about hosting it on the numpy
>>> github group as a separate project to make it a bit more visible/obvious
>>> that such a project exists.

Re: [Numpy-discussion] Is there any official position on PEP484/mypy?

2016-07-29 Thread Benjamin Root
One thing that I have always wished for from a project like mypy is the
ability to annotate what the expected shape should be. Too often, I get a
piece of code from a coworker and it comes with no docstring explaining the
expected dimensions of the input arrays and what the output array is going
to be. What would be really awesome is the ability to do something like
annotate that "a" is MxN, and "b" is NxP, and that "c" is Px3. Even if the
linter can't really check to make sure that the shapes would be respected,
it would still be nice to have a common way of expressing the expected
shapes in this annotation format.
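
As a purely hypothetical illustration (no such syntax exists in PEP 484 or
mypy; the string annotations below are documentation-only and unchecked by
any tool), the wish is for something like:

import numpy as np

def project(a: "ndarray[M, N]", b: "ndarray[N, P]",
            c: "ndarray[P, 3]") -> "ndarray[M, 3]":
    # the bracketed shapes are aspirational metadata, not enforced
    return a.dot(b).dot(c)

# project(np.ones((4, 5)), np.ones((5, 2)), np.ones((2, 3))).shape -> (4, 3)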

As for matplotlib, we would need to express much more complicated
annotations, because our API is so flexible. It would be useful to keep an
eye out for those needs as well.

Cheers!
Ben Root


On Fri, Jul 29, 2016 at 5:33 AM, Daniel Moisset wrote:

> Hi Sebastian, thanks for your reply
>
> I'm glad to hear that you see value in having type annotations. Just to
> clarify, my question was aimed at surveying if there was interest in
> accepting the work we're already doing if we contribute it and if it has
> value for the numpy project. I'm aware there's effort involved; some
> colleagues and I are already involved in doing that at
> https://github.com/machinalis/mypy-data because it's valuable for
> ourselves, so the volunteers are already here. You of course are invited to
> comment on the existing code and try it :) (or join the effort, that goes
> without saying)
>
> Running the checker on the test suite is probably the best way to validate
> the annotations (the normal way would be checking the annotations against
> the code, but that doesn't work with C extensions like numpy). That's
> something we haven't been doing yet but it's an obvious next step now that
> some simple examples are working.
> WRT "I wonder if all or most of numpy can be easily put into it.", we've
> covered ndarray (and matrix soon) which are the core types, things built
> upon that shouldn't be too hard. We found some snags along the way [1] [2],
> but no of it is a showstopper and I'm quite sure we'll fix those in time.
> But of course, if someone wants to try it out it will be a better
> validation than my optimism to see if this makes sense :)
>
> Thanks again and I'd be happy to hear more opinions from other numpy devs!
>
> Best,
> D.
>
> [1] http://www.machinalis.com/blog/writing-type-stubs-for-numpy/
> [2] https://github.com/machinalis/mypy-data/issues
>
>
> On 29 Jul 2016 08:31, "Sebastian Berg" wrote:
>
>> On Wed, 2016-07-27 at 20:07 +0100, Daniel Moisset wrote:
>> >
>> > Hi,
>> >
>> > I work at Machinalis where we use a lot of numpy (and the pydata stack
>> > in general). Recently we've also been getting involved with mypy,
>> > which is a tool to type check (not at runtime, think of it as a
>> > linter) annotated python code (the way of annotating python types has
>> > been recently standardized in PEP 484).
>> >
>> > As part of that involvement we've started creating type annotations
>> > for the Python libraries we use most, which include numpy. Mypy
>> > provides a way to specify types with annotations in separate files in
>> > case you don't have control over a library, so we have created an
>> > initial proof of concept at [1], and we are actively improving it.
>> > You can find some additional information about it and some problems
>> > we've found on the way at this blogpost [2].
>> >
>> > What I wanted to ask is if the people involved on the numpy project
>> > are aware of PEP484 and if you have some interest in starting using
>> > them. The main benefit is that annotations serve as clear (and
>> > automatically testable) documentation for users, and a secondary
>> > benefit is that users discover bugs more quickly and that some IDEs
>> > (like pycharm) are starting to use this information for smart editor
>> > features (autocompletion, online checking, refactoring tools);
>> > eventually tools like jupyter could take advantage of these
>> > annotations in the future. And the cost of writing and including
>> > these is relatively low.
>> >
>>
>> There is currently no plan to do it as far as I know, but with these
>> things it is often more of a problem that someone volunteers to
>> maintain it than to convince everyone that it is a good idea.
>> If there is enough interest we could talk about hosting it on the numpy
>> github group as a separate project to make it a bit more
>> visible/obvious that such a project exists.
>>
>> For inclusion in numpy, it seems to me that currently this would
>> probably be better off improved separately? In the long run, would it be
>> possible to do something like run all numpy tests and then check
>> whether the definitions cover (almost) everything, or test against the
>> documentation or so? Otherwise it might get tricky to keep things quite
>> up to date, at least until these type checks are very widely used. Also
>> I wonder if all or most of numpy can be easily put into it.

Re: [Numpy-discussion] Is there any official position on PEP484/mypy?

2016-07-29 Thread Daniel Moisset
Hi Sebastian, thanks for your reply

I'm glad to hear that you see value in having type annotations. Just to
clarify, my question was aimed at surveying if there was interest in
accepting the work we're already doing if we contribute it and if it has
value for the numpy project. I'm aware there's effort involved; some
colleagues and I are already involved in doing that at
https://github.com/machinalis/mypy-data because it's valuable for
ourselves, so the volunteers are already here. You of course are invited to
comment on the existing code and try it :) (or join the effort, that goes
without saying)

Running the checker on the test suite is probably the best way to validate
the annotations (the normal way would be checking the annotations against
the code, but that doesn't work with C extensions like numpy). That's
something we haven't been doing yet but it's an obvious next step now that
some simple examples are working.
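
Concretely, that validation step might look something like this (paths are
illustrative; MYPYPATH is mypy's search path for external stubs):

# point mypy at the stubs, then type-check one of numpy's own test modules
MYPYPATH=path/to/mypy-data/numpy-mypy mypy --check-untyped-defs numpy/core/tests/test_multiarray.py
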
WRT "I wonder if all or most of numpy can be easily put into it.", we've
covered ndarray (and matrix soon) which are the core types, things built
upon that shouldn't be too hard. We found some snags along the way [1] [2],
but no of it is a showstopper and I'm quite sure we'll fix those in time.
But of course, if someone wants to try it out it will be a better
validation than my optimism to see if this makes sense :)

Thanks again and I'd be happy to hear more opinions from other numpy devs!

Best,
   D.

[1] http://www.machinalis.com/blog/writing-type-stubs-for-numpy/
[2] https://github.com/machinalis/mypy-data/issues


On 29 Jul 2016 08:31, "Sebastian Berg" wrote:

> On Wed, 2016-07-27 at 20:07 +0100, Daniel Moisset wrote:
> >
> > Hi,
> >
> > I work at Machinalis where we use a lot of numpy (and the pydata stack
> > in general). Recently we've also been getting involved with mypy,
> > which is a tool to type check (not at runtime, think of it as a
> > linter) annotated python code (the way of annotating python types has
> > been recently standardized in PEP 484).
> >
> > As part of that involvement we've started creating type annotations
> > for the Python libraries we use most, which include numpy. Mypy
> > provides a way to specify types with annotations in separate files in
> > case you don't have control over a library, so we have created an
> > initial proof of concept at [1], and we are actively improving it.
> > You can find some additional information about it and some problems
> > we've found on the way at this blogpost [2].
> >
> > What I wanted to ask is if the people involved on the numpy project
> > are aware of PEP484 and if you have some interest in starting using
> > them. The main benefit is that annotations serve as clear (and
> > automatically testable) documentation for users, and a secondary
> > benefit is that users discover bugs more quickly and that some IDEs
> > (like pycharm) are starting to use this information for smart editor
> > features (autocompletion, online checking, refactoring tools);
> > eventually tools like jupyter could take advantage of these
> > annotations in the future. And the cost of writing and including
> > these is relatively low.
> >
>
> There is currently no plan to do it as far as I know, but with these
> things it is often more of a problem that someone volunteers to
> maintain it than to convince everyone that it is a good idea.
> If there is enough interest we could talk about hosting it on the numpy
> github group as a separate project to make it a bit more
> visible/obvious that such a project exists.
>
> For inclusion in numpy, it seems to me that currently this would
> probably be better off improved separately? In the long run, would it be
> possible to do something like run all numpy tests and then check
> whether the definitions cover (almost) everything, or test against the
> documentation or so? Otherwise it might get tricky to keep things quite
> up to date, at least until these type checks are very widely used. Also
> I wonder if all or most of numpy can be easily put into it.
>
> Anyway, it seems like a great project to have as much support for type
> annotations as possible. I have never used them, but with editors
> picking up on these things it sounds like it could be very useful in
> the future.
>
> - Sebastian
>
>
> > We're doing the work anyway, but contributing our typespecs back
> > could make it easier for users to benefit from this, and for us to
> > maintain it and keep it in sync with future releases.
> >
> > If you've never heard about PEP484 or mypy (it happens a lot) I'll be
> > happy to clarify anything about it that might help understand this
> > situation
> >
> > Thanks!
> >
> > D.
> >
> >
> > [1] https://github.com/machinalis/mypy-data
> > [2] http://www.machinalis.com/blog/writing-type-stubs-for-numpy/
> >
> > --
> > Daniel F. Moisset - UK Country Manager
> > www.machinalis.com
> > Skype: @dmoisset

Re: [Numpy-discussion] Is there any official position on PEP484/mypy?

2016-07-29 Thread Sebastian Berg
On Wed, 2016-07-27 at 20:07 +0100, Daniel Moisset wrote:
> 
> Hi, 
> 
> I work at Machinalis where we use a lot of numpy (and the pydata stack
> in general). Recently we've also been getting involved with mypy,
> which is a tool to type check (not at runtime, think of it as a
> linter) annotated python code (the way of annotating python types has
> been recently standardized in PEP 484).
> 
> As part of that involvement we've started creating type annotations
> for the Python libraries we use most, which include numpy. Mypy
> provides a way to specify types with annotations in separate files in
> case you don't have control over a library, so we have created an
> initial proof of concept at [1], and we are actively improving it.
> You can find some additional information about it and some problems
> we've found on the way at this blogpost [2].
> 
> What I wanted to ask is if the people involved on the numpy project
> are aware of PEP484 and if you have some interest in starting using
> them. The main benefit is that annotations serve as clear (and
> automatically testable) documentation for users, and a secondary
> benefit is that users discover bugs more quickly and that some IDEs
> (like pycharm) are starting to use this information for smart editor
> features (autocompletion, online checking, refactoring tools);
> eventually tools like jupyter could take advantage of these
> annotations in the future. And the cost of writing and including
> these is relatively low.
> 

There is currently no plan to do it as far as I know, but with these
things it is often more of a problem that someone volunteers to
maintain it than to convince everyone that it is a good idea.
If there is enough interest we could talk about hosting it on the numpy
github group as a separate project to make it a bit more
visible/obvious that such a project exists.

For inclusion in numpy, it seems to me that currently this would
probably be better off improved separately? In the long run, would it be
possible to do something like run all numpy tests and then check
whether the definitions cover (almost) everything, or test against the
documentation or so? Otherwise it might get tricky to keep things quite
up to date, at least until these type checks are very widely used. Also
I wonder if all or most of numpy can be easily put into it.

Anyway, it seems like a great project to have as much support for type
annotations as possible. I have never used them, but with editors
picking up on these things it sounds like it could be very useful in
the future.

- Sebastian


> We're doing the work anyway, but contributing our typespecs back
> could make it easier for users to benefit from this, and for us to
> maintain it and keep it in sync with future releases.
> 
> If you've never heard about PEP484 or mypy (it happens a lot) I'll be
> happy to clarify anything about it that might help understand this
> situation
> 
> Thanks!
> 
> D.
> 
> 
> [1] https://github.com/machinalis/mypy-data 
> [2] http://www.machinalis.com/blog/writing-type-stubs-for-numpy/
> 
> -- 
> Daniel F. Moisset - UK Country Manager
> www.machinalis.com
> Skype: @dmoisset

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion