Re: [Numpy-discussion] axis parameter for count_nonzero

2016-06-16 Thread G Young
Just wanted to ping the mailing list again about this PR.  To recap, some
simplification has been done thanks to suggestions from *@rgommers*,
though the question still remains whether leaving the *axis* parameter
in the Python API for now (given how complicated it is to add in the C API)
is acceptable.

I will say that, in response to the concern about adding parameters such as
"out" and "keepdims" (should they be requested), we could avail ourselves
of functions like median
<https://github.com/numpy/numpy/blob/master/numpy/lib/function_base.py#L3528>
for help, as *@juliantaylor* pointed out.  The *scipy* library has dealt with
this problem as well in its *sparse* modules, so that is also a useful
resource.
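
Roughly, the Python-level approach under discussion looks like the following
(an illustrative sketch only, not the PR's actual code): fall through to the
existing C count_nonzero when no axis is given, and otherwise count via a
boolean reduction, which picks up ufunc-style keywords such as keepdims
essentially for free.

    import numpy as np

    def count_nonzero_axis(a, axis=None, keepdims=False):
        a = np.asanyarray(a)
        if axis is None and not keepdims:
            # fast path: the existing C implementation
            return np.count_nonzero(a)
        # Python-level fallback: count via a boolean reduction
        return (a != 0).sum(axis=axis, dtype=np.intp, keepdims=keepdims)

    x = np.array([[0, 1, 2], [3, 0, 0]])
    count_nonzero_axis(x, axis=1)   # -> array([2, 1])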

Feedback on this issue would be much appreciated!  Thanks!

On Sun, May 22, 2016 at 1:36 PM, G Young <gfyoun...@gmail.com> wrote:

> After some discussion with *@rgommers*, I have simplified the code as
> follows:
>
> 1) the path to the original count_nonzero in the C API is essentially
> unchanged, save some small overhead with Python calling and the
> if-statement to check the *axis* parameter
>
> 2) All of the complicated validation of the *axis* parameter and
> acrobatics for getting the count is handled *only* after we cannot
> fast-track via a numerical, boolean, or string *dtype*.
>
> The question still remains whether or not leaving the *axis* parameter in
> the Python API for now (given how complicated it is to add in the C API) is
> acceptable.  I will say that in response to the concern of adding
> parameters such as "out" and "keepdims" (should they be requested), we
> could avail ourselves of functions like median
> <https://github.com/numpy/numpy/blob/master/numpy/lib/function_base.py#L3528>
> for help, as *@juliantaylor* pointed out.  The *scipy* library has dealt with
> this problem as well in its *sparse* modules, so that is also a useful
> resource.
>
> On Sun, May 22, 2016 at 1:35 PM, G Young <gfyoun...@gmail.com> wrote:
>
>> 1) Correction: The PR was not written with small arrays in mind.  I ran
>> some new timing tests, and it does perform worse on smaller arrays but
>> appears to scale better than the current implementation.
>>
>> 2) Let me put it out there that I am not opposed to moving it to C, but
>> right now, there seems to be a large technical brick wall up against such
>> an implementation.  So suggestions about how to move the code into C would
>> be welcome too!
>>
>> On Sun, May 22, 2016 at 10:32 AM, Ralf Gommers <ralf.gomm...@gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Sun, May 22, 2016 at 3:05 AM, G Young <gfyoun...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> I have had a PR <https://github.com/numpy/numpy/pull/7177> open (first
>>>> draft can be found here <https://github.com/numpy/numpy/pull/7138>) for
>>>> quite some time now that adds an 'axis' parameter to *count_nonzero*.
>>>> While the functionality is fully in-place, very robust, and actually
>>>> higher-performing than the original *count_nonzero* function, the
>>>> obstacle at this point is the implementation, as most of the functionality
>>>> is now surfaced at the Python level instead of at the C level.
>>>>
>>>> I have made several attempts to move the code into C to no avail and
>>>> have not received much feedback from maintainers unfortunately to move this
>>>> forward, so I'm opening this up to the mailing list to see what you guys
>>>> think of the changes and whether or not it should be merged in as is or be
>>>> tabled until a more C-friendly solution can be found.
>>>>
>>>
>>> The discussion is spread over several PRs/issues, so maybe a summary is
>>> useful:
>>>
>>> - adding an axis parameter was a feature request that was generally
>>> approved of [1]
>>> - writing the axis selection/validation code in C, like the rest of
>>> count_nonzero, was preferred by several core devs
>>> - Writing that C code turns out to be tricky. Jaime had a PR for doing
>>> this for bincount [2], but closed it with final conclusion "the proper
>>> approach seems to me to build some intermediate layer over nditer that
>>> abstracts the complexity away".
>>> - Julian pointed out that this adds a ufunc-like param, so why not add
>>> other params like out/keepdims [3]
>>> - Stephan points out that the current PR has quite a few branches, would
>>> benefit from reusing a helper function (like _validate_axis, but

Re: [Numpy-discussion] broadcasting for randint

2016-06-16 Thread G Young
Hello all,

Thank you to those who commented on this PR and for pushing it to a *much
better place* in terms of templating with Tempita.  With that issue out of
the way, it seems the momentum has stalled a bit.  However, it would be
great to receive any additional feedback, *especially from maintainers*, so
as to help get this merged!  Thanks!

On Tue, Jun 7, 2016 at 1:23 PM, G Young <gfyoun...@gmail.com> wrote:

> There seems to be a push in my PR now for using Tempita as a way to solve
> this issue with the ad-hoc templating.  However, before I go about
> attempting this, it would be great to receive feedback from other
> developers on this, especially from some of the numpy maintainers.  Thanks!
>
> On Tue, Jun 7, 2016 at 3:04 AM, G Young <gfyoun...@gmail.com> wrote:
>
>> Just wanted to ping the mailing list again in case this email (see below)
>> got lost in your inboxes.  Would be great to get some feedback on this!
>> Thanks!
>>
>> On Sun, May 22, 2016 at 2:15 AM, G Young <gfyoun...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I have had a PR <https://github.com/numpy/numpy/pull/6938> open for
>>> quite some time now that allows arguments to broadcast in *randint*.
>>> While the functionality is fully in-place and very robust, the obstacle at
>>> this point is the implementation.
>>>
>>> When the *dtype* parameter was added to *randint* (see here
>>> <https://github.com/numpy/numpy/pull/6910>), a big issue with the
>>> implementation was that it created so much duplicate code that it would be
>>> a huge maintenance nightmare.  However, this was dismissed in the original
>>> PR message because it was believed that template-ing would be trivial,
>>> which seemed reasonable at the time.
>>>
>>> When I added broadcasting, I introduced a template system to the code
>>> that dramatically cut down on the duplication.  However, the obstacle has
>>> been whether or not this template system is too *ad hoc* to be merged
>>> into the library.  Implementing a template in Cython was not considered
>>> sufficient and is in fact very tricky to do, and unfortunately, I have not
>>> received any constructive suggestions from maintainers about how to
>>> proceed, so I'm opening this up to the mailing list to see whether there
>>> are better alternatives to what I did, whether this should be merged as is,
>>> or whether this should be tabled until a better template can be found.
>>>
>>> Thanks!
>>>
>>
>>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] broadcasting for randint

2016-06-07 Thread G Young
There seems to be a push in my PR now for using Tempita as a way to solve
this issue with the ad-hoc templating.  However, before I go about
attempting this, it would be great to receive feedback from other
developers on this, especially from some of the numpy maintainers.  Thanks!

On Tue, Jun 7, 2016 at 3:04 AM, G Young <gfyoun...@gmail.com> wrote:

> Just wanted to ping the mailing list again in case this email (see below)
> got lost in your inboxes.  Would be great to get some feedback on this!
> Thanks!
>
> On Sun, May 22, 2016 at 2:15 AM, G Young <gfyoun...@gmail.com> wrote:
>
>> Hi,
>>
>> I have had a PR <https://github.com/numpy/numpy/pull/6938> open for
>> quite some time now that allows arguments to broadcast in *randint*.
>> While the functionality is fully in-place and very robust, the obstacle at
>> this point is the implementation.
>>
>> When the *dtype* parameter was added to *randint* (see here
>> <https://github.com/numpy/numpy/pull/6910>), a big issue with the
>> implementation was that it created so much duplicate code that it would be
>> a huge maintenance nightmare.  However, this was dismissed in the original
>> PR message because it was believed that template-ing would be trivial,
>> which seemed reasonable at the time.
>>
>> When I added broadcasting, I introduced a template system to the code
>> that dramatically cut down on the duplication.  However, the obstacle has
>> been whether or not this template system is too *ad hoc* to be merged
>> into the library.  Implementing a template in Cython was not considered
>> sufficient and is in fact very tricky to do, and unfortunately, I have not
>> received any constructive suggestions from maintainers about how to
>> proceed, so I'm opening this up to the mailing list to see whether there
>> are better alternatives to what I did, whether this should be merged as is,
>> or whether this should be tabled until a better template can be found.
>>
>> Thanks!
>>
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] broadcasting for randint

2016-06-06 Thread G Young
Just wanted to ping the mailing list again in case this email (see below)
got lost in your inboxes.  Would be great to get some feedback on this!
Thanks!

On Sun, May 22, 2016 at 2:15 AM, G Young <gfyoun...@gmail.com> wrote:

> Hi,
>
> I have had a PR <https://github.com/numpy/numpy/pull/6938> open for quite
> some time now that allows arguments to broadcast in *randint*.  While the
> functionality is fully in-place and very robust, the obstacle at this point
> is the implementation.
>
> When the *dtype* parameter was added to *randint* (see here
> <https://github.com/numpy/numpy/pull/6910>), a big issue with the
> implementation was that it created so much duplicate code that it would be
> a huge maintenance nightmare.  However, this was dismissed in the original
> PR message because it was believed that template-ing would be trivial,
> which seemed reasonable at the time.
>
> When I added broadcasting, I introduced a template system to the code that
> dramatically cut down on the duplication.  However, the obstacle has been
> whether or not this template system is too *ad hoc* to be merged into the
> library.  Implementing a template in Cython was not considered sufficient
> and is in fact very tricky to do, and unfortunately, I have not received
> any constructive suggestions from maintainers about how to proceed, so I'm
> opening this up to the mailing list to see whether there are better
> alternatives to what I did, whether this should be merged as is, or whether
> this should be tabled until a better template can be found.
>
> Thanks!
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] axis parameter for count_nonzero

2016-05-22 Thread G Young
After some discussion with *@rgommers*, I have simplified the code as
follows:

1) The path to the original count_nonzero in the C API is essentially
unchanged, save for some small overhead from the Python call and the
if-statement that checks the *axis* parameter.

2) All of the complicated validation of the *axis* parameter and the
acrobatics for getting the count are handled *only* after we cannot
fast-track via a numerical, boolean, or string *dtype*.

The question still remains whether leaving the *axis* parameter in the
Python API for now (given how complicated it is to add in the C API) is
acceptable.  I will say that, in response to the concern about adding
parameters such as "out" and "keepdims" (should they be requested), we
could avail ourselves of functions like median
<https://github.com/numpy/numpy/blob/master/numpy/lib/function_base.py#L3528>
for help, as *@juliantaylor* pointed out.  The *scipy* library has dealt with
this problem as well in its *sparse* modules, so that is also a useful
resource.

On Sun, May 22, 2016 at 1:35 PM, G Young <gfyoun...@gmail.com> wrote:

> 1) Correction: The PR was not written with small arrays in mind.  I ran
> some new timing tests, and it does perform worse on smaller arrays but
> appears to scale better than the current implementation.
>
> 2) Let me put it out there that I am not opposed to moving it to C, but
> right now, there seems to be a large technical brick wall up against such
> an implementation.  So suggestions about how to move the code into C would
> be welcome too!
>
> On Sun, May 22, 2016 at 10:32 AM, Ralf Gommers <ralf.gomm...@gmail.com>
> wrote:
>
>>
>>
>> On Sun, May 22, 2016 at 3:05 AM, G Young <gfyoun...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I have had a PR <https://github.com/numpy/numpy/pull/7177> open (first
>>> draft can be found here <https://github.com/numpy/numpy/pull/7138>) for
>>> quite some time now that adds an 'axis' parameter to *count_nonzero*.
>>> While the functionality is fully in-place, very robust, and actually
>>> higher-performing than the original *count_nonzero* function, the
>>> obstacle at this point is the implementation, as most of the functionality
>>> is now surfaced at the Python level instead of at the C level.
>>>
>>> I have made several attempts to move the code into C to no avail and
>>> have not received much feedback from maintainers unfortunately to move this
>>> forward, so I'm opening this up to the mailing list to see what you guys
>>> think of the changes and whether or not it should be merged in as is or be
>>> tabled until a more C-friendly solution can be found.
>>>
>>
>> The discussion is spread over several PRs/issues, so maybe a summary is
>> useful:
>>
>> - adding an axis parameter was a feature request that was generally
>> approved of [1]
>> - writing the axis selection/validation code in C, like the rest of
>> count_nonzero, was preferred by several core devs
>> - Writing that C code turns out to be tricky. Jaime had a PR for doing
>> this for bincount [2], but closed it with final conclusion "the proper
>> approach seems to me to build some intermediate layer over nditer that
>> abstracts the complexity away".
>> - Julian pointed out that this adds a ufunc-like param, so why not add
>> other params like out/keepdims [3]
>> - Stephan points out that the current PR has quite a few branches, would
>> benefit from reusing a helper function (like _validate_axis, but that may
>> not do exactly the right thing), and that he doesn't want to merge it as is
>> without further input from other devs [4].
>>
>> Points previously not raised that I can think of:
>> - count_nonzero is also in the C API [5], the axis parameter is now only
>> added to the Python API.
>> - Part of why the code in this PR is complex is to keep performance for
>> small arrays OK, but there's no benchmarks added or result given for the
>> existing benchmark [6]. A simple check with:
>>   x = np.arange(100)
>>   %timeit np.count_nonzero(x)
>> shows that that gets about 30x slower (330 ns vs 10.5 us on my machine).
>>
>> It looks to me like performance is a concern, and if that can be resolved
>> there's the broader discussion of whether it's a good idea to merge this PR
>> at all. That's a trade-off of adding a useful feature vs. technical debt /
>> maintenance burden plus divergence Python/C API. Also, what do we do when
>> we merge this and then next week someone else sends a PR adding a keepdims
>> or out keyword? For these kinds of additions it would feel bett

Re: [Numpy-discussion] axis parameter for count_nonzero

2016-05-22 Thread G Young
1) Correction: The PR was not written with small arrays in mind.  I ran
some new timing tests, and it does perform worse on smaller arrays but
appears to scale better than the current implementation.

2) Let me put it out there that I am not opposed to moving it to C, but
right now, there seems to be a large technical brick wall up against such
an implementation.  So suggestions about how to move the code into C would
be welcome too!

On Sun, May 22, 2016 at 10:32 AM, Ralf Gommers <ralf.gomm...@gmail.com>
wrote:

>
>
> On Sun, May 22, 2016 at 3:05 AM, G Young <gfyoun...@gmail.com> wrote:
>
>> Hi,
>>
>> I have had a PR <https://github.com/numpy/numpy/pull/7177> open (first
>> draft can be found here <https://github.com/numpy/numpy/pull/7138>) for
>> quite some time now that adds an 'axis' parameter to *count_nonzero*.
>> While the functionality is fully in-place, very robust, and actually
>> higher-performing than the original *count_nonzero* function, the
>> obstacle at this point is the implementation, as most of the functionality
>> is now surfaced at the Python level instead of at the C level.
>>
>> I have made several attempts to move the code into C to no avail and have
>> not received much feedback from maintainers unfortunately to move this
>> forward, so I'm opening this up to the mailing list to see what you guys
>> think of the changes and whether or not it should be merged in as is or be
>> tabled until a more C-friendly solution can be found.
>>
>
> The discussion is spread over several PRs/issues, so maybe a summary is
> useful:
>
> - adding an axis parameter was a feature request that was generally
> approved of [1]
> - writing the axis selection/validation code in C, like the rest of
> count_nonzero, was preferred by several core devs
> - Writing that C code turns out to be tricky. Jaime had a PR for doing
> this for bincount [2], but closed it with final conclusion "the proper
> approach seems to me to build some intermediate layer over nditer that
> abstracts the complexity away".
> - Julian pointed out that this adds a ufunc-like param, so why not add
> other params like out/keepdims [3]
> - Stephan points out that the current PR has quite a few branches, would
> benefit from reusing a helper function (like _validate_axis, but that may
> not do exactly the right thing), and that he doesn't want to merge it as is
> without further input from other devs [4].
>
> Points previously not raised that I can think of:
> - count_nonzero is also in the C API [5], the axis parameter is now only
> added to the Python API.
> - Part of why the code in this PR is complex is to keep performance for
> small arrays OK, but there's no benchmarks added or result given for the
> existing benchmark [6]. A simple check with:
>   x = np.arange(100)
>   %timeit np.count_nonzero(x)
> shows that that gets about 30x slower (330 ns vs 10.5 us on my machine).
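
For reference, the same check can be run outside IPython with the stdlib
timeit module (a quick sketch; absolute numbers will of course vary from
machine to machine):

    import timeit
    import numpy as np

    x = np.arange(100)
    n = 100000
    t = timeit.timeit(lambda: np.count_nonzero(x), number=n)
    print("count_nonzero on 100 elements: %.2f us per call" % (t / n * 1e6))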
>
> It looks to me like performance is a concern, and if that can be resolved
> there's the broader discussion of whether it's a good idea to merge this PR
> at all. That's a trade-off of adding a useful feature vs. technical debt /
> maintenance burden plus divergence Python/C API. Also, what do we do when
> we merge this and then next week someone else sends a PR adding a keepdims
> or out keyword? For these kinds of additions it would feel better if we
> were sure that the new version is the final/desired one for the foreseeable
> future.
>
> Ralf
>
>
> [1] https://github.com/numpy/numpy/issues/391
> [2] https://github.com/numpy/numpy/pull/4330#issuecomment-77791250
> [3] https://github.com/numpy/numpy/pull/7138#issuecomment-177202894
> [4] https://github.com/numpy/numpy/pull/7177
> [5]
> http://docs.scipy.org/doc/numpy/reference/c-api.array.html#c.PyArray_CountNonzero
> [6]
> https://github.com/numpy/numpy/blob/master/benchmarks/benchmarks/bench_ufunc.py#L70
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] broadcasting for randint

2016-05-21 Thread G Young
Hi,

I have had a PR <https://github.com/numpy/numpy/pull/6938> open for quite
some time now that allows arguments to broadcast in *randint*.  While the
functionality is fully in place and very robust, the obstacle at this point
is the implementation.
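
To make the intended semantics concrete: one draw per broadcast (low, high)
pair.  Here is a pure-Python sketch of the behavior (not the implementation,
which lives at the Cython level):

    import numpy as np

    low = np.array([0, 10, 20])
    high = np.array([10, 20, 30])
    low_b, high_b = np.broadcast_arrays(low, high)
    draws = np.array([np.random.randint(lo, hi)
                      for lo, hi in zip(low_b.ravel(), high_b.ravel())])
    draws = draws.reshape(low_b.shape)   # e.g. array([ 3, 14, 27])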

When the *dtype* parameter was added to *randint* (see here
<https://github.com/numpy/numpy/pull/6910>), a big issue with the
implementation was that it created so much duplicate code that it would be
a huge maintenance nightmare.  However, this was dismissed in the original
PR message because it was believed that templating would be trivial,
which seemed reasonable at the time.

When I added broadcasting, I introduced a template system to the code that
dramatically cut down on the duplication.  However, the obstacle has been
whether or not this template system is too *ad hoc* to be merged into the
library.  Implementing a template in Cython was not considered sufficient
and is in fact very tricky to do, and unfortunately, I have not received
any constructive suggestions from maintainers about how to proceed, so I'm
opening this up to the mailing list to see whether there are better
alternatives to what I did, whether this should be merged as is, or whether
this should be tabled until a better template can be found.
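
For those unfamiliar with Tempita, the idea is just to write the
dtype-specific code once and expand it per type.  A toy illustration,
assuming the standalone *tempita* package (Cython also bundles a copy as
Cython.Tempita); this is not the actual template from the PR:

    import tempita   # standalone package; Cython bundles a copy as Cython.Tempita

    template = """\
    {{for npy_name, c_type in dtypes}}
    def _rand_{{npy_name}}(low, high, size, state):
        # the {{c_type}}-specific bounded-generation loop would go here
        pass
    {{endfor}}
    """

    code = tempita.sub(template, dtypes=[("int8", "npy_int8"),
                                         ("uint8", "npy_uint8"),
                                         ("int64", "npy_int64")])
    print(code)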

Thanks!
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] axis parameter for count_nonzero

2016-05-21 Thread G Young
Hi,

I have had a PR <https://github.com/numpy/numpy/pull/7177> open (first
draft can be found here <https://github.com/numpy/numpy/pull/7138>) for
quite some time now that adds an 'axis' parameter to *count_nonzero*.
While the functionality is fully in place, very robust, and actually
higher-performing than the original *count_nonzero* function, the obstacle
at this point is the implementation, as most of the functionality is now
surfaced at the Python level instead of at the C level.

I have made several attempts to move the code into C to no avail and have
not received much feedback from maintainers unfortunately to move this
forward, so I'm opening this up to the mailing list to see what you guys
think of the changes and whether or not it should be merged in as is or be
tabled until a more C-friendly solution can be found.

Thanks!
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] linux wheels coming soon

2016-04-14 Thread G Young
Actually, pip inside a conda environment will install the wheels that you
put up.  The good news is: they all (by which I mean *numpy* and *scipy*,
on both 2.7 and 3.5) pass!

On Thu, Apr 14, 2016 at 7:26 PM, Matthew Brett 
wrote:

> Hi,
>
> On Thu, Apr 14, 2016 at 11:11 AM, Benjamin Root 
> wrote:
> > Are we going to have to have documentation somewhere making it clear that
> > the numpy wheel shouldn't be used in a conda environment? Not that I
> would
> > expect this issue to come up all that often, but I could imagine a
> scenario
> > where a non-scientist is simply using a base conda distribution because
> that
> > is what IT put on their system. Then they do "pip install ipython" that
> > indirectly brings in numpy (through the matplotlib dependency), and end
> up
> > with an incompatible numpy because they would have been linked against
> > different pythons?
> >
> > Or is this not an issue?
>
> I'm afraid I don't know conda at all, but I'm guessing that pip will
> not install numpy when it is installed via conda.
>
> So the potential difference is that, pre-wheel, if numpy was not
> installed in your conda environment, then pip would build numpy from
> source, whereas now you'll get a binary install.
>
> I _think_ that Python's binary API specification
> (pip.pep425tags.get_abi_tag()) should prevent pip from installing an
> incompatible wheel.  Are there any conda experts out there who can
> give more detail, or more convincing assurance?
>
> Cheers,
>
> Matthew
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
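
Regarding the ABI check Matthew mentions above, a quick way to see what your
own environment reports (a sketch against the pip versions of that era;
pep425tags is an internal pip module, so treat this as illustrative rather
than a stable API):

    # run inside the target (conda or virtualenv) environment
    import pip.pep425tags as tags

    print(tags.get_abi_tag())        # e.g. 'cp35m'
    print(tags.get_supported()[:5])  # first few supported (python, abi, platform) tags
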
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] linux wheels coming soon

2016-04-04 Thread G Young
Matthew, you are correct.  A lot has happened with random integer
generation recently (including the deprecation of random_integers), but
from what I remember those warnings should be squashed in the upcoming
version of SciPy.

On Mon, Apr 4, 2016 at 6:47 PM, Matthew Brett 
wrote:

> Hi,
>
> On Mon, Apr 4, 2016 at 9:02 AM, Peter Cock 
> wrote:
> > On Sun, Apr 3, 2016 at 2:11 AM, Matthew Brett 
> wrote:
> >> On Fri, Mar 25, 2016 at 6:39 AM, Peter Cock 
> wrote:
> >>> On Fri, Mar 25, 2016 at 3:02 AM, Robert T. McGibbon <
> rmcgi...@gmail.com> wrote:
>  I suspect that many of the maintainers of major scipy-ecosystem
> projects are
>  aware of these (or other similar) travis wheel caches, but would
> guess that
>  the pool of travis-ci python users who weren't aware of these wheel
> caches
>  is much much larger. So there will still be a lot of travis-ci clock
> cycles
>  saved by manylinux wheels.
> 
>  -Robert
> >>>
> >>> Yes exactly. Availability of NumPy Linux wheels on PyPI is definitely
> something
> >>> I would suggest adding to the release notes. Hopefully this will help
> trigger
> >>> a general availability of wheels in the numpy-ecosystem :)
> >>>
> >>> In the case of Travis CI, their VM images for Python already have a
> version
> >>> of NumPy installed, but having the latest version of NumPy and SciPy
> etc
> >>> available as Linux wheels would be very nice.
> >>
> >> We're very nearly there now.
> >>
> >> The latest versions of numpy, scipy, scikit-image, pandas, numexpr,
> >> statsmodels wheels for testing at
> >>
> http://ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com/
> >>
> >> Please do test with:
> >> ...
> >>
> >> We would love to get any feedback as to whether these work on your
> machines.
> >
> > Hi Matthew,
> >
> > Testing on a 64bit CentOS 6 machine with Python 3.5 compiled
> > from source under my home directory:
> >
> >
> > $ python3.5 -m pip install --upgrade pip
> > Requirement already up-to-date: pip in ./lib/python3.5/site-packages
> >
> > $ python3.5 -m pip install
> > --trusted-host=
> ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com
> > --find-links=
> http://ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com
> > numpy scipy
> > Requirement already satisfied (use --upgrade to upgrade): numpy in
> > ./lib/python3.5/site-packages
> > Requirement already satisfied (use --upgrade to upgrade): scipy in
> > ./lib/python3.5/site-packages
> >
> > $ python3.5 -m pip install
> > --trusted-host=
> ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com
> > --find-links=
> http://ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com
> > numpy scipy --upgrade
> > Collecting numpy
> >   Downloading
> http://ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com/numpy-1.11.0-cp35-cp35m-manylinux1_x86_64.whl
> > (15.5MB)
> > 100% || 15.5MB 42.1MB/s
> > Collecting scipy
> >   Downloading
> http://ccdd0ebb5a931e58c7c5-aae005c4999d7244ac63632f8b80e089.r77.cf2.rackcdn.com/scipy-0.17.0-cp35-cp35m-manylinux1_x86_64.whl
> > (40.8MB)
> > 100% || 40.8MB 53.6MB/s
> > Installing collected packages: numpy, scipy
> >   Found existing installation: numpy 1.10.4
> > Uninstalling numpy-1.10.4:
> >   Successfully uninstalled numpy-1.10.4
> >   Found existing installation: scipy 0.16.0
> > Uninstalling scipy-0.16.0:
> >   Successfully uninstalled scipy-0.16.0
> > Successfully installed numpy-1.11.0 scipy-0.17.0
> >
> >
> > $ python3.5 -c 'import numpy; numpy.test("full")'
> > Running unit tests for numpy
> > NumPy version 1.11.0
> > NumPy relaxed strides checking option: False
> > NumPy is installed in /home/xxx/lib/python3.5/site-packages/numpy
> > Python version 3.5.0 (default, Sep 28 2015, 11:25:31) [GCC 4.4.7
> > 20120313 (Red Hat 4.4.7-16)]
> > nose version 1.3.7
> >
> 

Re: [Numpy-discussion] fromnumeric.py internal calls

2016-02-29 Thread G Young
Well, I know pandas uses **kwargs (that was the motivation for this PR), but
I can certainly have a look at those other ones.  I am not too familiar
with all of the third-party libraries that implement their own array-like
classes, so if there are any others that come to mind, let me know!  Also,
could you add that as a comment on the PR as well?  Thanks!

On Mon, Feb 29, 2016 at 7:44 AM, Stephan Hoyer <sho...@gmail.com> wrote:

> I think this is an improvement, but I do wonder if there are libraries out
> there that use *args instead of **kwargs to handle these extra arguments.
> Perhaps it's worth testing this change against third party array libraries
> that implement their own array like classes? Off the top of my head, maybe
> scipy, pandas, dask, astropy, pint, xarray?
> On Wed, Feb 24, 2016 at 3:40 AM G Young <gfyoun...@gmail.com> wrote:
>
>> Hello all,
>>
>> I have PR #7325 <https://github.com/numpy/numpy/pull/7325/files> up that
>> changes the internal calls for functions in *fromnumeric.py* from
>> positional arguments to keyword arguments.  I made this change for two
>> reasons:
>>
>> 1) It is consistent with the external function signature
>> 2)
>>
>> The inconsistency caused a breakage in *pandas* in its own
>> implementation of *searchsorted* in which the *sorter* argument is not
>> really used but is accepted so as to make it easier for *numpy* users
>> who may be used to the *searchsorted* signature in *numpy*.
>>
>> The standard in *pandas* is to "swallow" those unused arguments into a
>> *kwargs* argument so that we don't have to document an argument that we
>> don't really use.  However, that turned out not to be possible when
>> *searchsorted* is called from the *numpy* library.
>>
>> Does anyone have any objections to the changes I made?
>>
>> Thanks!
>>
>> Greg
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] fromnumeric.py internal calls

2016-02-24 Thread G Young
Hello all,

I have PR #7325  up that
changes the internal calls for functions in *fromnumeric.py* from
positional arguments to keyword arguments.  I made this change for two
reasons:

1) It is consistent with the external function signature
2) The inconsistency caused a breakage in *pandas*, whose own implementation
of *searchsorted* does not really use the *sorter* argument but accepts it
so as to make it easier for *numpy* users who may be used to the
*searchsorted* signature in *numpy*.

The standard in *pandas* is to "swallow" those unused arguments into a
*kwargs* argument so that we don't have to document an argument that we
don't really use.  However, that turned out not to be possible when
*searchsorted* is called from the *numpy* library.
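
As a minimal sketch of why the keyword-based delegation matters (a
hypothetical class for illustration, not pandas' actual code): the
third-party object accepts the numpy signature but swallows arguments it
does not use into **kwargs, which only works if numpy forwards them as
keywords:

    import numpy as np

    class MySeries:
        def __init__(self, values):
            self.values = np.asarray(values)

        def searchsorted(self, value, side='left', **kwargs):
            # 'sorter' is accepted via **kwargs but simply ignored
            return self.values.searchsorted(value, side=side)

    s = MySeries([1, 3, 5, 7])

    # keyword delegation (what the PR switches to): the unused 'sorter'
    # lands in **kwargs and is silently ignored
    s.searchsorted(4, side='left', sorter=None)   # -> 2

    # positional delegation (the old internal style) has no third positional
    # slot to land in, so it raises TypeError:
    # s.searchsorted(4, 'left', None)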

Does anyone have any objections to the changes I made?

Thanks!

Greg
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] making "low" optional in numpy.randint

2016-02-17 Thread G Young
Your statement is a little self-contradictory, but in any case, you
shouldn't worry about random_integers getting removed from the code-base.
However, it has been deprecated in favor of randint.

On Wed, Feb 17, 2016 at 11:48 PM, Juan Nunez-Iglesias 
wrote:

> Also fwiw, I think the 0-based, half-open interval is one of the best
> features of Python indexing and yes, I do use random integers to index into
> my arrays and would not appreciate having to litter my code with "-1"
> everywhere.
>
> On Thu, Feb 18, 2016 at 10:29 AM, Alan Isaac  wrote:
>
>> On 2/17/2016 3:42 PM, Robert Kern wrote:
>>
>>> random.randint() was the one big exception, and it was considered a
>>> mistake for that very reason, soft-deprecated in favor of
>>> random.randrange().
>>>
>>
>>
>> randrange also has its detractors:
>> https://code.activestate.com/lists/python-dev/138358/
>> and following.
>>
>> I think if we start citing persistent conventions, the
>> persistent convention across *many* languages that the bounds
>> provided for a random integer range are inclusive also counts for
>> something, especially when the names are essentially shared.
>>
>> But again, I am just trying to be clear about what is at issue,
>> not push for a change.  I think citing non-existent standards
>> is not helpful.  I think the discrepancy between the Python
>> standard library and numpy for a function going by a common
>> name is harmful.  (But then, I teach.)
>>
>> fwiw,
>>
>> Alan
>>
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] making "low" optional in numpy.randint

2016-02-17 Thread G Young
"Explicit is better than implicit" - can't argue with that.  It doesn't
seem like the PR has gained much traction, so I'll close it.

On Wed, Feb 17, 2016 at 9:27 PM, Sebastian Berg <sebast...@sipsolutions.net>
wrote:

> On Mi, 2016-02-17 at 22:10 +0100, Sebastian Berg wrote:
> > On Mi, 2016-02-17 at 20:48 +, Robert Kern wrote:
> > > On Wed, Feb 17, 2016 at 8:43 PM, G Young <gfyoun...@gmail.com>
> > > wrote:
> > >
> > > > Josef: I don't think we are making people think more.  They're
> > > > all
> > > keyword arguments, so if you don't want to think about them, then
> > > you
> > > leave them as the defaults, and everyone is happy.
> > >
> > > I believe that Josef has the code's reader in mind, not the code's
> > writer. As a reader of other people's code (and I count
> > 6-months-ago-me as one such "other people"), I am sure to eventually encounter
> > > all of the different variants, so I will need to know all of them.
> > >
> >
> > Completely agree. Greg, if you need more than a few minutes to explain
> > it in this case, there seems little point. It seems to me even the
> > worst cases of your examples would be covered by writing code like:
> >
> > np.random.randint(np.iinfo(np.uint8).min, 10, dtype=np.uint8)
> >
> > And *everyone* will immediately know what is meant with just minor
> > extra effort for writing it. We should keep the analogy to "range" as
> > much as possible. Anything going far beyond that can be confusing. On
> > first sight I am not convinced that there is a serious convenience gain
> > by doing magic here, but this is a simple case:
> >
> > "Explicit is better then implicit"
> >
> > since writing the explicit code is easy. It might also create weird
> > bugs if the completely unexpected (most users would probably not even
> > realize it existed) happens and you get huge numbers because you
> > happened to have a `low=0` in there. Especially your point 2) seems
> > confusing. As for 3) if I see `np.random.randint(high=3)` I think I
> > would assume [0, 3)
> >
>
> OK, that was silly, that is what happens of course. So it is explicit
> in the sense that you have pass in at least one `None` explicitly.
>
> But I am still not sure that the added convenience is big and easy to
> understand [1], if it was always lowest for low and highest for high, I
> could get it, but it seems more complex (though None does also look
> a bit like "default", and "default" is 0 for low).
>
> - Sebastian
>
> [1] As in the trade-off between added complexity vs. added convenience.
>
>
> > Additionally, I am not sure the maximum int range is such a common
> > need
> > anyway?
> >
> > - Sebastian
> >
> >
> > > --
> > > Robert Kern
> > > ___
> > > NumPy-Discussion mailing list
> > > NumPy-Discussion@scipy.org
> > ___
> > NumPy-Discussion mailing list
> > NumPy-Discussion@scipy.org
> > https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] making "low" optional in numpy.randint

2016-02-17 Thread G Young
I sense that this issue is now becoming more a case of "randint has become
too complicated."  I suppose we could always "add" more functions that
present simpler interfaces, though if you really do want simple, there's
always Python's random library you can use.

On Wed, Feb 17, 2016 at 8:48 PM, Robert Kern <robert.k...@gmail.com> wrote:

> On Wed, Feb 17, 2016 at 8:43 PM, G Young <gfyoun...@gmail.com> wrote:
>
> > Josef: I don't think we are making people think more.  They're all
> keyword arguments, so if you don't want to think about them, then you leave
> them as the defaults, and everyone is happy.
>
> I believe that Josef has the code's reader in mind, not the code's writer.
> As a reader of other people's code (and I count 6-months-ago-me as one such
> "other people"), I am sure to eventually encounter all of the different
> variants, so I will need to know all of them.
>
> --
> Robert Kern
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] making "low" optional in numpy.randint

2016-02-17 Thread G Young
Joe: fair enough.  A separate function seems more reasonable.  Perhaps it
was a wording thing, but you kept saying "wrapper," which is not the same
as a separate function.

Josef: I don't think we are making people think more.  They're all keyword
arguments, so if you don't want to think about them, then you leave them as
the defaults, and everyone is happy.  The 'dtype' keyword was needed by
someone who wanted to generate a large array of uint8 random integers and
could not just call 'astype' due to memory constraints.  I would suggest
you read this issue here <https://github.com/numpy/numpy/issues/6790> and
the PR's that followed so that you have a better understanding as to why
this 'weird' behavior was chosen.
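
To make the memory argument concrete (a sketch that assumes a numpy with the
then-new 'dtype' keyword from that PR):

    import numpy as np

    n = 10**8
    # draws are materialized directly as uint8: roughly 100 MB
    a = np.random.randint(0, 256, size=n, dtype=np.uint8)

    # the astype route first allocates the default integer buffer
    # (roughly 800 MB on a 64-bit platform) before casting down:
    # b = np.random.randint(0, 256, size=n).astype(np.uint8)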


On Wed, Feb 17, 2016 at 8:30 PM, Alan Isaac <alan.is...@gmail.com> wrote:

> On 2/17/2016 12:28 PM, G Young wrote:
>
>> Perhaps, but we are not coding in Haskell.  We are coding in Python, and
>> the standard is that the endpoint is excluded, which renders your point
>> moot I'm afraid.
>>
>
>
> I am not sure what "standard" you are talking about.
> I thought we were talking about the user interface.
>
> Nobody is proposing changing the behavior of `range`.
> That is an entirely separate question.
>
> I'm not trying to change any minds, but let's not rely
> on spurious arguments.
>
>
> Cheers,
> Alan
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] making "low" optional in numpy.randint

2016-02-17 Thread G Young
Yes, you are correct in explaining my intentions.  However, as I also
mentioned in the PR discussion, I did not quite understand how your wrapper
idea would make things any more comprehensive at the cost of additional
overhead and complexity.  What do you mean by making the functions
"consistent" (i.e. outline the behavior *exactly* depending on the
inputs)?  As I've explained before, and I will state it again, the
different behavior for the high=None and low != None case is due to
backwards compatibility.

On Wed, Feb 17, 2016 at 6:52 PM, Joseph Fox-Rabinovitz <
jfoxrabinov...@gmail.com> wrote:

> On Wed, Feb 17, 2016 at 1:37 PM,  <josef.p...@gmail.com> wrote:
> >
> >
> > On Wed, Feb 17, 2016 at 10:01 AM, G Young <gfyoun...@gmail.com> wrote:
> >>
> >> Hello all,
> >>
> >> I have a PR open here that makes "low" an optional parameter in
> >> numpy.randint and introduces new behavior into the API as follows:
> >>
> >> 1) `low == None` and `high == None`
> >>
> >> Numbers are generated over the range `[lowbnd, highbnd)`, where `lowbnd
> =
> >> np.iinfo(dtype).min`, and `highbnd = np.iinfo(dtype).max`, where
> `dtype` is
> >> the provided integral type.
> >>
> >> 2) `low != None` and `high == None`
> >>
> >> If `low >= 0`, numbers are still generated over the range `[0,
> >> low)`, but if `low` < 0, numbers are generated over the range `[low,
> >> highbnd)`, where `highbnd` is defined as above.
> >>
> >> 3) `low == None` and `high != None`
> >>
> >> Numbers are generated over the range `[lowbnd, high)`, where `lowbnd` is
> >> defined as above.
> >
> >
> > My impression (*) is that this will be confusing, and uses a default
> that I
> > never ever needed.
> >
> > Maybe a better way would be to use low=-np.inf and high=np.inf  where inf
> > would be interpreted as the smallest and largest representable number.
> And
> > leave the defaults unchanged.
> >
> > (*) I didn't try to understand how it works for various cases.
> >
> > Josef
> >
>
> As I mentioned on the PR discussion, the thing that bothers me is the
> inconsistency between the new and the old functionality, specifically
> in #2. If `high` is None, the behavior is completely different depending on
> the value of `low`. Using `np.inf` instead of `None` may fix that,
> although I think that the author's idea was to avoid having to type
> the bounds in the `None`/`+/-np.inf` cases. I think that a better
> option is to have a separate wrapper to `randint` that implements this
> behavior in a consistent manner and leaves the current function
> consistent as well.
>
> -Joe
>
>
> >
> >
> >>
> >>
> >> The primary motivation was the second case, as it is more convenient to
> >> specify a 'dtype' by itself when generating such numbers in a similar
> vein
> >> to numpy.empty, except with initialized values.
> >>
> >> Looking forward to your feedback!
> >>
> >> Greg
> >>
> >> ___
> >> NumPy-Discussion mailing list
> >> NumPy-Discussion@scipy.org
> >> https://mail.scipy.org/mailman/listinfo/numpy-discussion
> >>
> >
> >
> > ___
> > NumPy-Discussion mailing list
> > NumPy-Discussion@scipy.org
> > https://mail.scipy.org/mailman/listinfo/numpy-discussion
> >
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] making "low" optional in numpy.randint

2016-02-17 Thread G Young
Perhaps, but we are not coding in Haskell.  We are coding in Python, and
the standard is that the endpoint is excluded, which renders your point
moot I'm afraid.

On Wed, Feb 17, 2016 at 5:10 PM, Alan Isaac  wrote:

> On 2/17/2016 11:46 AM, Robert Kern wrote:
>
>> some at least are 1-based indexing, so closed intervals do make sense.
>>
>
>
>
> Haskell is 0-indexed.
> And quite carefully thought out, imo.
>
> Cheers,
> Alan
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] making "low" optional in numpy.randint

2016-02-17 Thread G Young
Actually, it has already been deprecated because I did it myself. :)

On Wed, Feb 17, 2016 at 4:46 PM, Robert Kern  wrote:

> On Wed, Feb 17, 2016 at 4:40 PM, Alan Isaac  wrote:
> >
> > Behavior of random integer generation:
> > Python randint[a,b]
> > MATLAB randi  [a,b]
> > Mma RandomInteger [a,b]
> > haskell randomR   [a,b]
> > GAUSS rndi[a,b]
> > Maple rand[a,b]
> >
> > In short, NumPy's `randint` is non-standard (and,
> > I would add, non-intuitive).  Presumably this was
> > due to relying on a float draw from [0,1) along
> > with the use of floor.
>
> No, never was. It is implemented so because Python uses semi-open integer
> intervals by preference because it plays most nicely with 0-based indexing.
> Not sure about all of those systems, but some at least are 1-based
> indexing, so closed intervals do make sense.
>
> The Python stdlib's random.randint() closed interval is considered a
> mistake by python-dev leading to the implementation and preference for
> random.randrange() instead.
>
> > The divergence in behavior between the (later) Python
> > function of the same name is particularly unfortunate.
>
> Indeed, but unfortunately, this mistake dates way back to Numeric times,
> and easing the migration to numpy was a priority in the heady days of numpy
> 1.0.
>
> > So I suggest further work on this function is
> > not called for, and use of `random_integers`
> > should be encouraged.  Probably NumPy's `randint`
> > should be deprecated.
>
> Not while I'm here. Instead, `random_integers()` is discouraged and
> perhaps might eventually be deprecated.
>
> --
> Robert Kern
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] making "low" optional in numpy.randint

2016-02-17 Thread G Young
Hello all,

I have a PR open here  that makes
"low" an optional parameter in numpy.randint and introduces new behavior
into the API as follows:

1) `low == None` and `high == None`

Numbers are generated over the range `[lowbnd, highbnd)`, where `lowbnd =
np.iinfo(dtype).min`, and `highbnd = np.iinfo(dtype).max`, where `dtype` is
the provided integral type.

2) `low != None` and `high == None`

If `low >= 0`, numbers are still generated over the range `[0,
low)`, but if `low` < 0, numbers are generated over the range `[low,
highbnd)`, where `highbnd` is defined as above.

3) `low == None` and `high != None`

Numbers are generated over the range `[lowbnd, high)`, where `lowbnd` is
defined as above.

The primary motivation was the second case, as it is more convenient to
specify a 'dtype' by itself when generating such numbers in a similar vein
to numpy.empty, except with initialized values.
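
For concreteness, the proposed semantics could be written as a thin
pure-Python wrapper over the existing randint (a hypothetical helper for
illustration only, not the PR's implementation; it assumes the 'dtype'
keyword is available):

    import numpy as np

    def randint_optional_low(low=None, high=None, size=None, dtype=int):
        info = np.iinfo(dtype)
        if low is None and high is None:          # case 1
            return np.random.randint(info.min, info.max, size=size, dtype=dtype)
        if high is None:                          # case 2
            if low >= 0:
                # backwards compatible: a single non-negative bound means [0, low)
                return np.random.randint(0, low, size=size, dtype=dtype)
            return np.random.randint(low, info.max, size=size, dtype=dtype)
        if low is None:                           # case 3
            return np.random.randint(info.min, high, size=size, dtype=dtype)
        return np.random.randint(low, high, size=size, dtype=dtype)

    randint_optional_low(dtype=np.int8)            # anywhere in [-128, 127)
    randint_optional_low(10)                       # [0, 10), backwards compatible
    randint_optional_low(high=10, dtype=np.int8)   # [-128, 10)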

Looking forward to your feedback!

Greg
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] building NumPy with gcc if Python was built with icc?!?

2016-02-16 Thread G Young
I'm not sure about anyone else, but having played around with both
gcc and icc, I'm afraid you might be out of luck.  Is there any reason why
you can't use a Python distribution built with gcc?

On Tue, Feb 16, 2016 at 7:39 PM, BERGER Christian 
wrote:

> Hi All,
>
>
>
> Here's a potentially dumb question: is it possible to build NumPy with
> gcc, if python was built with icc?
>
> Right now, the build is failing in the toolchain check phase, because gcc
> doesn't know how to handle icc-specific c flags (like -fp-model, prec-sqrt,
> ...)
>
> In our environment we're providing an embedded python that our customers
> should be able to use and extend with 3rd party modules (like numpy).
> Problem is that our sw is built using icc, but we don't want to force our
> customers to do the same and we also don't want to build every possible 3rd
> party module for our customers.
>
>
>
> Thanks for your help,
>
> Christian
>
>
>
> This email and any attachments are intended solely for the use of the
> individual or entity to whom it is addressed and may be confidential and/or
> privileged.
>
> If you are not one of the named recipients or have received this email in
> error,
>
> (i) you should not read, disclose, or copy it,
>
> (ii) please notify sender of your receipt by reply email and delete this
> email and all attachments,
>
> (iii) Dassault Systemes does not accept or assume any liability or
> responsibility for any use of or reliance on this email.
>
> For other languages, go to http://www.3ds.com/terms/email-disclaimer
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Fwd: Windows wheels for testing

2016-02-13 Thread G Young
I've actually had test failures on occasion (i.e. when I run
"numpy.test()") with his builds, but overall they are quite good.  Speaking
of MKL, for anyone who uses conda: does anyone know if it is possible to
link the "mkl" package to the numpy source?  My first guess is no, since the
description appears to imply that the package provides runtime libraries
rather than the static libraries that numpy would need, but perhaps someone
who knows better can shed some light.

On Sat, Feb 13, 2016 at 3:42 PM, R Schumacher  wrote:

> Have you all conferred with C Gohlke on his Windows build bot? I've never
> seen a description of his recipes.
> The MKL linking aside, his binaries always seem to work flawlessly.
>
> - Ray
>
>
> At 11:16 PM 2/12/2016, you wrote:
>
> AFAIK the vcvarsall.bat error occurs when your MSVC directories aren't
properly linked in your system registry, so Python cannot find the file.
> This is not a numpy-specific issue, so I certainly would agree that that
> failure is not blocking.
>
Other than that, this build contains the mingw32.lib bug that I fixed
here , but everything else passes on relevant Python versions for 32-bit!
>
> On Sat, Feb 13, 2016 at 4:23 AM, Matthew Brett 
> wrote:
> On Fri, Feb 12, 2016 at 8:18 PM, R Schumacher  wrote:
> > At 03:45 PM 2/12/2016, you wrote:
> >>
> >> PS C:\tmp> c:\Python35\python -m venv np-testing
> >> PS C:\tmp> .\np-testing\Scripts\Activate.ps1
> >> (np-testing) PS C:\tmp> pip install -f
> >> https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds numpy nose
> >
> >
> > C:\Python34\Scripts>pip install  "D:\Python
> > distros\numpy-1.10.4-cp34-none-win_amd64.whl"
> > Unpacking d:\python distros\numpy-1.10.4-cp34-none-win_amd64.whl
> > Installing collected packages: numpy
> > Successfully installed numpy
> > Cleaning up...
> >
> > C:\Python34\Scripts>..\python
> > Python 3.4.2 (v3.4.2:ab2c023a9432, Oct  6 2014, 22:16:31) [MSC v.1600
> 64 bit
> > (AMD64)] on win32
> > Type "help", "copyright", "credits" or "license" for more information.
>  import numpy
>  numpy.test()
> > Running unit tests for numpy
> > NumPy version 1.10.4
> > NumPy relaxed strides checking option: False
> > NumPy is installed in C:\Python34\lib\site-packages\numpy
> > Python version 3.4.2 (v3.4.2:ab2c023a9432, Oct  6 2014, 22:16:31) [MSC
> > v.1600 64 bit (AMD64)]
> > nose version 1.3.7
> >
> ...FS...
> >
> .S..
> >
> ..C:\Python34\lib\unittest\case.
> > py:162: DeprecationWarning: using a non-integer number instead of an
> integer
> > will result in an error in the future
> >   callable_obj(*args, **kwargs)
> > C:\Python34\lib\unittest\case.py:162: DeprecationWarning: using a
> > non-integer number instead of an integer will
> > result in an error in the future
> >   callable_obj(*args, **kwargs)
> > C:\Python34\lib\unittest\case.py:162: DeprecationWarning: using a
> > non-integer number instead of an integer will result i
> > n an error in the future
> >   callable_obj(*args, **kwargs)
> >
> ...S
> >
> 
> >
> ..C:\Python34\lib\unittest\case.py:162:
> > Deprecat
> > ionWarning: using a non-integer number instead of an integer will result
> in
> > an error in the future
> >   callable_obj(*args, **kwargs)
> > ..C:\Python34\lib\unittest\case.py:162: DeprecationWarning: using a
> > non-integer number instead of an integer will result
> >  in an error in the future
> >   callable_obj(*args, **kwargs)
> > C:\Python34\lib\unittest\case.py:162: DeprecationWarning: using a
> > non-integer number instead of an integer will result i
> > n an error in the future
> >   callable_obj(*args, **kwargs)
> > C:\Python34\lib\unittest\case.py:162: DeprecationWarning: using a
> > non-integer number instead of an integer will result i
> > n an error in the future
> >   callable_obj(*args, **kwargs)
> > C:\Python34\lib\unittest\case.py:162: DeprecationWarning: using a
> > non-integer number instead of an integer will result i
> > n an error in the future
> >   callable_obj(*args, **kwargs)
> >
> 
> >
> 

Re: [Numpy-discussion] Fwd: Windows wheels for testing

2016-02-12 Thread G Young
AFAIK the vcvarsall.bat error occurs when your MSVC directories aren't
properly linked in your system registry, so Python cannot find the file.
This is not a numpy-specific issue, so I certainly would agree that that
failure is not blocking.

Other than that, this build contains the mingw32.lib bug that I fixed here
, but everything else passes on relevant Python versions for 32-bit!

On Sat, Feb 13, 2016 at 4:23 AM, Matthew Brett 
wrote:

> On Fri, Feb 12, 2016 at 8:18 PM, R Schumacher  wrote:
> > At 03:45 PM 2/12/2016, you wrote:
> >>
> >> PS C:\tmp> c:\Python35\python -m venv np-testing
> >> PS C:\tmp> .\np-testing\Scripts\Activate.ps1
> >> (np-testing) PS C:\tmp> pip install -f
> >> https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds numpy nose
> >
> >
> > C:\Python34\Scripts>pip install  "D:\Python
> > distros\numpy-1.10.4-cp34-none-win_amd64.whl"
> > Unpacking d:\python distros\numpy-1.10.4-cp34-none-win_amd64.whl
> > Installing collected packages: numpy
> > Successfully installed numpy
> > Cleaning up...
> >
> > C:\Python34\Scripts>..\python
> > Python 3.4.2 (v3.4.2:ab2c023a9432, Oct  6 2014, 22:16:31) [MSC v.1600 64
> bit
> > (AMD64)] on win32
> > Type "help", "copyright", "credits" or "license" for more information.
>  import numpy
>  numpy.test()
> > Running unit tests for numpy
> > NumPy version 1.10.4
> > NumPy relaxed strides checking option: False
> > NumPy is installed in C:\Python34\lib\site-packages\numpy
> > Python version 3.4.2 (v3.4.2:ab2c023a9432, Oct  6 2014, 22:16:31) [MSC
> > v.1600 64 bit (AMD64)]
> > nose version 1.3.7
> >
> ...FS...
> >
> .S..
> >
> ..C:\Python34\lib\unittest\case.
> > py:162: DeprecationWarning: using a non-integer number instead of an
> integer
> > will result in an error in the future
> >   callable_obj(*args, **kwargs)
> > C:\Python34\lib\unittest\case.py:162: DeprecationWarning: using a
> > non-integer number instead of an integer will
> > result in an error in the future
> >   callable_obj(*args, **kwargs)
> > C:\Python34\lib\unittest\case.py:162: DeprecationWarning: using a
> > non-integer number instead of an integer will result i
> > n an error in the future
> >   callable_obj(*args, **kwargs)
> >
> ...S
> >
> 
> >
> ..C:\Python34\lib\unittest\case.py:162:
> > Deprecat
> > ionWarning: using a non-integer number instead of an integer will result
> in
> > an error in the future
> >   callable_obj(*args, **kwargs)
> > ..C:\Python34\lib\unittest\case.py:162: DeprecationWarning: using a
> > non-integer number instead of an integer will result
> >  in an error in the future
> >   callable_obj(*args, **kwargs)
> > C:\Python34\lib\unittest\case.py:162: DeprecationWarning: using a
> > non-integer number instead of an integer will result i
> > n an error in the future
> >   callable_obj(*args, **kwargs)
> > C:\Python34\lib\unittest\case.py:162: DeprecationWarning: using a
> > non-integer number instead of an integer will result i
> > n an error in the future
> >   callable_obj(*args, **kwargs)
> > C:\Python34\lib\unittest\case.py:162: DeprecationWarning: using a
> > non-integer number instead of an integer will result i
> > n an error in the future
> >   callable_obj(*args, **kwargs)
> >
> 
> >
> 
> >
> 
> >
> 
> >
> 
> >
> 
> >
> 
> >
> 
> >
> 

Re: [Numpy-discussion] Building Numpy with OpenBLAS

2016-01-27 Thread G Young
That's great!  I look forward to seeing that.

Greg

On Wed, Jan 27, 2016 at 6:27 PM, Carl Kleffner <cmkleff...@gmail.com> wrote:

> In my timeplan the next mingwpy PR on numpy master is anticipated to take
> place at the weekend, together with a build description. This PR is
> targeted to build numpy with OpenBLAS.
>
> Carl
>
> 2016-01-27 16:01 GMT+01:00 Ralf Gommers <ralf.gomm...@gmail.com>:
>
>>
>>
>> On Wed, Jan 27, 2016 at 3:51 PM, G Young <gfyoun...@gmail.com> wrote:
>>
>>> I don't need it at this point.  I'm just going through the exercise for
>>> purposes of updating building from source on Windows.  But that's good to
>>> know though.  Thanks!
>>>
>>
>> That effort is much appreciated by the way. Updating the build info on
>> all platforms on http://scipy.org/scipylib/building/index.html is a
>> significant amount of work, and it has never been in a state one could call
>> complete. So more contributions definitely welcome!
>>
>> Ralf
>>
>>
>>
>>>
>>> Greg
>>>
>>> On Wed, Jan 27, 2016 at 2:48 PM, Ralf Gommers <ralf.gomm...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Wed, Jan 27, 2016 at 3:19 PM, G Young <gfyoun...@gmail.com> wrote:
>>>>
>>>>> NumPy will "build" successfully, but then when I type "import numpy",
>>>>> it cannot import the multiarray PYD file.
>>>>>
>>>>> I am using dependency walker, and that's how I know it's the
>>>>> libopenblas.dll file that it's not linking to properly, hence my original
>>>>> question.
>>>>>
>>>>
>>>> The support for MingwPy in numpy.distutils had to be temporarily
>>>> reverted (see https://github.com/numpy/numpy/pull/6536), because the
>>>> patch caused other issues. So likely it just won't work out of the box now.
>>>> If you need it, maybe you can reapply that reverted patch. But otherwise
>>>> I'd wait a little bit; we'll sort out the MingwPy build in the near future.
>>>>
>>>> Ralf
>>>>
>>>>
>>>>
>>>>> Greg
>>>>>
>>>>> On Wed, Jan 27, 2016 at 1:58 PM, Michael Sarahan <msara...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> When you say find/use, can you please clarify whether you have
>>>>>> completed the compilation/linking successfully?  I'm not clear on exactly
>>>>>> when you're having problems.  What is the error output?
>>>>>>
>>>>>> One very helpful tool in diagnosing dll problems is dependency
>>>>>> walker: http://www.dependencywalker.com/
>>>>>>
>>>>>> It may be that your openblas has a dependency that it can't load for
>>>>>> some reason.  Dependency walker works on .pyd files as well as .dll 
>>>>>> files.
>>>>>>
>>>>>> Hth,
>>>>>> Michael
>>>>>>
>>>>>> On Wed, Jan 27, 2016, 07:40 G Young <gfyoun...@gmail.com> wrote:
>>>>>>
>>>>>>> I do have my site.cfg file pointing to my library which contains a
>>>>>>> .lib file along with the appropriate include_dirs parameter.  However,
>>>>>>> NumPy can't seem to find / use the DLL file no matter where I put it
>>>>>>> (numpy/core, same directory as openblas.lib).  By the way, I should 
>>>>>>> mention
>>>>>>> that I am using a slightly dated version of OpenBLAS (0.2.9), but that
>>>>>>> shouldn't have any effect I would imagine.
>>>>>>>
>>>>>>> Greg
>>>>>>>
>>>>>>> On Wed, Jan 27, 2016 at 1:14 PM, Michael Sarahan <msara...@gmail.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> I'm not sure about the mingw tool chain, but usually on windows at
>>>>>>>> link time you need a .lib file, called the import library.  The .dll is
>>>>>>>> used at runtime, not at link time.  This is different from *nix, where 
>>>>>>>> the
>>>>>>>> .so serves both purposes.  The link you posted mentions import files, 
>>>>>>>> so I
> >>>>>>>> hope this is helpful information.

Re: [Numpy-discussion] Building Numpy with OpenBLAS

2016-01-27 Thread G Young
Hello all,

I'm trying to update the documentation for building Numpy from source, and
I've hit a brick wall in trying to build the library using OpenBLAS because
I can't seem to link the libopenblas.dll file.  I tried following the
suggestion of placing the DLL in numpy/core as suggested here
<https://github.com/numpy/numpy/wiki/Mingw-static-toolchain#notes> but it
still doesn't pick it up.  What am I doing wrong?

Thanks,

Greg
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Building Numpy with OpenBLAS

2016-01-27 Thread G Young
I don't need it at this point.  I'm just going through the exercise to
update the instructions for building from source on Windows.  But that's
good to know.  Thanks!

Greg

On Wed, Jan 27, 2016 at 2:48 PM, Ralf Gommers <ralf.gomm...@gmail.com>
wrote:

>
>
> On Wed, Jan 27, 2016 at 3:19 PM, G Young <gfyoun...@gmail.com> wrote:
>
>> NumPy will "build" successfully, but then when I type "import numpy", it
>> cannot import the multiarray PYD file.
>>
>> I am using dependency walker, and that's how I know it's the
>> libopenblas.dll file that it's not linking to properly, hence my original
>> question.
>>
>
> The support for MingwPy in numpy.distutils had to be temporarily reverted
> (see https://github.com/numpy/numpy/pull/6536), because the patch caused
> other issues. So likely it just won't work out of the box now. If you need
> it, maybe you can reapply that reverted patch. But otherwise I'd wait a
> little bit; we'll sort out the MingwPy build in the near future.
>
> Ralf
>
>
>
>> Greg
>>
>> On Wed, Jan 27, 2016 at 1:58 PM, Michael Sarahan <msara...@gmail.com>
>> wrote:
>>
>>> When you say find/use, can you please clarify whether you have completed
>>> the compilation/linking successfully?  I'm not clear on exactly when you're
>>> having problems.  What is the error output?
>>>
>>> One very helpful tool in diagnosing dll problems is dependency walker:
>>> http://www.dependencywalker.com/
>>>
>>> It may be that your openblas has a dependency that it can't load for
>>> some reason.  Dependency walker works on .pyd files as well as .dll files.
>>>
>>> Hth,
>>> Michael
>>>
>>> On Wed, Jan 27, 2016, 07:40 G Young <gfyoun...@gmail.com> wrote:
>>>
>>>> I do have my site.cfg file pointing to my library which contains a .lib
>>>> file along with the appropriate include_dirs parameter.  However, NumPy
>>>> can't seem to find / use the DLL file no matter where I put it (numpy/core,
>>>> same directory as openblas.lib).  By the way, I should mention that I am
>>>> using a slightly dated version of OpenBLAS (0.2.9), but that shouldn't have
>>>> any effect I would imagine.
>>>>
>>>> Greg
>>>>
>>>> On Wed, Jan 27, 2016 at 1:14 PM, Michael Sarahan <msara...@gmail.com>
>>>> wrote:
>>>>
>>>>> I'm not sure about the mingw tool chain, but usually on windows at
>>>>> link time you need a .lib file, called the import library.  The .dll is
>>>>> used at runtime, not at link time.  This is different from *nix, where the
>>>>> .so serves both purposes.  The link you posted mentions import files, so I
>>>>> hope this is helpful information.
>>>>>
>>>>> Best,
>>>>> Michael
>>>>>
>>>>> On Wed, Jan 27, 2016, 03:39 G Young <gfyoun...@gmail.com> wrote:
>>>>>
>>>>>> Hello all,
>>>>>>
>>>>>> I'm trying to update the documentation for building Numpy from
>>>>>> source, and I've hit a brick wall in trying to build the library using
>>>>>> OpenBLAS because I can't seem to link the libopenblas.dll file.  I tried
>>>>>> following the suggestion of placing the DLL in numpy/core as suggested
>>>>>> here
>>>>>> <https://github.com/numpy/numpy/wiki/Mingw-static-toolchain#notes> but
>>>>>> it still doesn't pick it up.  What am I doing wrong?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Greg
>>>>>> ___
>>>>>> NumPy-Discussion mailing list
>>>>>> NumPy-Discussion@scipy.org
>>>>>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>>>>
>>>>>
>>>>> ___
>>>>> NumPy-Discussion mailing list
>>>>> NumPy-Discussion@scipy.org
>>>>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>>>
>>>>>
>>>> ___
>>>> NumPy-Discussion mailing list
>>>> NumPy-Discussion@scipy.org
>>>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>>
>>>
>>> ___
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion@scipy.org
>>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>
>>>
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Building Numpy with OpenBLAS

2016-01-27 Thread G Young
NumPy will "build" successfully, but then when I type "import numpy", it
cannot import the multiarray PYD file.

I am using Dependency Walker, and that's how I know it's the
libopenblas.dll file that isn't being linked properly, hence my original
question.

Greg

On Wed, Jan 27, 2016 at 1:58 PM, Michael Sarahan <msara...@gmail.com> wrote:

> When you say find/use, can you please clarify whether you have completed
> the compilation/linking successfully?  I'm not clear on exactly when you're
> having problems.  What is the error output?
>
> One very helpful tool in diagnosing dll problems is dependency walker:
> http://www.dependencywalker.com/
>
> It may be that your openblas has a dependency that it can't load for some
> reason.  Dependency walker works on .pyd files as well as .dll files.
>
> Hth,
> Michael
>
> On Wed, Jan 27, 2016, 07:40 G Young <gfyoun...@gmail.com> wrote:
>
>> I do have my site.cfg file pointing to my library which contains a .lib
>> file along with the appropriate include_dirs parameter.  However, NumPy
>> can't seem to find / use the DLL file no matter where I put it (numpy/core,
>> same directory as openblas.lib).  By the way, I should mention that I am
>> using a slightly dated version of OpenBLAS (0.2.9), but that shouldn't have
>> any effect I would imagine.
>>
>> Greg
>>
>> On Wed, Jan 27, 2016 at 1:14 PM, Michael Sarahan <msara...@gmail.com>
>> wrote:
>>
>>> I'm not sure about the mingw tool chain, but usually on windows at link
>>> time you need a .lib file, called the import library.  The .dll is used at
>>> runtime, not at link time.  This is different from *nix, where the .so
>>> serves both purposes.  The link you posted mentions import files, so I hope
>>> this is helpful information.
>>>
>>> Best,
>>> Michael
>>>
>>> On Wed, Jan 27, 2016, 03:39 G Young <gfyoun...@gmail.com> wrote:
>>>
>>>> Hello all,
>>>>
>>>> I'm trying to update the documentation for building Numpy from source,
>>>> and I've hit a brick wall in trying to build the library using OpenBLAS
>>>> because I can't seem to link the libopenblas.dll file.  I tried following
>>>> the suggestion of placing the DLL in numpy/core as suggested here
>>>> <https://github.com/numpy/numpy/wiki/Mingw-static-toolchain#notes> but
>>>> it still doesn't pick it up.  What am I doing wrong?
>>>>
>>>> Thanks,
>>>>
>>>> Greg
>>>> ___
>>>> NumPy-Discussion mailing list
>>>> NumPy-Discussion@scipy.org
>>>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>>
>>>
>>> ___
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion@scipy.org
>>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>
>>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Building Numpy with OpenBLAS

2016-01-27 Thread G Young
I do have my site.cfg file pointing to my library directory, which contains
a .lib file, along with the appropriate include_dirs parameter.  However,
NumPy can't seem to find or use the DLL file no matter where I put it
(numpy/core, or the same directory as openblas.lib).  By the way, I should
mention that I am using a slightly dated version of OpenBLAS (0.2.9), but I
wouldn't imagine that has any effect.
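
(For reference, a minimal sketch of the kind of site.cfg being described,
assuming the standard [openblas] section from numpy's site.cfg.example; the
paths below are placeholders, not actual directories:)

[openblas]
libraries = openblas
# placeholder paths -- point these at the directories containing the
# OpenBLAS import library (.lib) and the OpenBLAS headers
library_dirs = C:\opt\OpenBLAS\lib
include_dirs = C:\opt\OpenBLAS\include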

Greg

On Wed, Jan 27, 2016 at 1:14 PM, Michael Sarahan <msara...@gmail.com> wrote:

> I'm not sure about the mingw tool chain, but usually on windows at link
> time you need a .lib file, called the import library.  The .dll is used at
> runtime, not at link time.  This is different from *nix, where the .so
> serves both purposes.  The link you posted mentions import files, so I hope
> this is helpful information.
>
> Best,
> Michael
>
> On Wed, Jan 27, 2016, 03:39 G Young <gfyoun...@gmail.com> wrote:
>
>> Hello all,
>>
>> I'm trying to update the documentation for building Numpy from source,
>> and I've hit a brick wall in trying to build the library using OpenBLAS
>> because I can't seem to link the libopenblas.dll file.  I tried following
>> the suggestion of placing the DLL in numpy/core as suggested here
>> <https://github.com/numpy/numpy/wiki/Mingw-static-toolchain#notes> but
>> it still doesn't pick it up.  What am I doing wrong?
>>
>> Thanks,
>>
>> Greg
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Appveyor Testing Changes

2016-01-26 Thread G Young
Perhaps a pip + virtualenv build as well, since that's one way mentioned in
the online docs for installing from source.  Beyond that and what you
suggested, I can't think of anything else for the time being.

Greg

On Tue, Jan 26, 2016 at 6:59 PM, Ralf Gommers <ralf.gomm...@gmail.com>
wrote:

>
>
> On Tue, Jan 26, 2016 at 2:13 AM, G Young <gfyoun...@gmail.com> wrote:
>
>> Ah, yes, that is true.  That point had completely escaped my mind.  In
>> light of this, it seems it isn't worthwhile to completely switch over to
>> pip + virtualenv.  It might actually be better to rewrite the current
>> Appveyor tests to use environments so that the test suite can be
>> expanded, though I'm not sure how prudent that is given how slowly
>> Appveyor tests run.
>>
>
> At the moment Appveyor is already a bit of a bottleneck - it regularly
> hasn't started yet when TravisCI is already done. This can be solved via a
> paid account, we should seriously consider that when we have a bit more
> experience with it (Appveyor tests have been running for less than a month
> I think). But it does mean we should go for a sparse test matrix, and use a
> more complete one (all Python versions for example) on TravisCI. In the
> near future we'll have to add MingwPy test runs to Appveyor. Beyond that
> I'm not sure what needs to be added?
>
> Ralf
>
>
>
>>
>> Greg
>>
>> On Tue, Jan 26, 2016 at 12:13 AM, Bryan Van de Ven <bry...@continuum.io>
>> wrote:
>>
>>>
>>> > On Jan 25, 2016, at 5:21 PM, G Young <gfyoun...@gmail.com> wrote:
>>> >
>>> > With regards to testing numpy, both Conda and Pip + Virtualenv work
>>> quite well.  I have used both to install master and run unit tests, and
>>> both pass with flying colors.  This chart here illustrates my point nicely
>>> as well.
>>> >
>>> > However, I can't seem to find / access Conda installations for
>>> slightly older versions of Python (e.g. Python 3.4).  Perhaps this is not
>>> much of an issue now with the next release (1.12) being written only for
>>> Python 2.7 and Python 3.4 - 5.  However, if we were to wind the clock
>>> slightly back to when we were testing 2.6 - 7, 3.2 - 5, I feel Conda falls
>>> short in being able to test on a variety of Python distributions given the
>>> nature of Conda releases.  Maybe that situation is no longer the case now,
>>> but in the long term, it could easily happen again.
>>>
>>> Why do you need the installers? The whole point of conda is to be able
>>> to create environments with whatever configuration you need. Just pick the
>>> newest installer and use "conda create" from there:
>>>
>>> bryan@0199-bryanv (git:streaming) ~/work/bokeh/bokeh $ conda create -n
>>> py26 python=2.6
>>> Fetching package metadata: ..
>>> Solving package specifications: ..
>>> Package plan for installation in environment
>>> /Users/bryan/anaconda/envs/py26:
>>>
>>> The following packages will be downloaded:
>>>
> >>> package            |   build
> >>> -------------------|------------------
> >>> setuptools-18.0.1  |   py26_0   343 KB
> >>> pip-7.1.0          |   py26_0   1.4 MB
> >>>                                 ------
> >>>                         Total:  1.7 MB
>>>
>>> The following NEW packages will be INSTALLED:
>>>
>>> openssl:1.0.1k-1
>>> pip:7.1.0-py26_0
>>> python: 2.6.9-1
>>> readline:   6.2-2
>>> setuptools: 18.0.1-py26_0
>>> sqlite: 3.9.2-0
>>> tk: 8.5.18-0
>>> zlib:   1.2.8-0
>>>
>>> Proceed ([y]/n)?
>>>
>>> ___
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion@scipy.org
>>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>
>>
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Appveyor Testing Changes

2016-01-25 Thread G Young
Hello all,

I currently have a branch on my fork (not PR) where I am experimenting with
running Appveyor CI via Virtualenv instead of Conda.  I have a build running
here.
What do people think of using Virtualenv (as we do on Travis) instead of
Conda for testing?

Thanks,

Greg
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Appveyor Testing Changes

2016-01-25 Thread G Young
With regards to testing numpy, both Conda and Pip + Virtualenv work quite
well.  I have used both to install master and run unit tests, and both pass
with flying colors.  This chart here
<http://conda.pydata.org/docs/_downloads/conda-pip-virtualenv-translator.html>
illustrates
my point nicely as well.

However, I can't seem to find / access Conda installations for slightly
older versions of Python (e.g. Python 3.4).  Perhaps this is not much of an
issue now with the next release (1.12) being written only for Python 2.7
and Python 3.4 - 5.  However, if we were to wind the clock slightly back to
when we were testing 2.6 - 7, 3.2 - 5, I feel Conda falls short in being
able to test on a variety of Python distributions given the nature of Conda
releases.  Maybe that situation is no longer the case now, but in the long
term, it could easily happen again.

Greg


On Mon, Jan 25, 2016 at 10:50 PM, Nathaniel Smith <n...@pobox.com> wrote:

> On Mon, Jan 25, 2016 at 2:37 PM, G Young <gfyoun...@gmail.com> wrote:
> > Hello all,
> >
> > I currently have a branch on my fork (not PR) where I am experimenting
> with
> > running Appveyor CI via Virtualenv instead of Conda.  I have build
> running
> > here.  What do people think of using Virtualenv (as we do on Travis)
> instead
> > of Conda for testing?
>
> Can you summarize the advantages and disadvantages that you're aware of?
>
> -n
>
> --
> Nathaniel J. Smith -- https://vorpus.org
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Appveyor Testing Changes

2016-01-25 Thread G Young
Ah, yes, that is true.  That point had completely escaped my mind.  In
light of this, it seems it isn't worthwhile to completely switch over to
pip + virtualenv.  It might actually be better to rewrite the current
Appveyor tests to use environments so that the test suite can be expanded,
though I'm not sure how prudent that is given how slowly Appveyor tests run.

Greg

On Tue, Jan 26, 2016 at 12:13 AM, Bryan Van de Ven <bry...@continuum.io>
wrote:

>
> > On Jan 25, 2016, at 5:21 PM, G Young <gfyoun...@gmail.com> wrote:
> >
> > With regards to testing numpy, both Conda and Pip + Virtualenv work
> quite well.  I have used both to install master and run unit tests, and
> both pass with flying colors.  This chart here illustrates my point nicely
> as well.
> >
> > However, I can't seem to find / access Conda installations for slightly
> older versions of Python (e.g. Python 3.4).  Perhaps this is not much of an
> issue now with the next release (1.12) being written only for Python 2.7
> and Python 3.4 - 5.  However, if we were to wind the clock slightly back to
> when we were testing 2.6 - 7, 3.2 - 5, I feel Conda falls short in being
> able to test on a variety of Python distributions given the nature of Conda
> releases.  Maybe that situation is no longer the case now, but in the long
> term, it could easily happen again.
>
> Why do you need the installers? The whole point of conda is to be able to
> create environments with whatever configuration you need. Just pick the
> newest installer and use "conda create" from there:
>
> bryan@0199-bryanv (git:streaming) ~/work/bokeh/bokeh $ conda create -n
> py26 python=2.6
> Fetching package metadata: ..
> Solving package specifications: ..
> Package plan for installation in environment
> /Users/bryan/anaconda/envs/py26:
>
> The following packages will be downloaded:
>
> package            |   build
> -------------------|------------------
> setuptools-18.0.1  |   py26_0   343 KB
> pip-7.1.0          |   py26_0   1.4 MB
>                                 ------
>                         Total:  1.7 MB
>
> The following NEW packages will be INSTALLED:
>
> openssl:1.0.1k-1
> pip:7.1.0-py26_0
> python: 2.6.9-1
> readline:   6.2-2
> setuptools: 18.0.1-py26_0
> sqlite: 3.9.2-0
> tk: 8.5.18-0
> zlib:   1.2.8-0
>
> Proceed ([y]/n)?
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Behavior of np.random.uniform

2016-01-19 Thread G Young
Of the methods defined in *numpy/mtrand.pyx* (excluding helper functions
and *random_integers*, as they are all related to *randint*), *randint* is
the only other function with *low* and *high* parameters.  However, it
enforces *high* > *low*.
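
(For concreteness, a quick sketch of the contrast, assuming the current
behavior described in the quoted discussion below:)

import numpy as np

# randint insists on low < high and raises otherwise.
np.random.randint(0, 10)       # draws from [0, 10)
# np.random.randint(10, 0)     # raises ValueError (low >= high)

# uniform accepts either ordering: uniform(low, high) is effectively
# low + (high - low) * random_sample(), so swapped bounds just flip the
# open end of the interval.
np.random.uniform(0.0, 10.0)   # samples in [0.0, 10.0)
np.random.uniform(10.0, 0.0)   # samples in (0.0, 10.0], no error raised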

Greg

On Tue, Jan 19, 2016 at 1:36 PM, Benjamin Root  wrote:

> Are there other functions where this behavior may or may not be happening?
> If it isn't consistent across all np.random functions, it probably should
> be, one way or the other.
>
> Ben Root
>
> On Tue, Jan 19, 2016 at 5:10 AM, Jaime Fernández del Río <
> jaime.f...@gmail.com> wrote:
>
>> Hi all,
>>
>> There is a PR (#7026 ) that
>> documents the current behavior of np.random.uniform when the low and high
>> parameters it takes do not conform to the expected low < high. Basically:
>>
>>- if low < high, random numbers are drawn from [low, high),
>>- if low = high, all random numbers will be equal to low, and
>>- if low > high, random numbers are drawn from (high, low] (notice
>>the change in the open side of the interval.)
>>
>> My only worry is that, once we document this, we can no longer claim that
>> it is a bug.  So I would like to hear from others what do they think.  The
>> other more or less obvious options would be to:
>>
>>- Raise an error, but this would require a deprecation cycle, as
>>people may be relying on the current undocumented behavior.
>>- Check the inputs and draw numbers from [min(low, high), max(low,
>>high)), which is minimally different from current behavior.
>>
>> I will be merging the current documentation changes in the next few days,
>> so it would be good if any concerns were voiced before that.
>>
>> Thanks,
>>
>> Jaime
>>
>> --
>> (\__/)
>> ( O.o)
>> ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes
>> de dominación mundial.
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Behavior of np.random.uniform

2016-01-19 Thread G Young
In randrange, it raises an exception if low >= high.

I should also add that, AFAIK, enforcing low < high with floats is a lot
trickier than it is for integers.  I have been knee-deep in corner cases
for some time with *randint*, where numbers that are visually different are
cast to the same number by *numpy* due to rounding and representation
issues.  That situation only gets worse with floats.
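
(As one concrete illustration of the kind of collision meant here, assuming
the bounds get routed through a C double at some point, two visually
different integers can compare equal after the cast:)

>>> float(2**63) == float(2**63 + 1)
True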

Greg

On Tue, Jan 19, 2016 at 4:23 PM, Chris Barker - NOAA Federal <
chris.bar...@noaa.gov> wrote:

> What does the standard lib do for randrange? I see that randint is closed
> on both ends, so order doesn't matter, though if it raises for b<a, then
> that's a precedent we could follow.
>
> (Sorry, on a phone, can't check)
>
> CHB
>
>
>
> On Jan 19, 2016, at 6:21 AM, G Young <gfyoun...@gmail.com> wrote:
>
> Of the methods defined in *numpy/mtrand.pyx* (excluding helper functions
> and *random_integers*, as they are all related to *randint*), *randint* is
> the only other function with *low* and *high* parameters.  However, it
> enforces *high* > *low*.
>
> Greg
>
> On Tue, Jan 19, 2016 at 1:36 PM, Benjamin Root <ben.v.r...@gmail.com>
> wrote:
>
>> Are there other functions where this behavior may or may not be
>> happening? If it isn't consistent across all np.random functions, it
>> probably should be, one way or the other.
>>
>> Ben Root
>>
>> On Tue, Jan 19, 2016 at 5:10 AM, Jaime Fernández del Río <
>> jaime.f...@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> There is a PR (#7026 <https://github.com/numpy/numpy/pull/7026>) that
>>> documents the current behavior of np.random.uniform when the low and
>>> high parameters it takes do not conform to the expected low < high.
>>> Basically:
>>>
>>>- if low < high, random numbers are drawn from [low, high),
>>>- if low = high, all random numbers will be equal to low, and
>>>- if low > high, random numbers are drawn from (high, low] (notice
>>>the change in the open side of the interval.)
>>>
>>> My only worry is that, once we document this, we can no longer claim
>>> that it is a bug.  So I would like to hear from others what do they think.
>>> The other more or less obvious options would be to:
>>>
>>>- Raise an error, but this would require a deprecation cycle, as
>>>people may be relying on the current undocumented behavior.
>>>- Check the inputs and draw numbers from [min(low, high), max(low,
>>>high)), which is minimally different from current behavior.
>>>
>>> I will be merging the current documentation changes in the next few
>>> days, so it would be good if any concerns were voiced before that.
>>>
>>> Thanks,
>>>
>>> Jaime
>>>
>>> --
>>> (\__/)
>>> ( O.o)
>>> ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus
>>> planes de dominación mundial.
>>>
>>> ___
>>> NumPy-Discussion mailing list
>>> NumPy-Discussion@scipy.org
>>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>>
>>>
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Building master issues on Windows

2016-01-07 Thread G Young
Hello all,

Turns out I needed to do a complete re-installation of essentially
everything that was involved in my Numpy setup.  Now everything is working
just fine!

Cheers,

Greg

On Tue, Jan 5, 2016 at 7:45 AM, G Young <gfyoun...@gmail.com> wrote:

> Sure.  I'm running Windows 7 with Python 2.7.11, gcc 4.8.1, and GNU
> Fortran 5.2.0.  It's very similar to the Appveyor Python 2 build.  When I
> am in the numpy root directory, I run "*python setup.py build_ext -i*".
> I've attached the output I get from the terminal.  Perhaps that might
> illuminate some issues that I'm not seeing.
>
> Greg
>
> On Mon, Jan 4, 2016 at 8:17 PM, Jaime Fernández del Río <
> jaime.f...@gmail.com> wrote:
>
>> On Tue, Jan 5, 2016 at 3:15 AM, G Young <gfyoun...@gmail.com> wrote:
>>
>>> Hello all,
>>>
>>> I've recently encountered issues building numpy off master on Windows in
>>> which setup.py complains that it can't find the Advapi library in any
>>> directories (which is an empty list).  I scanned the DLL under System32 and
>>> ran sfc /scannow as Administrator, and both came up clean.  Is anyone else
>>> encountering (or can reproduce) this issue, or is it just me?  Any
>>> suggestions about how to resolve this problem?
>>>
>>
>> Can you give more details on your setup?
>>
>> Jaime
>>
>> (\__/)
>> ( O.o)
>> ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes
>> de dominación mundial.
>>
>> ___
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>>
>
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Building master issues on Windows

2016-01-04 Thread G Young
Hello all,

I've recently encountered issues building numpy off master on Windows in which 
setup.py complains that it can't find the Advapi library in any directories 
(which is an empty list).  I scanned the DLL under System32 and ran sfc
/scannow as Administrator, and both came up clean.  Is anyone else encountering
(or can reproduce) this issue, or is it just me?  Any suggestions about how to 
resolve this problem?

Thanks!

Greg
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] deprecate random.random_integers

2016-01-03 Thread G Young
Hello all,

In light of the discussion in #6910, I have gone ahead and deprecated
random_integers in my most recent PR here.  As this is an API change (sort
of), what are people's thoughts on this deprecation?
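
(For reference, a sketch of the migration the deprecation would presumably
nudge users toward: random_integers includes both endpoints, while randint
excludes the upper one, so the upper bound gets bumped by one.)

import numpy as np

old = np.random.random_integers(1, 6, size=10)  # samples from [1, 6]
new = np.random.randint(1, 7, size=10)          # same distribution, [1, 7)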

Thanks!

Greg
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] dtype random.rand

2016-01-03 Thread G Young
Hello,

Issue #6790 had requested the enhancement of adding a dtype argument to
both random.randint and random.rand.  With #6910 merged in, the first half
of that request is addressed.  What do people think of adding a dtype
argument to random.rand?
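
(For concreteness: the randint half already works after #6910, while the
rand call sketched below is only the kind of signature being proposed here
and does not exist today.)

import numpy as np

a = np.random.randint(0, 256, size=4, dtype=np.uint8)  # works post-#6910
# b = np.random.rand(3, 4, dtype=np.float32)           # hypothetical proposal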

Thanks!

Greg
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Fwd: Python 3.3 i386 Build

2015-12-14 Thread G Young
I accidentally subscribed to the defunct discussion mailing list, so my
email got rejected the first time I sent it to the active mailing list.  My
question is in the forwarded email below:

-- Forwarded message --
From: G Young <gfyoun...@gmail.com>
Date: Mon, Dec 14, 2015 at 3:47 PM
Subject: re: Python 3.3 i386 Build
To: numpy-discussion@scipy.org


Hello all,

I was wondering if anyone else has been running into issues with the Python
3.3 i386 build on Travis.  There have been several occasions when that build
has strangely failed (while none of the others did) even though there was no
plausible way for my changes to break it (see here
<https://github.com/numpy/numpy/pull/6831>).  Does anyone have an idea as
to why this is happening?

Thanks!

Greg
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion