Re: [Numpy-discussion] linux wheels coming soon

2016-04-14 Thread Jonathan Helmus



On 4/14/16 3:11 PM, Matthew Brett wrote:

On Thu, Apr 14, 2016 at 12:57 PM, Matthew Brett <matthew.br...@gmail.com> wrote:

On Thu, Apr 14, 2016 at 12:25 PM, Jonathan Helmus <jjhel...@gmail.com> wrote:


On 4/14/16 1:26 PM, Matthew Brett wrote:

Hi,

On Thu, Apr 14, 2016 at 11:11 AM, Benjamin Root <ben.v.r...@gmail.com>
wrote:

Are we going to have to have documentation somewhere making it clear that
the numpy wheel shouldn't be used in a conda environment? Not that I
would
expect this issue to come up all that often, but I could imagine a
scenario
where a non-scientist is simply using a base conda distribution because
that
is what IT put on their system. Then they do "pip install ipython" that
indirectly brings in numpy (through the matplotlib dependency), and end
up
with an incompatible numpy because they would have been linked against
different pythons?

Or is this not an issue?

I'm afraid I don't know conda at all, but I'm guessing that pip will
not install numpy when it is installed via conda.

Correct: pip will not (or at least should not, and did not in my tests)
install numpy on top of an existing conda-installed numpy. Unfortunately,
in my testing, conda will install a conda version of numpy on top of a
pip-installed version.  This may be the expected behavior, as conda
maintains its own list of installed packages.

So the potential difference is that, pre-wheel, if numpy was not
installed in your conda environment, then pip would build numpy from
source, whereas now you'll get a binary install.

I _think_ that Python's binary API specification
(pip.pep425tags.get_abi_tag()) should prevent pip from installing an
incompatible wheel.  Are there any conda experts out there who can
give more detail, or more convincing assurance?
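As an aside on how that check works: the ABI tag pip compares comes from the interpreter's build configuration, which can be inspected with the standard library alone. (Note that pip.pep425tags was a pip-internal module and has since been removed; the `packaging.tags` library is the modern equivalent.) A sketch:

```python
import sysconfig

# The wheel ABI tag (e.g. cp27mu, cp35m) is derived from the interpreter's
# SOABI / extension-module suffix; pip refuses wheels whose ABI tag does
# not match the running interpreter, which is the protection discussed here.
print(sysconfig.get_config_var('SOABI'))       # e.g. cpython-35m-x86_64-linux-gnu
print(sysconfig.get_config_var('EXT_SUFFIX'))  # e.g. .cpython-35m-x86_64-linux-gnu.so
```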

I tested "pip install numpy" in conda environments (conda's equivalent to
virtualenvs) which did not previously have numpy installed, for Python 2.7,
3.4 and 3.5 in an Ubuntu 14.04 Docker container.  In all cases numpy was
installed from the wheel file and appeared to be functional.  Running the
numpy test suite found three failing tests for Python 2.7 and 3.5, and 21
errors for Python 3.4.  The 2.7 and 3.5 failures do not look concerning, but
the 3.4 errors are a bit strange.
Logs are in
https://gist.github.com/jjhelmus/a433a66d56fb0e39b8ebde248ad3fe36

Thanks for testing.  For:

docker run -ti --rm ubuntu:14.04 /bin/bash

apt-get update && apt-get install -y curl
curl -LO https://bootstrap.pypa.io/get-pip.py
python3 get-pip.py
pip install numpy nose
python3 -c "import numpy; numpy.test()"

I get:

FAILED (KNOWNFAIL=7, SKIP=17, errors=21)

This is stock Python 3.4 - so not a conda issue.  It is definitely a
problem with the wheel because a compiled numpy wheel on the same
docker image:

apt-get update && apt-get install -y curl python3-dev
curl -LO https://bootstrap.pypa.io/get-pip.py
python3 get-pip.py
pip install --no-binary=:all: numpy nose
python3 -c "import numpy; numpy.test()"

gives no test errors.

It looks like we have some more work to do...

Actually, I can solve these errors by first doing:

apt-get install gcc

I think these must be bugs in the numpy tests where numpy is assuming
a functional compiler.

Does the conda numpy give test errors when there is no compiler?

Cheers,

Matthew


Yes, both the wheel and conda numpy packages give errors when there is 
no compiler.  These errors clear when gcc is installed.  Looks like 
the wheels are fine; I just forgot about the compiler.


Cheers,

- Jonathan Helmus
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion



Re: [Numpy-discussion] Using OpenBLAS for manylinux wheels

2016-03-29 Thread Jonathan Helmus

On 03/28/2016 04:33 PM, Matthew Brett wrote:

Please do test on your own machines with something like this script [4]:

Matthew,

I ran the tests after installing the wheels on my machine running 
Ubuntu 14.04.  Three numpy tests failed with the GFORTRAN_1.4 error you 
mentioned in post to the wheel-builds list recently.  All other tests 
passed.  I can reproduce these failing tests in a Docker container if it 
is helpful.


# python -c 'import numpy; numpy.test("full")'
Running unit tests for numpy
NumPy version 1.11.0
NumPy relaxed strides checking option: False
NumPy is installed in /usr/local/lib/python2.7/dist-packages/numpy
Python version 2.7.6 (default, Jun 22 2015, 17:58:13) [GCC 4.8.2]
nose version 1.3.7
...
==
ERROR: test_kind.TestKind.test_all
--
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/nose/case.py", line 381, 
in setUp

try_run(self.inst, ('setup', 'setUp'))
  File "/usr/local/lib/python2.7/dist-packages/nose/util.py", line 471, 
in try_run

return func()
  File 
"/usr/local/lib/python2.7/dist-packages/numpy/f2py/tests/util.py", line 
358, in setUp

module_name=self.module_name)
  File 
"/usr/local/lib/python2.7/dist-packages/numpy/f2py/tests/util.py", line 
78, in wrapper

memo[key] = func(*a, **kw)
  File 
"/usr/local/lib/python2.7/dist-packages/numpy/f2py/tests/util.py", line 
149, in build_module

__import__(module_name)
ImportError: 
/usr/local/lib/python2.7/dist-packages/numpy/core/../.libs/libgfortran.so.3: 
version `GFORTRAN_1.4' not found (required by 
/tmp/tmptYznnz/_test_ext_module_5405.so)


==
ERROR: test_mixed.TestMixed.test_all
--
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/nose/case.py", line 381, 
in setUp

try_run(self.inst, ('setup', 'setUp'))
  File "/usr/local/lib/python2.7/dist-packages/nose/util.py", line 471, 
in try_run

return func()
  File 
"/usr/local/lib/python2.7/dist-packages/numpy/f2py/tests/util.py", line 
358, in setUp

module_name=self.module_name)
  File 
"/usr/local/lib/python2.7/dist-packages/numpy/f2py/tests/util.py", line 
78, in wrapper

memo[key] = func(*a, **kw)
  File 
"/usr/local/lib/python2.7/dist-packages/numpy/f2py/tests/util.py", line 
149, in build_module

__import__(module_name)
ImportError: 
/usr/local/lib/python2.7/dist-packages/numpy/core/../.libs/libgfortran.so.3: 
version `GFORTRAN_1.4' not found (required by 
/tmp/tmptYznnz/_test_ext_module_5405.so)


==
ERROR: test_mixed.TestMixed.test_docstring
--
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/nose/case.py", line 381, 
in setUp

try_run(self.inst, ('setup', 'setUp'))
  File "/usr/local/lib/python2.7/dist-packages/nose/util.py", line 471, 
in try_run

return func()
  File 
"/usr/local/lib/python2.7/dist-packages/numpy/f2py/tests/util.py", line 
358, in setUp

module_name=self.module_name)
  File 
"/usr/local/lib/python2.7/dist-packages/numpy/f2py/tests/util.py", line 
84, in wrapper

raise ret
ImportError: 
/usr/local/lib/python2.7/dist-packages/numpy/core/../.libs/libgfortran.so.3: 
version `GFORTRAN_1.4' not found (required by 
/tmp/tmptYznnz/_test_ext_module_5405.so)


------
Ran 6322 tests in 136.678s

FAILED (KNOWNFAIL=6, SKIP=11, errors=3)


Cheers,

- Jonathan Helmus
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Fwd: Windows wheels for testing

2016-02-13 Thread Jonathan Helmus
 raise DistutilsPlatformError("Unable to find vcvarsall.bat")
distutils.errors.DistutilsPlatformError: Unable to find vcvarsall.bat

==
FAIL: test_blasdot.test_blasdot_used
--
Traceback (most recent call last):
   File "C:\Python34\lib\site-packages\nose\case.py", line 198, in runTest
 self.test(*self.arg)
   File "C:\Python34\lib\site-packages\numpy\testing\decorators.py", line
146, in skipper_func
 return f(*args, **kwargs)
   File "C:\Python34\lib\site-packages\numpy\core\tests\test_blasdot.py",
line 31, in test_blasdot_used
 assert_(dot is _dotblas.dot)
   File "C:\Python34\lib\site-packages\numpy\testing\utils.py", line 53, in
assert_
 raise AssertionError(smsg)
AssertionError

--
Ran 5575 tests in 32.042s

FAILED (KNOWNFAIL=8, SKIP=12, errors=1, failures=1)


Great - thanks - I got the same couple of failures - I believe they
are benign...

Matthew

Matthew,

The wheels seem to work fine in the Python provided by Continuum on 
32-bit Windows.  Tested in Python 2.7, 3.3 and 3.4.  The only test 
errors/failures were the vcvarsall.bat errors on all three versions.  
Full test logs at https://gist.github.com/jjhelmus/de2b34779e83eb37a70f.


Cheers,

- Jonathan Helmus


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Subclassing ma.masked_array, code broken after version 1.9

2016-02-13 Thread Jonathan Helmus
 status: unspecified
numpy.log:
__array_wrap__ called
fs3 type: 
fs3.folded status: True


The change mentioned in the original message was made in pull request 
3907 [2] in case anyone wants to have a look.


Cheers,

    - Jonathan Helmus

[1] 
http://docs.scipy.org/doc/numpy-1.10.1/user/basics.subclassing.html#array-wrap-for-ufuncs

[2] https://github.com/numpy/numpy/pull/3907
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Proposal for a new function: np.moveaxis

2015-11-05 Thread Jonathan Helmus
Also a +1 from me.  I've had to (re-)learn how exactly np.transpose
works more times than I care to admit.

- Jonathan Helmus
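
For reference, the function under discussion was merged as np.moveaxis (available since NumPy 1.11); it names source and destination positions directly:

```python
import numpy as np

x = np.zeros((3, 4, 5))

# Move the first axis to the end: easier to read than the equivalent
# np.transpose(x, (1, 2, 0)) or np.rollaxis(x, 0, 3).
assert np.moveaxis(x, 0, -1).shape == (4, 5, 3)

# Sequences of source and destination axes are also accepted.
assert np.moveaxis(x, [0, 1], [-1, -2]).shape == (5, 4, 3)
```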

On 11/05/2015 02:26 AM, Juan Nunez-Iglesias wrote:
> I'm just a lowly user, but I'm a fan of this. +1!
>
>
>
>
> On Thu, Nov 5, 2015 at 6:42 PM, Stephan Hoyer <sho...@gmail.com> wrote:
>
> I've put up a pull request implementing a new function,
> np.moveaxis, as an alternative to np.transpose and np.rollaxis:
> https://github.com/numpy/numpy/pull/6630
>
> This functionality has been discussed (even the exact function
> name) several times over the years, but it never made it into a
> pull request. The most pressing issue is that the behavior of
> np.rollaxis is not intuitive to most users:
> 
> https://mail.scipy.org/pipermail/numpy-discussion/2010-September/052882.html
> https://github.com/numpy/numpy/issues/2039
> 
> http://stackoverflow.com/questions/29891583/reason-why-numpy-rollaxis-is-so-confusing
>
> In this pull request, I also allow the source and destination axes
> to be sequences as well as scalars. This does not add much
> complexity to the code, solves some additional use cases and makes
> np.moveaxis a proper generalization of the other axes manipulation
> routines (see the pull requests for details).
>
> Best of all, it already works on ndarray duck types (like masked
> array and dask.array), because they have already implemented
> transpose.
>
> I think np.moveaxis would be a useful addition to NumPy -- I've
> found myself writing helper functions with a subset of its
> functionality several times over the past few years. What do you
> think?
>
> Cheers,
> Stephan
>
>
>
>
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Commit rights for Jonathan J. Helmus

2015-10-30 Thread Jonathan Helmus
On 10/28/2015 09:43 PM, Allan Haldane wrote:
> On 10/28/2015 05:27 PM, Nathaniel Smith wrote:
>> Hi all,
>>
>> Jonathan J. Helmus (@jjhelmus) has been given commit rights -- let's all
>> welcome him aboard.
>>
>> -n
>
> Welcome Jonathan, happy to have you on the team!
>
> Allan
>

Thank you everyone for the kind welcome.  I'm looking forward to
being part of the team.

- Jonathan Helmus

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Behavior of numpy.copy with sub-classes

2015-10-19 Thread Jonathan Helmus
In GitHub issue #3474, a number of us have started a conversation on how 
NumPy's copy function should behave when passed an instance which is a 
sub-class of the array class.  Specifically, the issue began by noting 
that when a MaskedArray is passed to np.copy, the sub-class is not 
passed through but rather a ndarray is returned.


I suggested adding a "subok" parameter which controls how sub-classes 
are handled and others suggested having the function call a copy method 
on duck arrays.  The "subok" parameter is implemented in PR #6509 as an 
example. Both of these options would change the API of numpy.copy and 
possibly break backwards compatibility.  Do others have an opinion of 
how np.copy should handle sub-classes?


For a concrete example of this behavior and possible changes, what type 
should copy_x be in the following snippet:


import numpy as np
x = np.ma.array([1,2,3])
copy_x = np.copy(x)
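
For anyone trying the snippet, the behavior that prompted the issue (the copy method preserves the subclass while the np.copy function does not, at least under NumPy's default behavior at the time of writing) can be checked directly:

```python
import numpy as np

x = np.ma.array([1, 2, 3], mask=[False, True, False])

# The np.copy function drops the subclass (and with it the mask)...
func_copy = np.copy(x)
assert type(func_copy) is np.ndarray

# ...while the copy method preserves the MaskedArray subclass.
method_copy = x.copy()
assert isinstance(method_copy, np.ma.MaskedArray)
```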


Cheers,

    - Jonathan Helmus
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] ANN: Py-ART v1.4.0 released

2015-06-09 Thread Jonathan Helmus
I am happy to announce the release of Py-ART version 1.4.0.

Py-ART is an open source Python module for reading, visualizing,
correcting and analysis of weather radar data.

Documentation : http://arm-doe.github.io/pyart/dev/index.html
GitHub : https://github.com/ARM-DOE/pyart
Pre-build conda binaries:
https://binstar.org/jjhelmus/pyart/files?version=1.4.0

Version 1.4.0 is the result of 4 months of work by 7 contributors.
Thanks to all contributors, especially those who have made their first
contribution to Py-ART.

Highlights of this release:

* Support for reading and writing MDV Grid files. (thanks to
Anderson Gama)
* Support for reading GCPEX D3R files. (thanks to Steve Nesbitt)
* Support for reading NEXRAD Level 3 files.
* Optional loading of radar field data upon use rather than initial
read.
* Significantly faster gridding method, map_gates_to_grid.
* Improvements to the speed and bug fixes to the region based
dealiasing algorithm.
* Textures of differential phase fields. (thanks to Scott Collis)
* Py-ART now can be used with Python 3.3 and 3.4

Cheers,

- Jonathan Helmus
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] (no subject)

2014-09-18 Thread Jonathan Helmus

On 09/18/2014 12:01 PM, Chris Barker wrote:

Well,

First of all, numpy and the python math module have a number of 
differences when it comes to handling these kind of special cases -- 
and I think that:


1) numpy needs to do what makes the most sense for numpy and NOT 
mirror the math lib.


2) the use-cases of the math lib and numpy are different, so they 
maybe _should_ have different handling of this kind of thing.


3) I'm not sure that the core devs think these kinds of issues are 
wrong enough to break backward compatibility in subtle ways.


But it's a fun topic in any case, and maybe numpy's behavior could be 
improved.


My vote is that NumPy is correct here. I see no reason why
 float('inf') / 1
and
 float('inf') // 1

should return different results.


Well, one argument is that floor division is supposed to return an 
integer value, and that inf is NOT an integer value. The integral part 
of infinity doesn't exist and thus is Not a Number.




But nan is not an integer value either:

>>> float('inf') // 1
nan
>>> int(float('inf') // 1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: cannot convert float NaN to integer

Perhaps float('inf') // 1 should raise a ValueError directly since there 
is no proper way to perform the floor division on infinity.


- Jonathan Helmus


You also get some weird edge cases around the mod operator.

-Chris

--

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] (no subject)

2014-09-18 Thread Jonathan Helmus
On 09/18/2014 12:44 PM, Petr Viktorin wrote:
 On Thu, Sep 18, 2014 at 7:14 PM, Jonathan Helmus jjhel...@gmail.com wrote:
 On 09/18/2014 12:01 PM, Chris Barker wrote:

 Well,

 First of all, numpy and the python math module have a number of differences
 when it comes to handling these kind of special cases -- and I think that:

 1) numpy needs to do what makes the most sense for numpy and NOT mirror the
 math lib.
 Sure.

 2) the use-cases of the math lib and numpy are different, so they maybe
 _should_ have different handling of this kind of thing.
 If you have a reason for the difference, I'd like to hear it.

 3) I'm not sure that the core devs think these kinds of issues are wrong
 enough to break backward compatibility in subtle ways.
 I'd be perfectly fine with it being documented and tested (in CPython)
 as either a design mistake or design choice.

 But it's a fun topic in any case, and maybe numpy's behavior could be
 improved.
 My vote is that NumPy is correct here. I see no reason why
 float('inf') / 1
 and
 float('inf') // 1
 should return different results.

 Well, one argument is that floor division is supposed to return an integer
 value, and that inf is NOT an integer value. The integral part of infinity
 doesn't exist and thus is Not a Number.


 But nan is not an integer value either:

 >>> float('inf') // 1
 nan
 >>> int(float('inf') // 1)
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 ValueError: cannot convert float NaN to integer

 Perhaps float('inf') // 1 should raise a ValueError directly since there is
 no proper way to perform the floor division on infinity.
 inf is not even a *real* number; a lot of operations don't make
 mathematical sense on it. But most are defined anyway, and quite
 sanely.

But in IEEE-754, inf is a valid floating point number (whereas NaN is 
not) and has well defined arithmetic, specifically inf / 1 == inf and 
roundToIntegral(inf) == inf.  In the numpy example, the 
numpy.array(float('inf')) statement creates an array containing a 
float32 or float64 representation of inf.  After this I would expect a 
floor division to return inf since that is what IEEE-754 arithmetic 
specifies.

For me the question is whether the floor division should also perform a 
cast to an integer type.  Since inf cannot be represented in most common 
integer formats, this should raise an exception.  But since // does not 
normally perform a cast, for example type(float(5) // 2) == float, the 
point is moot.

The real question is whether Python floats follow IEEE-754 arithmetic or 
not.  If they do not, what standard are they going to follow?
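
These claims are easy to check from the interpreter.  The nan result is not a deliberate deviation so much as a consequence of how CPython computes //: it derives floor division from an fmod-based divmod, and fmod(inf, 1.0) is nan under IEEE-754, which then propagates:

```python
import math

inf = float('inf')

# True division on the underlying double follows IEEE-754: inf / 1 == inf.
assert inf / 1 == inf

# Floor division on floats does not cast: the result type stays float...
assert type(5.0 // 2) is float

# ...but CPython's float floor division yields nan for an infinite
# numerator (via fmod), rather than the inf that IEEE-754 division
# followed by roundToIntegral would give.
assert math.isnan(inf // 1)
```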

 - Jonathan Helmus
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Conditionally compiling Fortran 90 extensions using numpy.distutils

2014-06-20 Thread Jonathan Helmus
I have been working on creating a setup.py file for a sub-module with 
the aim to conditionally compile a Fortran 90 extension using F2PY 
depending on the availability of an appropriate compiler on the system.  
Basically if gfortran or another Fortran 90 compiler is available on the 
system, the extension should be built. If no compiler is available, the 
extension should not be built and the build should continue without error.

After a bit of work I was able to get a working script which uses 
numpy.distutils:

from os.path import join


def configuration(parent_package='', top_path=None):
    global config
    from numpy.distutils.misc_util import Configuration
    config = Configuration('retrieve', parent_package, top_path)
    config.add_data_dir('tests')
    # Conditionally add Steiner echo classifier extension.
    config.add_extension('echo_steiner', sources=[steiner_echo_gen_source])
    return config


def steiner_echo_gen_source(ext, build_dir):
    try:
        config.have_f90c()
        return [join(config.local_path, 'echo_steiner.pyf'),
                join(config.local_path, 'src', 'echo_steiner.f90')]
    except:
        return None


if __name__ == '__main__':
    from numpy.distutils.core import setup
    setup(**configuration(top_path='').todict())

Is there a better way of accomplishing this conditional compiling, 
perhaps without using a global variable or a try/except block? 
Additionally, is the expected behaviour of the have_f90c function to raise 
an exception when a Fortran 90 compiler is not available, or is this a 
bug?  From the documentation [1] it was unclear what the result should be.

I should mention that the above code snippet was aided greatly by 
information in the NumPy Packaging documentation [2], the NumPy Distutils - 
Users Guide [3], and code from the f2py utils unit tests [4].

Thanks,

 - Jonathan Helmus
 nmrglue.com/jhelmus


[1] 
http://docs.scipy.org/doc/numpy/reference/distutils.html#numpy.distutils.misc_util.Configuration.have_f90c
[2] http://docs.scipy.org/doc/numpy/reference/distutils.html
[3] http://wiki.scipy.org/Wiki/Documentation/numpy_distutils
[4] https://github.com/numpy/numpy/blob/master/numpy/f2py/tests/util.py
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] official binaries on web page.

2013-10-22 Thread Jonathan Helmus
On 10/22/2013 09:28 AM, Chris Barker wrote:
 On Tue, Oct 22, 2013 at 6:07 AM, Pauli Virtanen p...@iki.fi wrote:
 22.10.2013 06:29, Chris Barker kirjoitti:
 If you go to numpy.org, and try to figure out how to install numpy,
 you are most likely to end up here:

 http://www.scipy.org/install.html

 where there is no mention of the binaries built by the numpy project
 itself, either Windows or Mac.
 The links are there: http://www.scipy.org/install.html#custom
 Boy! that's subtle -- I literally looked at least 3-4 times and didn't notice.

 It seems a bit odd that they are under custom, and kind of as a side note:

 projects’ sites may also offer official binary packages (e.g. numpy,
 scipy library)

 and may -- this is the official site, and we don't _know_ if
 binaries are provided?

 Anyway, a lot of effort goes into those, it'd be nice for it to be
 more prominent.

 -Chris

That page is generated from the rst file 
https://github.com/scipy/scipy.org/blob/master/www/install.rst.  I'm 
sure a pull request against that repository would be welcome.  You can 
even do it with an online editor: 
https://github.com/scipy/scipy.org/edit/master/www/install.rst!

 - Jonathan Helmus
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A bug in numpy.random.shuffle?

2013-09-05 Thread Jonathan Helmus

On 09/05/2013 01:29 PM, Warren Weckesser wrote:



On Thu, Sep 5, 2013 at 2:11 PM, Fernando Perez fperez@gmail.com wrote:


Hi all,

I just ran into this rather weird behavior:

http://nbviewer.ipython.org/6453869

In summary, as far as I can tell, shuffle is misbehaving when acting
on arrays that have structured dtypes. I've seen the problem on 1.7.1
(official on ubuntu 13.04) as well as master as of a few minutes ago.

Is this my misuse? It really looks like a bug to me...



Definitely a bug:

In [1]: np.__version__
Out[1]: '1.9.0.dev-573b3b0'

In [2]: z = np.array([(0,),(1,),(2,),(3,),(4,)], dtype=[('a',int)])

In [3]: z
Out[3]:
array([(0,), (1,), (2,), (3,), (4,)],
  dtype=[('a', 'i8')])

In [4]: shuffle(z)

In [5]: z
Out[5]:
array([(0,), (1,), (2,), (0,), (0,)],
  dtype=[('a', 'i8')])



Nothing in the docstring suggests that it shouldn't work for 
structured dtypes.


Warren


This looks to stem from the fact that elements of record arrays cannot 
be swapped:


In [1]: import numpy as np

In [2]: x = np.zeros(5, dtype=[('n', 'S1'), ('i', int)])

In [3]: x['i'] = range(5)

In [4]: print x
[('', 0) ('', 1) ('', 2) ('', 3) ('', 4)]

In [5]: x[0], x[1] = x[1], x[0]

In [6]: print x
[('', 1) ('', 1) ('', 2) ('', 3) ('', 4)]

This is with numpy 1.7.1

Cheers,

- Jonathan Helmus







Cheers,

f

--
Fernando Perez (@fperez_org; http://fperez.org)
fperez.net-at-gmail: mailing lists only (I ignore this when swamped!)
fernando.perez-at-berkeley: contact me here for any direct mail
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion




___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] A bug in numpy.random.shuffle?

2013-09-05 Thread Jonathan Helmus
On 09/05/2013 01:50 PM, Fernando Perez wrote:
 On Thu, Sep 5, 2013 at 11:43 AM, Charles R Harris
 charlesr.har...@gmail.com wrote:


 Oh, nice one ;) Should be fixable if you want to submit a patch.
 Strategy? One option is to do, for structured arrays, a shuffle of the
 indices and then an in-place

 arr = arr[shuffled_indices]

 But there may be a cleaner/faster way to do it.

 I'm happy to submit a patch, but I'm not familiar enough with the
 internals to know what the best approach should be.

 Cheers,

 f

Fixing the shuffle function can be done by adding a check to see if x[0] 
is of type numpy.void on line 4429 of numpy/random/mtrand/mtrand.pyx, and 
using the top if-block of code (which uses a buffer for element swapping) 
when it is.  But that wouldn't fix the swapping of record 
array elements, which is the real problem.
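
The index-permutation strategy suggested above sidesteps the element-swapping problem entirely, since fancy indexing copies whole records; a minimal sketch:

```python
import numpy as np

rng = np.random.RandomState(0)
z = np.array([(0,), (1,), (2,), (3,), (4,)], dtype=[('a', int)])

# Shuffle a plain integer index array (no record swapping involved),
# then apply it with fancy indexing, which copies records safely.
idx = np.arange(len(z))
rng.shuffle(idx)
z = z[idx]

# All five records survive the shuffle.
assert sorted(z['a'].tolist()) == [0, 1, 2, 3, 4]
```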

 - Jonathan Helmus
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] use slicing as argument values?

2012-07-12 Thread Jonathan Helmus
On 07/12/2012 04:46 PM, Chao YUE wrote:
 Hi Ben,

 it helps a lot. I am nearly finished with a function written in a way 
 I think is pythonic.
 Just one more question, I have:

 In [24]: b=np.arange(1,11)

 In [25]: b
 Out[25]: array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10])

 In [26]: b[slice(1)]
 Out[26]: array([1])

 In [27]: b[slice(4)]
 Out[27]: array([1, 2, 3, 4])

 In [28]: b[slice(None,4)]
 Out[28]: array([1, 2, 3, 4])

 so slice(4) is actually slice(None,4); how exactly can I retrieve 
 a[4] using a slice object?

 thanks again!

 Chao

slice is a built-in Python function and the online docs explain its use 
(http://docs.python.org/library/functions.html#slice).  b[slice(4,5)] 
will give you something close to b[4], but not quite the same.

In [8]: b[4]
Out[8]: 5

In [9]: b[slice(4,5)]
Out[9]: array([5])
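
In other words, a slice always yields an array, never a scalar; to get the scalar back out, index the length-1 result or just use an integer index:

```python
import numpy as np

b = np.arange(1, 11)

# A length-1 slice must itself be indexed to extract the element;
# an integer index returns the scalar directly.
assert b[slice(4, 5)][0] == b[4] == 5
```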

 - Jonathan Helmus

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Functions for finding the relative extrema of numeric data

2011-09-15 Thread Jonathan Helmus
I've written some peak picking functions that work in N dimensions for a 
module for looking at NMR data in Python, 
http://code.google.com/p/nmrglue/.  I'd be glad to polish up the code if 
people think it would be a useful addition to scipy.ndimage or 
scipy.interpolate.  The methods are not based on any formal algorithms I 
know of, just some fast and relatively simple techniques that I have 
found work decently.

The methods are contained in the peakpick.py and segmentation.py files 
in the analysis directory (specifically see find_all_connected, 
find_all_downward and find_all_thres):
http://code.google.com/p/nmrglue/source/browse/trunk/nmrglue/analysis/peakpick.py
http://code.google.com/p/nmrglue/source/browse/trunk/nmrglue/analysis/segmentation.py

Let me know if there is an interest in including these in scipy or numpy.

-Jonathan Helmus
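
The `order`-nearest-neighbor definition of a relative extremum discussed in this thread can be sketched in one dimension as follows.  This is a hypothetical illustration (the helper name and boundary handling are my own), not the nmrglue code or the implementation in the pull request:

```python
import numpy as np

def argrelmax_1d(data, order=1):
    """Indices of points strictly greater than their `order` neighbors
    on each side; boundary points compare only against the neighbors
    that exist."""
    data = np.asarray(data, dtype=float)
    peaks = []
    for i in range(len(data)):
        left = data[max(0, i - order):i]
        right = data[i + 1:i + 1 + order]
        # Empty neighbor arrays compare vacuously true at the edges.
        if (left < data[i]).all() and (right < data[i]).all():
            peaks.append(i)
    return np.array(peaks)

print(argrelmax_1d([0, 2, 1, 3, 1, 0, 5]))  # [1 3 6]
```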



Jacob Silterra wrote:
 What is your application?

 The most common case is looking at Fourier transforms and identifying 
 spectral peaks. I've also analyzed images looking at 1D slices 
 (usually very regular data) and looked for peaks there.

 That stackoverflow page had a nice link to a comparison of different 
 algorithms here: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2631518/. 
 That paper is focused on mass-spectrometry data, but the approach 
 would generalize to any 1D data set. Unless somebody feels otherwise, 
 I'll close this pull request and start working on an implementation of 
 peak finding via continuous wavelet transform (the best and most 
 computationally intensive approach of those described above).

 -Jacob

 --

 Message: 4
 Date: Tue, 13 Sep 2011 22:34:01 +0200
 From: Ralf Gommers ralf.gomm...@googlemail.com
 Subject: Re: [Numpy-discussion] Functions for finding the relative
extrema of numeric data
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID:
  cabl7cqhxcx0lkfenmw6-4zsbdiegxz04zbsrny4bxyvxvl7...@mail.gmail.com
 Content-Type: text/plain; charset=iso-8859-1

 Hi Jacob,

 On Fri, Sep 9, 2011 at 11:57 PM, Jacob Silterra jsil...@gmail.com wrote:

  Hello all,
 
  I'd like to see functions for calculating the relative extrema
 in a set of
  data included in numpy. I use that functionality frequently, and
 always seem
  to be writing my own version. It seems like this functionality
 would be
  useful to the community at large, as it's a fairly common operation.
 

 What is your application?

 
  For numeric data (which is presumably noisy), the definition of
 a relative
  extrema isn't completely obvious. The implementation I am
 proposing finds a
  point in an ndarray along an axis which is larger (or smaller)
 than it's
  `order` nearest neighbors (`order` being an optional parameter,
 default 1).
  This is likely to find more points than may be desired,  which I
 believe is
  preferable to the alternative. The output is formatted the same as
  numpy.where.
 
  Code available here: https://github.com/numpy/numpy/pull/154
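A pure-NumPy sketch of the definition described above (larger than the `order` nearest neighbors on each side, output formatted like numpy.where); the function name `relmax` is hypothetical and this is not the code from the pull request:

```python
import numpy as np

def relmax(data, order=1):
    """Indices where data[i] is larger than its `order` nearest
    neighbors on each side; returns a tuple like numpy.where."""
    data = np.asarray(data)
    # Candidate positions: those with `order` neighbors on both sides.
    locs = np.arange(order, len(data) - order)
    keep = np.ones(len(locs), dtype=bool)
    for shift in range(1, order + 1):
        keep &= data[locs] > data[locs - shift]
        keep &= data[locs] > data[locs + shift]
    return (locs[keep],)
```

For example, relmax([0, 2, 1, 3, 1, 0, 4, 0]) finds the maxima at indices 1, 3 and 6, while order=2 keeps only index 3, illustrating how a larger `order` filters out narrow peaks.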
 
  I'm not sure whether this belongs in numpy or scipy, that
 question is
  somewhat debatable. More sophisticated peak-finding functions (in N
  dimensions, as opposed to 1) may also be useful to the
 community, and those
  would definitely belong in scipy.
 

 I have the feeling this belongs in scipy. Although if it's just
 these two
 functions I'm not sure where exactly to put them. Looking at the
 functionality, this is quite a simple approach. For any data of
 the type I'm
 usually working with it will not be able to find the right local
 extrema.
 The same is true for your alternative definition below.

 A more powerful peak detection function would be a very good
 addition to
 scipy imho (perhaps in scipy.interpolate?). See also
 
 http://stackoverflow.com/questions/1713335/peak-finding-algorithm-for-python-scipy

 Cheers,
 Ralf


  An alternative implementation would be to require that function be
  continuously descending (or ascending) for `order` points, which
 would
  enforce a minimum width on a peak.
 
  -Jacob Silterra
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 

Re: [Numpy-discussion] How to output array with indexes to a text file?

2011-08-25 Thread Jonathan Helmus
Paul Menzel wrote:
 Dear NumPy folks,


 is there an easy way to also save the indexes of an array (columns, rows
 or both) when outputting it to a text file. For saving an array to a
 file I only found `savetxt()` [1] which does not seem to have such an
 option. Adding indexes manually is doable but I would like to avoid
 that.

 --- minimal example (also attached) ---
 from numpy import *
 
 a = zeros([2, 3], int)
 print(a)
 
 savetxt("/tmp/test1.txt", a, fmt='%8i')
 
 # Work around for adding the indexes for the columns.
 a[0] = range(3)
 print(a)
 
 savetxt("/tmp/test2.txt", a, fmt='%8i')
 --- minimal example ---

 The output is the following.

 $ python output-array.py 
 [[0 0 0]
  [0 0 0]]
 [[0 1 2]
  [0 0 0]]
 $ more /tmp/test*
 ::
 /tmp/test1.txt
 ::
        0        0        0
        0        0        0
 ::
 /tmp/test2.txt
 ::
        0        1        2
        0        0        0

 Is there a way to accomplish that task without reserving the 0th row or
 column to store the indexes?

 I want to process these text files to produce graphs and MetaPost’s [2]
 graph package needs these indexes. (I know about Matplotlib [3], but I
 would like to use MetaPost.)


 Thanks,

 Paul


 [1] http://docs.scipy.org/doc/numpy/reference/generated/numpy.savetxt.html
 [2] http://wiki.contextgarden.net/MetaPost
 [3] http://matplotlib.sourceforge.net/

Paul,

I don't know of any numpy function which will output the array indexes 
but with numpy's ndindex this can be accomplished with a for loop.

import numpy as np
a = np.arange(12).reshape(3, 4)
f = open("test.txt", 'w')

for i in np.ndindex(a.shape):
    print >> f, " ".join(str(s) for s in i), a[i]
f.close()

$ cat test.txt
0 0 0
0 1 1
0 2 2
...
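A loop-free alternative (a sketch using np.indices, not code from the thread) builds an index/value table and hands the whole thing to savetxt in one call:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)

# Row and column index of every element, flattened to match a.ravel().
rows, cols = np.indices(a.shape)
table = np.column_stack((rows.ravel(), cols.ravel(), a.ravel()))

# One row per element: row index, column index, value.
np.savetxt("test.txt", table, fmt="%8i")
```

This produces the same row/column/value layout as the loop, with the formatting handled entirely by savetxt.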
