[Numpy-discussion] problem with numpy/cython on python3, ok with python2

2010-11-20 Thread Darren Dale
I just installed numpy for both python2 and 3 from an up-to-date
checkout of the 1.5.x branch.

I am attempting to cythonize the following code with cython-0.13:

---
cimport numpy as np
import numpy as np

def test():
   cdef np.ndarray[np.float64_t, ndim=1] ret
   ret_arr = np.zeros((20,), dtype=np.float64)
   ret = ret_arr
---

I have this setup.py file:

---
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

import numpy

setup(
cmdclass = {'build_ext': build_ext},
ext_modules = [
Extension(
'test_open', ['test_open.pyx'], include_dirs=[numpy.get_include()]
)
]
)
---

When I run python setup.py build_ext --inplace, everything is fine.
When I run python3 setup.py build_ext --inplace, I get an error:

running build_ext
cythoning test_open.pyx to test_open.c

Error converting Pyrex file to C:

...
# For use in situations where ndarray can't replace PyArrayObject*,
# like PyArrayObject**.
pass

ctypedef class numpy.ndarray [object PyArrayObject]:
cdef __cythonbufferdefaults__ = {'mode': 'strided'}
^


/home/darren/.local/lib/python3.1/site-packages/Cython/Includes/numpy.pxd:173:49:
mode is not a buffer option

Error converting Pyrex file to C:

...
cimport numpy as np
import numpy as np


def test():
   cdef np.ndarray[np.float64_t, ndim=1] ret
   ^


/home/darren/temp/test/test_open.pyx:6:8: 'ndarray' is not a type identifier
building 'test_open' extension
gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC
-I/home/darren/.local/lib/python3.1/site-packages/numpy/core/include
-I/usr/include/python3.1 -c test_open.c -o
build/temp.linux-x86_64-3.1/test_open.o
test_open.c:1: error: #error Do not use this file, it is the result of
a failed Cython compilation.
error: command 'gcc' failed with exit status 1


Is this a bug, or is there a problem with my example?

Thanks,
Darren
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] problem with numpy/cython on python3, ok with python2

2010-11-20 Thread Pauli Virtanen
On Sat, 20 Nov 2010 10:01:50 -0500, Darren Dale wrote:
 I just installed numpy for both python2 and 3 from an up-to-date
 checkout of the 1.5.x branch.
 
 I am attempting to cythonize the following code with cython-0.13:

I think you should ask on the Cython list -- I don't think this issue
involves any Numpy code.

-- 
Pauli Virtanen



Re: [Numpy-discussion] Yet more merges in maintenance/1.5.x

2010-11-20 Thread Pauli Virtanen
On Sat, 20 Nov 2010 20:57:07 +, Pauli Virtanen wrote:
[clip]
 There are more unnecessary merge commits in the git history.

I did a forced push to clear those up.

-- 
Pauli Virtanen



Re: [Numpy-discussion] seeking advice on a fast string-array conversion

2010-11-20 Thread william ratcliff
Actually, I do use spec when I have synchrotron experiments. But why are
your files so large?
On Nov 16, 2010 9:20 AM, Darren Dale dsdal...@gmail.com wrote:
 I am wrapping up a small package to parse a particular ascii-encoded
 file format generated by a program we use heavily here at the lab. (In
 the unlikely event that you work at a synchrotron, and use Certified
 Scientific's spec program, and are actually interested, the code is
 currently available at
 https://github.com/darrendale/praxes/tree/specformat/praxes/io/spec/
 .)

 I have been benchmarking the project against another python package
 developed by a colleague, which is an extension module written in pure
 C. My python/cython project takes about twice as long to parse and
 index a file (~0.8 seconds for 100MB), which is acceptable. However,
 actually converting ascii strings to numpy arrays, which is done using
 numpy.fromstring, takes a factor of 10 longer than the extension
 module. So I am wondering about the performance of np.fromstring:

 import time
 import numpy as np
 s = b'1 ' * 2048 * 1200
 d = time.time()
 x = np.fromstring(s)
 print time.time() - d
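For reference, a minimal sketch of that comparison (hypothetical timings; note that `np.fromstring` only parses ASCII when given `sep=' '` -- without `sep` it reinterprets the bytes as raw binary float64, which is probably not what the benchmark above intends):

```python
import time
import numpy as np

# ~4.8 MB of ASCII "1 " tokens, as in the post above
s = b'1 ' * 2048 * 1200

t0 = time.time()
# sep=' ' selects text parsing, one value per whitespace-separated token
x = np.fromstring(s, dtype=np.float64, sep=' ')
t_fromstring = time.time() - t0

t0 = time.time()
# a common pure-Python alternative: split into tokens, then convert
y = np.array(s.split(), dtype=np.float64)
t_split = time.time() - t0

print(t_fromstring, t_split)
```

Both paths produce the same array, so any speed difference is purely in the parsing strategy.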


Re: [Numpy-discussion] Yet more merges in maintenance/1.5.x

2010-11-20 Thread Charles R Harris
On Sat, Nov 20, 2010 at 2:05 PM, Pauli Virtanen p...@iki.fi wrote:

 On Sat, 20 Nov 2010 20:57:07 +, Pauli Virtanen wrote:
 [clip]
  There are more unnecessary merge commits in the git history.

 I did a forced push to clear those up.


It was a pull from upstream to update that did it. When I noticed, I decided
not to do a forced-push cleanup because, well, it's risky for a public
repository. But I'm not sure why the pull didn't just fast-forward; I
suspect something is off in my local branch. I'll just delete it and check
it out again from upstream.

Chuck


Re: [Numpy-discussion] [ANN] Nanny, faster NaN functions

2010-11-20 Thread Keith Goodman
On Fri, Nov 19, 2010 at 7:42 PM, Keith Goodman kwgood...@gmail.com wrote:
 I should make a benchmark suite.

 ny.benchit(verbose=False)
Nanny performance benchmark
    Nanny 0.0.1dev
    Numpy 1.4.1
    Speed is numpy time divided by nanny time
    NaN means all NaNs
   Speed   Test                Shape        dtype    NaN?
   6.6770  nansum(a, axis=-1)  (500,500)    int64
   4.6612  nansum(a, axis=-1)  (1,)         float64
   9.0351  nansum(a, axis=-1)  (500,500)    int32
   3.0746  nansum(a, axis=-1)  (500,500)    float64
  11.5740  nansum(a, axis=-1)  (1,)         int32
   6.4484  nansum(a, axis=-1)  (1,)         int64
  51.3917  nansum(a, axis=-1)  (500,500)    float64  NaN
  13.8692  nansum(a, axis=-1)  (1,)         float64  NaN
   6.5327  nanmax(a, axis=-1)  (500,500)    int64
   8.8222  nanmax(a, axis=-1)  (1,)         float64
   0.2059  nanmax(a, axis=-1)  (500,500)    int32
   6.9262  nanmax(a, axis=-1)  (500,500)    float64
   5.0688  nanmax(a, axis=-1)  (1,)         int32
   6.5605  nanmax(a, axis=-1)  (1,)         int64
  48.4850  nanmax(a, axis=-1)  (500,500)    float64  NaN
  14.6289  nanmax(a, axis=-1)  (1,)         float64  NaN

You can also use the makefile to run the benchmark: make bench


Re: [Numpy-discussion] [ANN] Nanny, faster NaN functions

2010-11-20 Thread Wes McKinney
On Sat, Nov 20, 2010 at 6:39 PM, Keith Goodman kwgood...@gmail.com wrote:
 On Fri, Nov 19, 2010 at 7:42 PM, Keith Goodman kwgood...@gmail.com wrote:
 I should make a benchmark suite.

 ny.benchit(verbose=False)
 Nanny performance benchmark
    Nanny 0.0.1dev
    Numpy 1.4.1
    Speed is numpy time divided by nanny time
    NaN means all NaNs
   Speed   Test                Shape        dtype    NaN?
   6.6770  nansum(a, axis=-1)  (500,500)    int64
   4.6612  nansum(a, axis=-1)  (1,)     float64
   9.0351  nansum(a, axis=-1)  (500,500)    int32
   3.0746  nansum(a, axis=-1)  (500,500)    float64
  11.5740  nansum(a, axis=-1)  (1,)     int32
   6.4484  nansum(a, axis=-1)  (1,)     int64
  51.3917  nansum(a, axis=-1)  (500,500)    float64  NaN
  13.8692  nansum(a, axis=-1)  (1,)     float64  NaN
   6.5327  nanmax(a, axis=-1)  (500,500)    int64
   8.8222  nanmax(a, axis=-1)  (1,)     float64
   0.2059  nanmax(a, axis=-1)  (500,500)    int32
   6.9262  nanmax(a, axis=-1)  (500,500)    float64
   5.0688  nanmax(a, axis=-1)  (1,)     int32
   6.5605  nanmax(a, axis=-1)  (1,)     int64
  48.4850  nanmax(a, axis=-1)  (500,500)    float64  NaN
  14.6289  nanmax(a, axis=-1)  (1,)     float64  NaN

 You can also use the makefile to run the benchmark: make bench


Keith (and others),

What would you think about creating a library of mostly Cython-based
domain specific functions? So stuff like rolling statistical
moments, nan* functions like you have here, and all that-- NumPy-array
only functions that don't necessarily belong in NumPy or SciPy (but
could be included on down the road). You were already talking about
this on the statsmodels mailing list for larry. I spent a lot of time
writing a bunch of these for pandas over the last couple of years, and
I would have relatively few qualms about moving these outside of
pandas and introducing a dependency. You could do the same for larry--
then we'd all be relying on the same well-vetted and tested codebase.

- Wes


Re: [Numpy-discussion] [ANN] Nanny, faster NaN functions

2010-11-20 Thread Wes McKinney
On Sat, Nov 20, 2010 at 6:54 PM, Wes McKinney wesmck...@gmail.com wrote:
 On Sat, Nov 20, 2010 at 6:39 PM, Keith Goodman kwgood...@gmail.com wrote:
 On Fri, Nov 19, 2010 at 7:42 PM, Keith Goodman kwgood...@gmail.com wrote:
 I should make a benchmark suite.

 ny.benchit(verbose=False)
 Nanny performance benchmark
    Nanny 0.0.1dev
    Numpy 1.4.1
    Speed is numpy time divided by nanny time
    NaN means all NaNs
   Speed   Test                Shape        dtype    NaN?
   6.6770  nansum(a, axis=-1)  (500,500)    int64
   4.6612  nansum(a, axis=-1)  (1,)     float64
   9.0351  nansum(a, axis=-1)  (500,500)    int32
   3.0746  nansum(a, axis=-1)  (500,500)    float64
  11.5740  nansum(a, axis=-1)  (1,)     int32
   6.4484  nansum(a, axis=-1)  (1,)     int64
  51.3917  nansum(a, axis=-1)  (500,500)    float64  NaN
  13.8692  nansum(a, axis=-1)  (1,)     float64  NaN
   6.5327  nanmax(a, axis=-1)  (500,500)    int64
   8.8222  nanmax(a, axis=-1)  (1,)     float64
   0.2059  nanmax(a, axis=-1)  (500,500)    int32
   6.9262  nanmax(a, axis=-1)  (500,500)    float64
   5.0688  nanmax(a, axis=-1)  (1,)     int32
   6.5605  nanmax(a, axis=-1)  (1,)     int64
  48.4850  nanmax(a, axis=-1)  (500,500)    float64  NaN
  14.6289  nanmax(a, axis=-1)  (1,)     float64  NaN

 You can also use the makefile to run the benchmark: make bench


 Keith (and others),

 What would you think about creating a library of mostly Cython-based
 domain specific functions? So stuff like rolling statistical
 moments, nan* functions like you have here, and all that-- NumPy-array
 only functions that don't necessarily belong in NumPy or SciPy (but
 could be included on down the road). You were already talking about
 this on the statsmodels mailing list for larry. I spent a lot of time
 writing a bunch of these for pandas over the last couple of years, and
 I would have relatively few qualms about moving these outside of
 pandas and introducing a dependency. You could do the same for larry--
 then we'd all be relying on the same well-vetted and tested codebase.

 - Wes


By the way I wouldn't mind pushing all of my datetime-related code
(date range generation, date offsets, etc.) into this new library,
too.


Re: [Numpy-discussion] [ANN] Nanny, faster NaN functions

2010-11-20 Thread Keith Goodman
On Sat, Nov 20, 2010 at 3:54 PM, Wes McKinney wesmck...@gmail.com wrote:

 Keith (and others),

 What would you think about creating a library of mostly Cython-based
 domain specific functions? So stuff like rolling statistical
 moments, nan* functions like you have here, and all that-- NumPy-array
 only functions that don't necessarily belong in NumPy or SciPy (but
 could be included on down the road). You were already talking about
 this on the statsmodels mailing list for larry. I spent a lot of time
 writing a bunch of these for pandas over the last couple of years, and
 I would have relatively few qualms about moving these outside of
 pandas and introducing a dependency. You could do the same for larry--
 then we'd all be relying on the same well-vetted and tested codebase.

I've started working on moving window statistics cython functions. I
plan to make it into a package called Roly (for rolling). The
signatures are: mov_sum(arr, window, axis=-1) and mov_nansum(arr,
window, axis=-1), etc.

I think of Nanny and Roly as two separate packages. A narrow focus is
good for a new package. But maybe each package could be a subpackage
in a super package?
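A cumulative-sum trick gives a quick pure-NumPy sketch of the moving-window signatures described above (1-D case only; the function bodies here are an illustration with assumed window alignment, not Roly's actual Cython implementation):

```python
import numpy as np

def mov_sum(arr, window):
    # Moving-window sum via cumulative sums: O(n) regardless of window size.
    # Output has len(arr) - window + 1 elements (no partial windows).
    a = np.asarray(arr, dtype=np.float64)
    c = np.concatenate(([0.0], np.cumsum(a)))
    return c[window:] - c[:-window]

def mov_nansum(arr, window):
    # Same, but NaNs contribute zero to each window's sum.
    a = np.asarray(arr, dtype=np.float64)
    return mov_sum(np.where(np.isnan(a), 0.0, a), window)
```

For example, `mov_sum([1, 2, 3, 4], 2)` gives `[3., 5., 7.]`.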

Would the function signatures in Nanny (exact duplicates of the
corresponding functions in Numpy and Scipy) work for pandas? I plan to
use Nanny in larry. I'll try to get the structure of the Nanny package
in place. But if it doesn't attract any interest after that then I may
fold it into larry.
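As a reference for those shared semantics, the behavior being duplicated can be sketched in plain NumPy (`nansum_ref` is a hypothetical stand-in for illustration, not Nanny's Cython code):

```python
import numpy as np

def nansum_ref(a, axis=-1):
    # Same call signature as the numpy function it shadows: NaNs count as zero.
    a = np.asarray(a)
    if np.issubdtype(a.dtype, np.inexact):
        a = np.where(np.isnan(a), 0, a)
    # Integer arrays cannot hold NaN, so they sum directly -- which is where
    # a compiled version can skip the NaN check entirely and pick up speed.
    return a.sum(axis=axis)
```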


Re: [Numpy-discussion] [ANN] Nanny, faster NaN functions

2010-11-20 Thread Charles R Harris
On Sat, Nov 20, 2010 at 4:39 PM, Keith Goodman kwgood...@gmail.com wrote:

 On Fri, Nov 19, 2010 at 7:42 PM, Keith Goodman kwgood...@gmail.com
 wrote:
  I should make a benchmark suite.

  ny.benchit(verbose=False)
 Nanny performance benchmark
    Nanny 0.0.1dev
    Numpy 1.4.1
    Speed is numpy time divided by nanny time
    NaN means all NaNs
   Speed   Test                Shape        dtype    NaN?
   6.6770  nansum(a, axis=-1)  (500,500)    int64
   4.6612  nansum(a, axis=-1)  (1,)         float64
   9.0351  nansum(a, axis=-1)  (500,500)    int32
   3.0746  nansum(a, axis=-1)  (500,500)    float64
  11.5740  nansum(a, axis=-1)  (1,)         int32
   6.4484  nansum(a, axis=-1)  (1,)         int64
  51.3917  nansum(a, axis=-1)  (500,500)    float64  NaN
  13.8692  nansum(a, axis=-1)  (1,)         float64  NaN
   6.5327  nanmax(a, axis=-1)  (500,500)    int64
   8.8222  nanmax(a, axis=-1)  (1,)         float64
   0.2059  nanmax(a, axis=-1)  (500,500)    int32
   6.9262  nanmax(a, axis=-1)  (500,500)    float64
   5.0688  nanmax(a, axis=-1)  (1,)         int32
   6.5605  nanmax(a, axis=-1)  (1,)         int64
  48.4850  nanmax(a, axis=-1)  (500,500)    float64  NaN
  14.6289  nanmax(a, axis=-1)  (1,)         float64  NaN


Here's what I get using (my current) np.fmax.reduce in place of nanmax.

   Speed   Test                Shape        dtype    NaN?
   3.3717  nansum(a, axis=-1)  (500,500)    int64
   5.1639  nansum(a, axis=-1)  (1,)         float64
   3.8308  nansum(a, axis=-1)  (500,500)    int32
   6.0854  nansum(a, axis=-1)  (500,500)    float64
   8.7821  nansum(a, axis=-1)  (1,)         int32
   1.1716  nansum(a, axis=-1)  (1,)         int64
   5.5777  nansum(a, axis=-1)  (500,500)    float64  NaN
   5.8718  nansum(a, axis=-1)  (1,)         float64  NaN
   0.5419  nanmax(a, axis=-1)  (500,500)    int64
   2.8732  nanmax(a, axis=-1)  (1,)         float64
   0.0301  nanmax(a, axis=-1)  (500,500)    int32
   2.7437  nanmax(a, axis=-1)  (500,500)    float64
   0.7868  nanmax(a, axis=-1)  (1,)         int32
   0.5535  nanmax(a, axis=-1)  (1,)         int64
   2.8715  nanmax(a, axis=-1)  (500,500)    float64  NaN
   2.5937  nanmax(a, axis=-1)  (1,)         float64  NaN

I think the really small int32 ratio is due to timing granularity. For
random ints in the range 0..99 the results are not quite as good for fmax,
which I find puzzling.

   Speed   Test                Shape        dtype    NaN?
   3.4021  nansum(a, axis=-1)  (500,500)    int64
   5.5913  nansum(a, axis=-1)  (1,)         float64
   4.4569  nansum(a, axis=-1)  (500,500)    int32
   6.6202  nansum(a, axis=-1)  (500,500)    float64
   7.1847  nansum(a, axis=-1)  (1,)         int32
   2.0448  nansum(a, axis=-1)  (1,)         int64
   6.0257  nansum(a, axis=-1)  (500,500)    float64  NaN
   6.3172  nansum(a, axis=-1)  (1,)         float64  NaN
   0.9598  nanmax(a, axis=-1)  (500,500)    int64
   3.2407  nanmax(a, axis=-1)  (1,)         float64
   0.0520  nanmax(a, axis=-1)  (500,500)    int32
   3.1954  nanmax(a, axis=-1)  (500,500)    float64
   1.5538  nanmax(a, axis=-1)  (1,)         int32
   0.3716  nanmax(a, axis=-1)  (1,)         int64
   3.2372  nanmax(a, axis=-1)  (500,500)    float64  NaN
   2.5633  nanmax(a, axis=-1)  (1,)         float64  NaN
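The reason `np.fmax.reduce` can stand in for `nanmax` is that `fmax` returns the non-NaN operand whenever only one input is NaN; a quick check:

```python
import numpy as np

a = np.array([[1.0, np.nan, 3.0],
              [np.nan, np.nan, np.nan]])

# Mixed row: NaNs are ignored, so the max of the remaining values is taken.
# All-NaN row: there is nothing to ignore, so the result stays NaN.
r = np.fmax.reduce(a, axis=-1)
print(r)
```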

Chuck


[Numpy-discussion] ANN: NumPy 1.5.1

2010-11-20 Thread Ralf Gommers
Hi,

I am pleased to announce the availability of NumPy 1.5.1. This bug-fix
release comes almost 3 months after the 1.5.0 release; it contains no new
features compared to 1.5.0.

Binaries, sources and release notes can be found at
https://sourceforge.net/projects/numpy/files/.

Thank you to everyone who contributed to this release.

Enjoy,
The numpy developers.



=========================
NumPy 1.5.1 Release Notes
=========================

Numpy 1.5.1 is a bug-fix release with no new features compared to 1.5.0.


Numpy source code location changed
==================================

Numpy has stopped using SVN as the version control system, and moved
to Git. The development source code for Numpy can from now on be found
at

http://github.com/numpy/numpy


Note on GCC versions
====================

On non-x86 platforms, Numpy can trigger a bug in the recent GCC compiler
versions 4.5.0 and 4.5.1: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=45967
We recommend not using these versions of GCC for compiling Numpy
on these platforms.


Bugs fixed
==========

Of the following, #1605 is important for Cython modules.

- #937:  linalg: lstsq should always return real residual
- #1196: lib: fix negative indices in s_ and index_exp
- #1287: core: fix uint64 -> Python int cast
- #1491: core: richcompare should return Py_NotImplemented when undefined
- #1517: lib: close file handles after use in numpy.lib.npyio.*
- #1605: core: ensure PEP 3118 buffers can be released in exception handler
- #1617: core: fix clongdouble cast to Python complex()
- #1625: core: fix detection for ``isfinite`` routine
- #1626: core: fix compilation with Solaris 10 / Sun Studio 12.1

Scipy could not be built against Numpy 1.5.0 on OS X due to a
numpy.distutils bug, #1399. This issue is fixed now.

- #1399: distutils: use C arch flags for Fortran compilation on OS X.

Python 3 specific; #1610 is important for any I/O:

- #: f2py: make f2py script runnable on Python 3
- #1604: distutils: potential infinite loop in numpy.distutils
- #1609: core: use accelerated BLAS, when available
- #1610: core: ensure tofile and fromfile maintain file handle positions

Checksums
=========

b3db7d1ccfc3640b4c33b7911dbceabc
release/installers/numpy-1.5.1-py2.5-python.org-macosx10.3.dmg
55f5863856485bbb005b77014edcd34a
release/installers/numpy-1.5.1-py2.6-python.org-macosx10.3.dmg
420113e2a30712668445050a0f38e7a6
release/installers/numpy-1.5.1-py2.7-python.org-macosx10.3.dmg
757885ab8d64cf060ef629800da2e65c
release/installers/numpy-1.5.1-py2.7-python.org-macosx10.5.dmg
11e60c3f7f3c86fcb5facf88c3981fd3
release/installers/numpy-1.5.1-win32-superpack-python2.5.exe
3fc14943dc2fcf740d8c204455e68aa7
release/installers/numpy-1.5.1-win32-superpack-python2.6.exe
a352acce86c8b2cfb247e38339e27fd0
release/installers/numpy-1.5.1-win32-superpack-python2.7.exe
160de9794e4a239c9da1196a5eb30f7e
release/installers/numpy-1.5.1-win32-superpack-python3.1.exe
376ef150df41b5353944ab742145352d  release/installers/numpy-1.5.1.tar.gz
ab6045070c0de5016fdf94dd2a79638b  release/installers/numpy-1.5.1.zip