[Numpy-discussion] subdivide array

2010-11-30 Thread John
Hello,

I have an array of data for a global grid at 1 degree resolution. It's
filled with 1s and 0s; it's essentially a land-sea mask (not only that,
but take it as an example). I want to be able to regrid the data to
higher or lower resolutions (e.g. 0.5 or 2 degrees). But if I try to
use any standard interp function, such as mpl_toolkits.basemap.interp,
it fails -- I assume because the data is binary.

I guess there may be a fairly easy routine to do this? Does anyone
have an example?

Thanks!
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] subdivide array

2010-11-30 Thread Whitcomb, Mr. Tim
> I have an array of data for a global grid at 1 degree resolution. It's
> filled with 1s and 0s, and it is just a land sea mask (not only, but
> as an example). I want to be able to regrid the data to higher or
> lower resolutions (i.e. 0.5 or 2 degrees). But if I try to use any
> standard interp functions, such as mpl_toolkits.basemap.interp it
> fails -- I assume due to the data being binary.
>
> I guess there may be a fairly easy routine to do this?? Does someone
> have an example?
>

When I've had to do this, I typically set Basemap's interp to do
nearest-neighbor interpolation (by setting order=0).  It defaults to
order=1, i.e. bilinear interpolation, which destroys the binary
nature of your data (as you perhaps noticed).
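For the common special case of exactly halving or doubling the resolution, nearest-neighbor regridding of a binary mask needs no interpolation library at all. A pure-NumPy sketch (the mask below is made-up toy data, not a real land-sea mask):

```python
import numpy as np

# Hypothetical 1-degree global mask (180 x 360) with a fake "continent".
mask = np.zeros((180, 360), dtype=np.uint8)
mask[60:120, 100:200] = 1

# To 0.5 degrees: nearest-neighbor upsampling maps each coarse cell
# onto a 2x2 block of fine cells.
fine = np.kron(mask, np.ones((2, 2), dtype=mask.dtype))

# To 2 degrees: nearest-neighbor downsampling keeps every other cell.
coarse = mask[::2, ::2]

print(fine.shape, coarse.shape)  # (360, 720) (90, 180)
```

Both operations preserve the 0/1 values exactly, which is the point of using nearest neighbor here.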

Tim


Re: [Numpy-discussion] subdivide array

2010-11-30 Thread Pierre GM

On Nov 30, 2010, at 5:40 PM, John wrote:

> Hello,
>
> I have an array of data for a global grid at 1 degree resolution. It's
> filled with 1s and 0s, and it is just a land sea mask (not only, but
> as an example). I want to be able to regrid the data to higher or
> lower resolutions (i.e. 0.5 or 2 degrees). But if I try to use any
> standard interp functions, such as mpl_toolkits.basemap.interp it
> fails -- I assume due to the data being binary.
>
> I guess there may be a fairly easy routine to do this?? Does someone
> have an example?

Just a random idea: have you tried converting your input data to float?
Hopefully you'd get values between 0 and 1 for your interpolated
points, which you'll then have to transform back to integers following
a scheme of your choosing...
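A minimal sketch of that idea in 1-D (the 0.5 threshold is just one arbitrary choice of "scheme"; the mask here is toy data):

```python
import numpy as np

mask = np.array([0, 0, 1, 1, 0], dtype=np.uint8)  # toy binary "mask"

x = np.arange(mask.size)                  # original sample points
x_new = np.linspace(0, mask.size - 1, 9)  # roughly doubled resolution

interp = np.interp(x_new, x, mask.astype(float))  # floats in [0, 1]
regridded = (interp >= 0.5).astype(np.uint8)      # back to binary
```

The same approach works in 2-D with any float-valued interpolator, followed by the thresholding step.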


Re: [Numpy-discussion] NumPy 1.5.1 on RedHat 5.5

2010-11-30 Thread David Brodbeck
On Mon, Nov 29, 2010 at 9:08 PM, David da...@silveregg.co.jp wrote:
 the *.so.N.M are enough for binaries, but you need the *.so to link
 against a library. Those are generally provided in the -devel RPMS on RH
 distributions,

Ah, right. Thank you for filling in that missing piece of information
for me.  I'll see if I can find development RPMs.

I could have sworn I got this to build once before, too.  I should
have taken notes.

-- 
David Brodbeck
System Administrator, Linguistics
University of Washington


Re: [Numpy-discussion] subdivide array

2010-11-30 Thread Gerrit Holl
On 30 November 2010 17:58, Pierre GM pgmdevl...@gmail.com wrote:
 On Nov 30, 2010, at 5:40 PM, John wrote:
 I have an array of data for a global grid at 1 degree resolution. It's
 filled with 1s and 0s, and it is just a land sea mask (not only, but
 as an example). I want to be able to regrid the data to higher or
 lower resolutions (i.e. 0.5 or 2 degrees). But if I try to use any
 standard interp functions, such as mpl_toolkits.basemap.interp it
 fails -- I assume due to the data being binary.

 I guess there may be a fairly easy routine to do this?? Does someone
 have an example?

 Just a random idea: have you tried to convert your input data to float? 
 Hopefully you could get some values between 0 and 1 for your interpolated 
 values, that you'll have to transform back to integers following a scheme of 
 your choosing...

I would argue that a float between 0 and 1 is an excellent
representation when regridding a binary land-sea mask onto a higher
resolution. After all, the sub-grid information simply isn't there.
Why should a land-sea mask be binary anyway? As if a grid cell can
only be fully ocean or fully land...

BTW, I just realised that Python's convention of counting negative
indices from the end of the array is perfect when using a 180x360
land-sea mask, as lon[-30] and lon[330] mean, and should mean, the
same :)
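A quick illustration of that wrap-around property on a hypothetical longitude axis:

```python
import numpy as np

lon = np.arange(-180.0, 180.0)  # 360 one-degree longitude values

# Counting 30 from the end lands on the same cell as index 330,
# because -30 % 360 == 330.
assert lon[-30] == lon[330]
```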

Gerrit.

--
Gerrit Holl
PhD student at Department of Space Science, Luleå University of
Technology, Kiruna, Sweden
http://www.sat.ltu.se/members/gerrit/


[Numpy-discussion] Warning: invalid value encountered in subtract

2010-11-30 Thread Keith Goodman
After upgrading from numpy 1.4.1 to 1.5.1 I get warnings like
Warning: invalid value encountered in subtract when I run unit tests
(or timeit) using python -c 'blah' but not from an interactive
session. How can I tell the warnings to go away?


Re: [Numpy-discussion] A faster median (Wirth's method)

2010-11-30 Thread Keith Goodman
On Tue, Sep 1, 2009 at 2:37 PM, Sturla Molden stu...@molden.no wrote:
 Dag Sverre Seljebotn skrev:

 Nitpick: This will fail on large arrays. I guess numpy.npy_intp is the
 right type to use in this case?

 By the way, here is a more polished version, does it look ok?

 http://projects.scipy.org/numpy/attachment/ticket/1213/generate_qselect.py
 http://projects.scipy.org/numpy/attachment/ticket/1213/quickselect.pyx

This is my favorite numpy/scipy ticket, so I am happy that I can
contribute in a small way by pointing out a bug. The search for the
k-th smallest element is only done over the first k elements (that's
the bug) instead of over the entire array. Specifically, 'while l < k'
should be 'while l < r'.

I added a median function to the Bottleneck package:
https://github.com/kwgoodman/bottleneck

Timings:

>>> import bottleneck as bn
>>> arr = np.random.rand(100, 100)
>>> timeit np.median(arr)
1000 loops, best of 3: 762 us per loop
>>> timeit bn.median(arr)
1 loops, best of 3: 198 us per loop

What other functions could be built from a selection algorithm?

nanmedian
scoreatpercentile
quantile
knn
select
others?
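All of these lean on a selection (partial sort) step. For reference, later NumPy (1.8+, so well after this thread) exposes exactly that as np.partition; a sketch of a selection-based median, as an illustration rather than what Bottleneck actually does:

```python
import numpy as np

def median_via_select(a):
    # Partial sort: only the middle element(s) end up in their final
    # sorted positions, O(n) on average instead of O(n log n).
    a = np.asarray(a, dtype=float).ravel()
    mid = a.size // 2
    if a.size % 2:
        return np.partition(a, mid)[mid]
    part = np.partition(a, [mid - 1, mid])
    return 0.5 * (part[mid - 1] + part[mid])

print(median_via_select([3, 1, 2]))     # 2.0
print(median_via_select([4, 1, 3, 2]))  # 2.5
```

nanmedian, scoreatpercentile, and quantile all reduce to the same partition call with different target indices.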

But before I add more functions to the package I need to figure out
how to make a cython apply_along_axis function. For the first release
I am hand coding the 1d, 2d, and 3d cases. Boring to write, hard to
maintain, and doesn't solve the nd case.

Does anyone have a cython apply_along_axis that takes a cython
reducing function as input? The ticket has an example but I couldn't
get it to run. If no one has one (the horror!) I'll begin to work on
one sometime after the first release.
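For anyone following along, plain NumPy's np.apply_along_axis shows the semantics a Cython version would need to reproduce (it is slow, since it calls back into Python once per slice; nanmean_1d here is just an illustrative reducing function, not part of Bottleneck):

```python
import numpy as np

def nanmean_1d(a):
    # Reducing function: 1-D array in, scalar out, ignoring NaNs.
    kept = a[~np.isnan(a)]
    return kept.mean() if kept.size else np.nan

arr = np.array([[1.0, np.nan, 3.0],
                [4.0, 5.0, 6.0]])

result = np.apply_along_axis(nanmean_1d, 1, arr)
print(result)  # [2. 5.]
```

The hard part the thread is after is doing this loop-over-slices entirely in C, with the reducer compiled as well.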


Re: [Numpy-discussion] A faster median (Wirth's method)

2010-11-30 Thread John Salvatier
I am very interested in this result. I have wanted to know how to do an
apply_along_axis function for a while now.

On Tue, Nov 30, 2010 at 11:21 AM, Keith Goodman kwgood...@gmail.com wrote:

 On Tue, Sep 1, 2009 at 2:37 PM, Sturla Molden stu...@molden.no wrote:
  Dag Sverre Seljebotn skrev:
 
  Nitpick: This will fail on large arrays. I guess numpy.npy_intp is the
  right type to use in this case?
 
  By the way, here is a more polished version, does it look ok?
 
 
 http://projects.scipy.org/numpy/attachment/ticket/1213/generate_qselect.py
  http://projects.scipy.org/numpy/attachment/ticket/1213/quickselect.pyx

 This is my favorite numpy/scipy ticket. So I am happy that I can
 contribute in a small way by pointing out a bug. The search for the
 k-th smallest element is only done over the first k elements (that's
 the bug) instead of over the entire array. Specifically, 'while l < k'
 should be 'while l < r'.

 I added a median function to the Bottleneck package:
 https://github.com/kwgoodman/bottleneck

 Timings:

  import bottleneck as bn
  arr = np.random.rand(100, 100)
  timeit np.median(arr)
 1000 loops, best of 3: 762 us per loop
  timeit bn.median(arr)
 1 loops, best of 3: 198 us per loop

 What other functions could be built from a selection algorithm?

 nanmedian
 scoreatpercentile
 quantile
 knn
 select
 others?

 But before I add more functions to the package I need to figure out
 how to make a cython apply_along_axis function. For the first release
 I am hand coding the 1d, 2d, and 3d cases. Boring to write, hard to
 maintain, and doesn't solve the nd case.

 Does anyone have a cython apply_along_axis that takes a cython
 reducing function as input? The ticket has an example but I couldn't
 get it to run. If no one has one (the horror!) I'll begin to work on
 one sometime after the first release.


Re: [Numpy-discussion] A faster median (Wirth's method)

2010-11-30 Thread Keith Goodman
On Tue, Nov 30, 2010 at 11:25 AM, John Salvatier
jsalv...@u.washington.edu wrote:
 I am very interested in this result. I have wanted to know how to do an

My first thought was to write the reducing function like this

cdef np.float64_t namean(np.ndarray[np.float64_t, ndim=1] a):

but cython doesn't allow np.ndarray in a cdef.

That's why the ticket (URL earlier in the thread) uses a (the?) buffer
interface to the array. The particular example in the ticket is not a
reducing function; it works on the data in place. We can change that.

I can set up a sandbox in the Bottleneck project if anyone (John!) is
interested in working on this. I plan to get a first (preview) release
out soon and then take a break. The first project for the second
release is this; second project is templating. Then the third release
is just turning the crank and adding more functions.


Re: [Numpy-discussion] NumPy 1.5.1 on RedHat 5.5

2010-11-30 Thread David Brodbeck
On Tue, Nov 30, 2010 at 9:38 AM, David Brodbeck bro...@uw.edu wrote:
 On Mon, Nov 29, 2010 at 9:08 PM, David da...@silveregg.co.jp wrote:
 the *.so.N.M are enough for binaries, but you need the *.so to link
 against a library. Those are generally provided in the -devel RPMS on RH
 distributions,

 Ah, right. Thank you for filling in that missing piece of information
 for me.  I'll see if I can find development RPMs.

 I could have sworn I got this to build once before, too.  I should
 have taken notes.

Turns out there is no atlas-devel package, so I changed tactics and
installed blas, blas-devel, lapack, and lapack-devel, instead.  This
was enough to get both NumPy and SciPy built. However, now SciPy is
segfaulting when I try to run the test suite:

bro...@patas:~$ python2.5
Python 2.5.5 (r255:77872, May 17 2010, 14:07:05)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy;
>>> scipy.test();
Running unit tests for scipy
NumPy version 1.5.1
NumPy is installed in /opt/python-2.5/lib/python2.5/site-packages/numpy
SciPy version 0.8.0
SciPy is installed in /opt/python-2.5/lib/python2.5/site-packages/scipy
Python version 2.5.5 (r255:77872, May 17 2010, 14:07:05) [GCC 4.1.2
20080704 (Red Hat 4.1.2-46)]
nose version 0.11.2
/opt/python-2.5/lib/python2.5/site-packages/scipy/fftpack/tests/test_basic.py:404:
ComplexWarning: Casting complex values to real discards the imaginary
part
  y1 = fftn(x.astype(np.float32))
/opt/python-2.5/lib/python2.5/site-packages/scipy/fftpack/tests/test_basic.py:405:
ComplexWarning: Casting complex values to real discards the imaginary
part
  y2 = fftn(x.astype(np.float64)).astype(np.complex64)
/opt/python-2.5/lib/python2.5/site-packages/scipy/fftpack/tests/test_basic.py:413:
ComplexWarning: Casting complex values to real discards the imaginary
part
  y1 = fftn(x.astype(np.float32))
/opt/python-2.5/lib/python2.5/site-packages/scipy/fftpack/tests/test_basic.py:414:
ComplexWarning: Casting complex values to real discards the imaginary
part
  y2 = fftn(x.astype(np.float64)).astype(np.complex64)
..K.K..KWarning:
divide by zero encountered in log
Warning: invalid value encountered in multiply
Warning: divide by zero encountered in log
Warning: invalid value encountered in multiply
Warning: divide by zero encountered in log
Warning: invalid value encountered in multiply
.Warning: divide by zero encountered in log
Warning: invalid value encountered in multiply
Warning: divide by zero encountered in log
Warning: invalid value encountered in multiply
.Warning: divide by zero encountered in log
Warning: invalid value encountered in multiply
Warning: divide by zero encountered in log
Warning: invalid value encountered in multiply
.Warning: divide by zero encountered in log
Warning: invalid value encountered in multiply
Warning: divide by zero encountered in log
Warning: invalid value encountered in multiply
.../opt/python-2.5/lib/python2.5/site-packages/scipy/io/recaster.py:328:
ComplexWarning: Casting complex values to real discards the imaginary
part
  test_arr = arr.astype(T)
../opt/python-2.5/lib/python2.5/site-packages/scipy/io/recaster.py:375:
ComplexWarning: Casting complex values to real discards the imaginary
part
  return arr.astype(idt)
..F..FF.../opt/python-2.5/lib/python2.5/site-packages/scipy/lib/blas/tests/test_fblas.py:86:
ComplexWarning: Casting complex values to real discards the imaginary
part
  self.blas_func(x,y,n=3,incy=5)
../opt/python-2.5/lib/python2.5/site-packages/scipy/lib/blas/tests/test_fblas.py:196:
ComplexWarning: Casting complex values to real discards the imaginary
part
  self.blas_func(x,y,n=3,incy=5)
.../opt/python-2.5/lib/python2.5/site-packages/scipy/lib/blas/tests/test_fblas.py:279:
ComplexWarning: Casting complex values to real discards the imaginary
part
  self.blas_func(x,y,n=3,incy=5)
..SS..SS.....F.Segmentation fault


-- 
David Brodbeck
System Administrator, Linguistics
University of Washington

Re: [Numpy-discussion] A faster median (Wirth's method)

2010-11-30 Thread Matthew Brett
Hi,

On Tue, Nov 30, 2010 at 11:35 AM, Keith Goodman kwgood...@gmail.com wrote:
 On Tue, Nov 30, 2010 at 11:25 AM, John Salvatier
 jsalv...@u.washington.edu wrote:
 I am very interested in this result. I have wanted to know how to do an

 My first thought was to write the reducing function like this

 cdef np.float64_t namean(np.ndarray[np.float64_t, ndim=1] a):

 but cython doesn't allow np.ndarray in a cdef.

Sorry for the ill-considered hasty reply, but do you mean that this:

import numpy as np
cimport numpy as cnp

cdef cnp.float64_t namean(cnp.ndarray[cnp.float64_t, ndim=1] a):
    return np.nanmean(a)  # just a placeholder

is not allowed?  It works for me.  Is it a cython version thing?
(I've got 0.13),

See you,

Matthew


Re: [Numpy-discussion] A faster median (Wirth's method)

2010-11-30 Thread Keith Goodman
On Tue, Nov 30, 2010 at 11:58 AM, Matthew Brett matthew.br...@gmail.com wrote:
 Hi,

 On Tue, Nov 30, 2010 at 11:35 AM, Keith Goodman kwgood...@gmail.com wrote:
 On Tue, Nov 30, 2010 at 11:25 AM, John Salvatier
 jsalv...@u.washington.edu wrote:
 I am very interested in this result. I have wanted to know how to do an

 My first thought was to write the reducing function like this

 cdef np.float64_t namean(np.ndarray[np.float64_t, ndim=1] a):

 but cython doesn't allow np.ndarray in a cdef.

 Sorry for the ill-considered hasty reply, but do you mean that this:

 import numpy as np
 cimport numpy as cnp

 cdef cnp.float64_t namean(cnp.ndarray[cnp.float64_t, ndim=1] a):
    return np.nanmean(a)  # just a placeholder

 is not allowed?  It works for me.  Is it a cython version thing?
 (I've got 0.13),

Oh, that's nice! I'm using 0.11.2. OK, time to upgrade.


Re: [Numpy-discussion] NumPy 1.5.1 on RedHat 5.5

2010-11-30 Thread David Brodbeck
On Tue, Nov 30, 2010 at 11:40 AM, David Brodbeck bro...@uw.edu wrote:
 On Tue, Nov 30, 2010 at 9:38 AM, David Brodbeck bro...@uw.edu wrote:
 Turns out there is no atlas-devel package, so I changed tactics and
 installed blas, blas-devel, lapack, and lapack-devel, instead.  This
 was enough to get both NumPy and SciPy built. However, now SciPy is
 segfaulting when I try to run the test suite:

Never mind, I got it.  It appears to have been an ABI mismatch.
Building with --fcompiler=gnu95 fixed it.

-- 
David Brodbeck
System Administrator, Linguistics
University of Washington


Re: [Numpy-discussion] Warning: invalid value encountered in subtract

2010-11-30 Thread Skipper Seabold
On Tue, Nov 30, 2010 at 1:34 PM, Keith Goodman kwgood...@gmail.com wrote:
 After upgrading from numpy 1.4.1 to 1.5.1 I get warnings like
 Warning: invalid value encountered in subtract when I run unit tests
 (or timeit) using python -c 'blah' but not from an interactive
 session. How can I tell the warnings to go away?

If it's this type of floating point related stuff, you can use np.seterr

In [1]: import numpy as np

In [2]: np.log(1./np.array(0))
Warning: divide by zero encountered in divide
Out[2]: inf

In [3]: orig_settings = np.seterr()

In [4]: np.seterr(all='ignore')
Out[4]: {'divide': 'print', 'invalid': 'print', 'over': 'print', 'under': 'ignore'}

In [5]: np.log(1./np.array(0))
Out[5]: inf

In [6]: np.seterr(**orig_settings)
Out[6]: {'divide': 'ignore', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'}

In [7]: np.log(1./np.array(0))
Warning: divide by zero encountered in divide
Out[7]: inf

I have been using the orig_settings so that I can take over the
control of this from the user and then set it back to how it was.

Skipper


Re: [Numpy-discussion] Warning: invalid value encountered in subtract

2010-11-30 Thread Keith Goodman
On Tue, Nov 30, 2010 at 2:25 PM, Robert Kern robert.k...@gmail.com wrote:
 On Tue, Nov 30, 2010 at 16:22, Keith Goodman kwgood...@gmail.com wrote:
 On Tue, Nov 30, 2010 at 1:41 PM, Skipper Seabold jsseab...@gmail.com wrote:
 On Tue, Nov 30, 2010 at 1:34 PM, Keith Goodman kwgood...@gmail.com wrote:
 After upgrading from numpy 1.4.1 to 1.5.1 I get warnings like
 Warning: invalid value encountered in subtract when I run unit tests
 (or timeit) using python -c 'blah' but not from an interactive
 session. How can I tell the warnings to go away?

 If it's this type of floating point related stuff, you can use np.seterr

 In [1]: import numpy as np

 In [2]: np.log(1./np.array(0))
 Warning: divide by zero encountered in divide
 Out[2]: inf

 In [3]: orig_settings = np.seterr()

 In [4]: np.seterr(all='ignore')
 Out[4]: {'divide': 'print', 'invalid': 'print', 'over': 'print', 'under': 'ignore'}

 In [5]: np.log(1./np.array(0))
 Out[5]: inf

 In [6]: np.seterr(**orig_settings)
 Out[6]: {'divide': 'ignore', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'}

 In [7]: np.log(1./np.array(0))
 Warning: divide by zero encountered in divide
 Out[7]: inf

 I have been using the orig_settings so that I can take over the
 control of this from the user and then set it back to how it was.

 Thank, Skipper. That works. Do you wrap it in a try...except? And then
 raise whatever brought you to the exception? Sounds like a pain.

 Is it considered OK for a package to change the state of np.seterr if
 there is an error? Silly question. I'm just looking for an easy fix.

 with np.errstate(invalid='ignore'):

Ah! Thank you, Robert!
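For completeness, a minimal sketch of the context-manager approach Robert suggests; the previous error state is restored automatically on exit, even if an exception is raised:

```python
import numpy as np

a = np.array([np.inf, 1.0])

# Suppress "invalid value" warnings only inside this block.
with np.errstate(invalid='ignore'):
    d = a - a  # inf - inf -> nan, silently

print(d)  # d[0] is nan; no warning was emitted
```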


Re: [Numpy-discussion] Warning: invalid value encountered in subtract

2010-11-30 Thread Pierre GM

On Nov 30, 2010, at 11:22 PM, Keith Goodman wrote:

 On Tue, Nov 30, 2010 at 1:41 PM, Skipper Seabold jsseab...@gmail.com wrote:
 On Tue, Nov 30, 2010 at 1:34 PM, Keith Goodman kwgood...@gmail.com wrote:
 After upgrading from numpy 1.4.1 to 1.5.1 I get warnings like
 Warning: invalid value encountered in subtract when I run unit tests
 (or timeit) using python -c 'blah' but not from an interactive
 session. How can I tell the warnings to go away?
 
 If it's this type of floating point related stuff, you can use np.seterr
 
 In [1]: import numpy as np
 
 In [2]: np.log(1./np.array(0))
 Warning: divide by zero encountered in divide
 Out[2]: inf
 
 In [3]: orig_settings = np.seterr()
 
 In [4]: np.seterr(all='ignore')
 Out[4]: {'divide': 'print', 'invalid': 'print', 'over': 'print', 'under': 'ignore'}
 
 In [5]: np.log(1./np.array(0))
 Out[5]: inf
 
 In [6]: np.seterr(**orig_settings)
 Out[6]: {'divide': 'ignore', 'invalid': 'ignore', 'over': 'ignore', 'under': 'ignore'}
 
 In [7]: np.log(1./np.array(0))
 Warning: divide by zero encountered in divide
 Out[7]: inf
 
 I have been using the orig_settings so that I can take over the
 control of this from the user and then set it back to how it was.
 
 Thank, Skipper. That works. Do you wrap it in a try...except? And then
 raise whatever brought you to the exception? Sounds like a pain.
 
 Is it considered OK for a package to change the state of np.seterr if
 there is an error? Silly question. I'm just looking for an easy fix.

I had to go through the try/except/set-and-reset-the-error-options dance
myself in numpy.ma a few months ago. I realized that setting errors
globally in a module (as was the case before) was a tad too sneaky.
Sure, it was a bit of a pain, but at least you're not hiding anything.



Re: [Numpy-discussion] A faster median (Wirth's method)

2010-11-30 Thread John Salvatier
Last time I looked into how to apply a function along an axis, I
thought that PyArray_IterAllButAxis would not work for that task (
http://docs.scipy.org/doc/numpy/reference/c-api.array.html#PyArray_IterAllButAxis),
but perhaps I misunderstood it. I'm looking into how to use it.

On Tue, Nov 30, 2010 at 12:06 PM, Keith Goodman kwgood...@gmail.com wrote:

 On Tue, Nov 30, 2010 at 11:58 AM, Matthew Brett matthew.br...@gmail.com
 wrote:
  Hi,
 
  On Tue, Nov 30, 2010 at 11:35 AM, Keith Goodman kwgood...@gmail.com
 wrote:
  On Tue, Nov 30, 2010 at 11:25 AM, John Salvatier
  jsalv...@u.washington.edu wrote:
  I am very interested in this result. I have wanted to know how to do an
 
  My first thought was to write the reducing function like this
 
  cdef np.float64_t namean(np.ndarray[np.float64_t, ndim=1] a):
 
  but cython doesn't allow np.ndarray in a cdef.
 
  Sorry for the ill-considered hasty reply, but do you mean that this:
 
  import numpy as np
  cimport numpy as cnp
 
  cdef cnp.float64_t namean(cnp.ndarray[cnp.float64_t, ndim=1] a):
 return np.nanmean(a)  # just a placeholder
 
  is not allowed?  It works for me.  Is it a cython version thing?
  (I've got 0.13),

 Oh, that's nice! I'm using 0.11.2. OK, time to upgrade.


Re: [Numpy-discussion] A faster median (Wirth's method)

2010-11-30 Thread Felix Schlesinger
  import numpy as np
  cimport numpy as cnp

  cdef cnp.float64_t namean(cnp.ndarray[cnp.float64_t, ndim=1] a):
 return np.nanmean(a)  # just a placeholder

  is not allowed?  It works for me.  Is it a cython version thing?
  (I've got 0.13),

 Oh, that's nice! I'm using 0.11.2. OK, time to upgrade.

Oh wow, does that mean that http://trac.cython.org/cython_trac/ticket/177
is fixed? I couldn't find anything in the release notes about that,
but it would be great news. Does the cdef function acquire and hold
the buffer?

Felix

