Re: [Numpy-discussion] float128 in fact float80

2011-10-17 Thread Dag Sverre Seljebotn
On 10/17/2011 03:22 AM, Charles R Harris wrote:


 On Sun, Oct 16, 2011 at 6:13 PM, Nathaniel Smith n...@pobox.com wrote:

 The solution is just to call it 'longdouble', which clearly
 communicates 'this does some quirky thing that depends on your C
 compiler and architecture'.


 Well, I don't know. If someone is unfamiliar with floats I would expect
 they would post a complaint about bugs if a file of longdouble type
 written on a 32 bit system couldn't be read on a 64 bit system. It might
 be better to somehow combine both the ieee type and the storage alignment.

np.float80_96
np.float80_128


?

Dag Sverre
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] dtyping with .astype()

2011-10-17 Thread Alex van der Spek
Beginner's question?

I have this dictionary dtypes of names and types:

>>> dtypes
{'names': ['col1', 'col2', 'col3', 'col4', 'col5'], 'formats': [<type
'numpy.float16'>, <type 'numpy.float16'>, <type 'numpy.float16'>, <type
'numpy.float16'>, <type 'numpy.float16'>]}

and this array y

>>> y
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19],
       [20, 21, 22, 23, 24],
       [25, 26, 27, 28, 29],
       [30, 31, 32, 33, 34],
       [35, 36, 37, 38, 39],
       [40, 41, 42, 43, 44],
       [45, 46, 47, 48, 49]])


But:
z=y.astype(dtypes)

gives me a confusing result. I only asked to name the columns and change their 
types to half precision floats.

What am I missing? How to do this?

Thank you in advance, 
Alex van der Spek



Re: [Numpy-discussion] dtyping with .astype()

2011-10-17 Thread Pauli Virtanen
13.10.2011 12:59, Alex van der Spek kirjoitti:
 gives me a confusing result. I only asked to name the columns and change 
 their 
 types to half precision floats.

Structured arrays shouldn't be thought of as arrays with named columns --
they behave somewhat differently.

 What am I missing? How to do this?

np.rec.fromarrays(arr.T, dtype=dt)
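
For what it's worth, a runnable sketch of this suggestion, with the array
and field names reconstructed from the original question:

```python
import numpy as np

# Setup reconstructed from the original question.
y = np.arange(50).reshape(10, 5)
dt = np.dtype({'names': ['col1', 'col2', 'col3', 'col4', 'col5'],
               'formats': [np.float16] * 5})

# Build a record array: each row of y.T (i.e. each column of y)
# becomes one named field, cast to float16.
z = np.rec.fromarrays(y.T, dtype=dt)

print(z['col1'])   # first column of y, as float16
print(z.dtype)
```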



Re: [Numpy-discussion] float128 in fact float80

2011-10-17 Thread Charles R Harris
On Mon, Oct 17, 2011 at 2:20 AM, Dag Sverre Seljebotn 
d.s.seljeb...@astro.uio.no wrote:

 On 10/17/2011 03:22 AM, Charles R Harris wrote:
 
 
  On Sun, Oct 16, 2011 at 6:13 PM, Nathaniel Smith n...@pobox.com wrote:
 
  The solution is just to call it 'longdouble', which clearly
  communicates 'this does some quirky thing that depends on your C
  compiler and architecture'.
 
 
  Well, I don't know. If someone is unfamiliar with floats I would expect
  they would post a complaint about bugs if a file of longdouble type
  written on a 32 bit system couldn't be read on a 64 bit system. It might
  be better to somehow combine both the ieee type and the storage
 alignment.

 np.float80_96
 np.float80_128


Heh, my thoughts too.

Chuck


Re: [Numpy-discussion] dtyping with .astype()

2011-10-17 Thread josef . pktd
On Mon, Oct 17, 2011 at 6:17 AM, Pauli Virtanen p...@iki.fi wrote:
 13.10.2011 12:59, Alex van der Spek kirjoitti:
 gives me a confusing result. I only asked to name the columns and change 
 their
 types to half precision floats.

 Structured arrays shouldn't be thought as an array with named columns,
 as they are somewhat different.

 What am I missing? How to do this?

 np.rec.fromarrays(arr.T, dtype=dt)

y.astype(np.float16).view(dt)

?
Josef





Re: [Numpy-discussion] dtyping with .astype()

2011-10-17 Thread Pauli Virtanen
17.10.2011 15:48, josef.p...@gmail.com kirjoitti:
 On Mon, Oct 17, 2011 at 6:17 AM, Pauli Virtanen p...@iki.fi wrote:
[clip]
 What am I missing? How to do this?

 np.rec.fromarrays(arr.T, dtype=dt)
 
 y.astype(float16).view(dt)

I think this will give surprises if the original array is not in C-order.

Pauli



Re: [Numpy-discussion] dtyping with .astype()

2011-10-17 Thread josef . pktd
On Mon, Oct 17, 2011 at 10:18 AM, Pauli Virtanen p...@iki.fi wrote:
 17.10.2011 15:48, josef.p...@gmail.com kirjoitti:
 On Mon, Oct 17, 2011 at 6:17 AM, Pauli Virtanen p...@iki.fi wrote:
 [clip]
 What am I missing? How to do this?

 np.rec.fromarrays(arr.T, dtype=dt)

 y.astype(float16).view(dt)

 I think this will give surprises if the original array is not in C-order.

I forgot about those. It's dangerous if the array is square; otherwise it
raises an error when the array is in F-order.

maybe
np.asarray(y, np.float16, order='C').view(dt)
if I don't like record arrays
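
A runnable sketch of this view-based route (setup reconstructed from the
original question; the F-order remark is the caveat discussed above):

```python
import numpy as np

# Setup reconstructed from the original question.
y = np.arange(50).reshape(10, 5)
dt = np.dtype([('col%d' % (i + 1), np.float16) for i in range(5)])

# Force a C-ordered float16 copy first, then reinterpret each row of
# five float16 values as one 5-field record.
z = np.asarray(y, np.float16, order='C').view(dt)
print(z['col2'].ravel())   # second column of y, as float16

# Without order='C', an F-ordered input would either raise on .view()
# or (for square arrays) silently pair the wrong values -- the
# surprise discussed above.
```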

Josef


        Pauli



Re: [Numpy-discussion] yet another indexing question

2011-10-17 Thread Chris.Barker
On 10/14/11 5:04 AM, Neal Becker wrote:
 suppose I have:

 In [10]: u
 Out[10]:
 array([[0, 1, 2, 3, 4],
        [5, 6, 7, 8, 9]])

 And I have a vector v:
   v = np.array ((0,1,0,1,0))

 I want to form an output vector which selects items from u where v is the 
 index
 of the row of u to be selected.

 Now, more importantly, I need the result to be a reference to the original 
 array
 (not a copy), because I'm going to use it on the LHS of an assignment.  Is 
 this
 possible?

No, it's not. numpy arrays need to be describable with regular strides 
-- when selecting arbitrary elements from an array, there is no way to 
describe the resulting array as regular strides into the same data block 
as the original.
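
Not a view, but the usual workaround (a sketch, using the names from the
question) is to put the same fancy index directly on the left-hand side
of the assignment:

```python
import numpy as np

u = np.array([[0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9]])
v = np.array((0, 1, 0, 1, 0))
cols = np.arange(u.shape[1])

# Read: pick u[v[k], k] for each column k.
picked = u[v, cols]
print(picked)        # [0 6 2 8 4]

# Write: the same index expression on the LHS assigns into u in place,
# which covers the case a "view" would have been used for.
u[v, cols] = -1
print(u)
```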

-Chris



-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR            (206) 526-6959   voice
7600 Sand Point Way NE  (206) 526-6329   fax
Seattle, WA  98115      (206) 526-6317   main reception

chris.bar...@noaa.gov


[Numpy-discussion] Example Usage of Neighborhood Iterator in Cython

2011-10-17 Thread T J
I recently put together a Cython example which uses the neighborhood
iterator.  It was trickier than I thought it would be, so I thought to
share it with the community.  The function takes a 1-dimensional array
and returns a 2-dimensional array of neighborhoods in the original
area. This is somewhat similar to the functionality provided by
segment_axis (http://projects.scipy.org/numpy/ticket/901), but I
believe this slightly different in that neighborhood can extend to the
left of the current index (assuming circular boundaries).  Keep in
mind that this is just an example, and normal usage probably is not
concerned with creating a new array.

External link:  http://codepad.org/czRIzXQl

--

import numpy as np
cimport numpy as np

cdef extern from "numpy/arrayobject.h":

    ctypedef extern class numpy.flatiter [object PyArrayIterObject]:
        cdef int nd_m1
        cdef np.npy_intp index, size
        cdef np.ndarray ao
        cdef char *dataptr

    # This isn't exposed to the Python API.
    # So we can't use the same approach we used to define flatiter
    ctypedef struct PyArrayNeighborhoodIterObject:
        int nd_m1
        np.npy_intp index, size
        np.PyArrayObject *ao # note the change from np.ndarray
        char *dataptr

    object PyArray_NeighborhoodIterNew(flatiter it, np.npy_intp* bounds,
                                       int mode, np.ndarray fill_value)
    int PyArrayNeighborhoodIter_Next(PyArrayNeighborhoodIterObject *it)
    int PyArrayNeighborhoodIter_Reset(PyArrayNeighborhoodIterObject *it)

    object PyArray_IterNew(object arr)
    void PyArray_ITER_NEXT(flatiter it)
    np.npy_intp PyArray_SIZE(np.ndarray arr)

    cdef enum:
        NPY_NEIGHBORHOOD_ITER_ZERO_PADDING,
        NPY_NEIGHBORHOOD_ITER_ONE_PADDING,
        NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING,
        NPY_NEIGHBORHOOD_ITER_CIRCULAR_PADDING,
        NPY_NEIGHBORHOOD_ITER_MIRROR_PADDING

np.import_array()

def windowed(np.ndarray[np.int_t, ndim=1] arr, bounds):

    cdef flatiter iterx = <flatiter>PyArray_IterNew(<object>arr)
    cdef np.npy_intp size = PyArray_SIZE(arr)
    cdef np.npy_intp* boundsPtr = [bounds[0], bounds[1]]
    cdef int hoodSize = bounds[1] - bounds[0] + 1

    # Create the Python object and keep a reference to it
    cdef object niterx_ = PyArray_NeighborhoodIterNew(iterx,
        boundsPtr, NPY_NEIGHBORHOOD_ITER_CIRCULAR_PADDING, None)
    cdef PyArrayNeighborhoodIterObject *niterx = \
        <PyArrayNeighborhoodIterObject *>niterx_

    cdef int i, j
    cdef np.ndarray[np.int_t, ndim=2] hoods

    hoods = np.empty((arr.shape[0], hoodSize), dtype=np.int)
    for i in range(iterx.size):
        for j in range(niterx.size):
            hoods[i, j] = (<np.int_t *>niterx.dataptr)[0]
            PyArrayNeighborhoodIter_Next(niterx)
        PyArray_ITER_NEXT(iterx)
        PyArrayNeighborhoodIter_Reset(niterx)
    return hoods

def test():
    x = np.arange(10)
    print x
    print
    print windowed(x, [-1, 3])
    print
    print windowed(x, [-2, 2])


--

If you run test(), this is what you should see:

[0 1 2 3 4 5 6 7 8 9]

[[9 0 1 2 3]
 [0 1 2 3 4]
 [1 2 3 4 5]
 [2 3 4 5 6]
 [3 4 5 6 7]
 [4 5 6 7 8]
 [5 6 7 8 9]
 [6 7 8 9 0]
 [7 8 9 0 1]
 [8 9 0 1 2]]

[[8 9 0 1 2]
 [9 0 1 2 3]
 [0 1 2 3 4]
 [1 2 3 4 5]
 [2 3 4 5 6]
 [3 4 5 6 7]
 [4 5 6 7 8]
 [5 6 7 8 9]
 [6 7 8 9 0]
 [7 8 9 0 1]]

windowed(x, [0, 2]) is almost like segment_axis(x, 3, 2, end='wrap').
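
As a cross-check, the same circular windows can be sketched in pure NumPy
with wrapped index arithmetic (the function name windowed_np is mine, not
part of the Cython example):

```python
import numpy as np

def windowed_np(arr, bounds):
    """Circular neighborhoods of a 1-D array, one row per element."""
    lo, hi = bounds
    n = arr.shape[0]
    offsets = np.arange(lo, hi + 1)              # relative window positions
    idx = (np.arange(n)[:, None] + offsets) % n  # wrap indices circularly
    return arr[idx]

x = np.arange(10)
print(windowed_np(x, [-1, 3]))   # matches the first output table above
```

This builds a fancy-index array, so it copies data; the neighborhood
iterator avoids materializing the index array.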


Re: [Numpy-discussion] float128 in fact float80

2011-10-17 Thread Matthew Brett
Hi,

On Sun, Oct 16, 2011 at 6:22 PM, Charles R Harris
charlesr.har...@gmail.com wrote:


 On Sun, Oct 16, 2011 at 6:13 PM, Nathaniel Smith n...@pobox.com wrote:

 On Sun, Oct 16, 2011 at 4:29 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:
  On Sun, Oct 16, 2011 at 4:16 PM, Nathaniel Smith n...@pobox.com wrote:
  I understand the argument that you don't want to call it float80
  because not all machines support a float80 type. But I don't
  understand why we would solve that problem by making up two *more*
  names (float96, float128) that describe types that *no* machines
  actually support... this is incredibly confusing.
 
  Well, float128 and float96 aren't interchangeable across architectures
  because of the different alignments, C long double isn't portable
  either,
  and float80 doesn't seem to be available anywhere. What concerns me is
  the
  difference between extended and quad precision, both of which can occupy
  128
  bits. I've complained about that for several years now, but as to
  extended
  precision, just don't use it. It will never be portable.

 I think part of the confusion here is about when a type is named like
 'floatN', does 'N' refer to the size of the data or to the minimum
 alignment? I have a strong intuition that it should be the former, and
 I assume Matthew does too. If we have a data structure like
  struct { uint8_t flags; void * data; }

 We need both in theory, in practice floats and doubles are pretty well
 defined these days, but long doubles depend on architecture and compiler for
 alignment, and even for representation in the case of PPC. I don't regard
 these facts as obscure if one is familiar with floating point, but most
 folks aren't and I agree that it can be misleading if one assumes that types
 and storage space are strongly coupled. This also ties in to the problem
 with ints and longs, which may both be int32 despite having different C
 names.


 then 'flags' will actually get 32 or 64 bits of space... but we would
 never, ever refer to it as a uint32 or a uint64! I know these extended
 precision types are even weirder because the compiler will insert that
 padding unconditionally, but the intuition still stands, and obviously
 some proportion of the userbase will share it.

 If our API makes smart people like Matthew spend a week going around
 in circles, then our API is dangerously broken!


 I think dangerously is a bit overly dramatic.


 The solution is just to call it 'longdouble', which clearly
 communicates 'this does some quirky thing that depends on your C
 compiler and architecture'.


 Well, I don't know. If someone is unfamiliar with floats I would expect they
 would post a complaint about bugs if a file of longdouble type written on a
 32 bit system couldn't be read on a 64 bit system. It might be better to
 somehow combine both the ieee type and the storage alignment.

David was pointing out that e.g. np.float128 could be a different
thing on SPARC, PPC and Intel, so it seems to me that float128 is
a false friend if we think it at all likely that people will use
platforms other than Intel.

Personally, if I saw 'longdouble' as a datatype, it would not surprise
me if it wasn't portable across platforms, including 32 and 64 bit.

float80_96 and float80_128 seem fine to me, but it would also be good
to suggest longdouble as the default name to use for the
platform-specific higher-precision datatype to make code portable
across platforms.
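
To see the platform-dependence concretely, one can inspect what
longdouble actually is on a given machine:

```python
import numpy as np

# The itemsize is the storage size in bytes and varies by platform:
# typically 12 on 32-bit x86, 16 on 64-bit x86 (both x87 80-bit
# extended precision plus padding), and 8 where long double == double.
ld = np.dtype(np.longdouble)
print(ld.itemsize)
print(np.finfo(np.longdouble))   # the precision actually provided
```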

See you,

Matthew


Re: [Numpy-discussion] Example Usage of Neighborhood Iterator in Cython

2011-10-17 Thread eat
Hi,

On Mon, Oct 17, 2011 at 9:19 PM, T J tjhn...@gmail.com wrote:

 I recently put together a Cython example which uses the neighborhood
 iterator.  It was trickier than I thought it would be, so I thought to
 share it with the community.  The function takes a 1-dimensional array
 and returns a 2-dimensional array of neighborhoods in the original
 area. This is somewhat similar to the functionality provided by
 segment_axis (http://projects.scipy.org/numpy/ticket/901), but I
 believe this slightly different in that neighborhood can extend to the
 left of the current index (assuming circular boundaries).  Keep in
 mind that this is just an example, and normal usage probably is not
 concerned with creating a new array.

 External link:  http://codepad.org/czRIzXQl

[clip]
 windowed(x, [0, 2]) is almost like segment_axis(x, 3, 2, end='wrap').

Just wondering what are the main benefits, of your approach, comparing to
simple:
In []: a= arange(5)
In []: n= 10
In []: b= arange(n)[:, None]
In []: mod(a+ roll(b, 1), n)
Out[]:
array([[9, 0, 1, 2, 3],
       [0, 1, 2, 3, 4],
       [1, 2, 3, 4, 5],
       [2, 3, 4, 5, 6],
       [3, 4, 5, 6, 7],
       [4, 5, 6, 7, 8],
       [5, 6, 7, 8, 9],
       [6, 7, 8, 9, 0],
       [7, 8, 9, 0, 1],
       [8, 9, 0, 1, 2]])
In []: mod(a+ roll(b, 2), n)
Out[]:
array([[8, 9, 0, 1, 2],
       [9, 0, 1, 2, 3],
       [0, 1, 2, 3, 4],
       [1, 2, 3, 4, 5],
       [2, 3, 4, 5, 6],
       [3, 4, 5, 6, 7],
       [4, 5, 6, 7, 8],
       [5, 6, 7, 8, 9],
       [6, 7, 8, 9, 0],
       [7, 8, 9, 0, 1]])

Regards,
eat
