Re: [Numpy-discussion] `allclose` vs `assert_allclose`

2014-07-18 Thread Tony Yu
On Wed, Jul 16, 2014 at 1:47 PM, Ralf Gommers ralf.gomm...@gmail.com
wrote:




 On Wed, Jul 16, 2014 at 6:37 AM, Tony Yu tsy...@gmail.com wrote:

 Is there any reason why the defaults for `allclose` and `assert_allclose`
 differ? This makes debugging a broken test much more difficult. More
 importantly, using an absolute tolerance of 0 causes failures for some
 common cases. For example, if two values are very close to zero, a test
 will fail:

 np.testing.assert_allclose(0, 1e-14)

 Git blame suggests the change was made in the following commit, but I
 guess that change only reverted to the original behavior.


 https://github.com/numpy/numpy/commit/f43223479f917e404e724e6a3df27aa701e6d6bf


 Indeed, it was reverting a change that crept in with
 https://github.com/numpy/numpy/commit/f527b49a



 It seems like the defaults for  `allclose` and `assert_allclose` should
 match, and an absolute tolerance of 0 is probably not ideal. I guess this
 is a pretty big behavioral change, but the current default for
 `assert_allclose` doesn't seem ideal.


 I agree, the current behavior is quite annoying. It would make sense to
 change the atol default to 1e-8, but technically it's a backwards
 compatibility break; it would probably have a very minor impact, though.
 Changing the default for rtol in one of the functions would be much more
 painful, and I don't think that should be done.

 Ralf


Thanks for the feedback. I've opened up a PR here:

https://github.com/numpy/numpy/pull/4880

Best,
-Tony
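
(For reference, a minimal sketch of the default mismatch discussed above;
not from the original mail. `allclose` uses atol=1e-8, while
`assert_allclose` uses atol=0.)

#~~~
import numpy as np

# allclose tolerates tiny absolute differences near zero:
print(np.allclose(0, 1e-14))               # True
# assert_allclose, with atol=0, does not:
# np.testing.assert_allclose(0, 1e-14)     # raises AssertionError

# Passing atol explicitly reconciles the two:
np.testing.assert_allclose(0, 1e-14, atol=1e-8)   # passes
#~~~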


[Numpy-discussion] `allclose` vs `assert_allclose`

2014-07-16 Thread Tony Yu
Is there any reason why the defaults for `allclose` and `assert_allclose`
differ? This makes debugging a broken test much more difficult. More
importantly, using an absolute tolerance of 0 causes failures for some
common cases. For example, if two values are very close to zero, a test
will fail:

np.testing.assert_allclose(0, 1e-14)

Git blame suggests the change was made in the following commit, but I guess
that change only reverted to the original behavior.

https://github.com/numpy/numpy/commit/f43223479f917e404e724e6a3df27aa701e6d6bf

It seems like the defaults for  `allclose` and `assert_allclose` should
match, and an absolute tolerance of 0 is probably not ideal. I guess this
is a pretty big behavioral change, but the current default for
`assert_allclose` doesn't seem ideal.

Thanks,
-Tony


Re: [Numpy-discussion] New (old) function proposal.

2014-02-18 Thread Tony Yu
On Tue, Feb 18, 2014 at 11:11 AM, Jaime Fernández del Río 
jaime.f...@gmail.com wrote:




 On Tue, Feb 18, 2014 at 9:03 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:




 On Tue, Feb 18, 2014 at 9:40 AM, Nathaniel Smith n...@pobox.com wrote:

 On 18 Feb 2014 11:05, Charles R Harris charlesr.har...@gmail.com
 wrote:
 
  Hi All,
 
  There is an old ticket, #1499, that suggests adding a segment_axis
 function.
 
  def segment_axis(a, length, overlap=0, axis=None, end='cut',
 endvalue=0):
  Generate a new array that chops the given array along the given
 axis
  into overlapping frames.
 
  Parameters
  ----------
  a : array-like
  The array to segment
  length : int
  The length of each frame
  overlap : int, optional
  The number of array elements by which the frames should overlap
  axis : int, optional
  The axis to operate on; if None, act on the flattened array
  end : {'cut', 'wrap', 'pad'}, optional
  What to do with the last frame, if the array is not evenly
  divisible into pieces.
 
  - 'cut'   Simply discard the extra values
  - 'wrap'  Copy values from the beginning of the array
  - 'pad'   Pad with a constant value
 
  endvalue : object
  The value to use for end='pad'
 
 
  Examples
  --------
  >>> segment_axis(arange(10), 4, 2)
  array([[0, 1, 2, 3],
 [2, 3, 4, 5],
 [4, 5, 6, 7],
 [6, 7, 8, 9]])
 
 
  Is there any interest in having this function available?

 I'd use it, though haven't looked at the details of this api per se yet.

 rolling_window or shingle are better names.

 It should probably be documented and implemented to return a view when
 possible (using stride tricks). Along with a note that whether this is
 possible depends heavily on 32- vs. 64-bitness.


 I believe it does return views when possible. There are two patches
 attached to the issue, one for the function and another for tests. So here
 is an easy commit for someone ;) The original author seems to be Anne
 Archibald, who should be mentioned if this is put in.

  Where does 'shingle' come from? I can see the analogy but haven't seen
 that as a technical term.


 In an inkjet printing pipeline, one of the last steps is to split the
 image into the several passes that will be needed to physically print it.
 This is often done with a tiled, non-overlapping mask, known as a
 shingling mask.


Just for reference, scikit-image has a similar function (w/o padding)
called `view_as_blocks`:

http://scikit-image.org/docs/0.9.x/api/skimage.util.html#view-as-blocks

(and a rolling-window version called `view_as_windows`).

Cheers,
-Tony
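
(A minimal sketch of the stride-tricks idea mentioned above, for the 1-D
'cut' case; not from the original thread, and the function name is made up.)

#~~~
import numpy as np
from numpy.lib.stride_tricks import as_strided

def segment_axis_1d(a, length, overlap=0):
    # Number of whole frames that fit, i.e. the 'cut' policy.
    step = length - overlap
    n_frames = 1 + (a.size - length) // step
    # Each frame is a view into `a`; no data is copied.
    return as_strided(a, shape=(n_frames, length),
                      strides=(a.strides[0] * step, a.strides[0]))

print(segment_axis_1d(np.arange(10), 4, 2))
# [[0 1 2 3]
#  [2 3 4 5]
#  [4 5 6 7]
#  [6 7 8 9]]
#~~~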


[Numpy-discussion] ANN: scikits-image 0.7.0 release

2012-09-30 Thread Tony Yu
Announcement: scikits-image 0.7.0
=================================

We're happy to announce the 7th version of scikits-image!

Scikits-image is an image processing toolbox for SciPy that includes
algorithms for segmentation, geometric transformations, color space
manipulation, analysis, filtering, morphology, feature detection, and more.

For more information, examples, and documentation, please visit our website

  http://skimage.org


New Features
------------

It's been only 3 months since scikits-image 0.6 was released, but in that
short time, we've managed to add plenty of new features and enhancements,
including
- Geometric image transforms
- 3 new image segmentation routines (Felzenszwalb, Quickshift, SLIC)
- Local binary patterns for texture characterization
- Morphological reconstruction
- Polygon approximation
- CIE Lab color space conversion
- Image pyramids
- Multispectral support in random walker segmentation
- Slicing, concatenation, and natural sorting of image collections
- Perimeter and coordinates measurements in regionprops
- An extensible image viewer based on Qt and Matplotlib, with plugins for
  edge detection, line-profiling, and viewing image collections

Plus, this release adds a number of bug fixes, new examples, and performance
enhancements.


Contributors to this release
----------------------------

This release was only possible due to the efforts of many contributors, both
new and old.

- Andreas Mueller
- Andreas Wuerl
- Andy Wilson
- Brian Holt
- Christoph Gohlke
- Dharhas Pothina
- Emmanuelle Gouillart
- Guillaume Gay
- Josh Warner
- James Bergstra
- Johannes Schonberger
- Jonathan J. Helmus
- Juan Nunez-Iglesias
- Leon Tietz
- Marianne Corvellec
- Matt McCormick
- Neil Yager
- Nicolas Pinto
- Nicolas Poilvert
- Pavel Campr
- Petter Strandmark
- Stefan van der Walt
- Tim Sheerman-Chase
- Tomas Kazmar
- Tony S Yu
- Wei Li


Re: [Numpy-discussion] Synonym standards

2012-07-27 Thread Tony Yu
On Fri, Jul 27, 2012 at 11:39 AM, Derek Homeier 
de...@astro.physik.uni-goettingen.de wrote:

 On 27.07.2012, at 3:27PM, Benjamin Root wrote:

   I would prefer not to use:  from xxx import *,
  
   because of the name pollution.
  
   The name  convention that I copied above facilitates avoiding the
 pollution.
  
   In the same spirit, I've used:
   import pylab as plb
 
  But in that same spirit, using np and plt separately is preferred.
 
 
  Namespaces are one honking great idea -- let's do more of those!
  from http://www.python.org/dev/peps/pep-0020/
 
  Absolutely correct.  The namespace pollution is exactly why we encourage
 converts to move over from the pylab mode to separating out the numpy and
 pyplot namespaces.  There are very subtle issues that arise when doing
 "from pylab import *", such as overriding the built-ins "any" and "all".  The
 only real advantage of the pylab mode over separating out numpy and pyplot
 is conciseness, which many matlab users expect at first.

 It unfortunately also comes with the convenience of using the ipython
 --pylab mode - does anyone know how to turn the import * part off, or how
 to create a similar working environment with ipython that does keep
 namespaces clean?

 Cheers,
 Derek



There's a config flag that you can add to your ipython profile:

c.TerminalIPythonApp.pylab_import_all = False

For example, my profile is in ~/.ipython/profile_default/ipython_config.py

Cheers,
-Tony


Re: [Numpy-discussion] convert any non square matrix in to square matrix using numpy

2012-06-18 Thread Tony Yu
On Mon, Jun 18, 2012 at 11:55 AM, bob tnur bobtnu...@gmail.com wrote:

 Hi,
  how can I convert (by adding zeros) any non-square numpy matrix into a
  square matrix using numpy? And then how do I find the minimum number in
  each row, excluding the zeros added to make the matrix square? ;)

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


Hi Bob,

I'm not quite sure what you're looking for (esp. the second question), but
maybe something like the following?

#~~~ example code
import numpy as np

nonsquare = np.random.random(size=(3, 5))

# Embed the non-square matrix in the upper-left corner of a zero-padded
# square matrix.
M, N = nonsquare.shape
width = max(M, N)
square = np.zeros((width, width))
square[:M, :N] = nonsquare

# Row minima of the original data, which ignores the added zeros.
min_rows = np.min(nonsquare, axis=1)
#~~~

-Tony


Re: [Numpy-discussion] why not zerodivision error?

2012-05-20 Thread Tony Yu
On Sun, May 20, 2012 at 3:47 AM, eat e.antero.ta...@gmail.com wrote:

 Hi,

 On Sun, May 20, 2012 at 10:21 AM, Chao YUE chaoyue...@gmail.com wrote:

 Dear all,

  could anybody give one sentence about this? Why didn't I get a
  ZeroDivisionError in the loop, but when I do the division explicitly, I
  do get a ZeroDivisionError? thanks.

 In [7]: for i in np.arange(-10,10):
 print 1./i
...:
 -0.1
 -0.111111111111
 -0.125
 -0.142857142857
 -0.166666666667
 -0.2
 -0.25
 -0.333333333333
 -0.5
 -1.0
 inf
 1.0
 0.5
 0.333333333333
 0.25
 0.2
 0.166666666667
 0.142857142857
 0.125
 0.111111111111

 In [8]: 1/0.

 ---------------------------------------------------------------------------
 ZeroDivisionError                         Traceback (most recent call last)
 /mnt/f/data/DROUGTH/<ipython-input-8-7e0bf5b37da6> in <module>()
 ----> 1 1/0.


[snip]


 You may like to read more here:
 http://docs.scipy.org/doc/numpy/reference/generated/numpy.seterr.html#numpy.seterr


[snip]


 My 2 cents,
 -eat



Also, note that the original errors were raised when working with pure
Python types (ints and floats), while in the loop you were dividing by
numpy scalars, which handle division-by-zero differently.

Best,
-Tony
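
(A minimal sketch of the difference described above; not from the original
mail. The NumPy side follows the seterr/errstate policy:)

#~~~
import numpy as np

# Pure Python floats raise immediately:
try:
    1 / 0.0
except ZeroDivisionError as e:
    print('Python float:', e)

# NumPy scalars follow the error policy instead:
with np.errstate(divide='raise'):
    try:
        1.0 / np.float64(0.0)
    except FloatingPointError as e:
        print('NumPy scalar:', e)

with np.errstate(divide='ignore'):
    print(1.0 / np.float64(0.0))   # inf, no exception
#~~~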


Re: [Numpy-discussion] Status of np.bincount

2012-05-03 Thread Tony Yu
On Thu, May 3, 2012 at 9:57 AM, Robert Kern robert.k...@gmail.com wrote:

 On Thu, May 3, 2012 at 2:50 PM, Robert Elsner ml...@re-factory.de wrote:
 
  On 03.05.2012 15:45, Robert Kern wrote:
  On Thu, May 3, 2012 at 2:24 PM, Robert Elsner ml...@re-factory.de
 wrote:
  Hello Everybody,
 
  is there any news on the status of np.bincount with respect to big
  numbers? It seems I have just been bitten by #225. Is there an
 efficient
  way around? I found the np.histogram function painfully slow.
 
  Below a simple script, that demonstrates bincount failing with a memory
  error on big numbers
 
  import numpy as np
 
  x = np.array((30e9,)).astype(int)
  np.bincount(x)
 
 
  Any good idea how to work around it. My arrays contain somewhat 50M
  entries in the range from 0 to 30e9. And I would like to have them
  bincounted...
 
  You need a sparse data structure, then. Are you sure you even have
 duplicates?
 
  Anyways, I won't work out all of the details, but let me sketch
  something that might get you your answers. First, sort your array.
  Then use np.not_equal(x[:-1], x[1:]) as a mask on np.arange(1,len(x))
  to find the indices where each sorted value changes over to the next.
  The np.diff() of that should give you the size of each. Use np.unique
  to get the sorted unique values to match up with those sizes.
 
  Fixing all of the off-by-one errors and dealing with the boundary
  conditions correctly is left as an exercise for the reader.
 
 
  ?? I suspect that this mail was meant to end up in the thread about
  sparse array data?

 No, I am responding to you.


Hi Robert (Elsner),

Just to expand a bit on Robert Kern's explanation: Your problem is only
partly related to Ticket #225 (http://projects.scipy.org/numpy/ticket/225).
Even if that is fixed, you won't be able to call `bincount` with an array
containing values as large as `30e9` unless you implement something using
sparse arrays, because `bincount` wants to return an array that's
`30e9 + 1` in length, which isn't going to happen.

-Tony
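
(Filling in Robert Kern's sketch above; this is my reading of it, with the
boundary conditions worked out, and is not from the original mail:)

#~~~
import numpy as np

def sparse_bincount(x):
    # Sort, then find the indices where the sorted values change.
    x = np.sort(x)
    boundaries = np.flatnonzero(x[1:] != x[:-1]) + 1
    # Bracket the first and last runs; the run lengths are the diffs.
    edges = np.concatenate(([0], boundaries, [len(x)]))
    counts = np.diff(edges)
    values = x[edges[:-1]]
    return values, counts

x = np.array([3, 30000000000, 3, 7, 30000000000])
print(sparse_bincount(x))
# (array([3, 7, 30000000000]), array([2, 1, 2]))
#~~~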


Re: [Numpy-discussion] (no subject)

2012-04-20 Thread Tony Yu
On Fri, Apr 20, 2012 at 2:15 PM, Andre Martel soucoupevola...@yahoo.com wrote:

 What would be the best way to remove the maximum from a cube and
 collapse the remaining elements along the z-axis ?
 For example, I want to reduce Cube to NewCube:

  Cube
 array([[[  13,   2,   3,  42],
 [  5, 100,   7,   8],
 [  9,   1,  11,  12]],

[[ 25,   4,  15,   1],
 [ 17,  30,   9,  20],
 [ 21,   2,  23,  24]],

[[ 1,   2,  27,  28],
 [ 29,  18,  31,  32],
 [ -1,   3,  35,   4]]])

 NewCube

 array([[[  13,   2,   3,  1],
 [  5, 30,   7,   8],
 [  9,   1,  11,  12]],

[[ 1,   2,  15,  28],
 [ 17,  18,  9,  20],
 [ -1,   2,  23,   4]]])

 I tried with argmax() and then roll() and delete() but these
 all work on 1-D arrays only. Thanks.


Actually, those commands do work with n-dimensional arrays, but you'll have
to specify the axis (the default for all these functions is `axis=None`,
which tells the function to operate on the flattened array). If you don't
care about the order of the collapse, you can just do a simple sort (and
drop the last---i.e. max---sub-array):

 >>> np.sort(cube, axis=0)[:2]

If you need to keep the order, you can probably use some combination of
`np.argsort` and `np.choose`.

Cheers,
-Tony
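
(For completeness, a sketch of the order-preserving variant; not from the
original reply. Note that np.take_along_axis and kind='stable' were added
to NumPy well after this thread:)

#~~~
import numpy as np

cube = np.array([[[13, 2, 3, 42], [5, 100, 7, 8], [9, 1, 11, 12]],
                 [[25, 4, 15, 1], [17, 30, 9, 20], [21, 2, 23, 24]],
                 [[1, 2, 27, 28], [29, 18, 31, 32], [-1, 3, 35, 4]]])

idx_max = cube.argmax(axis=0)                # per-column index of the max
i = np.arange(cube.shape[0])[:, None, None]
keep = i != idx_max                          # True for the kept entries

# A stable argsort puts the kept (False) rows first, in original order:
order = np.argsort(~keep, axis=0, kind='stable')[:2]
print(np.take_along_axis(cube, order, axis=0))   # matches NewCube above
#~~~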


Re: [Numpy-discussion] adding a cut function to numpy

2012-04-16 Thread Tony Yu
On Mon, Apr 16, 2012 at 5:27 PM, Skipper Seabold jsseab...@gmail.com wrote:

 Hi,

 I have a pull request here [1] to add a cut function similar to R's
 [2]. It seems there are often requests for similar functionality. It's
 something I'm making use of for my own work and would like to use in
 statsmodels and in generating instances of pandas' Factor class, but
 is this generally something people would find useful to warrant its
 inclusion in numpy? It will be even more useful I think with an enum
 dtype in numpy.

 If you aren't familiar with cut, here's a potential use case. Going
 from a continuous to a categorical variable.

 Given a continuous variable

 [~/]
 [8]: age = np.random.randint(15,70, size=100)

 [~/]
 [9]: age
 [9]:
 array([58, 32, 20, 25, 34, 69, 52, 27, 20, 23, 51, 61, 39, 54, 39, 44, 27,
   17, 29, 18, 66, 25, 44, 21, 54, 32, 50, 60, 25, 41, 68, 25, 42, 69,
   50, 69, 24, 69, 69, 48, 30, 20, 18, 15, 50, 48, 44, 27, 57, 52, 40,
   27, 58, 45, 44, 32, 54, 19, 36, 32, 55, 17, 55, 15, 19, 29, 22, 25,
   36, 44, 29, 53, 37, 31, 51, 39, 21, 66, 25, 26, 20, 17, 41, 50, 27,
   23, 62, 69, 65, 34, 38, 61, 39, 34, 38, 35, 18, 36, 29, 26])

 Give me a variable where people are in age groups (lower bound is not
 inclusive)

 [~/]
 [10]: groups = [14, 25, 35, 45, 55, 70]

 [~/]
 [11]: age_cat = np.cut(age, groups)

 [~/]
 [12]: age_cat
 [12]:
 array([5, 2, 1, 1, 2, 5, 4, 2, 1, 1, 4, 5, 3, 4, 3, 3, 2, 1, 2, 1, 5, 1, 3,
   1, 4, 2, 4, 5, 1, 3, 5, 1, 3, 5, 4, 5, 1, 5, 5, 4, 2, 1, 1, 1, 4, 4,
   3, 2, 5, 4, 3, 2, 5, 3, 3, 2, 4, 1, 3, 2, 4, 1, 4, 1, 1, 2, 1, 1, 3,
   3, 2, 4, 3, 2, 4, 3, 1, 5, 1, 2, 1, 1, 3, 4, 2, 1, 5, 5, 5, 2, 3, 5,
   3, 2, 3, 2, 1, 3, 2, 2])

 Skipper

 [1] https://github.com/numpy/numpy/pull/248
 [2] http://stat.ethz.ch/R-manual/R-devel/library/base/html/cut.html


Is this the same as `np.searchsorted` (with reversed arguments)?

In [292]: np.searchsorted(groups, age)
Out[292]:
array([5, 2, 1, 1, 2, 5, 4, 2, 1, 1, 4, 5, 3, 4, 3, 3, 2, 1, 2, 1, 5, 1, 3,
   1, 4, 2, 4, 5, 1, 3, 5, 1, 3, 5, 4, 5, 1, 5, 5, 4, 2, 1, 1, 1, 4, 4,
   3, 2, 5, 4, 3, 2, 5, 3, 3, 2, 4, 1, 3, 2, 4, 1, 4, 1, 1, 2, 1, 1, 3,
   3, 2, 4, 3, 2, 4, 3, 1, 5, 1, 2, 1, 1, 3, 4, 2, 1, 5, 5, 5, 2, 3, 5,
   3, 2, 3, 2, 1, 3, 2, 2])


Re: [Numpy-discussion] adding a cut function to numpy

2012-04-16 Thread Tony Yu
On Mon, Apr 16, 2012 at 6:01 PM, Skipper Seabold jsseab...@gmail.com wrote:

 On Mon, Apr 16, 2012 at 5:51 PM, Tony Yu tsy...@gmail.com wrote:
 
 
  On Mon, Apr 16, 2012 at 5:27 PM, Skipper Seabold jsseab...@gmail.com
  wrote:
 
  Hi,
 
  I have a pull request here [1] to add a cut function similar to R's
  [2]. It seems there are often requests for similar functionality. It's
  something I'm making use of for my own work and would like to use in
   statsmodels and in generating instances of pandas' Factor class, but
  is this generally something people would find useful to warrant its
  inclusion in numpy? It will be even more useful I think with an enum
  dtype in numpy.
 
  If you aren't familiar with cut, here's a potential use case. Going
  from a continuous to a categorical variable.
 
  Given a continuous variable
 
  [~/]
  [8]: age = np.random.randint(15,70, size=100)
 
  [~/]
  [9]: age
  [9]:
  array([58, 32, 20, 25, 34, 69, 52, 27, 20, 23, 51, 61, 39, 54, 39, 44,
 27,
17, 29, 18, 66, 25, 44, 21, 54, 32, 50, 60, 25, 41, 68, 25, 42,
 69,
50, 69, 24, 69, 69, 48, 30, 20, 18, 15, 50, 48, 44, 27, 57, 52,
 40,
27, 58, 45, 44, 32, 54, 19, 36, 32, 55, 17, 55, 15, 19, 29, 22,
 25,
36, 44, 29, 53, 37, 31, 51, 39, 21, 66, 25, 26, 20, 17, 41, 50,
 27,
23, 62, 69, 65, 34, 38, 61, 39, 34, 38, 35, 18, 36, 29, 26])
 
  Give me a variable where people are in age groups (lower bound is not
  inclusive)
 
  [~/]
  [10]: groups = [14, 25, 35, 45, 55, 70]
 
  [~/]
  [11]: age_cat = np.cut(age, groups)
 
  [~/]
  [12]: age_cat
  [12]:
  array([5, 2, 1, 1, 2, 5, 4, 2, 1, 1, 4, 5, 3, 4, 3, 3, 2, 1, 2, 1, 5, 1,
  3,
1, 4, 2, 4, 5, 1, 3, 5, 1, 3, 5, 4, 5, 1, 5, 5, 4, 2, 1, 1, 1, 4,
 4,
3, 2, 5, 4, 3, 2, 5, 3, 3, 2, 4, 1, 3, 2, 4, 1, 4, 1, 1, 2, 1, 1,
 3,
3, 2, 4, 3, 2, 4, 3, 1, 5, 1, 2, 1, 1, 3, 4, 2, 1, 5, 5, 5, 2, 3,
 5,
3, 2, 3, 2, 1, 3, 2, 2])
 
  Skipper
 
  [1] https://github.com/numpy/numpy/pull/248
  [2] http://stat.ethz.ch/R-manual/R-devel/library/base/html/cut.html
 
 
  Is this the same as `np.searchsorted` (with reversed arguments)?
 
  In [292]: np.searchsorted(groups, age)
  Out[292]:
  array([5, 2, 1, 1, 2, 5, 4, 2, 1, 1, 4, 5, 3, 4, 3, 3, 2, 1, 2, 1, 5, 1,
 3,
 1, 4, 2, 4, 5, 1, 3, 5, 1, 3, 5, 4, 5, 1, 5, 5, 4, 2, 1, 1, 1, 4,
 4,
 3, 2, 5, 4, 3, 2, 5, 3, 3, 2, 4, 1, 3, 2, 4, 1, 4, 1, 1, 2, 1, 1,
 3,
 3, 2, 4, 3, 2, 4, 3, 1, 5, 1, 2, 1, 1, 3, 4, 2, 1, 5, 5, 5, 2, 3,
 5,
 3, 2, 3, 2, 1, 3, 2, 2])
 

 That's news to me, and I don't know how I missed it.


Actually, the only reason I remember searchsorted is because I also
implemented a variant of it before finding that it existed.


 It looks like
 there is overlap, but cut will also do binning for equal width
 categorization

 [~/]
 [21]: np.cut(age, 6)
 [21]:
 array([5, 2, 1, 2, 3, 6, 5, 2, 1, 1, 4, 6, 3, 5, 3, 4, 2, 1, 2, 1, 6, 2, 4,
   1, 5, 2, 4, 5, 2, 3, 6, 2, 3, 6, 4, 6, 1, 6, 6, 4, 2, 1, 1, 1, 4, 4,
   4, 2, 5, 5, 3, 2, 5, 4, 4, 2, 5, 1, 3, 2, 5, 1, 5, 1, 1, 2, 1, 2, 3,
4, 2, 5, 3, 2, 4, 3, 1, 6, 2, 2, 1, 1, 3, 4, 2, 1, 6, 6, 6, 3, 3, 6,
   3, 3, 3, 3, 1, 3, 2, 2])

 and explicitly handles the case with constant x

 [~/]
 [26]: x = np.ones(100)*6

 [~/]
 [27]: np.cut(x, 5)
 [27]:
 array([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
   3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
   3, 3, 3, 3, 3, 3, 3, 3])

 I guess I could patch searchsorted. Thoughts?

 Skipper


Hmm... I'm not sure these other call signatures map as well to the
name searchsorted; i.e., cut makes more sense in these cases.

On the other hand, it seems these cases could be handled by `np.digitize`
(although they aren't currently). Hmm,... why doesn't the above call to
`cut` match (what I assume to be) the equivalent call to `np.digitize`:

In [302]: np.digitize(age, np.linspace(age.min(), age.max(), 6))
Out[302]:
array([4, 2, 1, 1, 2, 6, 4, 2, 1, 1, 4, 5, 3, 4, 3, 3, 2, 1, 2, 1, 5, 1, 3,
   1, 4, 2, 4, 5, 1, 3, 5, 1, 3, 6, 4, 6, 1, 6, 6, 4, 2, 1, 1, 1, 4, 4,
   3, 2, 4, 4, 3, 2, 4, 3, 3, 2, 4, 1, 2, 2, 4, 1, 4, 1, 1, 2, 1, 1, 2,
   3, 2, 4, 3, 2, 4, 3, 1, 5, 1, 2, 1, 1, 3, 4, 2, 1, 5, 6, 5, 2, 3, 5,
   3, 2, 3, 2, 1, 2, 2, 2])

It's unfortunate that `digitize` and `histogram` have one call signature,
but `searchsorted` has the reverse; in that sense, I like `cut` better.

Cheers
-Tony
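
(To make the convention mismatch concrete; not from the original mail.
digitize's right= keyword only arrived in a later NumPy release:)

#~~~
import numpy as np

age = np.array([15, 25, 26, 70])
groups = [14, 25, 35, 45, 55, 70]

# cut-style bins are open on the left: (14, 25], (25, 35], ...
# digitize's default is the opposite convention: [14, 25), [25, 35), ...
print(np.searchsorted(groups, age))           # [1 1 2 5]
print(np.digitize(age, groups))               # [1 2 2 6], differs at 25, 70
print(np.digitize(age, groups, right=True))   # [1 1 2 5], matches
#~~~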


Re: [Numpy-discussion] Slice specified axis

2012-04-09 Thread Tony Yu
On Mon, Apr 9, 2012 at 12:22 PM, Benjamin Root ben.r...@ou.edu wrote:



 On Mon, Apr 9, 2012 at 12:14 PM, Jonathan T. Niehof jnie...@lanl.gov wrote:

 On 04/06/2012 06:54 AM, Benjamin Root wrote:

  Take a peek at how np.gradient() does it. It creates a list of None with
  a length equal to the number of dimensions, and then inserts a slice
  object in the appropriate spot in the list.

 List of slice(None), correct? At least that's what I see in the source,
 and:

   a = numpy.array([[1,2],[3,4]])
   operator.getitem(a, (None, slice(1, 2)))
 array([[[3, 4]]])
   operator.getitem(a, (slice(None), slice(1, 2)))
 array([[2],
[4]])


 Correct, sorry, I was working from memory.

 Ben Root


I guess I wasn't reading very carefully and assumed that you meant a list
of `slice(None)` instead of a list of `None`. In any case, both your
solution and Matthew's solution work (and both are more readable than my
original implementation).

After I got everything cleaned up (and wrote documentation and tests), I
found out that numpy already has a function to do *exactly* what I wanted
in the first place: `np.split` (the slicing was just one component of
this). I was initially misled by the docstring (see
https://github.com/numpy/numpy/pull/249), but with a list of indices, you
can split an array into subarrays of
variable length (I wanted to use this to save and load ragged arrays).
Well, I guess it was a learning experience, at least.

In case anyone is wondering about the original question, `np.split` (and
`np.array_split`) uses `np.swapaxes` to specify the slicing axis.

Thanks for all your help.
-Tony
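
(A sketch of the ragged-array use mentioned above; not from the original
mail, and the variable names are made up:)

#~~~
import numpy as np

# Store a ragged collection as one flat array plus its split points...
arrays = [np.arange(3), np.arange(5), np.arange(2)]
flat = np.concatenate(arrays)
split_points = np.cumsum([len(a) for a in arrays])[:-1]

# ...then recover the variable-length subarrays with np.split.
recovered = np.split(flat, split_points)
print([a.tolist() for a in recovered])
# [[0, 1, 2], [0, 1, 2, 3, 4], [0, 1]]
#~~~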


Re: [Numpy-discussion] Slice specified axis

2012-04-06 Thread Tony Yu
On Fri, Apr 6, 2012 at 8:54 AM, Benjamin Root ben.r...@ou.edu wrote:



 On Friday, April 6, 2012, Val Kalatsky wrote:


 The only slicing short-cut I can think of is the Ellipsis object, but
 it's not going to help you much here.
 The alternatives that come to my mind are (1) manipulation of shape
 directly and (2) building a string and running eval on it.
 Your solution is better than (1), and (2) is a horrible hack, so your
 solution wins again.
 Cheers
 Val


 Take a peek at how np.gradient() does it.  It creates a list of None with
 a length equal to the number of dimensions, and then inserts a slice object
 in the appropriate spot in the list.

 Cheers!
 Ben Root


Hmm, it looks like my original implementation wasn't too far off. Thanks
for the tip!

-Tony


[Numpy-discussion] Slice specified axis

2012-04-05 Thread Tony Yu
Is there a way to slice an nd-array along a specified axis? It's easy to
slice along a fixed axis, e.g.:

axis = 0:
 >>> array[start:end]

axis = 1:
 >>> array[:, start:end]
...

But I need to do this inside of a function that accepts arrays of any
dimension, and the user can operate on any axis of the array. My current
solution looks like the following:

 >>> aslice = lambda axis, s, e: (slice(None),) * axis + (slice(s, e),)
 >>> array[aslice(axis, start, end)]

which works, but I'm guessing that numpy has a more readable way of doing
this that I've overlooked.

Thanks,
-Tony


Re: [Numpy-discussion] how to cite 1Xn array as nX1 array?

2012-01-27 Thread Tony Yu
On Fri, Jan 27, 2012 at 9:28 AM, Paul Anton Letnes 
paul.anton.let...@gmail.com wrote:


 On 27. jan. 2012, at 14:52, Chao YUE wrote:

  Dear all,
 
  suppose I have a ndarray a:
 
  In [66]: a
  Out[66]: array([0, 1, 2, 3, 4])
 
  how can I use it as a 5X1 array without doing a=a.reshape(5,1)?

 Several ways, this is one, although not much simpler.
 In [6]: a
 Out[6]: array([0, 1, 2, 3, 4])

 In [7]: a.shape = 5, 1

 In [8]: a
 Out[8]:
 array([[0],
   [1],
   [2],
   [3],
   [4]])

 Paul


I'm assuming your issue with that call to reshape is that you need to know
the dimensions beforehand. An alternative is to call:

 >>> a.reshape(-1, 1)

The -1 allows numpy to infer the length based on the given sizes.

Another alternative is:

 >>> a[:, np.newaxis]

-Tony


Re: [Numpy-discussion] Counting the Colors of RGB-Image

2012-01-15 Thread Tony Yu
On Sun, Jan 15, 2012 at 10:45 AM, a...@pdauf.de wrote:


 Counting the Colors of RGB-Image,
 name it im0 with im0.shape = 2500,3500,3
 with this code:

 tab0 = zeros( (256,256,256) , dtype=int)
 tt = im0.view()
 tt.shape = -1,3
 for r,g,b in tt:
  tab0[r,g,b] += 1

 Question:

 Is there a faster way in numpy to get this result?


 MfG elodw


Assuming that your image is made up of integer values (which I guess they'd
have to be if you're indexing into `tab0`), then you could write:

 >>> rgb_unique = set(tuple(rgb) for rgb in tt)

I'm not sure if it's any faster than your loop, but I would assume it is.

-Tony
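
(For reference, a fully vectorized version of the counting loop, assuming
8-bit channels; not from the original reply:)

#~~~
import numpy as np

im0 = np.random.randint(0, 256, size=(2500, 3500, 3))

# Pack each RGB triple into a single 24-bit key, then histogram the keys.
tt = im0.reshape(-1, 3).astype(np.int64)
keys = (tt[:, 0] << 16) | (tt[:, 1] << 8) | tt[:, 2]
tab0 = np.bincount(keys, minlength=256**3).reshape(256, 256, 256)
#~~~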


Re: [Numpy-discussion] idea of optimisation?

2011-12-06 Thread Tony Yu
On Tue, Dec 6, 2011 at 2:51 AM, Xavier Barthelemy xab...@gmail.com wrote:

 ok let me be more precise

 I have an Z array which is the elevation
 from this I extract a discrete array of Zero Crossing, and another
 discrete array of Crests.
 len(crest) is different than len(Xzeros). I have a threshold method to
 detect my valid crests, and sometimes there are 2 crests between two
 zero-crossing (grouping effect)

 Crest and Zeros are 2 different arrays, with positions. Example:
 Zeros=[1,2,3,4] Crests=[1.5,1.7,3.5]


 and yes arrays can be sorted. not a problm with this.

 Xavier

I may be oversimplifying this, but does searchsorted do what you want?

In [314]: xzeros=[1,2,3,4]; xcrests=[1.5,1.7,3.5]

In [315]: np.searchsorted(xzeros, xcrests)
Out[315]: array([1, 1, 3])

This returns the indexes of xzeros to the left of xcrests.

-Tony


Re: [Numpy-discussion] Reading automatically all the parameters from a file

2011-11-30 Thread Tony Yu
On Wed, Nov 30, 2011 at 1:49 PM, Neal Becker ndbeck...@gmail.com wrote:

 My suggestion is: don't.

 It's easier to script runs if you read parameters from the command line.
 I recommend argparse.


I think setting parameters in a config file and setting them on the
command line both have their merits. I like to combine ConfigObj with
argparse; something like:

#~~~
import argparse
import configobj

parser = argparse.ArgumentParser()
# ... add arguments to parser here ...

cfg = configobj.ConfigObj('params_file.cfg')
parser.set_defaults(**dict(cfg))  # ConfigObj instances are dict-like
#~~~

Then call parser.parse_args(); values specified on the command line will
override the defaults pulled from the config file.

Cheers,
-Tony


[Numpy-discussion] Type checking inconsistency

2011-10-16 Thread Tony Yu
Hi,

I noticed a type-checking inconsistency between assignments using slicing
and fancy-indexing. The first will happily cast on assignment (regardless of
type), while the second will throw a type error if there's reason to believe
the casting will be unsafe. I'm not sure which would be the correct
behavior, but the inconsistency is surprising.

Best,
-Tony

Example:

 import numpy as np
 a = np.arange(10)
 b = np.ones(10, dtype=np.uint8)

# this runs without error
 b[:5] = a[:5]

 mask = a > 5
 b[mask] = b[mask]
TypeError: array cannot be safely cast to required type


Re: [Numpy-discussion] Type checking inconsistency

2011-10-16 Thread Tony Yu
On Sun, Oct 16, 2011 at 12:39 PM, Tony Yu tsy...@gmail.com wrote:

 Hi,

 I noticed a type-checking inconsistency between assignments using slicing
 and fancy-indexing. The first will happily cast on assignment (regardless of
 type), while the second will throw a type error if there's reason to believe
 the casting will be unsafe. I'm not sure which would be the correct
 behavior, but the inconsistency is surprising.

 Best,
 -Tony

 Example:

  import numpy as np
  a = np.arange(10)
  b = np.ones(10, dtype=np.uint8)

 # this runs without error
  b[:5] = a[:5]

  mask = a > 5
  b[mask] = b[mask]
 TypeError: array cannot be safely cast to required type

And I just noticed that 1D arrays behave differently than 2D arrays. If you
replace the above definitions of a, b with:

 a = np.arange(10)[:, np.newaxis]
 b = np.ones((10, 1), dtype=np.uint8)

The rest of the code will run without error.


Re: [Numpy-discussion] Type checking inconsistency

2011-10-16 Thread Tony Yu
On Sun, Oct 16, 2011 at 12:49 PM, Pauli Virtanen p...@iki.fi wrote:

 (16.10.2011 18:39), Tony Yu wrote:
import numpy as np
a = np.arange(10)
b = np.ones(10, dtype=np.uint8)
 
  # this runs without error
b[:5] = a[:5]
 
   mask = a > 5
b[mask] = b[mask]
  TypeError: array cannot be safely cast to required type

 Seems to be fixed in Git master

   import numpy as np
   a = np.arange(10)
   b = np.ones(10, dtype=np.uint8)
   mask = a  5
   b[mask] = b[mask]
   b[mask] = a[mask]
   np.__version__
 '2.0.0.dev-1dc1877'


(I see you noticed the typo in my original example: b -> a). Agreed, I'm
getting this error with an old master. I just tried master and it worked
fine, but the maintenance branch ('1.6.2.dev-396dbb9') does still have this
issue.
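
(For reference, not from the original mail: the safe-casting rule behind
that TypeError can be queried directly:)

#~~~
import numpy as np

print(np.can_cast(np.int64, np.uint8))             # False, hence the error
print(np.can_cast(np.int64, np.uint8, 'unsafe'))   # True
#~~~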


Re: [Numpy-discussion] question about subtraction and shape

2011-09-01 Thread Tony Yu
On Thu, Sep 1, 2011 at 5:33 PM, Jonas Wallin jonas.walli...@gmail.com wrote:

 Hello,

 I implemented the following line of code:

 Gami[~index0].shape  # (100,)
 sigma.shape          # (1,1)
 Gami[~index0] = Gam[~index0] - sigma**2

 I get the error message:
 *** ValueError: array is not broadcastable to correct shape

 apparently
 *temp* = Gam[~index0] - sigma**2
 *temp*.shape --> (1,100)

 Which seems strange to me. Is there any reason why *temp* gets this shape?


Jonas,

Numpy changes the shape of the output array based on certain broadcasting
rules (See http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
Broadcasting is really useful when you need it, but it can be confusing when
you don't.

Best,
-Tony
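
(For reference, a minimal sketch of the broadcasting at work here; not from
the original mail:)

#~~~
import numpy as np

a = np.ones(100)       # shape (100,), like Gam[~index0] above
s = np.ones((1, 1))    # shape (1, 1), like sigma above

# Broadcasting left-pads the smaller shape, (100,) -> (1, 100), then
# stretches the length-1 axes, so the difference has shape (1, 100):
print((a - s**2).shape)           # (1, 100)

# A scalar keeps the original shape:
print((a - s.item()**2).shape)    # (100,)
#~~~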


[Numpy-discussion] Numpydoc warnings for methods

2011-07-17 Thread Tony Yu
I'm building documentation using Sphinx, and it seems that numpydoc is
raising a lot of warnings. Specifically, the warnings look like "failed to
import method_name", "toctree references unknown document u'method_name'",
and "toctree contains reference to nonexisting document 'method_name'"---
one set for each method defined. The example below reproduces the issue on
my system (Sphinx 1.0.7, numpy HEAD). These warnings appear in my build of
the numpy docs, as well.

Removing numpydoc from the list of Sphinx extensions gets rid of these
warnings (but, of course, adds new warnings if headings for 'Parameters',
'Returns', etc. are present).

Am I doing something wrong here?

Thanks,
-Tony

test_sphinx/foo.py:
===

class Bar(object):
    """Bar docstring"""

    def baz(self):
        """baz docstring"""
        pass

test_sphinx/doc/source/foo.rst:
===

.. autoclass:: foo.Bar
   :members:

Warnings from build:


/Users/Tony/Desktop/test_sphinx/doc/source/foo.rst:13: (WARNING/2) failed to
import baz

/Users/Tony/Desktop/test_sphinx/doc/source/foo.rst:13: (WARNING/2) toctree
references unknown document u'baz'

/Users/Tony/Desktop/test_sphinx/doc/source/foo.rst:: WARNING: toctree
contains reference to nonexisting document 'baz'


Re: [Numpy-discussion] Numpydoc warnings for methods

2011-07-17 Thread Tony Yu
On Sun, Jul 17, 2011 at 3:35 PM, Ralf Gommers
ralf.gomm...@googlemail.com wrote:


 On Sun, Jul 17, 2011 at 7:15 PM, Tony Yu tsy...@gmail.com wrote:


 Am I doing something wrong here?

 You're not, it's a Sphinx bug that Pauli already has a fix for. See
 http://projects.scipy.org/numpy/ticket/1772

 Ralf


I thought I searched pretty thoroughly, but apparently my google skills are
lacking. Thanks for the link!
-Tony


Re: [Numpy-discussion] Using interpolate with zero-rank array raises error

2009-07-17 Thread Tony Yu



Date: Thu, 16 Jul 2009 23:37:58 -0400
From: Ralf Gommers ralf.gomm...@googlemail.com

It seems to me that there are quite a few other functions that will give
errors with 0-D arrays (apply_along/over_axis are two that come to mind).

There is nothing to interpolate so I'm not surprised.


Hmm, I don't quite understand. In the example below, the 0-D array
(`x0`) gives the x-value(s) where you want interpolated values. This
shouldn't require a non-scalar, and in fact, interp currently accepts
python scalars (but not Numpy scalars).

If the 0-D array replaced `x` and `y`---the known data points---then,
I agree there would be nothing to interpolate. I believe the example
functions you cite are similar to replacing `x` and `y` below with
scalar values.

... or am I just missing something?

Thanks,
-Tony



When using interpolate with a zero-rank array, I get "ValueError:
object of too small depth for desired array". The following code
reproduces this issue


import numpy as np
x0 = np.array(0.1)
x = np.linspace(0, 1)
y = np.linspace(0, 1)
np.interp(x0, x, y)






Re: [Numpy-discussion] Using interpolate with zero-rank array raises error

2009-07-17 Thread Tony Yu



Date: Fri, 17 Jul 2009 13:27:25 -0400
From: Ralf Gommers ralf.gomm...@googlemail.com
Subject: Re: [Numpy-discussion] Using interpolate with zero-rank array
raises error

[snip]
If it works with scalars it should work with 0-D arrays I think. So you
should probably open a ticket and attach your patch.


Thanks for your responses and suggestion. I've opened up Ticket #1177  
on Trac to address this issue.


Best,
-Tony


Re: [Numpy-discussion] Using interpolate with zero-rank array raises error

2009-07-16 Thread Tony Yu
Sorry, I don't know if it's proper mailing-list etiquette to bump my
own post...

Are there any comments on whether this interp error is expected  
behavior?

Thanks,
-Tony

 Date: Mon, 13 Jul 2009 13:50:50 -0400
 From: Tony Yu tsy...@gmail.com
 Subject: [Numpy-discussion] Using interpolate with zero-rank array
   raises  error
 To: Discussion of Numerical Python numpy-discussion@scipy.org
 Message-ID: 67526143-4424-4e06-bfe5-435091c78...@gmail.com
 Content-Type: text/plain; charset=us-ascii

 (Sorry if this is a duplicate; I think I sent this from the wrong email
 the first time)

 When using interpolate with a zero-rank array, I get "ValueError:
 object of too small depth for desired array". The following code
 reproduces this issue

 import numpy as np
 x0 = np.array(0.1)
 x = np.linspace(0, 1)
 y = np.linspace(0, 1)
 np.interp(x0, x, y)

 I hesitate to call this behavior a bug (when I've done this in the
 past, I find out I'm just doing something wrong), but I find the error
 unnecessary (and the error output a bit obscure).

 Below is a (poorly tested) fix, which seems to work for me. (Sorry I'm
 not on svn, so it isn't a proper diff)

 Cheers,
 -Tony

 Python 2.5.1
 numpy 1.3.0

 numpy/lib/function_base.py: line 1144
 =====================================
     if isinstance(x, (float, int, number)):
         return compiled_interp([x], xp, fp, left, right).item()
 +   elif isinstance(x, np.ndarray) and x.ndim == 0:
 +       return compiled_interp(x[np.newaxis], xp, fp, left, right)[0]
     else:
         return compiled_interp(x, xp, fp, left, right)
 =====================================




[Numpy-discussion] Using interpolate with zero-rank array raises error

2009-07-13 Thread Tony Yu
(Sorry if this is a duplicate; I think I sent this from the wrong email
the first time)


When using interpolate with a zero-rank array, I get "ValueError:
object of too small depth for desired array". The following code
reproduces this issue


 import numpy as np
 x0 = np.array(0.1)
 x = np.linspace(0, 1)
 y = np.linspace(0, 1)
 np.interp(x0, x, y)

I hesitate to call this behavior a bug (when I've done this in the  
past, I find out I'm just doing something wrong), but I find the error  
unnecessary (and the error output a bit obscure).


Below is a (poorly tested) fix, which seems to work for me. (Sorry I'm  
not on svn, so it isn't a proper diff)


Cheers,
-Tony

Python 2.5.1
numpy 1.3.0

numpy/lib/function_base.py: line 1144
=====================================
    if isinstance(x, (float, int, number)):
        return compiled_interp([x], xp, fp, left, right).item()
+   elif isinstance(x, np.ndarray) and x.ndim == 0:
+       return compiled_interp(x[np.newaxis], xp, fp, left, right)[0]
    else:
        return compiled_interp(x, xp, fp, left, right)
=====================================






[Numpy-discussion] array += masked_array gives a normal array, not masked array

2008-06-01 Thread Tony Yu
Ok, so you guys shot down my last attempt at finding a bug :). Here's  
another attempt.

  array + masked_array

outputs a masked array

  array += masked_array

outputs an array.

I'm actually not sure if this is a bug (works the same for both the  
old and new masked arrays), but I thought this was unexpected.

*However*, if the right side were an old masked array, the masked  
values would not be added to the output array. With the new masked  
arrays, the data from the masked array is added regardless of whether  
the value is masked (example, below).

-Tony

Example:
===

In [1]: import numpy

In [2]: normal = numpy.array([1, 1])

In [3]: masked = numpy.ma.array([1, 1], mask=[True, False])

In [4]: new_masked = normal + masked

In [5]: new_masked

Out[5]:
masked_array(data = [-- 2],
             mask = [ True False],
       fill_value = 999999)


In [6]: normal += masked

In [7]: normal

Out[7]: array([2, 2])

# If old masked arrays had been used above, the final output would be:
# array([1, 2])



[Numpy-discussion] Strange behavior in setting masked array values in Numpy 1.1.0

2008-05-31 Thread Tony Yu
Great job getting numpy 1.1.0 out and thanks for including the old API  
of masked arrays.

I've been playing around with some software using numpy 1.0.4 and took  
a crack at upgrading it to numpy 1.1.0, but I ran into some strange  
behavior when assigning to slices of a masked array.

I made the simplest example I could think of to show this weird  
behavior. Basically, reordering the masked array and assigning back to  
itself *on the same line* seems to work for part of the array, but  
other parts are left unchanged. In the example below, half of the  
array is assigned properly and the other half isn't. This problem is  
eliminated if the assignment is done with a copy of the array.  
Alternatively, this problem is eliminated if I using  
numpy.oldnumeric.ma.masked_array instead of the new masked array  
implementation.

Is this just a problem on my setup?

Thanks in advance for your help.
-Tony Yu

Example:

In [1]: import numpy

In [2]: masked = numpy.ma.masked_array([[1, 2, 3, 4, 5]], mask=False)

In [3]: masked[:] = numpy.fliplr(masked.copy())

In [4]: print masked
[[5 4 3 2 1]]

In [5]: masked[:] = numpy.fliplr(masked)

In [6]: print masked
[[1 2 3 2 1]]


Specs:
==
Numpy 1.1.0
Python 2.5.1
OS X Leopard 10.5.3



Re: [Numpy-discussion] Strange behavior in setting masked array values in Numpy 1.1.0

2008-05-31 Thread Tony Yu


On May 31, 2008, at 6:04 PM, Matthieu Brucher wrote:


Hi,

This is to be expected. You are trying to modify and read the same  
array at the same time, which should never be done.


Thanks, I'll have to keep this in mind next time.

So, what's the best way to rearrange a subarray of an array? Copying
seems inefficient.


-Tony
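
(For reference, not from the original thread, a sketch of two options,
assuming a strict in-place update isn't required:)

#~~~
import numpy

masked = numpy.ma.masked_array([[1, 2, 3, 4, 5]], mask=False)

# Rebinding to a reversed *view* costs no copy:
masked = masked[:, ::-1]
print(masked)    # [[5 4 3 2 1]]

# If the underlying buffer must be updated in place, copy the source
# first, as in the working example above:
masked[:] = numpy.fliplr(masked).copy()
print(masked)    # back to [[1 2 3 4 5]]
#~~~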




Matthieu

2008/5/31 Tony Yu [EMAIL PROTECTED]:
Great job getting numpy 1.1.0 out and thanks for including the old API
of masked arrays.

I've been playing around with some software using numpy 1.0.4 and took
a crack at upgrading it to numpy 1.1.0, but I ran into some strange
behavior when assigning to slices of a masked array.

I made the simplest example I could think of to show this weird
behavior. Basically, reordering the masked array and assigning back to
itself *on the same line* seems to work for part of the array, but
other parts are left unchanged. In the example below, half of the
array is assigned properly and the other half isn't. This problem is
eliminated if the assignment is done with a copy of the array.
Alternatively, this problem is eliminated if I use
numpy.oldnumeric.ma.masked_array instead of the new masked array
implementation.

Is this just a problem on my setup?

Thanks in advance for your help.
-Tony Yu

Example:

In [1]: import numpy

In [2]: masked = numpy.ma.masked_array([[1, 2, 3, 4, 5]], mask=False)

In [3]: masked[:] = numpy.fliplr(masked.copy())

In [4]: print masked
[[5 4 3 2 1]]

In [5]: masked[:] = numpy.fliplr(masked)

In [6]: print masked
[[1 2 3 2 1]]


Specs:
==
Numpy 1.1.0
Python 2.5.1
OS X Leopard 10.5.3




--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

