[Numpy-discussion] Histogram bin definition

2008-07-16 Thread Stéfan van der Walt
Hi all,

I am busy documenting `histogram`, and the definition of a bin
eludes me.  Here is the behaviour that troubles me:

 >>> np.histogram([1,2,1], bins=[0, 1, 2, 3], new=True)
(array([0, 2, 1]), array([0, 1, 2, 3]))

From this result, it seems as if a bin is defined as the half-open
interval [left_edge, right_edge).

Now, look what happens in the following case:

 >>> np.histogram([1,2,3], bins=[0,1,2,3], new=True)
(array([0, 1, 2]), array([0, 1, 2, 3]))

Here, the last bin is defined by the closed interval [left_edge, right_edge]!

Is this a bug, or a design consideration?

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] NumPy date/time types and the resolution concept

2008-07-16 Thread Francesc Alted
On Tuesday 15 July 2008, Anne Archibald wrote:
 2008/7/15 Francesc Alted [EMAIL PROTECTED]:
  Maybe is only that.  But by using the term 'frequency' I tend to
  think that you are expecting to have one entry (observation) in
  your array for each time 'tick' since time start.  OTOH, the term
  'resolution' doesn't have this implication, and only states the
  precision of the timestamp.
 
  Well, after reading the mails from Chris and Anne, I think the best
  is that the origin would be kept as an int64 with a resolution of
  microseconds (for compatibility with the ``datetime`` module, as
  I've said before).

 A couple of details worth pointing out: we don't need a zillion
 resolutions. One that's as good as the world time standards, and one
 that spans an adequate length of time should cover it. After all, the
 only reason for not using the highest available resolution is if you
 want to cover a larger range of times. So there is no real need for
 microseconds and milliseconds and seconds and days and weeks and...

Maybe you are right, but by providing many resolutions we are trying to 
cope with the needs of people who use them a lot.  In particular, we 
hope that the authors of the timeseries scikit will find in these new 
dtypes a fair replacement for their Date class (our proposal will not be 
as fully featured, but...).

 There is also no need for the origin to be kept with a resolution as
 high as microseconds; seconds would do just fine, since if necessary
 it can be interpreted as exactly 7000 seconds after the epoch even
 if you are using femtoseconds elsewhere.

Good point.  However, we ultimately decided not to include the 
``origin`` metadata in our new proposal.  Have a look at the second 
proposal that I'll be posting soon for details.

Cheers,

-- 
Francesc Alted


Re: [Numpy-discussion] Recommendations for using numpy ma?

2008-07-16 Thread Russell E. Owen
In article [EMAIL PROTECTED],
 Pierre GM [EMAIL PROTECTED] wrote:

 Russell,
 
 What used to be numpy.core.ma is now numpy.oldnumeric.ma, but the latter is 
 no longer supported and will disappear soon as well. Just use numpy.ma.
 
 If you really need to support ancient versions of numpy, just check the import:
 try:
import numpy.core.ma as ma
 except ImportError:
import numpy as ma

(I assume you meant the last line to be import numpy.ma as ma?)

Thanks! I was afraid I would have to do that, but not having ready 
access to ancient versions of numpy I was hoping I was wrong and that 
numpy.ma would work for those as well.

However, I plan to assume a modern numpy first, as in:
try:
   import numpy.ma as ma
except ImportError:
  import numpy.core.ma as ma

 Then, you need to replace every mention of numpy.core.ma in your code by ma.
 Your example would then become:
 
 unmaskedArr = numpy.array(
     ma.array(
 ^^
         dataArr,
          mask = mask & self.stretchExcludeBits,
         dtype = float,
     ).compressed())
 
 
 
 On another note: what's the problem with 'compressed'?  It should return an 
 ndarray; why/how doesn't it work?

The problem is that the returned array does not support the sort 
method. Here's an example using numpy 1.0.4:

import numpy
z = numpy.zeros(10, dtype=float)
m = numpy.zeros(10, dtype=bool)
m[1] = 1
mzc = numpy.ma.array(z, mask=m).compressed()
mzc.sort()

the last statement fails with:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/ma.py", line 2132, in not_implemented
    raise NotImplementedError, "not yet implemented for numpy.ma arrays"
NotImplementedError: not yet implemented for numpy.ma arrays

This seems like a bug to me. The returned object is reported by repr 
to be a normal numpy array; there is no obvious way to tell that it is 
anything else. Also I didn't see any reason for compressed to return 
anything except an ordinary array. Oh well.

I reported this on the mailing list a while ago when I first stumbled 
across it, but nobody seemed interested at the time. It wasn't clear to 
me whether it was a bug, so I dropped it without reporting it formally 
(and I've still not done so).

-- Russell



Re: [Numpy-discussion] Recommendations for using numpy ma?

2008-07-16 Thread Pierre GM
On Wednesday 16 July 2008 12:28:40 Russell E. Owen wrote:
  If you really need to support ancient versions of numpy, just check the
  import:
  try:
      import numpy.core.ma as ma
  except ImportError:
      import numpy as ma

 (I assume you meant the last line to be import numpy.ma as ma?)

Indeed, sorry about that.


 import numpy
 z = numpy.zeros(10, dtype=float)
 m = numpy.zeros(10, dtype=bool)
 m[1] = 1
 mzc = numpy.ma.array(z, mask=m).compressed()
 mzc.sort()

 the last statement fails with:

 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/ma.py", line 2132, in not_implemented
     raise NotImplementedError, "not yet implemented for numpy.ma arrays"
 NotImplementedError: not yet implemented for numpy.ma arrays

 This seems like a bug to me. 

Works on 1.1.x

 The returned object is reported by repr 
 to be a normal numpy array; there is no obvious way to tell that it is
 anything else. Also I didn't see any reason for compressed to return
 anything except an ordinary array. Oh well.

.compressed returns an array of the same type as the underlying .data
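[Editor's note: with a modern NumPy, the original failing example now works as Pierre anticipated: ``compressed()`` returns a plain ndarray and ``sort()`` succeeds. A minimal check, assuming a current NumPy install:]

```python
import numpy as np

# Rebuild the failing example from the thread with a current NumPy:
# compressed() returns a plain ndarray holding the unmasked values.
z = np.zeros(10, dtype=float)
m = np.zeros(10, dtype=bool)
m[1] = True
mzc = np.ma.array(z, mask=m).compressed()
mzc.sort()                   # no NotImplementedError on modern versions
print(type(mzc), mzc.shape)  # a plain ndarray with the 9 unmasked values
```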

 I reported this on the mailing list awhile ago when I first stumbled
 across it, but nobody seemed interested at the time. It wasn't clear to
 me whether it was a bug so I dropped it without reporting it formally
 (and I've still not reported it formally).

Try again w/ 1.1.x, but please, please do report bugs when you see them.




[Numpy-discussion] RFC: A (second) proposal for implementing some date/time types in NumPy

2008-07-16 Thread Francesc Alted
Hi,

After the tons of excellent feedback received on our first proposal about 
the date/time types in NumPy, Ivan and I have had another brainstorming 
session and come up with a new proposal for your consideration.

While this one does not incorporate each and every one of the suggestions 
you have made, we think that it represents a fair balance between 
capabilities and simplicity, and that it can be a solid and efficient 
basis for building up more date/time niceties on top of it (read: a 
full-fledged ``DateTime`` array class).

Although the proposal is not complete, the essentials are there.
So, please read on.  We will be glad to hear your opinions.

Thanks!

-- 
Francesc Alted


 A (second) proposal for implementing some date/time types in NumPy


:Author: Francesc Alted i Abad
:Contact: [EMAIL PROTECTED]
:Author: Ivan Vilata i Balaguer
:Contact: [EMAIL PROTECTED]
:Date: 2008-07-16


Executive summary
=================

A date/time mark is something very handy to have in many fields where
one has to deal with data sets.  While Python has several modules that
define a date/time type (like the integrated ``datetime`` [1]_ or
``mx.DateTime`` [2]_), NumPy lacks one.

In this document, we are proposing the addition of a series of date/time
types to fill this gap.  The requirements for the proposed types are
twofold: 1) they have to be fast to operate with, and 2) they have to
be as compatible as possible with the existing ``datetime`` module that
comes with Python.


Types proposed
==============

To start with, it is virtually impossible to come up with a single
date/time type that fills the needs of every use case.  So, after
pondering different possibilities, we have settled on *two* different
types, namely ``datetime64`` and ``timedelta64`` (these names are
preliminary and can be changed), which can have different resolutions
so as to cover different needs.

**Important note:** the resolution is conceived here as a metadata that
  *complements* a date/time dtype, *without changing the base type*.

A detailed description of the proposed types follows.


``datetime64``
--------------

It represents a time that is absolute (i.e. not relative).  It is
implemented internally as an ``int64`` type.  The internal epoch is the
POSIX epoch (see [3]_).

Resolution
~~~~~~~~~~

It accepts different resolutions, and for each of these resolutions it
will support a different time span.  The table below describes the
supported resolutions with their corresponding time spans.

=====  ============  ========================
Code   Meaning       Time span (years)
=====  ============  ========================
Y      year          [9.2e18 BC, 9.2e18 AD]
Q      quarter       [3.0e18 BC, 3.0e18 AD]
M      month         [7.6e17 BC, 7.6e17 AD]
W      week          [1.7e17 BC, 1.7e17 AD]
d      day           [2.5e16 BC, 2.5e16 AD]
h      hour          [1.0e15 BC, 1.0e15 AD]
m      minute        [1.7e13 BC, 1.7e13 AD]
s      second        [2.9e9 BC, 2.9e9 AD]
ms     millisecond   [2.9e6 BC, 2.9e6 AD]
us     microsecond   [290301 BC, 294241 AD]
ns     nanosecond    [1678 AD, 2262 AD]
=====  ============  ========================

Building a ``datetime64`` dtype
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The proposed way to specify the resolution in the dtype constructor
is:

Using parameters in the constructor::

  dtype('datetime64', res="us")  # the default res. is microseconds

Using the long string notation::

  dtype('datetime64[us]')   # equivalent to dtype('datetime64')

Using the short string notation::

  dtype('T8[us]')   # equivalent to dtype('T8')
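[Editor's note, with hindsight: the bracketed-unit long string notation in this proposal is essentially what later NumPy releases shipped, while the ``res=`` keyword and the ``'T8'`` short notation were not adopted. A quick sketch against a current NumPy:]

```python
import numpy as np

# The long string notation from the proposal survived into NumPy:
dt = np.dtype('datetime64[us]')
print(dt.name)      # datetime64[us]
print(dt.itemsize)  # 8 bytes: an int64 under the hood, as proposed

# The companion relative type uses the same notation:
td = np.dtype('timedelta64[ms]')
print(td.name)      # timedelta64[ms]
```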

Compatibility issues
~~~~~~~~~~~~~~~~~~~~


This will be fully compatible with the ``datetime`` class of the
``datetime`` module of Python only when using a resolution of
microseconds.  For other resolutions, the conversion process will
lose precision or overflow as needed.


``timedelta64``
---------------

It represents a time that is relative (i.e. not absolute).  It is
implemented internally as an ``int64`` type.

Resolution
~~~~~~~~~~

It accepts different resolutions, and for each of these resolutions it
will support a different time span.  The table below describes the
supported resolutions with their corresponding time spans.

=====  ============  ========================
Code   Meaning       Time span
=====  ============  ========================

Re: [Numpy-discussion] Second revised list of backports for 1.1.1.

2008-07-16 Thread Charles R Harris
On Tue, Jul 15, 2008 at 11:31 PM, Fernando Perez [EMAIL PROTECTED]
wrote:

 Hi Chuck,

 On Tue, Jul 15, 2008 at 8:41 PM, Charles R Harris
 [EMAIL PROTECTED] wrote:
  After the second pass only a few remain. Fernando, if you don't get to
 these
  I'll do them tomorrow.
 
  fperez
  r5298
  r5301
  r5303

 Sorry, I hadn't seen this.  If you can do it that would be great: I'm
 at a workshop right now with only a couple of hours in the night for
 any coding, and trying to push an ipython release for scipy.  But if
 you can't do it tomorrow let me  know and I'll make a space  for it
 later in the week.


OK. Done.

Chuck


Re: [Numpy-discussion] Histogram bin definition

2008-07-16 Thread David Huard
Hi Stefan,

It's designed this way. The main reason is that the default bin edges are
generated using

linspace(a.min(), a.max(), bin)

when bin is an integer.

If we leave the rightmost edge open, then the histogram of a 100-item array
will typically yield a histogram with 99 values, because the maximum value
is an outlier. I thought the least surprising behavior was to make sure that
all items are counted.

The other reason has to do with backward compatibility, I tried to avoid
breakage for the simplest use case.

`histogram(r, bins=10)` yields the same thing as `histogram(r, bins=10,
new=True)`

We could avoid the open-ended edge by defining the edges with
linspace(a.min(), a.max()+delta, bin), but people would wonder why the right
edge is 3.01 instead of 3.
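[Editor's note: the closed right edge David describes is still the behaviour of ``np.histogram`` today (the ``new=`` keyword is long gone); a quick check with a current NumPy:]

```python
import numpy as np

# The maximum value 3 lands in the final bin [2, 3], which is closed
# on the right; all other bins are half-open [left, right).
counts, edges = np.histogram([1, 2, 3], bins=[0, 1, 2, 3])
print(counts)  # [0 1 2]
print(edges)   # [0 1 2 3]
```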

Cheers,

David






2008/7/16 Stéfan van der Walt [EMAIL PROTECTED]:

 Hi all,

 I am busy documenting `histogram`, and the definition of a bin
 eludes me.  Here is the behaviour that troubles me:

  >>> np.histogram([1,2,1], bins=[0, 1, 2, 3], new=True)
 (array([0, 2, 1]), array([0, 1, 2, 3]))

 From this result, it seems as if a bin is defined as the half-open
 interval [left_edge, right_edge).

 Now, look what happens in the following case:

  >>> np.histogram([1,2,3], bins=[0,1,2,3], new=True)
 (array([0, 1, 2]), array([0, 1, 2, 3]))

 Here, the last bin is defined by the closed interval [left_edge,
 right_edge]!

 Is this a bug, or a design consideration?

 Regards
 Stéfan


Re: [Numpy-discussion] Second revised list of backports for 1.1.1.

2008-07-16 Thread Fernando Perez
On Wed, Jul 16, 2008 at 9:50 AM, Charles R Harris
 OK. Done.

Fantastic, many thanks.

Cheers,

f


[Numpy-discussion] kinds

2008-07-16 Thread Charles Doutriaux
Hello,

A long long time ago, there used to be this module named kinds.

It's totally outdated nowadays, but it had one nice piece of 
functionality, and I was wondering if you knew how to reproduce it.

it was:
maxexp=kinds.default_float_kind.MAX_10_EXP
minexp=kinds.default_float_kind.MIN_10_EXP

and a bunch of similar flags that would basically tell you the limits on 
the machine you're running on (or at least compiled on).

Any idea on how to reproduce this?

While we're at it, does anybody know of a way in Python to find out how much 
memory is available on your system (similar to the free command under Linux)? 
I'm looking for something that works across platforms (well, we can forget 
Windows for now, I could live w/o it).

Thanks,

C.



Re: [Numpy-discussion] Ticket review: #843

2008-07-16 Thread Charles R Harris
On Tue, Jul 15, 2008 at 1:42 AM, Michael Abbott [EMAIL PROTECTED]
wrote:

 I'm reviewing my tickets (seems a good thing to do with a release
 imminent), and I'll post up each ticket that merits comment as a separate
 message.

 Ticket #843 has gone into trunk (commit 5361, oliphant) ... but your
 editor appears to be introducing hard tabs!  Hard tab characters are
 fortunately relatively rare in numpy source, but my patch has gone in with
 tabs I didn't use.


Heh, hard tabs have been removed.

Chuck


Re: [Numpy-discussion] kinds

2008-07-16 Thread Pierre GM
On Wednesday 16 July 2008 15:08:59 Charles Doutriaux wrote:

 and a bunch of similar flags that would basically tell you the limits on
 the machine you're running (or at least compiled on)

 Any idea on how to reproduce this?

Charles, have you tried numpy.finfo? That should give you the information for 
floating-point types.
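[Editor's note: for the base-10 limits specifically, here is a sketch of two ways to recover the old ``kinds`` flags with current tools. ``sys.float_info`` is the standard-library route; the attribute names below are the real modern spellings, not the old ``kinds`` ones.]

```python
import sys
import numpy as np

# numpy.finfo exposes the floating-point limits that the old `kinds`
# module used to provide (note: maxexp/minexp here are base-2 exponents).
info = np.finfo(np.float64)
print(info.max)     # largest representable float64
print(info.maxexp)  # base-2 exponent limit

# For the base-10 exponents (MAX_10_EXP / MIN_10_EXP), the standard
# library's sys.float_info has direct equivalents:
print(sys.float_info.max_10_exp)  # 308 on IEEE-754 double platforms
print(sys.float_info.min_10_exp)  # -307
```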




Re: [Numpy-discussion] Ticket review: #843

2008-07-16 Thread Alan G Isaac
On Wed, 16 Jul 2008, Charles R Harris apparently wrote:
  Hard tab characters are fortunately relatively rare in 
  numpy source 

http://www.rizzoweb.com/java/tabs-vs-spaces.html

Cheers,
Alan Isaac





Re: [Numpy-discussion] Ticket review: #843

2008-07-16 Thread Michael Abbott
On Wed, 16 Jul 2008, Alan G Isaac wrote:
 On Wed, 16 Jul 2008, Charles R Harris apparently wrote:
Michael Abbott actually wrote:
   Hard tab characters are fortunately relatively rare in 
   numpy source 
 http://www.rizzoweb.com/java/tabs-vs-spaces.html

Ha ha ha!  I'm not going to rise to the bait.  

Well, I'll just say: source code is a concrete expression, not an 
abstraction.  

I expect an argument on this topic could be unhealthy...


Re: [Numpy-discussion] Ticket review: #848, leak in PyArray_DescrFromType

2008-07-16 Thread Charles R Harris
On Tue, Jul 15, 2008 at 9:28 AM, Michael Abbott [EMAIL PROTECTED]
wrote:

 On Tue, 15 Jul 2008, Michael Abbott wrote:
  Only half of my patch for this bug has gone into trunk, and without the
  rest of my patch there remains a leak.

 I think I might need to explain a little more about the reason for this
 patch, because obviously the bug it fixes was missed the last time I
 posted on this bug.

 So here is the missing part of the patch:

  --- numpy/core/src/scalartypes.inc.src  (revision 5411)
  +++ numpy/core/src/scalartypes.inc.src  (working copy)
  @@ -1925,19 +1925,30 @@
   goto finish;
   }
 
  +Py_XINCREF(typecode);
   arr = PyArray_FromAny(obj, typecode, 0, 0, FORCECAST, NULL);
  -if ((arr==NULL) || (PyArray_NDIM(arr) > 0)) return arr;
  +if ((arr==NULL) || (PyArray_NDIM(arr) > 0)) {
  +Py_XDECREF(typecode);
  +return arr;
  +}
   robj = PyArray_Return((PyArrayObject *)arr);
 
   finish:
  -if ((robj==NULL) || (robj->ob_type == type)) return robj;
  +if ((robj==NULL) || (robj->ob_type == type)) {
  +Py_XDECREF(typecode);
  +return robj;
  +}
   /* Need to allocate new type and copy data-area over */
   if (type->tp_itemsize) {
   itemsize = PyString_GET_SIZE(robj);
   }
   else itemsize = 0;
   obj = type->tp_alloc(type, itemsize);
  -if (obj == NULL) {Py_DECREF(robj); return NULL;}
  +if (obj == NULL) {
  +Py_XDECREF(typecode);
  +Py_DECREF(robj);
  +return NULL;
  +}
   if (typecode==NULL)
   typecode = PyArray_DescrFromType(PyArray_@TYPE@);
   dest = scalar_value(obj, typecode);

 On the face of it it might appear that all the DECREFs are cancelling out
 the first INCREF, but not so.  Let's see two more lines of context:

   src = scalar_value(robj, typecode);
   Py_DECREF(typecode);

 Aha.  That DECREF balances the original PyArray_DescrFromType, or maybe
 the later call ... and of course this has to happen on *ALL* return paths.
 If we now take a closer look at the patch we can see that it's doing two
 separate things:

 1. There's an extra Py_XINCREF to balance the ref count lost to
 PyArray_FromAny and ensure that typecode survives long enough;

 2. Every early return path has an extra Py_XDECREF to balance the creation
 of typecode.

 I rest my case for this patch.


Yes, there does look to be a memory leak here. Not to mention a missing NULL
check, since PyArray_Scalar not only doesn't swallow a reference, it can't
take a NULL value for descr. But the whole function is such a mess that I want
to see if we can rewrite it to have a better flow of logic. <puts on todo list>

Chuck


Re: [Numpy-discussion] Ticket review: #843

2008-07-16 Thread Charles R Harris
On Wed, Jul 16, 2008 at 1:33 PM, Michael Abbott [EMAIL PROTECTED]
wrote:

 On Wed, 16 Jul 2008, Alan G Isaac wrote:
  On Wed, 16 Jul 2008, Charles R Harris apparently wrote:
 Michael Abbott actually wrote:
Hard tab characters are fortunately relatively rare in
numpy source
  http://www.rizzoweb.com/java/tabs-vs-spaces.html

 Ha ha ha!  I'm not going to rise to the bait.

 Well, I'll just say: source code is a concrete expression, not an
 abstraction.

 I expect an argument on this topic could be unhealthy...


In any case, the python standard is four spaces. Linux uses tabs. When in
Rome...

Chuck


Re: [Numpy-discussion] Ticket review #849: reference to deallocated object?

2008-07-16 Thread Charles R Harris
On Tue, Jul 15, 2008 at 1:53 AM, Michael Abbott [EMAIL PROTECTED]
wrote:

 Tenuous but easy fix, and conformant to style elsewhere.


This one depends on whether there is some sort of threading going on that
can interrupt in the middle of the call. Probably not, but the fix doesn't
look disruptive and I'll put it in just to be safe. I actually think things
should be set up so that the descr reference counts never go to zero, i.e.,
all the types should be singletons set up during initialization and
maintained throughout the existence of numpy. But it's not that way now with
the way the chararray type is implemented. I wonder if we need a NULL check
also?

Chuck


Re: [Numpy-discussion] kinds

2008-07-16 Thread Charles Doutriaux
Thx Pierre,

That's exactly what i was looking for

C.

Pierre GM wrote:
 On Wednesday 16 July 2008 15:08:59 Charles Doutriaux wrote:

   
 and a bunch of similar flags that would basically tell you the limits on
 the machine you're running (or at least compiled on)

 Any idea on how to reproduce this?
 

  Charles, have you tried numpy.finfo? That should give you the information for 
  floating-point types.



   



Re: [Numpy-discussion] Ticket review #850: leak in _strings_richcompare

2008-07-16 Thread Charles R Harris
On Tue, Jul 15, 2008 at 1:50 AM, Michael Abbott [EMAIL PROTECTED]
wrote:

 This one is easy, ought to go in.  Fixes a (not particularly likely)
 memory leak.
 ___


Done and backported.

Chuck


Re: [Numpy-discussion] Ticket review: #843

2008-07-16 Thread Alan G Isaac
On Wed, 16 Jul 2008, Charles R Harris apparently wrote:
 the python standard is four spaces 

It is only a recommendation:
http://www.python.org/dev/peps/pep-0008/
(And a misguided one at that.  ;-) )

Cheers,
Alan Isaac





[Numpy-discussion] Ticket #837

2008-07-16 Thread Pauli Virtanen

http://scipy.org/scipy/numpy/ticket/837

Infinite loop in fromfile and fromstring with sep=' ' and malformed input.

I committed a fix to trunk. Does this need a 1.1.1 backport?

-- 
Pauli Virtanen



Re: [Numpy-discussion] Ticket review: #848, leak in PyArray_DescrFromType

2008-07-16 Thread Stéfan van der Walt
2008/7/16 Charles R Harris [EMAIL PROTECTED]:
 Yes, there does look to be a memory leak here. Not to mention a missing NULL
 check, since PyArray_Scalar not only doesn't swallow a reference, it can't
 take a NULL value for descr. But the whole function is such a mess that I want
 to see if we can rewrite it to have a better flow of logic. <puts on todo list>

Can we apply the patch in the meantime?  (My) TODO lists tend to get
very long...

Stéfan


Re: [Numpy-discussion] Ticket review: #843

2008-07-16 Thread Charles R Harris
On Wed, Jul 16, 2008 at 2:48 PM, Alan G Isaac [EMAIL PROTECTED] wrote:

 On Wed, 16 Jul 2008, Charles R Harris apparently wrote:
  the python standard is four spaces

 It is only a recommendation:
 http://www.python.org/dev/peps/pep-0008/
 (And a misguided one at that.  ;-) )


I see your pep-0008 and raise you pep-3100 ;)


   - The C style guide will be updated to use 4-space indents, never tabs.
   This style should be used for all new files; existing files can be updated
   only if there is no hope to ever merge a particular file from the Python 2
   HEAD. Within a file, the indentation style should be consistent. No other
   style guide changes are planned ATM.


I'll bet you use Emacs too ;)

Chuck


Re: [Numpy-discussion] Ticket #837

2008-07-16 Thread Charles R Harris
On Wed, Jul 16, 2008 at 3:05 PM, Pauli Virtanen [EMAIL PROTECTED] wrote:


 http://scipy.org/scipy/numpy/ticket/837

 Infinite loop in fromfile and fromstring with sep=' ' and malformed input.

 I committed a fix to trunk. Does this need a 1.1.1 backport?


Yes, I think so. TIA,

Chuck


[Numpy-discussion] Numpy Advanced Indexing Question

2008-07-16 Thread Jack.Cook
Greetings, 

I have an I,J,K 3D volume of amplitude values at regularly sampled time 
intervals. I have an I,J 2D slice which contains a time (K) value at each I, J 
location. What I would like to do is extract a subvolume at a constant +/- K 
window around the slice. Is there an easy way to do this using advanced 
indexing or some other method? Thanks in advance for your help.

- Jack


Kind Regards,

Jack Cook



[Numpy-discussion] Masked arrays and pickle/unpickle

2008-07-16 Thread Anthony Floyd
We have an application that has previously used masked arrays from Numpy
1.0.3.  Part of saving files from that application involved pickling
data types that contained these masked arrays.

In the latest round of library updates, we've decided to move to the
most recent version of matplotlib, which requires Numpy 1.1.

Unfortunately, when we try to unpickle the data saved with Numpy 1.0.3
in the new code using Numpy 1.1.0, it chokes because it can't import
numpy.core.ma for the masked arrays.  A check of Numpy 1.1.0 shows that
this is now numpy.ma.core.

Does anyone have any advice on how we can unpickle the old data files
and update the references to the new classes?
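[Editor's note: one approach (a sketch, not an official NumPy mechanism) is to intercept the module lookup during unpickling and map the old path to the new one. The class and function names below are made up for illustration:]

```python
import io
import pickle

class RenamingUnpickler(pickle.Unpickler):
    """Unpickler that maps the pre-1.1 masked-array module path to its new home."""
    def find_class(self, module, name):
        if module == 'numpy.core.ma':   # old location (NumPy <= 1.0.x pickles)
            module = 'numpy.ma.core'    # new location (NumPy >= 1.1)
        return super().find_class(module, name)

def load_legacy(data: bytes):
    """Load a pickle that may reference the old numpy.core.ma path."""
    return RenamingUnpickler(io.BytesIO(data)).load()
```

An alternative is to alias the module before unpickling, e.g. ``sys.modules['numpy.core.ma'] = numpy.ma.core``, so that a plain ``pickle.load`` succeeds.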

Thanks,
Anthony.

--
Anthony Floyd, PhD
Convergent Manufacturing Technologies Inc.
6190 Agronomy Rd, Suite 403
Vancouver BC  V6T 1Z3
CANADA

Email: [EMAIL PROTECTED] | Tel:   604-822-9682 x102
WWW:   http://www.convergent.ca| Fax:   604-822-9659  

CMT is hiring: See http://www.convergent.ca for details

 


Re: [Numpy-discussion] svd

2008-07-16 Thread Charles R Harris
On Wed, Jul 16, 2008 at 3:58 PM, Charles Doutriaux [EMAIL PROTECTED]
wrote:

 Hello,

 I'm using 1.1.0 and I have a bizarre thing happening

 it seems as if:
 doing:
 import numpy
 SVD = numpy.linalg.svd

 is different from doing:
 import numpy.oldnumeric.linear_algebra
 SVD = numpy.oldnumeric.linear_algebra.singular_value_decomposition

 In the first case, passing an array (204,1484) returns arrays of shape:
 svd: (204, 204) (204,) (1484, 1484)

 in the second case I get (what i expected actually):
 svd: (204, 204) (204,) (204, 1484)

 But looking at the code, it seems like
 numpy.oldnumeric.linear_algebra.singular_value_decomposition
 is basically numpy.linalg.svd

 Any idea on what's happening here?


There is a full_matrices flag that determines whether you get the full orthogonal
matrices, or the minimum size needed, i.e.

In [12]: l,d,r = linalg.svd(x, full_matrices=0)

In [13]: shape(r)
Out[13]: (2, 4)

In [14]: x = zeros((2,4))

In [15]: l,d,r = linalg.svd(x)

In [16]: shape(r)
Out[16]: (4, 4)

In [17]: l,d,r = linalg.svd(x, full_matrices=0)

In [18]: shape(r)
Out[18]: (2, 4)
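[Editor's note: the same check can be scripted outside IPython; the array sizes below mirror Chuck's session and are illustrative only:]

```python
import numpy as np

# With full_matrices=True (the default), r is square (n, n); with
# full_matrices=False, r is trimmed to (min(m, n), n).
x = np.zeros((2, 4))
l, d, r = np.linalg.svd(x)
print(r.shape)   # (4, 4)
l, d, r = np.linalg.svd(x, full_matrices=False)
print(r.shape)   # (2, 4)
```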


Chuck






 Thx,

 C.




Re: [Numpy-discussion] Ticket review: #843

2008-07-16 Thread Stéfan van der Walt
2008/7/16 Christopher Barker [EMAIL PROTECTED]:
 Indentation is syntax in python -- we do need to all do it the same way,
 and four spaces is the standard -- there simply isn't another reasonable
 option if you want to share code with anyone else.

I agree.  Let's just end this thread here.  It simply can't lead to
any useful discussion.

Stéfan


Re: [Numpy-discussion] Numpy Advanced Indexing Question

2008-07-16 Thread Jack.Cook
Robert, 

I can understand how this works if K is a constant time value but in my case K 
varies at each location in the two-dimensional slice. In other words, if I was 
doing this in a for loop I would do something like this

for i in range(numI):
    for j in range(numJ):
        k = slice[i, j]
        trace = cube[i, j, k-half_width:k+half_width]
        # shove trace in sub volume

What am I missing?
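[Editor's note: the loop above can be vectorized with broadcasting and per-axis index arrays. A sketch with made-up sizes, where ``K2d`` plays the role of the 2D slice of k values:]

```python
import numpy as np

# Toy volume: cube is (nI, nJ, nK); K2d holds one k index per (i, j).
nI, nJ, nK, half_width = 4, 5, 50, 3
cube = np.arange(nI * nJ * nK, dtype=float).reshape(nI, nJ, nK)
K2d = np.random.randint(half_width, nK - half_width, size=(nI, nJ))

# Each (i, j) location gets the window k-half_width .. k+half_width-1.
kidx = K2d[:, :, None] + np.arange(-half_width, half_width)

# Fancy indexing with broadcastable per-axis index arrays extracts one
# trace of length 2*half_width per (i, j) in a single operation:
sub = cube[np.arange(nI)[:, None, None],
           np.arange(nJ)[None, :, None],
           kidx]
print(sub.shape)  # (4, 5, 6)
```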

- Jack


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Robert Kern
Sent: Wednesday, July 16, 2008 4:56 PM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Numpy Advanced Indexing Question


On Wed, Jul 16, 2008 at 16:45,  [EMAIL PROTECTED] wrote:
 Greetings,

 I have an I,J,K 3D volume of amplitude values at regularly sampled time
 intervals. I have an I,J 2D slice which contains a time (K) value at each I,
 J location. What I would like to do is extract a subvolume at a constant +/-
 K window around the slice. Is there an easy way to do this using advanced
 indexing or some other method? Thanks in advanced for your help.

cube[:,:,K-half_width:K+half_width]

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] Ticket review: #843

2008-07-16 Thread Alan G Isaac
On Wed, 16 Jul 2008, Christopher Barker apparently wrote:
 Indentation is syntax in python -- we do need to all do it 
 the same way, and four spaces is the standard -- there 
 simply isn't another reasonable option if you want to share 
 code with anyone else. 


Last comment (since this has already gone too long):

There are large projects that accept the use of either 
convention.  (E.g., Zope, if I recall correctly.)

BUT projects set their own style guidelines,
and I am NOT in any way proposing that NumPy change.
(Not that it would possibly matter if I did so.)

But just to be clear: the common arguments against the use 
of tabs are demonstrably false and illustrate either 
ignorance or use of an incapable editor.  Nobody with 
a decent editor will ever have problems with code that 
consistently uses tabs for indentation---a choice that can 
easily be signalled by modelines when code is shared on 
a project that allows both conventions.

Cheers,
Alan




Re: [Numpy-discussion] svd

2008-07-16 Thread Charles Doutriaux
doh...

Thanks Charles... I guess I've been staring at this code for too long 
now...

C.

Charles R Harris wrote:


 On Wed, Jul 16, 2008 at 3:58 PM, Charles Doutriaux 
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 Hello,

 I'm using 1.1.0 and I have a bizarre thing happening

 it seems that doing:
 import numpy
 SVD = numpy.linalg.svd

 is different from doing:
 import numpy.oldnumeric.linear_algebra
 SVD = numpy.oldnumeric.linear_algebra.singular_value_decomposition

 In the first case, passing an array of shape (204, 1484) returns arrays of shapes:
 svd: (204, 204) (204,) (1484, 1484)

 in the second case I get (what i expected actually):
 svd: (204, 204) (204,) (204, 1484)

 But looking at the code, it seems like
 numpy.oldnumeric.linear_algebra.singular_value_decomposition
 is basically numpy.linalg.svd

 Any idea on what's happening here?


 There is a full_matrices flag that determines if you get the full 
 orthogonal matrices, or the minimum size needed, i.e.

 In [12]: l,d,r = linalg.svd(x, full_matrices=0)

 In [13]: shape(r)
 Out[13]: (2, 4)

 In [14]: x = zeros((2,4))

 In [15]: l,d,r = linalg.svd(x)

 In [16]: shape(r)
 Out[16]: (4, 4)

 In [17]: l,d,r = linalg.svd(x, full_matrices=0)

 In [18]: shape(r)
 Out[18]: (2, 4)


 Chuck





 Thx,

 C.







Re: [Numpy-discussion] Numpy Advanced Indexing Question

2008-07-16 Thread Robert Kern
On Wed, Jul 16, 2008 at 17:12,  [EMAIL PROTECTED] wrote:
 Robert,

 I can understand how this works if K is a constant time value but in my case 
 K varies at each location in the two-dimensional slice. In other words, if I 
 was doing this in a for loop I would do something like this

 for i in range(numI):
     for j in range(numJ):
         k = slice[i, j]
         trace = cube[i, j, k - half_width:k + half_width]
         # shove trace into the subvolume

 What am I missing?

Ah, okay. It's a bit tricky, though. Yes, you need to use fancy
indexing. Since the axis you want to index fancily is not the first
one, you have to be more explicit than you might otherwise want. For
example, it would be great if you could just use slices for the first
two axes:

  cube[:,:,kslice + numpy.arange(-half_width,half_width+1)]

but the semantics of that are a bit different for reasons I can
explain later, if you want. Instead, you have to have explicit
fancy-index arrays for the first two axes. Further, the arrays for
each axis need to be broadcastable to each other. Fancy indexing will
iterate over these broadcasted arrays in parallel with each other to
form the new array. The liberal application of numpy.newaxis will help
us achieve that. So this is the complete recipe:


In [29]: import numpy
In [30]: ni, nj, nk = (10, 15, 20)

# Make a fake data cube such that cube[i,j,k] == k for all i,j,k.
In [31]: cube = numpy.empty((ni,nj,nk), dtype=int)
In [32]: cube[:,:,:] = numpy.arange(nk)[numpy.newaxis,numpy.newaxis,:]

# Pick out a random fake horizon in k.
In [34]: kslice = numpy.random.randint(5, 15, size=(ni, nj))
In [35]: kslice
Out[35]:
array([[13, 14,  9, 12, 12, 11,  8, 14, 11, 13, 13, 13,  8, 11,  8],
   [ 7, 12, 12,  6, 10, 12,  9, 11, 13,  9, 14, 11,  5, 12, 12],
   [ 7,  5, 10,  9,  6,  5,  5, 14,  5,  6,  7, 10,  6, 10, 11],
   [ 6,  9, 11, 14,  7, 11, 10,  6,  6,  9,  9, 11,  5,  5, 14],
   [12,  8, 11,  6, 10,  8,  5,  9,  8, 10,  7,  5,  9,  9, 14],
   [ 9,  8, 10,  9, 10, 12, 10, 10,  6, 10, 11,  6,  8,  7,  7],
   [11, 12,  7, 13,  5,  5,  8, 14,  5, 14,  9, 10, 12,  7, 14],
   [ 7,  7,  7, 12, 10,  6, 13, 13, 11, 13,  8, 11, 13, 14, 14],
   [ 6, 13, 13, 10, 10, 14, 10,  8,  9, 14, 13, 12,  9,  9,  5],
   [13, 14, 10,  8, 11, 11, 10,  6, 12, 11, 12, 12, 13, 11,  7]])

In [36]: half_width = 3

# These two replace the empty slices for the first two axes.
In [37]: idx_i = numpy.arange(ni)[:,numpy.newaxis,numpy.newaxis]
In [38]: idx_j = numpy.arange(nj)[numpy.newaxis,:,numpy.newaxis]

# This is the substantive part that actually picks out our window.
In [41]: idx_k = kslice[:,:,numpy.newaxis] + numpy.arange(-half_width,half_width+1)
In [42]: smallcube = cube[idx_i,idx_j,idx_k]
In [43]: smallcube.shape
Out[43]: (10, 15, 7)

# Now verify that our window is centered on kslice everywhere:
In [47]: smallcube[:,:,3]
Out[47]:
array([[13, 14,  9, 12, 12, 11,  8, 14, 11, 13, 13, 13,  8, 11,  8],
   [ 7, 12, 12,  6, 10, 12,  9, 11, 13,  9, 14, 11,  5, 12, 12],
   [ 7,  5, 10,  9,  6,  5,  5, 14,  5,  6,  7, 10,  6, 10, 11],
   [ 6,  9, 11, 14,  7, 11, 10,  6,  6,  9,  9, 11,  5,  5, 14],
   [12,  8, 11,  6, 10,  8,  5,  9,  8, 10,  7,  5,  9,  9, 14],
   [ 9,  8, 10,  9, 10, 12, 10, 10,  6, 10, 11,  6,  8,  7,  7],
   [11, 12,  7, 13,  5,  5,  8, 14,  5, 14,  9, 10, 12,  7, 14],
   [ 7,  7,  7, 12, 10,  6, 13, 13, 11, 13,  8, 11, 13, 14, 14],
   [ 6, 13, 13, 10, 10, 14, 10,  8,  9, 14, 13, 12,  9,  9,  5],
   [13, 14, 10,  8, 11, 11, 10,  6, 12, 11, 12, 12, 13, 11,  7]])

In [50]: (smallcube[:,:,3] == kslice).all()
Out[50]: True


Clear as mud? I can go into more detail if you like, particularly about
how newaxis works.
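
For anyone following along, here is a small self-contained sketch (toy
shapes, not the cube from the thread) of that difference, and of how
newaxis sets up the broadcasting:

```python
import numpy as np

ni, nj, nk = 4, 5, 20
cube = np.tile(np.arange(nk), (ni, nj, 1))        # cube[i, j, k] == k
kslice = np.full((ni, nj), 10)                    # flat fake horizon
half_width = 2
win = kslice[:, :, np.newaxis] + np.arange(-half_width, half_width + 1)

# Mixing ':' slices with a single fancy index applies the whole (4, 5, 5)
# index array independently at every (i, j), so the result balloons:
print(cube[:, :, win].shape)                      # (4, 5, 4, 5, 5)

# With explicit, broadcastable index arrays on every axis, the indices
# are iterated in parallel instead, giving the intended window:
idx_i = np.arange(ni)[:, np.newaxis, np.newaxis]  # shape (4, 1, 1)
idx_j = np.arange(nj)[np.newaxis, :, np.newaxis]  # shape (1, 5, 1)
small = cube[idx_i, idx_j, win]
print(small.shape)                                # (4, 5, 5)
```

The bare-slice form replicates the whole index array at every (i, j),
while the all-fancy form pairs the broadcast indices up elementwise.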

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] NumPy date/time types and the resolu tion concept

2008-07-16 Thread Matt Knox
 Maybe you are right, but by providing many resolutions we are trying to 
 cope with the needs of people that are using them a lot.  In 
 particular, we hope that the authors of the timeseries scikit can 
 find in this new dtype a fair replacement for their Date class (our 
 proposal will not be as fully featured, but...).

I think a basic date/time dtype for numpy would be a nice addition for general
usage.

Now as for the timeseries module using this dtype for most of the date-fu that
goes on... that would be a bit more challenging. Unless all of the
frequencies/resolutions currently supported in the timeseries scikit are
supported with the new dtype, it is unlikely we would be able to replace our
implementation. In particular, business day frequency (Monday - Friday) is of
central importance for working with financial time series (which was my
motivation for the original prototype of the module). But using plain integers
for the DateArray class actually seems to work pretty well and I'm not sure a
whole lot would be gained by using a date dtype.
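
As a toy sketch of why plain integers work well here (hypothetical helper
names, not the scikit's actual API): give each weekday an ordinal with five
slots per week, and business-day arithmetic reduces to integer arithmetic:

```python
from datetime import date

def bday_to_int(d):
    # Ordinal 1 (0001-01-01) is a Monday, so (ordinal - 1) % 7 is the weekday.
    week, dow = divmod(d.toordinal() - 1, 7)
    return week * 5 + min(dow, 4)   # clamp Sat/Sun onto the preceding Friday

def int_to_bday(b):
    # Inverse mapping: five business-day slots back to seven calendar slots.
    week, dow = divmod(b, 5)
    return date.fromordinal(week * 7 + dow + 1)

# Friday 2008-07-18 plus one business day is Monday 2008-07-21:
assert bday_to_int(date(2008, 7, 18)) + 1 == bday_to_int(date(2008, 7, 21))
# Round trip on a weekday is exact:
assert int_to_bday(bday_to_int(date(2008, 7, 16))) == date(2008, 7, 16)
```

With dates stored this way, differences and offsets on a DateArray are just
integer subtraction and addition, which is presumably why the plain-integer
representation has held up so well.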

That being said, if someone creates a fork of the timeseries module using a new
date dtype at its core and it works amazingly well, then I'd probably get on
board. I just think that may be difficult to do with a general purpose date
dtype suitable for inclusion in the numpy core. 

- Matt





[Numpy-discussion] Monkeypatching vs nose plugin?

2008-07-16 Thread Fernando Perez
Howdy,

In working on the ipython testing machinery, I looked at the numpy
nosetester.py file and found that it works by monkeypatching nose
itself.  I'm curious as to why this approach was taken rather than
constructing a plugin object.  In general, monkeypatching should be
done as a last-resort trick, because it tends to be brittle and can
cause bizarre problems to users who after running numpy.test() find
that their normal nose-using code starts doing funny things.
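
For what it's worth, a generic toy example (deliberately unrelated to the
actual numpy code) of the failure mode: patching an attribute on a shared
module changes it for every other user of that module until someone
remembers to restore it:

```python
import math

original_sqrt = math.sqrt

def patched_sqrt(x):
    # Deliberately wrong, to make the global effect visible.
    return -1.0

math.sqrt = patched_sqrt          # monkeypatch: global, invisible to callers
try:
    assert math.sqrt(4) == -1.0   # everyone importing math now sees this
finally:
    math.sqrt = original_sqrt     # forgetting this line breaks unrelated code

assert math.sqrt(4) == 2.0        # restored
```

A plugin object keeps the customization scoped to the test run instead of
mutating state that other nose users share.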

Any thoughts/insights?

Cheers,

f


Re: [Numpy-discussion] Monkeypatching vs nose plugin?

2008-07-16 Thread Robert Kern
On Wed, Jul 16, 2008 at 20:42, Fernando Perez [EMAIL PROTECTED] wrote:
 Howdy,

 In working on the ipython testing  machinery, I looked at the numpy
 nosetester.py file and found that it works by monkeypatching nose
 itself.  I'm curious as to why this approach was taken rather than
 constructing a plugin object.  In general, monkeypatching should be
 done as a last-resort trick, because it tends to be brittle and can
 cause bizarre problems to users who after running numpy.test() find
 that their normal nose-using code starts doing funny things.

 Any thoughts/insights?

Is there a way to do it programmatically without requiring numpy to be
installed with setuptools?

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] Monkeypatching vs nose plugin?

2008-07-16 Thread Fernando Perez
On Wed, Jul 16, 2008 at 7:00 PM, Robert Kern [EMAIL PROTECTED] wrote:

 Is there a way to do it programmatically without requiring numpy to be
 installed with setuptools?

I think so, though I'm not 100% certain because I haven't finished the
ipython work.  So far what I have for ip is all nose mods done as a
nose plugin.  Right now that plugin needs to be installed as a true
plugin (i.e. via setuptools), but IPython does NOT need to be
installed via setuptools.  What I need to add is a way to run the testing via
a python script (right now I use the command line, hence the
requirement for the plugin to be really available to nose) that would
correctly load and configure everything needed.

I think under this scenario, it should be possible to load this plugin
from a private package (IPython.testing.plugin) instead of the nose
namespace, but that's the part I have yet to confirm with an actual
implementation.

Cheers,

f


Re: [Numpy-discussion] Monkeypatching vs nose plugin?

2008-07-16 Thread Alan McIntyre
On Wed, Jul 16, 2008 at 10:00 PM, Robert Kern [EMAIL PROTECTED] wrote:
 Is there a way to do it programmatically without requiring numpy to be
 installed with setuptools?

There is; you have to pass a list of plugin instances to the
constructor of TestProgram--all plugins that you might want to use,
even the builtin ones.  (As far as I know, that is.)

The monkeypatching approach was the first one that I could make to
work with the least amount of hassle, but it's definitely not the best
way.  I only had to monkeypatch a couple of things at first, but as I
figured out what the test framework needed to do, it just got worse,
so I was beginning to get uncomfortable with it myself. (Honest! :)
Once the NumPy and SciPy test suites are mostly fixed up to work under
the current rules, I'll go back and use a method that doesn't require
monkeypatching.  It shouldn't have any effect on the public interface
or the tests themselves.

Since we're discussing this sort of thing, there's something I've been
meaning to ask anyway: do we really need to allow end users to pass in
arbitrary extra arguments to nose (via the extra_argv in test())?
This seems to lock us in to having a mostly unobstructed path from
test() through to an uncustomized nose backend.


Re: [Numpy-discussion] Monkeypatching vs nose plugin?

2008-07-16 Thread Robert Kern
On Wed, Jul 16, 2008 at 22:21, Alan McIntyre [EMAIL PROTECTED] wrote:
 On Wed, Jul 16, 2008 at 10:00 PM, Robert Kern [EMAIL PROTECTED] wrote:
 Is there a way to do it programmatically without requiring numpy to be
 installed with setuptools?

 There is; you have to pass a list of plugin instances to the
 constructor of TestProgram--all plugins that you might want to use,
 even the builtin ones.  (As far as I know, that is.)

 The monkeypatching approach was the first one that I could make to
 work with the least amount of hassle, but it's definitely not the best
 way.  I only had to monkeypatch a couple of things at first, but as I
 figured out what the test framework needed to do, it just got worse,
 so I was beginning to get uncomfortable with it myself. (Honest! :)
 Once the NumPy and SciPy test suites are mostly fixed up to work under
 the current rules, I'll go back and use a method that doesn't require
 monkeypatching.  It shouldn't have any effect on the public interface
 or the tests themselves.

Sounds good.

 Since we're discussing this sort of thing, there's something I've been
 meaning to ask anyway: do we really need to allow end users to pass in
 arbitrary extra arguments to nose (via the extra_argv in test())?
 This seems to lock us in to having a mostly unobstructed path from
 test() through to an uncustomized nose backend.

At least with other projects, I occasionally want to do things like
run with --pdb-failure or --detailed-errors, etc. What exactly is
extra_argv blocking? My preference, actually, is for the nosetests
command to be able to run our tests correctly if at all possible.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] Monkeypatching vs nose plugin?

2008-07-16 Thread Alan McIntyre
On Wed, Jul 16, 2008 at 11:32 PM, Robert Kern [EMAIL PROTECTED] wrote:
 Since we're discussing this sort of thing, there's something I've been
 meaning to ask anyway: do we really need to allow end users to pass in
 arbitrary extra arguments to nose (via the extra_argv in test())?
 This seems to lock us in to having a mostly unobstructed path from
 test() through to an uncustomized nose backend.

 At least with other projects, I occasionally want to do things like
 run with --pdb-failure or --detailed-errors, etc. What exactly is
 extra_argv blocking?

It's not blocking anything; it just feels wrong for some reason.
Probably because I've been duck-punching nose and doctest to death to
make them act the way I want, and I can't fit all the
doctest/nose/unittest behavior in my head all at once to comfortably
say that any of those other options will still work correctly. ;)

It's probably just a pointless worry that will be moot after all the
monkeypatching is removed, since the underlying test libraries will be
in an unaltered state.

 My preference, actually, is for the nosetests
 command to be able to run our tests correctly if at all possible.

The unit tests will run just fine via nosetests, but the doctests
generally will not, because of the limited execution context
NoseTester now enforces on them.


Re: [Numpy-discussion] Monkeypatching vs nose plugin?

2008-07-16 Thread Robert Kern
On Wed, Jul 16, 2008 at 22:52, Alan McIntyre [EMAIL PROTECTED] wrote:
 On Wed, Jul 16, 2008 at 11:32 PM, Robert Kern [EMAIL PROTECTED] wrote:
 Since we're discussing this sort of thing, there's something I've been
 meaning to ask anyway: do we really need to allow end users to pass in
 arbitrary extra arguments to nose (via the extra_argv in test())?
 This seems to lock us in to having a mostly unobstructed path from
 test() through to an uncustomized nose backend.

 At least with other projects, I occasionally want to do things like
 run with --pdb-failure or --detailed-errors, etc. What exactly is
 extra_argv blocking?

 It's not blocking anything; it just feels wrong for some reason.
 Probably because I've been duck-punching nose and doctest to death to
 make them act the way I want, and I can't fit all the
 doctest/nose/unittest behavior in my head all at once to comfortably
 say that any of those other options will still work correctly. ;)

 It's probably just a pointless worry that will be moot after all the
 monkeypatching is removed, since the underlying test libraries will be
 in an unaltered state.

That's what I expect.

 My preference, actually, is for the nosetests
 command to be able to run our tests correctly if at all possible.

 The unit tests will run just fine via nosetests, but the doctests
 generally will not, because of the limited execution context
 NoseTester now enforces on them.

Personally, I could live with that. I don't see the extra options as
very useful for testing examples. However, I would prefer to leave the
capability there until a concrete practical problem arises.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


[Numpy-discussion] Python version support for NumPy 1.2

2008-07-16 Thread Alan McIntyre
Which versions of Python are to be officially supported by NumPy 1.2?
I've been working against 2.5 and testing against 2.4 occasionally,
but 2.3 still has some issues I need to address (or at least that was
the case the last time I checked).


Re: [Numpy-discussion] Python version support for NumPy 1.2

2008-07-16 Thread Charles R Harris
On Wed, Jul 16, 2008 at 10:37 PM, Alan McIntyre [EMAIL PROTECTED]
wrote:

 Which versions of Python are to be officially supported by NumPy 1.2?
 I've been working against 2.5 and testing against 2.4 occasionally,
 but 2.3 still has some issues I need to address (or at least that was
 the case the last time I checked).


You don't need to worry about 2.3. So 2.4 and 2.5. Python 2.6 is scheduled
for release in October and might cause some problems. I have 2.6 installed
and can maybe do some testing if you need it.

Chuck