Re: [Numpy-discussion] NumPy date/time types and the resolution concept

2008-07-17 Thread Francesc Alted
On Tuesday 15 July 2008, Pierre GM wrote:
 On Tuesday 15 July 2008 07:30:09 Francesc Alted wrote:
  Maybe it is only that.  But the term 'frequency' makes me think
  that you are expecting to have one entry (observation) in
  your array for each time 'tick' since the time origin.  OTOH, the term
  'resolution' doesn't carry this implication, and only states the
  precision of the timestamp.

 OK, now I get it.

  I don't know whether my impression is true or not, but after
  reading about your TimeSeries package, I'm still thinking that this
  expectation of one observation per 'tick' was what drove you to
  choose the 'frequency' name.

 Well, we do require one point per tick for some operations, such
 as conversion from one frequency to another, but only for TimeSeries.
 A Date Array doesn't have to be regularly spaced.

Ok, I see.  So, it is just the 'frequency' keyword that was misleading 
me.  Thanks for the clarification.

Cheers,

-- 
Francesc Alted


Re: [Numpy-discussion] NumPy date/time types and the resolution concept

2008-07-17 Thread Francesc Alted
On Thursday 17 July 2008, Matt Knox wrote:
  Maybe you are right, but by providing many resolutions we are
  trying to cope with the needs of people who use them a lot.
  In particular, we would like the authors of the timeseries
  scikit to find in this new dtype a fair replacement for their Date
  class (our proposal will not be as fully featured, but...).

 I think a basic date/time dtype for numpy would be a nice addition
 for general usage.

 Now as for the timeseries module using this dtype for most of the
 date-fu that goes on... that would be a bit more challenging. Unless
 all of the frequencies/resolutions currently supported in the
 timeseries scikit are supported with the new dtype, it is unlikely we
 would be able to replace our implementation. In particular, business
 day frequency (Monday - Friday) is of central importance for working
 with financial time series (which was my motivation for the original
 prototype of the module). But using plain integers for the DateArray
 class actually seems to work pretty well and I'm not sure a whole lot
 would be gained by using a date dtype.

Yeah, the business week.  We've pondered including this, but we are not 
sure how such a thing would differ from a calendar week in terms 
of a time unit.  I can certainly see its merits in the TimeSeries module, 
but I'm afraid it would be nonsense in the context of a general 
date/time dtype.

Now that I think about it, maybe we should revise our initial intention 
of adding a quarter too, because ISO 8601 does not offer a way to print 
it nicely.  We could also opt to extend the ISO 8601 representation in 
order to allow the following sort of string representation:

In [35]: array([70, 72, 19], 'datetime64[Q]')
Out[35]: array([1988Q2, 1988Q4, 1975Q3], dtype=datetime64[Q])

but I don't know whether this would unnecessarily complicate things (apart 
from representing a departure from standards :-/).

 That being said, if someone creates a fork of the timeseries module
 using a new date dtype at its core and it works amazingly well, then
 I'd probably get on board. I just think that may be difficult to do
 with a general purpose date dtype suitable for inclusion in the numpy
 core.

Yeah, I understand your reasons.  In fact, it is a pity that your 
requirements diverge on some key points from our proposal for the 
general dtypes.  I have had a look at how you have integrated recarrays 
in your TimeSeries module, and I'm sure that by adopting a date/time 
dtype you would be able to reduce the complexity (and improve the 
efficiency) of your code quite a bit.

Cheers,

-- 
Francesc Alted


Re: [Numpy-discussion] Ticket #837

2008-07-17 Thread Pauli Virtanen
Wed, 16 Jul 2008 15:43:00 -0600, Charles R Harris wrote:

 On Wed, Jul 16, 2008 at 3:05 PM, Pauli Virtanen [EMAIL PROTECTED] wrote:
 
 
 http://scipy.org/scipy/numpy/ticket/837

 Infinite loop in fromfile and fromstring with sep=' ' and malformed
 input.

 I committed a fix to trunk. Does this need a 1.1.1 backport?


 Yes, I think so. TIA,

Done, r5444.

Pauli



Re: [Numpy-discussion] Numpy Advanced Indexing Question

2008-07-17 Thread Stéfan van der Walt
Hi Robert

2008/7/17 Robert Kern [EMAIL PROTECTED]:
 In [42]: smallcube = cube[idx_i,idx_j,idx_k]

Fantastic -- a good way to warm up the brain-circuit in the morning!
Is there an easy-to-remember rule that predicts the output shape of
the operation above?  I'm trying to imagine how the output would
change if I altered the dimensions of idx_i or idx_j, but it's hard.

It looks like you can do all sorts of interesting things by
manipulating the indices.  For example, if I take

In [137]: x = np.arange(12).reshape((3,4))

I can produce either

In [138]: x[np.array([[0,1]]), np.array([[1, 2]])]
Out[138]: array([[1, 6]])

or

In [140]: x[np.array([[0],[1]]), np.array([[1], [2]])]
Out[140]:
array([[1],
       [6]])

and even

In [141]: x[np.array([[0],[1]]), np.array([[1, 2]])]
Out[141]:
array([[1, 2],
       [5, 6]])

or its transpose

In [143]: x[np.array([[0,1]]), np.array([[1], [2]])]
Out[143]:
array([[1, 5],
       [2, 6]])

Is it possible to separate the indexing in order to understand it
better?  My thinking was

cube_i = cube[idx_i,:,:].squeeze()
cube_j = cube_i[:,idx_j,:].squeeze()
cube_k = cube_j[:,:,idx_k].squeeze()

Not sure what would happen if the original array had single dimensions, though.

Back to the original problem:

In [127]: idx_i.shape
Out[127]: (10, 1, 1)

In [128]: idx_j.shape
Out[128]: (1, 15, 1)

In [129]: idx_k.shape
Out[129]: (10, 15, 7)

For the constant slice case, I guess idx_k could also have been (1, 1, 7)?

The construction of the cube could probably be done using only

cube.flat = np.arange(nk)

Fernando is right: this is good food for thought and excellent
cookbook material!

Regards
Stéfan


[Numpy-discussion] Buildbot failures since r5443

2008-07-17 Thread Pauli Virtanen
Hi,

Since r5443 the Sparc buildbots show a Bus error in the test phase:

http://buildbot.scipy.org/builders/Linux_SPARC_64_Debian/builds/102/steps/shell_2/logs/stdio

while the one on FreeBSD-64 passes.

-- 
Pauli Virtanen



[Numpy-discussion] Testing - heads up with #random

2008-07-17 Thread Fernando Perez
Hi Alan,

I was trying to reuse your #random checker for ipython but kept
running into problems.  Is it working for you in numpy in actual code?
 Because in the entire SVN tree I only see it mentioned here:

maqroll[numpy] grin #random
./numpy/testing/nosetester.py:
   43 : if "#random" in want:
   67 : # #random directive to allow executing a command
while ignoring its
  375 : # try the #random directive on the output line
  379 : <BadExample object at 0x084D05AC>  #random: may vary on your system
maqroll[numpy]

I'm asking because I suspect it is NOT working for numpy.  The reason
is some really nasty, silent exception trapping being done by nose.
In nose's loadTestsFromModule,  which you've overridden to include:

yield NumpyDocTestCase(test,
                       optionflags=optionflags,
                       checker=NumpyDoctestOutputChecker())

it's likely that this line can cause an exception (at least it was
doing it for me in ipython, because this class inherits from npd but
tries to directly call __init__ from doctest.DocTestCase).
Unfortunately, nose  will  silently swallow *any* exception there,
simply ignoring your tests and not even telling you what happened.
Very, very annoying.  You can see if you have an exception by doing
something like

try:
    dt = DocTestCase(test,
                     optionflags=optionflags,
                     checker=checker)
except:
    from IPython import ultraTB
    ultraTB.AutoFormattedTB()()
yield dt

to force a traceback printing.

Anyway, I mention this because I just wasted a good chunk of time
fighting this one for ipython, where I need the #random functionality.
 It seems it's not used in numpy yet, but I imagine it will soon, and
I figured I'd save you some time.

Cheers,

f


Re: [Numpy-discussion] Masked arrays and pickle/unpickle

2008-07-17 Thread Stéfan van der Walt
Hi Anthony

2008/7/16 Anthony Floyd [EMAIL PROTECTED]:
 Unfortunately, when we try to unpickle the data saved with Numpy 1.0.3
 in the new code using Numpy 1.1.0, it chokes because it can't import
 numpy.core.ma for the masked arrays.  A check of Numpy 1.1.0 shows that
 this is now numpy.ma.core.

The maskedarray functionality has been rewritten, and is now
`numpy.ma`.  For the time being, the old package is still available as
`numpy.oldnumeric.ma`.

Regards
Stéfan


Re: [Numpy-discussion] Numpy Advanced Indexing Question

2008-07-17 Thread Robert Kern
On Thu, Jul 17, 2008 at 03:16, Stéfan van der Walt [EMAIL PROTECTED] wrote:
 Hi Robert

 2008/7/17 Robert Kern [EMAIL PROTECTED]:
 In [42]: smallcube = cube[idx_i,idx_j,idx_k]

 Fantastic -- a good way to warm up the brain-circuit in the morning!
 Is there an easy-to-remember rule that predicts the output shape of
 the operation above?  I'm trying to imagine how the output would
 change if I altered the dimensions of idx_i or idx_j, but it's hard.

Like I said, they all get broadcasted against each other. The final
output is the shape of the broadcasted index arrays and takes values
found by iterating in parallel over those broadcasted index arrays.
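
A minimal sketch of that rule, with made-up index arrays:

import numpy as np

x = np.arange(12).reshape(4, 3)
idx_i = np.array([[0], [2]])       # shape (2, 1)
idx_j = np.array([[0, 1, 2]])      # shape (1, 3)

out = x[idx_i, idx_j]              # index arrays broadcast to shape (2, 3)
print(out.shape)                   # (2, 3)
# out[m, n] == x[bi[m, n], bj[m, n]], where bi and bj are the
# broadcasted (2, 3) versions of idx_i and idx_j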

 It looks like you can do all sorts of interesting things by
 manipulating the indices.  For example, if I take

 In [137]: x = np.arange(12).reshape((3,4))

 I can produce either

 In [138]: x[np.array([[0,1]]), np.array([[1, 2]])]
 Out[138]: array([[1, 6]])

 or

 In [140]: x[np.array([[0],[1]]), np.array([[1], [2]])]
 Out[140]:
 array([[1],
        [6]])

 and even

 In [141]: x[np.array([[0],[1]]), np.array([[1, 2]])]
 Out[141]:
 array([[1, 2],
        [5, 6]])

 or its transpose

 In [143]: x[np.array([[0,1]]), np.array([[1], [2]])]
 Out[143]:
 array([[1, 5],
        [2, 6]])

 Is it possible to separate the indexing in order to understand it
 better?  My thinking was

 cube_i = cube[idx_i,:,:].squeeze()
 cube_j = cube_i[:,idx_j,:].squeeze()
 cube_k = cube_j[:,:,idx_k].squeeze()

 Not sure what would happen if the original array had single dimensions, 
 though.

You'd have a problem.

So the way fancy indexing interacts with slices is a bit tricky, and
this is why we couldn't use the nicer syntax of cube[:,:,idx_k]. All
axes with fancy indices are collected together. Their index arrays are
broadcasted and iterated over. *For each iterate*, all of the slices
are collected, and those sliced axes are *added* to the output array.
If you had used fancy indexing on all of the axes, then the iterate
would be a scalar value pulled from the original array. If you mix
fancy indexing and slices, the iterate is the *array* formed by the
remaining slices.

So if idx_k is shaped (ni,nj,3), for example, cube[:,:,idx_k] will
have the shape (ni,nj,ni,nj,3). So
smallcube[:,:,i,j,k]==cube[:,:,idx_k[i,j,k]].

Is that clear, or am I obfuscating the subject more?
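
A quick sketch of that last claim, with made-up sizes:

import numpy as np

ni, nj, nk = 4, 5, 6
cube = np.arange(ni * nj * nk).reshape(ni, nj, nk)
idx_k = np.random.randint(0, nk, size=(ni, nj, 3))

sub = cube[:, :, idx_k]
print(sub.shape)        # (4, 5, 4, 5, 3): the two sliced axes, then the
                        # broadcast shape of idx_k
i, j, k = 2, 3, 1
print((sub[:, :, i, j, k] == cube[:, :, idx_k[i, j, k]]).all())  # True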

 Back to the original problem:

 In [127]: idx_i.shape
 Out[127]: (10, 1, 1)

 In [128]: idx_j.shape
 Out[128]: (1, 15, 1)

 In [129]: idx_k.shape
 Out[129]: (10, 15, 7)

 For the constant slice case, I guess idx_k could also have been (1, 1, 7)?

 The construction of the cube could probably be done using only

 cube.flat = np.arange(nk)

Yes, but only due to a weird feature of assigning to .flat. If the RHS
is too short, it gets repeated. Since the last axis is contiguous,
repeating arange(nk) happily coincides with the desired result of
cube[i,j] == arange(nk) for all i,j. This won't check the size,
though. If I give it cube.flat=np.arange(nk+1), it will repeat that
array just fine, although it doesn't line up.

cube[:,:,:]=np.arange(nk), on the other hand, broadcasts the RHS to the
shape of cube, then does the assignment. If the RHS cannot be
broadcasted to the right shape (in this case because it is not the
same length as the final axis of the LHS), an error is raised. I find
the reuse of the broadcasting concept more memorable and robust than
the (mostly) ad hoc use of plain repetition with .flat.
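
A minimal sketch of the difference, with made-up shapes:

import numpy as np

nk = 4
cube = np.zeros((2, 3, nk))

cube.flat = np.arange(nk)        # repeated with no size check; happens to
                                 # give cube[i, j] == arange(nk) for all i, j
cube.flat = np.arange(nk + 1)    # also accepted silently, but the repeats
                                 # no longer line up with the last axis

cube[:, :, :] = np.arange(nk)    # broadcast against cube's shape: checked
# cube[:, :, :] = np.arange(nk + 1)   # would raise ValueError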

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] Buildbot failures since r5443

2008-07-17 Thread Robert Kern
On Thu, Jul 17, 2008 at 03:51, Robert Kern [EMAIL PROTECTED] wrote:
 On Thu, Jul 17, 2008 at 03:19, Pauli Virtanen [EMAIL PROTECTED] wrote:
 Hi,

 Since r5443 the Sparc buildbots show a Bus error in the test phase:

http://buildbot.scipy.org/builders/Linux_SPARC_64_Debian/builds/102/steps/shell_2/logs/stdio

 while the one on FreeBSD-64 passes.

 In the test that's failing (test_filled_w_flexible_dtype), a
 structured array with a dtype of [('i',int), ('s','|S3'), ('f',float)]
 is created. I'm guessing that the final C double in that record is not
 getting aligned properly. On that architecture, I'm willing to bet
 that doubles need to be aligned on a 4-byte or 8-byte boundary.

I think this is the case. Changing the dtype to use |S8 fixes that
test. I get another bus error where the same dtype is used. I've
changed these over in r5445 and r5446. We'll see if the buildbots
pass, but I suspect they will. I'm not sure where the real bug is,
though. We'll need real access to such a machine to fix the problem, I
suspect. Volunteers?
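
For illustration, a sketch of the packed vs. aligned layouts (NumPy's
dtype takes an align flag that requests C-struct-like padding):

import numpy as np

packed = np.dtype([('i', np.int32), ('s', '|S3'), ('f', np.float64)])
print(packed.fields['f'][1])    # offset 7: an unaligned double, fatal on
                                # strict-alignment CPUs like SPARC

aligned = np.dtype([('i', np.int32), ('s', '|S3'), ('f', np.float64)],
                   align=True)
print(aligned.fields['f'][1])   # offset 8 on typical platforms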

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco


Re: [Numpy-discussion] Ticket #837

2008-07-17 Thread Charles R Harris
On Thu, Jul 17, 2008 at 2:14 AM, Pauli Virtanen [EMAIL PROTECTED] wrote:

 Wed, 16 Jul 2008 15:43:00 -0600, Charles R Harris wrote:

  On Wed, Jul 16, 2008 at 3:05 PM, Pauli Virtanen [EMAIL PROTECTED] wrote:
 
 
  http://scipy.org/scipy/numpy/ticket/837
 
  Infinite loop in fromfile and fromstring with sep=' ' and malformed
  input.
 
  I committed a fix to trunk. Does this need a 1.1.1 backport?
 
 
  Yes, I think so. TIA,

 Done, r5444.


Thanks,

Chuck


Re: [Numpy-discussion] Buildbot failures since r5443

2008-07-17 Thread Charles R Harris
On Thu, Jul 17, 2008 at 3:51 AM, Neil Muller [EMAIL PROTECTED] wrote:

 On Thu, Jul 17, 2008 at 10:51 AM, Robert Kern [EMAIL PROTECTED]
 wrote:
  On Thu, Jul 17, 2008 at 03:19, Pauli Virtanen [EMAIL PROTECTED] wrote:
  Hi,
 
  Since r5443 the Sparc buildbots show a Bus error in the test phase:
 
 http://buildbot.scipy.org/builders/Linux_SPARC_64_Debian/builds/102/steps/shell_2/logs/stdio
 
  while the one on FreeBSD-64 passes.
 
  In the test that's failing (test_filled_w_flexible_dtype), a
  structured array with a dtype of [('i',int), ('s','|S3'), ('f',float)]
  is created. I'm guessing that the final C double in that record is not
  getting aligned properly. On that architecture, I'm willing to bet
  that doubles need to be aligned on a 4-byte or 8-byte boundary.

 The Sparc ABI requires that doubles be aligned on a 4-byte boundary.
 However, gcc uses instructions which require 8-byte alignment of
 doubles on SPARC by default - there are a couple of flags which can be
 used to force 4-byte alignment, but that imposes a (usually
 significant) speed penalty. AFAIK, the Solaris compilers also require
 8-byte alignment for doubles.

  In [4]: from numpy import dtype
 
  In [5]: dtype([('i',int), ('s','|S3'), ('f',float)]).fields.items()
  Out[5]:
  [('i', (dtype('int32'), 0)),
   ('s', (dtype('|S3'), 4)),
   ('f', (dtype('float64'), 7))]


  >>> os.uname()[4]
  'sparc64'
  >>> from numpy import dtype
  >>> dtype([('i',int), ('s','|S3'), ('f',float)]).fields.items()
  [('i', (dtype('int32'), 0)), ('s', (dtype('|S3'), 4)), ('f',
  (dtype('float64'), 7))]


I wonder what descr->alignment is for doubles on SPARC.

Chuck


Re: [Numpy-discussion] Testing - heads up with #random

2008-07-17 Thread Alan McIntyre
On Thu, Jul 17, 2008 at 4:25 AM, Fernando Perez [EMAIL PROTECTED] wrote:
 I was trying to reuse your #random checker for ipython but kept
 running into problems.  Is it working for you in numpy in actual code?
  Because in the entire SVN tree I only see it mentioned here:

 maqroll[numpy] grin #random
 ./numpy/testing/nosetester.py:
   43 : if "#random" in want:
   67 : # #random directive to allow executing a command
 while ignoring its
  375 : # try the #random directive on the output line
  379 : <BadExample object at 0x084D05AC>  #random: may vary on your system
 maqroll[numpy]

The second example is a doctest for the feature; for me it fails if
#random is removed, and passes otherwise.
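
For reference, a sketch of what such a doctest looks like (the marker sits
on the expected-output line, which the custom output checker scans for;
`noisy` here is a made-up function):

def noisy():
    """
    >>> import random; random.random()
    0.7177186452706757  #random: actual value varies per run
    """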

 I'm asking because I suspect it is NOT working for numpy.  The reason
 is some really nasty, silent exception trapping being done by nose.
 In nose's loadTestsFromModule,  which you've overridden to include:

Ah, thanks; I recall seeing a comment somewhere about nose swallowing
exceptions in code under test, but I didn't know it would do things
like that.

 Unfortunately, nose  will  silently swallow *any* exception there,
 simply ignoring your tests and not even telling you what happened.
 Very, very annoying.  You can see if you have an exception by doing
 something like

I added that to my local nosetester.py, but it didn't turn up any
exceptions.  I'll keep it in my working copy so I'm not as likely to
miss some problem in the future.

 Anyway, I mention this because I just wasted a good chunk of time
 fighting this one for ipython, where I need the #random functionality.
  It seems it's not used in numpy yet, but I imagine it will soon, and
 I figured I'd save you some time.

Thanks :)


[Numpy-discussion] proposal: add a header and footer function to numpy.savetxt

2008-07-17 Thread Tim Michelsen
Hello,
sometimes scripts and programs create a lot of data output.
For the programmer, and also for others not involved in the scripting but in
the evaluation of the output, it would be very nice if the output files could
be prepended with a header describing what is written in the columns below,
and if a footer could be appended.

A good example has been developed by the scipy.scikits.timeseries developers:
http://scipy.org/scipy/scikits/wiki/TimeSeries#Parameters

These formatting flags are a convenient way to save additional meta
information; e.g. it is important to state the physical units of the saved data.

I would be happy if such a thing could be added to np.savetxt().

What is the current common way to save a header above the saved ascii array?
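
One workaround I'm aware of, sketched below, is that np.savetxt accepts an
already-open file object, so header and footer lines can be written around
the call by hand (file name and column labels here are made up):

import numpy as np

data = np.random.rand(5, 2)
f = open('output.txt', 'w')
f.write('# col 1: time [s]   col 2: power [W]\n')
np.savetxt(f, data, fmt='%12.6f')
f.write('# end of data\n')
f.close()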

Kind regards,
Timmie





Re: [Numpy-discussion] Masked arrays and pickle/unpickle

2008-07-17 Thread Stéfan van der Walt
2008/7/17 Anthony Floyd [EMAIL PROTECTED]:
 What I need to know is how I can trick pickle or Numpy to put the old class 
 into the new class.

If you have an example data-file, send it to me off-list and I'll
figure out what to do.  Maybe it is as simple as

np.core.ma = np.oldnumeric.ma

 It's extremely surprising to find a significant API change like this in a 
 stable package.

I don't know if renaming things in np.core counts as an API change.
Pickling is notoriously unreliable for storing arrays, which is why
Robert wrote `load` and `save`.  I hope that Pierre can get around to
implementing MaskedArray storage for 1.2.  Otherwise, you can already
save the array and mask separately.
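
A minimal sketch of that last suggestion:

import numpy as np
import numpy.ma as ma

x = ma.array(np.arange(5.0), mask=[0, 0, 1, 0, 1])
np.save('data.npy', x.filled(x.fill_value))   # the raw values
np.save('mask.npy', ma.getmaskarray(x))       # the mask, as booleans

restored = ma.array(np.load('data.npy'), mask=np.load('mask.npy'))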

Regards
Stéfan


Re: [Numpy-discussion] Masked arrays and pickle/unpickle

2008-07-17 Thread Pierre GM
On Thursday 17 July 2008 12:54:10 Stéfan van der Walt wrote:
 I don't know if renaming things in np.core counts as an API change.
 Pickling is notoriously unreliable for storing arrays, which is why
 Robert wrote `load` and `save`.  I hope that Pierre can get around to
 implementing MaskedArray storage for 1.2. 

Wow, I'll see what I can do, but no promises.

 Otherwise, you can already 
 save the array and mask separately.

Another possibility is to store the MaskedArray as a record array, with one 
field for the data and one field for the mask. 
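
Something along these lines, as a sketch:

import numpy as np
import numpy.ma as ma

x = ma.array([1.0, 2.0, 3.0], mask=[False, True, False])

# pack data and mask side by side in one structured array
rec = np.empty(x.shape, dtype=[('data', float), ('mask', bool)])
rec['data'] = x.filled(x.fill_value)
rec['mask'] = ma.getmaskarray(x)

# round-trip back to a masked array
y = ma.array(rec['data'], mask=rec['mask'])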


Re: [Numpy-discussion] Masked arrays and pickle/unpickle

2008-07-17 Thread Anthony Floyd

  
  What I need to know is how I can trick pickle or Numpy to 
 put the old class into the new class.
 
 If you have an example data-file, send it to me off-list and I'll
 figure out what to do.  Maybe it is as simple as
 
 np.core.ma = np.oldnumeric.ma

Yes, pretty much.  We've put ma.py into numpy.core where ma.py is
nothing more than:

import numpy.oldnumeric.ma as ma

class MaskedArray(ma.MaskedArray):
    pass

It works, but becomes a bit of a headache because we now have to
maintain our own numpy package so that all the developers get these
three lines when they install numpy.

Anyway, it lets us unpickle/unshelve the old data files with 1.1.0.  The
next step is to transition the old numpy.core.ma.MaskedArray classes on
the fly to numpy.ma.core.MaskedArray classes, so that when oldnumeric
gets deprecated we're not stuck.
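
One way to do that transition, sketched with the pure-Python pickle module
(RenamingUnpickler is a made-up helper; cPickle would need a different
hook):

import pickle

class RenamingUnpickler(pickle.Unpickler):
    # redirect the old pickled module path to the new one
    def find_class(self, module, name):
        if module == 'numpy.core.ma':
            module = 'numpy.ma.core'
        return pickle.Unpickler.find_class(self, module, name)

def load_legacy(path):
    f = open(path, 'rb')
    try:
        return RenamingUnpickler(f).load()
    finally:
        f.close()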

Thanks for the input,
Anthony.


[Numpy-discussion] Documentation updates for 1.1.1

2008-07-17 Thread Charles R Harris
Hi Stéfan,

I'm thinking it would nice to backport as many documentation updates to
1.1.1 as possible. It looks like the following steps should do the trick.

1) Make ptvirtan's changes for ufunc documentation.
2) Copy add_newdocs.py
3) Copy fromnumeric.py

Does that look reasonable to you?

Chuck


[Numpy-discussion] Numpy Trac malfunctions

2008-07-17 Thread Pauli Virtanen
Hi,

Trac seems to malfunction again with permission problems. At
http://projects.scipy.org/scipy/numpy/changeset/5447 there is

Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 387, in 
dispatch_request
    dispatcher.dispatch(req)
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 238, in 
dispatch
    resp = chosen_handler.process_request(req)
  File 
"/usr/lib/python2.4/site-packages/trac/versioncontrol/web_ui/changeset.py", 
line 188, in process_request
    prev = repos.get_node(new_path, new).get_previous()
  File "/usr/lib/python2.4/site-packages/trac/versioncontrol/cache.py", line 
120, in get_node
    return self.repos.get_node(path, rev)
  File "/usr/lib/python2.4/site-packages/trac/versioncontrol/svn_fs.py", line 
356, in get_node
    self.pool)
  File "/usr/lib/python2.4/site-packages/trac/versioncontrol/svn_fs.py", line 
533, in __init__
    self.root = fs.revision_root(fs_ptr, rev, self.pool())
SubversionException: ("Can't open file '/home/scipy/svn/numpy/db/revs/5447': 
Permission denied", 13)

-- 
Pauli Virtanen



Re: [Numpy-discussion] Numpy Trac malfunctions

2008-07-17 Thread Jarrod Millman
On Thu, Jul 17, 2008 at 11:34 AM, Pauli Virtanen [EMAIL PROTECTED] wrote:
 Trac seems to malfunction again with permission problems. At
 http://projects.scipy.org/scipy/numpy/changeset/5447 there is

Fixed.

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/


Re: [Numpy-discussion] Documentation updates for 1.1.1

2008-07-17 Thread Pauli Virtanen
Thu, 17 Jul 2008 12:00:28 -0600, Charles R Harris wrote:

 Hi Stéfan,
 
 I'm thinking it would be nice to backport as many documentation updates to
 1.1.1 as possible. It looks like the following steps should do the
 trick.
 
 1) Make ptvirtan's changes for ufunc documentation. 
 2) Copy add_newdocs.py
 3) Copy fromnumeric.py
 
 Does that look reasonable to you?

I'm not sure if 1) is needed for 1.1.1. It's only needed when we start 
putting back the improved ufunc documentation to SVN. The documentation 
has not yet made its way from the doc wiki to SVN trunk, so there's 
nothing to backport at the moment.

-- 
Pauli Virtanen



[Numpy-discussion] Removing some warnings from numpy.i

2008-07-17 Thread Matthieu Brucher
Hi,

I've enclosed a patch for numpy.i (against the trunk). Its goal is to
use const char* instead of char* in some functions (pytype_string and
typecode_string). The char* usage raises some warnings with GCC 4.2.3
(and it is indeed not type-safe, IMHO).

Matthieu
--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher





Re: [Numpy-discussion] Masked arrays and pickle/unpickle

2008-07-17 Thread Stéfan van der Walt
Hi Pierre,

2008/7/17 Pierre GM [EMAIL PROTECTED]:
 Otherwise, you can already
 save the array and mask separately.

 Another possibility is to store the MaskedArray as a record array, with one
 field for the data and one field for the mask.

What about the other parameters, such as fill value?  Do we know its
type beforehand?  If we can come up with a robust way to convert a
MaskedArray into (one or more) structured array(s), that would be
perfect for storage purposes.  Also, you wouldn't need to be
volunteered to implement it :)

Further, could we rename numpy.ma.core to numpy.ma._core?  I think we
should make it clear that users should not import from core directly.

Cheers
Stéfan


Re: [Numpy-discussion] Documentation updates for 1.1.1

2008-07-17 Thread Stéfan van der Walt
2008/7/17 Charles R Harris [EMAIL PROTECTED]:
 I'm thinking it would be nice to backport as many documentation updates to
 1.1.1 as possible. It looks like the following steps should do the trick.

 1) Make ptvirtan's changes for ufunc documentation.
 2) Copy add_newdocs.py
 3) Copy fromnumeric.py

 Does that look reasonable to you?

I don't mind, but did we make changes to those files?  As Pauli
mentioned, we haven't yet merged back the edited docstrings.   They
haven't been reviewed, but are probably better than what we currently
have; would you like me to do a merge?

Regards
Stéfan


Re: [Numpy-discussion] Documentation updates for 1.1.1

2008-07-17 Thread Jarrod Millman
On Thu, Jul 17, 2008 at 1:34 PM, Stéfan van der Walt [EMAIL PROTECTED] wrote:
 I don't mind, but did we make changes to those files?  As Pauli
 mentioned, we haven't yet merged back the edited docstrings.   They
 haven't been reviewed, but are probably better than what we currently
 have; would you like me to do a merge?

Personally, I have a slight preference for just focusing on 1.2 for
the documentation work, but would be happy if some updates made it to
1.1.1.  My main concern is that I don't want to divert your attention.
 I would just merge the reviewed portions.  Also the release candidate
is scheduled for Sunday, so you would need to make whatever merges you
have in the next two or three days.

Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/


Re: [Numpy-discussion] Masked arrays and pickle/unpickle

2008-07-17 Thread Anthony Floyd
 Further, could we rename numpy.ma.core to numpy.ma._core?  I think we
 should make it clear that users should not import from core directly.

Just to add a bit of noise here, it's not that we were importing
directly from .core, it's that pickle was telling us that the actual
class associated with the masked array was numpy.ma.core.MaskedArray
(erm, well, numpy.core.ma.MaskedArray in the older version).  

Changing the location *again* will break it again, in the exact same
way.

A


Re: [Numpy-discussion] Ticket review: #848, leak in PyArray_DescrFromType

2008-07-17 Thread Charles R Harris
On Tue, Jul 15, 2008 at 9:28 AM, Michael Abbott [EMAIL PROTECTED]
wrote:

 On Tue, 15 Jul 2008, Michael Abbott wrote:
  Only half of my patch for this bug has gone into trunk, and without the
  rest of my patch there remains a leak.

 I think I might need to explain a little more about the reason for this
 patch, because obviously the bug it fixes was missed the last time I
 posted on this bug.

 So here is the missing part of the patch:

  --- numpy/core/src/scalartypes.inc.src  (revision 5411)
  +++ numpy/core/src/scalartypes.inc.src  (working copy)
  @@ -1925,19 +1925,30 @@
           goto finish;
       }
  
  +    Py_XINCREF(typecode);
       arr = PyArray_FromAny(obj, typecode, 0, 0, FORCECAST, NULL);
  -    if ((arr==NULL) || (PyArray_NDIM(arr) > 0)) return arr;
  +    if ((arr==NULL) || (PyArray_NDIM(arr) > 0)) {
  +        Py_XDECREF(typecode);
  +        return arr;
  +    }
       robj = PyArray_Return((PyArrayObject *)arr);
  
   finish:
  -    if ((robj==NULL) || (robj->ob_type == type)) return robj;
  +    if ((robj==NULL) || (robj->ob_type == type)) {
  +        Py_XDECREF(typecode);
  +        return robj;
  +    }
       /* Need to allocate new type and copy data-area over */
       if (type->tp_itemsize) {
           itemsize = PyString_GET_SIZE(robj);
       }
       else itemsize = 0;
       obj = type->tp_alloc(type, itemsize);
  -    if (obj == NULL) {Py_DECREF(robj); return NULL;}
  +    if (obj == NULL) {
  +        Py_XDECREF(typecode);
  +        Py_DECREF(robj);
  +        return NULL;
  +    }
       if (typecode==NULL)
           typecode = PyArray_DescrFromType(PyArray_@TYPE@);
       dest = scalar_value(obj, typecode);

 On the face of it it might appear that all the DECREFs are cancelling out
 the first INCREF, but not so.  Let's see two more lines of context:

   src = scalar_value(robj, typecode);
   Py_DECREF(typecode);

 Ahah.  That DECREF balances the original PyArray_DescrFromType, or maybe
 the later call ... and of course this has to happen on *ALL* return paths.
 If we now take a closer look at the patch we can see that it's doing two
 separate things:

 1. There's an extra Py_XINCREF to balance the ref count lost to
 PyArray_FromAny and ensure that typecode survives long enough;

 2. Every early return path has an extra Py_XDECREF to balance the creation
 of typecode.

 I rest my case for this patch.


I still haven't convinced myself of this. By the time we hit finish, robj is
NULL or holds a reference to typecode and the NULL case is taken care of up
front. Later on, the reference to typecode might be decremented, perhaps
leaving robj crippled, but in that case robj itself is marked for deletion
upon exit. If the garbage collector can handle zero reference counts I think
we are alright. I admit I haven't quite followed all the subroutines and
macros, which descend into the hazy depths without the slightest bit of
documentation, but at this point I'm inclined to leave things alone unless
you have a test that shows a leak from this source.
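
For what it's worth, a sketch of the kind of check that could show such a
leak from Python (assuming the float64 descriptor singleton is the
typecode being created and dropped here):

import sys
import numpy as np

descr = np.dtype(np.float64)       # the singleton descriptor
before = sys.getrefcount(descr)
for _ in range(100000):
    np.float64(1.0)                # exercises the scalar constructor
after = sys.getrefcount(descr)
print(before, after)   # a steady climb here would demonstrate the leak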

Chuck




Re: [Numpy-discussion] Documentation updates for 1.1.1

2008-07-17 Thread Charles R Harris
On Thu, Jul 17, 2008 at 2:34 PM, Stéfan van der Walt [EMAIL PROTECTED]
wrote:

 2008/7/17 Charles R Harris [EMAIL PROTECTED]:
  I'm thinking it would be nice to backport as many documentation updates to
  1.1.1 as possible. It looks like the following steps should do the trick.
 
  1) Make ptvirtan's changes for ufunc documentation.
  2) Copy add_newdocs.py
  3) Copy fromnumeric.py
 
  Does that look reasonable to you?

 I don't mind, but did we make changes to those files?  As Pauli
 mentioned, we haven't yet merged back the edited docstrings.   They
 haven't been reviewed, but are probably better than what we currently
 have; would you like me to do a merge?


My intent is to simply copy over files from the trunk. If you don't think
things are ready yet, then let's wait and do a 1.1.2 after the documentation
merge happens.

Chuck


Re: [Numpy-discussion] arccosh for complex numbers, goofy choice of branch

2008-07-17 Thread Pauli Virtanen
Mon, 17 Mar 2008 08:07:38 -0600, Charles R Harris wrote:
[clip]
 OK, that does it. I'm going to change its behavior.

The problem with bad arccosh branch cuts is still present:

>>> import numpy as np
>>> np.__version__
'1.2.0.dev5436.e45a7627a39d'
>>> np.arccosh(-1e-9 + 0.1j)
(-0.099834078899207618-1.5707963277899337j)
>>> np.arccosh(1e-9 + 0.1j)
(0.099834078899207576+1.5707963257998594j)
>>> np.arccosh(-1e-9 - 0.1j)
(-0.099834078899207618+1.5707963277899337j)
>>> np.arccosh(1e-9 - 0.1j)
(0.099834078899207576-1.5707963257998594j)

Ticket #854. http://scipy.org/scipy/numpy/ticket/854

I'll write up some tests for all the functions with branch cuts to verify 
that the cuts and their continuity are correct. (Where "correct" bears 
some resemblance to the ISO C standard, I think...)

-- 
Pauli Virtanen



Re: [Numpy-discussion] Masked arrays and pickle/unpickle

2008-07-17 Thread Pierre GM
On Thursday 17 July 2008 16:29:48 Stéfan van der Walt wrote:
  Another possibility is to store the MaskedArray as a record array, with
  one field for the data and one field for the mask.

 What about the other parameters, such as fill value? 

Dang, forgot about that. Having a dictionary of options would be cool, but we 
can't store it inside a regular ndarray. If we write to a file, we may want 
to write a header first that would store all the metadata we need.

 If we can come up with a robust way to convert a
 MaskedArray into (one or more) structured array(s), that would be
 perfect for storage purposes.  Also, you wouldn't need to be
 volunteered to implement it :)

A few weeks ago, I played a bit with interfacing TimeSeries and pytables: the 
idea is to transform the series (basically a MaskedArray) into a record 
array, and add the parameters such as fill_value in the metadata section of 
the table. Works great; we may want to follow the same pattern. Moreover, 
HDF5 is portable.

 Further, could we rename numpy.ma.core to numpy.ma._core?  I think we
 should make it clear that users should not import from core directly.

Anthony raised a very good point against that, and I agree. There's no need 
for that.


Anthony, just making a symlink from numpy/oldnumeric/ma.py to numpy/core/ma.py 
works to unpickle your array. I agree it's still impractical...


Re: [Numpy-discussion] arccosh for complex numbers, goofy choice of branch

2008-07-17 Thread Charles R Harris
On Thu, Jul 17, 2008 at 3:56 PM, Pauli Virtanen [EMAIL PROTECTED] wrote:

 Mon, 17 Mar 2008 08:07:38 -0600, Charles R Harris wrote:
 [clip]
  OK, that does it. I'm going to change its behavior.

 The problem with bad arccosh branch cuts is still present:

 >>> import numpy as np
 >>> np.__version__
 '1.2.0.dev5436.e45a7627a39d'
 >>> np.arccosh(-1e-9 + 0.1j)
 (-0.099834078899207618-1.5707963277899337j)
 >>> np.arccosh(1e-9 + 0.1j)
 (0.099834078899207576+1.5707963257998594j)
 >>> np.arccosh(-1e-9 - 0.1j)
 (-0.099834078899207618+1.5707963277899337j)
 >>> np.arccosh(1e-9 - 0.1j)
 (0.099834078899207576-1.5707963257998594j)

 Ticket #854. http://scipy.org/scipy/numpy/ticket/854

 I'll write up some tests for all the functions with branch cuts to verify
 that the cuts and their continuity are correct. (Where "correct" bears
 some resemblance to the ISO C standard, I think...)


Hmm,

The problem here is arccosh = log(x + sqrt(x**2 - 1))

When the given numbers are plugged into x**2 - 1, one result lies above
the negative real axis and the other below, and the branch cut [-inf, 0]
of sqrt introduces the discontinuity. Maybe sqrt(x - 1)*sqrt(x + 1) will
fix that. I do think the branch cut should be part of the documentation
of all the complex functions. I wonder what arccos does here?

Ah, here is a reference: http://www.cs.umbc.edu/help/theory/identities.txt
Note

arccosh z = ln(z + sqrt(z-1) sqrt(z+1))    not    ln(z + sqrt(z**2-1))


So I guess that is the fix.
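
A quick numerical check of the two formulas at Pauli's test points:

import numpy as np

z = np.array([-1e-9 + 0.1j, 1e-9 + 0.1j])

naive = np.log(z + np.sqrt(z**2 - 1))     # z**2 - 1 straddles sqrt's cut
fixed = np.log(z + np.sqrt(z - 1) * np.sqrt(z + 1))

print(naive.imag)   # jumps across Re(z) = 0
print(fixed.imag)   # continuous, as the C99-style cuts require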

Chuck


Re: [Numpy-discussion] Masked arrays and pickle/unpickle

2008-07-17 Thread Mark Miller
On Thu, Jul 17, 2008 at 3:18 PM, Pierre GM [EMAIL PROTECTED] wrote:


 Dang, forgot about that. Having a dictionary of options would be cool, but
 we
 can't store it inside a regular ndarray. If we write to a file, we may want
 to write a header first that would store all the metadata we need.


Not to derail the discussion, but I am a frequent user of Python's shelve
function to archive large numpy arrays and associated sets of parameters
into one very handy and accessible file.  If numpy developers are
discouraging use of this type of thing (shelve relies on pickle, is this
correct?), then it would be super handy to be able to also include other
data when saving arrays using numpy's intrinsic functions.

Just a thought.