On May 25, 2009, at 8:06 PM, josef.p...@gmail.com wrote:
The problem is, if the functions are enhanced in the current numpy,
then scikits.timeseries is not (yet) available.
Mmh, I'm not following you here...
The original question was how we can enhance numpy.financial, eg.
np.irr
So we
On May 22, 2009, at 12:31 PM, Andrea Gavana wrote:
Hi All,
this should be a very easy question but I am trying to make a
script run as fast as possible, so please bear with me if the solution
is easy and I just overlooked it.
I have a list of integers, like this one:
indices =
On May 20, 2009, at 11:04 AM, Nils Wagner wrote:
Hi all,
Is the value of skiprows in loadtxt restricted to values
in [0-10] ?
It doesn't work for skiprows=11.
Please post an example
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
On May 20, 2009, at 9:57 PM, Jochen Schroeder wrote:
Sorry maybe I phrased my question wrongly. I don't want to change
the code (This was just a short example).
I just want to know why it is failing on his system and what he
can do so that a.view(dtype='...') is working. I suspected it was
All,
I just committed (r6994) some modifications to numpy.ma.getdata (Eric
Firing's patch) and to the ufunc wrappers that were too slow with
large arrays. We're roughly 3 times faster than we used to be, but still
slower than the equivalent classic ufuncs (no surprise here).
Here's the catch:
On May 13, 2009, at 7:36 PM, Matt Knox wrote:
Here's the catch: it's basically cheating. I got rid of the pre-
processing (where a mask was calculated depending on the domain and
the input set to a filling value depending on this mask, before the
actual computation). Instead, I force
On May 13, 2009, at 8:07 PM, Matt Knox wrote:
hmm. While this doesn't affect me personally... I wonder if everyone
is aware of
this. Importing modules generally shouldn't have side effects either
I would
think. Has this always been the case for the masked array module?
Well, can't
On May 11, 2009, at 5:44 PM, Wei Su wrote:
Coming from SAS and R, this is probably the first thing I want to do
now that I can convert my data into record arrays. But I could not
find any clues after googling for a while. Any hint or suggestions
will be great!
That depends what you
On May 11, 2009, at 6:18 PM, Wei Su wrote:
Thanks for the reply. I can now actually turn a big list into a
record array. My question is actually how to join related record
arrays in Python. This is done in SAS by MERGE and PROC SQL and by
merge() in R. But I have no idea how to do it
On May 11, 2009, at 6:36 PM, Skipper Seabold wrote:
On Mon, May 11, 2009 at 6:18 PM, Wei Su taste_o...@yahoo.com wrote:
Hi, Pierre:
Thanks for the reply. I can now actually turn a big list into a
record
array. My question is actually how to join related record arrays in
Python.
Short answer to the subject: Oh yes.
Basically, MaskedArray in its current implementation is more of a
convenience class than anything. Most of the functions manipulating
masked arrays create a lot of temporaries. When performance is needed,
I must advise you to work directly on the data
On May 9, 2009, at 8:17 PM, Eric Firing wrote:
Eric Firing wrote:
A part of the slowdown is what looks to me like unnecessary copying
in _MaskedBinaryOperation.__call__. It is using getdata, which
applies numpy.array to its input, forcing a copy. I think the copy
is actually
On May 5, 2009, at 2:42 PM, Wei Su wrote:
Hi, Everyone:
This is what I need to do everyday. Now I have to first save data
as .csv file and the use csv2rec() to read the data as a record
array. Anybody can give me some advice on how to directly get the
data as record arrays? It will
On Apr 25, 2009, at 5:36 AM, Gael Varoquaux wrote:
On Fri, Apr 24, 2009 at 10:11:07PM -0400, Chris Colbert wrote:
Like the subject says, is there a way to register numpy with
synaptic
after building numpy from source?
Don't play with the system's packaging system unless you really
On Apr 22, 2009, at 5:21 PM, Gökhan SEVER wrote:
Hello,
Could you please give me some hints about how to mask an array using
another arrays like in the following example.
What about that ?
numpy.logical_or.reduce([a==i for i in b])
On Apr 22, 2009, at 9:03 PM, josef.p...@gmail.com wrote:
I prefer broadcasting to list comprehension in numpy:
Pretty neat! I still don't have the broadcasting reflex. Now, any idea
which one is more efficient in terms of speed? in terms of temporaries?
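Neither suggestion appears in full above, so here is a self-contained comparison (the array values are invented for illustration). The broadcasting version builds one temporary of shape (len(a), len(b)); the list comprehension builds len(b) boolean temporaries of a's shape:

```python
import numpy as np

a = np.array([0, 1, 2, 3, 4, 5])
b = np.array([1, 3, 5])  # values of a that should be masked

# List-comprehension version from the thread:
mask1 = np.logical_or.reduce([a == i for i in b])

# Broadcasting version: one (len(a), len(b)) comparison, collapsed along b:
mask2 = (a[:, np.newaxis] == b).any(axis=1)
```

Both give the same boolean mask; which is faster depends on the sizes of a and b.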
On Apr 8, 2009, at 5:57 PM, Elaine Angelino wrote:
hi there --
for a numpy.recarray, is it possible to rename the fields in the
dtype?
Take a new view:
a = np.array([(1,1)],dtype=[('a',int),('b',int)])
b = a.view([('A', int), ('b', int)])
or:
use numpy.lib.recfunctions.rename_fields
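Both options, as a runnable sketch (rename_fields takes an {old: new} mapping; fields not listed keep their names):

```python
import numpy as np
from numpy.lib import recfunctions

a = np.array([(1, 1)], dtype=[('a', int), ('b', int)])

# Option 1: a view with a new dtype -- no data is copied.
b = a.view([('A', int), ('b', int)])

# Option 2: rename_fields with an {old: new} mapping.
c = recfunctions.rename_fields(a, {'a': 'A'})
```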
On Apr 8, 2009, at 6:18 PM, Stéfan van der Walt wrote:
2009/4/9 Pierre GM pgmdevl...@gmail.com:
for a numpy.recarray, is it possible to rename the fields in the
dtype?
Take a new view:
a = np.array([(1,1)],dtype=[('a',int),('b',int)])
b = a.view([('A', int), ('b', int)])
or:
use
All,
I'm trying to build a relatively complicated symmetric matrix. I can
build the upper-right block without problem. What is the fastest way to get
the LL corner from the UR one ?
Thanks a lot in advance for any idea.
P.
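One common answer (a sketch, not from the thread): if U holds the upper-right triangle with zeros below the diagonal, adding the transpose of its strictly-upper part fills the lower-left corner:

```python
import numpy as np

n = 4
# Assume U is upper-triangular (zeros below the diagonal):
U = np.triu(np.arange(1.0, n * n + 1).reshape(n, n))

# Mirror the strictly-upper part into the lower-left corner:
M = U + np.triu(U, k=1).T
```

Using k=1 in the mirrored term avoids doubling the diagonal.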
On Apr 1, 2009, at 11:57 AM, David Cournapeau wrote:
preparing documents... done
Exception occurred: 0%] contents
File /usr/lib64/python2.6/site-packages/docutils/nodes.py, line
471, in __getitem__
return self.attributes[key]
KeyError: 'entries'
The full traceback has been saved in
Ciao Marty,
Great idea indeed ! However, I'd really like to have an easy way to
plug the suggested dtype w/ the existing Date class from the
scikits.timeseries package (Date is implemented in C, you can find the
sources through the link on http://pytseries.sourceforge.net). I agree
that
Kevin,
Sorry for the delayed answer.
(a) Is MA intended to be subclassed?
Yes, that's actually the reason why the class was rewritten, to
simplify subclassing. As Josef suggested, you can check the
scikits.timeseries package that makes an extensive use of MaskedArray
as baseclass.
(b)
David,
I also started to update the release notes:
http://projects.scipy.org/scipy/numpy/browser/trunk/doc/release/1.3.0-notes.rst
I get a 404.
Anyhow, on the ma side:
* structured arrays should now be fully supported by MaskedArray
(r6463, r6324, r6305, r6300, r6294...)
* Minor bug fixes
As a follow-up to Robert's answer:
r[r.field1 == 1].field2 = 1
doesn't work, but
r.field2[r.field1==1] = 1
does.
So far, so good.
Now I want to change the value of field2 for those same elements:
In [128]: r[where(r.field1 == 1.)].field2 = 1
Ok, so now the values of field
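The copy-versus-view distinction behind this can be sketched as follows (field values invented for illustration): fancy indexing on the record array returns a copy, so attribute assignment on it is silently lost, whereas indexing the field first assigns through a view:

```python
import numpy as np

r = np.rec.fromrecords([(1., 0.), (2., 0.), (1., 0.)],
                       names='field1,field2')

# r[mask] is a COPY; setting .field2 on it modifies a throwaway array:
r[r.field1 == 1].field2 = 1
# r is unchanged at this point.

# r.field2 is a view on the data; masked assignment sticks:
r.field2[r.field1 == 1] = 1
```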
On Feb 22, 2009, at 6:21 PM, Eric Firing wrote:
Darren Dale wrote:
Does anyone know why __array_wrap__ is not called for subclasses
during
arithmetic operations where an iterable like a list or tuple
appears to
the right of the subclass? When I do mine*[1,2,3], array_wrap is
not
On Feb 12, 2009, at 8:22 PM, A B wrote:
Hi,
Are there any routines to fill in the gaps in an array. The simplest
would be by carrying the last known observation forward.
0,0,10,8,0,0,7,0
0,0,10,8,8,8,7,7
Or by somehow interpolating the missing values based on the previous
and next known
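There is no ready-made routine for this, but the carry-forward case can be vectorized with np.maximum.accumulate over the indices of the valid entries (assuming, as in the example above, that 0 marks a gap):

```python
import numpy as np

a = np.array([0, 0, 10, 8, 0, 0, 7, 0])
valid = (a != 0)  # assumption: 0 marks a missing value

# For each position, the index of the most recent valid entry:
idx = np.where(valid, np.arange(a.size), 0)
np.maximum.accumulate(idx, out=idx)

filled = a[idx]
```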
On Feb 11, 2009, at 11:38 PM, Ryan May wrote:
Pierre,
I noticed that using dtype=None with a heterogeneous set of data,
trying to use unpack=True to get the columns into separate arrays
(instead of a structured array) doesn't work. I've attached a patch
that, in the case of
On Feb 7, 2009, at 8:03 AM, Nils Wagner wrote:
==
ERROR: Test flat on masked_matrices
--
Traceback (most recent call last):
File
On Feb 6, 2009, at 4:25 PM, Darren Dale wrote:
I've been looking at how ma implements things like multiply() and
MaskedArray.__mul__. I'm surprised that MaskedArray.__mul__ actually
calls ma.multiply() rather than calling
super(MaskedArray,self).__mul__().
There's some under-the-hood
On Feb 5, 2009, at 6:08 PM, Travis E. Oliphant wrote:
Hi all,
I've been fairly quiet on this list for awhile due to work and family
schedule, but I think about how things can improve regularly. One
feature that's been requested by a few people is the ability to select
multiple fields
On Feb 4, 2009, at 11:00 AM, josef.p...@gmail.com wrote:
I just had a hard to find bug in my program. poly1d treats numpy
scalars differently than python numbers when left or right
multiplication is used.
Essentially, if the first term is the numpy scalar, multiplied by a
polynomial, then
On Feb 4, 2009, at 3:56 PM, Robert Kern wrote:
No, rewrite the test to not use external libraries, please. Test the
functionality without needing dateutils.
OK then, should be fixed in r6340.
All,
I'm a tad puzzled by the following behavior (I'm trying to correct a
bug in genfromtxt):
I'm creating an empty structured ndarray, using np.object as dtype.
a = np.empty(1,dtype=[('',np.object)])
array([(None,)],
dtype=[('f0', '|O4')])
Now, I'd like to rename the field:
, Brent Pedersen wrote:
On Wed, Feb 4, 2009 at 9:36 AM, Pierre GM pgmdevl...@gmail.com
wrote:
On Feb 4, 2009, at 12:09 PM, Brent Pedersen wrote:
hi, i am using genfromtxt, with a dtype like this:
[('seqid', '|S24'), ('source', '|S16'), ('type', '|S16'), ('start',
'i4'), ('end', 'i4
All,
When can we expect numpy 1.3 to be released ?
Sincerely,
P.
On Feb 3, 2009, at 11:24 AM, Ryan May wrote:
Pierre,
Should the following work?
import numpy as np
from StringIO import StringIO
converter = {'date': lambda s: datetime.strptime(s, '%Y-%m-%d %H:%M:%SZ')}
data = np.ndfromtxt(StringIO('2009-02-03 12:00:00Z,72214.0'),
delimiter=',',
On Feb 3, 2009, at 4:00 PM, Ryan May wrote:
Well, I guess I hit send too soon. Here's one easy solution
(consistent with
what you did for __radd__), change the code for __rmul__ to do:
return multiply(self, other)
instead of:
return multiply(other, self)
That fixes it
On Feb 1, 2009, at 6:32 PM, Darren Dale wrote:
Is there an analog to __array_wrap__ for preprocessing arrays on
their way *into* a ufunc? For example, it would be nice if one could
do something like:
numpy.sin([1,2,3]*arcseconds)
where we have the opportunity to inspect the context,
On Jan 30, 2009, at 1:11 PM, Raik Gruenberg wrote:
Mhm, I got this far. But how do I get from here to a single index
array
[ 4, 5, 6, ... 10, 0, 1, 2, 3, 11, 12, 13, 14 ] ?
np.concatenate([np.arange(aa,bb) for (aa,bb) in zip(a,b)])
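With start/end pairs matching the quoted output (the values of a and b are assumed from the example), the one-liner gives exactly the requested index array:

```python
import numpy as np

a = np.array([4, 0, 11])   # assumed slice starts
b = np.array([11, 4, 15])  # assumed slice ends (exclusive)

idx = np.concatenate([np.arange(aa, bb) for (aa, bb) in zip(a, b)])
```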
On Jan 30, 2009, at 1:53 PM, Raik Gruenberg wrote:
Pierre GM wrote:
On Jan 30, 2009, at 1:11 PM, Raik Gruenberg wrote:
Mhm, I got this far. But how do I get from here to a single index
array
[ 4, 5, 6, ... 10, 0, 1, 2, 3, 11, 12, 13, 14 ] ?
np.concatenate([np.arange(aa,bb) for (aa,bb
On Jan 29, 2009, at 3:17 AM, Pauli Virtanen wrote:
Thu, 29 Jan 2009 00:28:46 -0500, Pierre GM wrote:
Is there an objects.inv lying around for the numpy reference guide,
or
should I start one from scratch ?
It's automatically generated by Sphinx, and can be found at
http
On Jan 28, 2009, at 3:56 PM, Timmie wrote:
### this is the loop I would like to optimize:
### looping over arrays is considered inefficient.
### what could be a better way?
hours_array = dates_array.copy()
for i in range(0, dates_array.size):
hours_array[i] = dates_array[i].hour
You
On Jan 28, 2009, at 5:43 PM, Timmie wrote:
You could try:
np.fromiter((_.hour for _ in dates_li), dtype=np.int)
or
np.array([_.hour for _ in dates_li], dtype=np.int)
I used dates_li only for the preparation of example data.
So let's suppose I have the array dates_array returned from a
a
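A runnable sketch of the two suggestions (np.int in the quoted reply is today just int; the sample dates are invented):

```python
import numpy as np
from datetime import datetime

dates_li = [datetime(2009, 1, 28, h) for h in (0, 6, 12, 18)]

# Generator + fromiter avoids materializing an intermediate list:
hours = np.fromiter((d.hour for d in dates_li), dtype=int)

# Equivalent, via a list comprehension:
hours2 = np.array([d.hour for d in dates_li], dtype=int)
```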
[Some background: we're talking about numpy.lib.recfunctions, a set of
functions to manipulate structured arrays]
Ryan,
If the two files have the same structure, you can use that fact and
specify the dtype of the output directly with the dtype parameter of
mafromtxt. That way, you're sure
On Jan 27, 2009, at 4:23 PM, Ryan May wrote:
I definitely wouldn't advocate magic by default, but I think it
would be nice to
be able to get the functionality if one wanted to.
OK. Put on the TODO list.
There is one problem I
noticed, however. I found common_type and lib.mintypecode,
JH,
Thx for the links, but I'm afraid I need something more basic than
that. For example, I'm referring to Python as:
van Rossum, G. and Drake, F. L. (eds), 2006. Python Reference Manual,
Python Software Foundation,. http://docs.python.org/ref/ref.html.
I could indeed use
On Jan 24, 2009, at 6:23 PM, Ryan May wrote:
Ok, thanks. I've dug a little further, and it seems like the
problem is that a
column of all missing values ends up as a column of all None's.
When you create
a (masked) array from a list of None's, you end up with an object
array. On
All,
What is the most up-to-date way to cite Numpy and Scipy in an academic
journal ?
Thanks a lot in advance
P.
I believe this is what you're looking for:
http://www.scipy.org/Citing_SciPy
On 25-Jan-09, at 6:45 PM, Pierre GM wrote:
All,
What is the most up-to-date way to cite Numpy and Scipy in an
academic
journal ?
Thanks a lot in advance
P
Ryan,
Thanks for reporting. An idea would be to force the dtype of the
masked column to the largest dtype of the other columns (in your
example, that would be int). I'll try to see how easily it can be done
early next week. Meanwhile, you can always give an explicit dtype at
creation.
On
Darren,
The type returned by np.array is ndarray, unless I specifically set
subok=True, in which case I get a MyArray. The default value of
subok is True, so I don't understand why I have to specify subok
unless I want it to be False. Is my subclass missing something
important?
Interesting.
The tests pass on my machine
OS X,
Python version 2.5.4 (r254:67916, Dec 29 2008, 17:02:44) [GCC 4.0.1
(Apple Inc. build 5488)]
nose version 0.10.4
For
File
/home/nwagner/local/lib64/python2.6/site-packages/numpy/lib/tests/
test_recfunctions.py,
line 34, in test_zip_descr
On Jan 22, 2009, at 1:31 PM, Nils Wagner wrote:
Hi Pierre,
Thank you. Works for me.
You're welcome, thanks for reporting!
On Jan 21, 2009, at 11:34 AM, Darren Dale wrote:
I have a simple test script here that multiplies an ndarray subclass
with another number. Can anyone help me understand why each of these
combinations returns a new instance of MyArray:
mine = MyArray()
print type(np.float32(1)*mine)
Brent,
Currently, no, you won't be able to retrieve the header if it's
commented.
I'll see what I can do.
P.
Brent,
Mind trying r6330 and let me know if it works for you ? Make sure that
you use names=True to detect a header.
P.
Till I write some proper doc, you can check the examples in tests/
test_io (TestFromTxt suitcase)
On Jan 20, 2009, at 4:17 AM, Nils Wagner wrote:
Hi all,
Where can I find some sophisticated examples for the usage
of numpy.genfromtxt ?
Nils
On Jan 16, 2009, at 10:51 AM, josef.p...@gmail.com wrote:
I have a regression result with masked arrays that produces a masked
array output, estm5.yhat, and I want to test equality to the benchmark
case, estm1.yhat, with the asserts in numpy.testing, but I am getting
strange results.
...
On Jan 4, 2009, at 4:47 PM, Robert Kern wrote:
On Sun, Jan 4, 2009 at 15:44, Pierre GM pgmdevl...@gmail.com wrote:
If we used np.asanyarray instead, subclasses are recognized properly,
the mask is recognized by argsort and the result correct.
Is there a reason why we use np.asarray instead
All,
You'll probably remember that last December, I started rewriting
np.loadtxt and came up with a series of functions that support missing
data. I tried to copy/paste the code in numpy.lib.io.py but ran into
dependency problems and left it at that. I think that part of the
reason is that
Jean-Baptiste,
As you stated, everything depends on what you want to do.
If you need to keep the correspondence age-weight for each entry,
then yes, record arrays, or at least flexible-type arrays, are the
best. (The difference between a recarray and a flexible-type array is
that fields can
On Dec 21, 2008, at 10:19 PM, josef.p...@gmail.com wrote:
From the examples that I tried out np.sort, sorts each column
separately (with axis = 0). If the elements of a row is supposed to
stay together, then np.sort doesn't work
Well, if the elements are supposed to stay together, why
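The usual way to keep rows together is to argsort on the key column and index with the result (sample data invented):

```python
import numpy as np

a = np.array([[3, 10],
              [1, 30],
              [2, 20]])

# np.sort(a, axis=0) sorts each column independently, tearing rows apart.
# To sort whole rows by column 0, permute with argsort instead:
rows_sorted = a[np.argsort(a[:, 0])]
```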
On Dec 17, 2008, at 12:13 PM, Jim Vickroy wrote:
Sorry for being dense about this, but I really do not understand why
masked values should not be trusted. If I apply a procedure to an
array with elements designated as untouchable, I would expect that
contract to be honored. What am I
is untouched.
On Dec 16, 2008, at 6:07 PM, Ryan May wrote:
Pierre GM wrote:
All,
Here's the latest version of genloadtxt, with some recent
corrections.
With just a couple of tweaks, we end up with some decent speed: it's
still slower than np.loadtxt, but only by 15% or so according
On Dec 16, 2008, at 1:57 PM, Ryan May wrote:
I just noticed the following and I was kind of surprised:
a = ma.MaskedArray([1,2,3,4,5], mask=[False,True,True,False,False])
b = a*5
b
masked_array(data = [5 -- -- 20 25],
mask = [False True True False False],
fill_value=99)
Robert,
Transforming your matrix to a list before computation isn't very
efficient. If you do need some extra parameters in your __init__ to be
compatible with other functions such as asmatrix, well, just add them,
or use a coverall **kwargs
def __init__(self, instruments, **kwargs)
No
On Dec 9, 2008, at 12:59 PM, Christopher Barker wrote:
Jarrod Millman wrote:
From the user's perspective, I would like all the NumPy IO code to
be
in the same place in NumPy; and all the SciPy IO code to be in the
same place in SciPy.
+1
So, no problem w/ importing numpy.ma and
All,
* What versions of Python should be supported by what version of
numpy ? Are we to expect users to rely on Python2.5 for the upcoming
1.3.x ? Could we have some kind of timeline on the trac site or
elsewhere (and if such a timeline exists already, can I get the link?) ?
* Talking
On Dec 7, 2008, at 4:21 PM, Jarrod Millman wrote:
NumPy 1.3.x should work with Python 2.4, 2.5, and 2.6. At some point
we can drop 2.4, but I would like to wait a bit since we just dropped
2.3 support. The timeline is on the trac site:
http://projects.scipy.org/scipy/numpy/milestone/1.3.0
All,
Here's the latest version of genloadtxt, with some recent corrections.
With just a couple of tweaks, we end up with some decent speed: it's
still slower than np.loadtxt, but only by 15% or so according to the test at
the end of the package.
And so, now what ? Should I put the module in
All,
Here's the second round of genloadtxt. That's a tad cleaner version
than the previous one, where I tried to take into account the
different comments and suggestions that were posted. So, tabs should
be supported and explicit whitespaces are not collapsed.
FYI, in the __main__ section,
And now for the tests:
# pylint disable-msg=E1101, W0212, W0621
import numpy as np
import numpy.ma as ma
from numpy.ma.testutils import *
from StringIO import StringIO
from _preview import *
class TestLineSplitter(TestCase):
Tests the LineSplitter class.
#
def
On Dec 4, 2008, at 7:22 AM, Manuel Metz wrote:
Will loadtxt in that case remain as is? Or will the _faulttolerantconv
class be used?
No idea, we need to discuss it. There's a problem with
_faulttolerantconv: using np.nan as default value will not work in
Python2.6 if the output is to be
On Nov 25, 2008, at 12:23 PM, Pierre GM wrote:
All,
Sorry to bump my own post, and I was kinda threadjacking anyway:
Some functions of numpy.ma (eg, ma.max, ma.min...) accept explicit
outputs that may not be MaskedArrays.
When such an explicit output is not a MaskedArray, a value
On Dec 4, 2008, at 3:24 PM, [EMAIL PROTECTED] wrote:
On Thu, Dec 4, 2008 at 2:40 PM, Jarrod Millman
[EMAIL PROTECTED] wrote:
On Thu, Dec 4, 2008 at 11:14 AM, Pierre GM [EMAIL PROTECTED]
wrote:
Raise a ValueError (even in 2.5, therefore risking to break
something)
+1
+1
OK
On Dec 3, 2008, at 12:48 PM, Christopher Barker wrote:
Pierre GM wrote:
I can try, but in that case, please write me a unittest, so that I
have a clear and unambiguous idea of what you expect.
fair enough, though I'm not sure when I'll have time to do it.
Oh, don't worry, nothing too fancy
On Dec 3, 2008, at 12:32 PM, Alan G Isaac wrote:
If I know my data is already clean
and is handled nicely by the
old loadtxt, will I be able to turn
off and the special handling in
order to retain the old load speed?
Hopefully. I'm looking for the best way to do it. Do you have an
On Dec 3, 2008, at 1:00 PM, Christopher Barker wrote:
by the way, should this work:
io.loadtxt('junk.dat', delimiter=' ')
for more than one space between numbers, like:
1 2 3 4 5
6 7 8 9 10
On the version I'm working on, both delimiter='' and delimiter=None
(default) would
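For reference, delimiter=None (the default) already splits on runs of any whitespace in np.loadtxt; a minimal check (written with io.StringIO, the Python 3 spelling of the StringIO used in the thread):

```python
import numpy as np
from io import StringIO

data = StringIO("1 2 3  4   5\n6 7 8  9  10\n")

# delimiter=None splits on any run of whitespace, so the
# irregular spacing between numbers above is handled:
a = np.loadtxt(data)
```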
Manuel,
Looks nice, I'm going to try to see how I can incorporate yours. Note that
returning np.nan by default will not work w/ Python 2.6 if you want an
int...
Eric,
That's quite a handful you have with this dtype...
So yes, the fix I gave works with nested dtypes and flexible dtypes
with a simple name (string, not tuple). I'm a bit surprised with
numpy, here.
Consider:
dt.names
('P', 'D', 'T', 'w', 'S', 'sigtheta', 'theta')
So we lose the
On Dec 2, 2008, at 4:26 AM, Eric Firing wrote:
From page 132 in the numpy book:
The fields dictionary is indexed by keys that are the names of the
fields. Each entry in the dictionary is a tuple fully describing the
field: (dtype, offset[,title]). If present, the optional title can
On Dec 2, 2008, at 3:12 PM, Ryan May wrote:
Pierre GM wrote:
Well, looks like the attachment is too big, so here's the
implementation. The tests will come in another message.
A couple of quick nitpicks:
1) On line 186 (in the NameValidator class), you use
excludelist.append() to append
Chris,
I can try, but in that case, please write me a unittest, so that I
have a clear and unambiguous idea of what you expect.
ANFSCD, have you tried the missing_values option ?
On Dec 2, 2008, at 5:36 PM, Christopher Barker wrote:
Pierre GM wrote:
I think that treating an explicitly
All,
Please find attached to this message another implementation of
np.loadtxt, which focuses on missing values. It's basically a
combination of John Hunter's et al mlab.csv2rec, Ryan May's patches
and pieces of code I'd been working on over the last few weeks.
Besides some helper classes
And now for the tests:
Proposal :
Here's an extension to np.loadtxt, designed to take missing values into account.
from genload_proposal import *
from numpy.ma.testutils import *
import StringIO
class TestLineSplitter(TestCase):
#
def test_nodelimiter(self):
Test
Well, looks like the attachment is too big, so here's the
implementation. The tests will come in another message.
Proposal :
Here's an extension to np.loadtxt, designed to take missing values into account.
import itertools
import numpy as np
import numpy.ma as ma
def
(Sorry about that, I pressed Reply instead of Reply all. Not my
day for emails...)
On Dec 1, 2008, at 1:54 PM, John Hunter wrote:
It looks like I am doing something wrong -- trying to parse a CSV
file
with dates formatted like '2008-10-14', with::
import datetime, sys
import
On Dec 1, 2008, at 2:26 PM, John Hunter wrote
OK, that worked great. I do think a default impl in np.rec which
returned a recarray would be nice. It might also be nice to have a
method like np.rec.fromcsv which defaults to a delimiter=',',
names=True and dtype=None. Since csv is one
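There is no np.rec.fromcsv, but the behavior described can be approximated with np.genfromtxt (the encoding argument applies to recent NumPy on Python 3; the sample data is invented):

```python
import numpy as np
from io import StringIO

csv = StringIO("date,value\n2008-10-14,72214.0\n2008-10-15,72000.0\n")

# names=True takes the field names from the header row;
# dtype=None lets genfromtxt guess each column's type.
r = np.genfromtxt(csv, delimiter=',', names=True, dtype=None,
                  encoding='utf-8')
```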
I agree, genloadtxt is a bit bloated, and it's not a surprise it's
slower than the initial one. I think that in order to be fair,
comparisons must be performed with matplotlib.mlab.csv2rec, that
implements as well the autodetection of the dtype. I'm quite in favor
of keeping a lite version
On Dec 1, 2008, at 6:09 PM, Eric Firing wrote:
Pierre,
ma.masked_all does not seem to work with fancy dtypes and more then
one dimension:
Eric,
Should be fixed in SVN (r6130). There were indeed problems with nested
dtypes. Tricky beasts they are.
Thanks for reporting!
, 2008, at 5:42 AM, Manuel Metz wrote:
Pierre GM wrote:
On Nov 27, 2008, at 3:08 AM, Manuel Metz wrote:
Certainly, yes! Dealing with fixed-length fields would be necessary.
The
case I had in mind had both -- a separator (|) __and__ fixed-
length
fields -- and is probably very special
On Nov 27, 2008, at 3:08 AM, Manuel Metz wrote:
Certainly, yes! Dealing with fixed-length fields would be necessary.
The
case I had in mind had both -- a separator (|) __and__ fixed-length
fields -- and is probably very special in that sense. But such
data-files exists out there...
On Nov 26, 2008, at 5:55 PM, Ryan May wrote:
Manuel Metz wrote:
Ryan May wrote:
3) Better support for missing values. The docstring mentions a
way of
handling missing values by passing in a converter. The problem
with this is
that you have to pass in a converter for *every column*
All,
I'd like to update routines.ma.rst on the numpy/numpy-docs/trunk SVN,
but the whole trunk seems to be MIA... Where has it gone ? How can I
(where should I) commit changes ?
Thx in advance.
P.
On Nov 27, 2008, at 12:32 AM, Robert Kern wrote:
On Wed, Nov 26, 2008 at 23:27, Pierre GM [EMAIL PROTECTED] wrote:
All,
I'd like to update routines.ma.rst on the numpy/numpy-docs/trunk SVN,
but the whole trunk seems to be MIA... Where has it gone ? How can I
(where should I) commit changes
On Nov 27, 2008, at 1:39 AM, Scott Sinclair wrote:
Looking at some recent changes made to docstrings in SVN by Pierre
(r6110 r6111), these are not yet reflected in the doc wiki.
Well, I haven't committed my version yet. I'm polishing a couple of
issues with functions that are not
FYI,
I can't reproduce David's failures on my machine (intel core2 duo w/
10.5.5)
* python 2.6 from macports
* numpy svn 6098
* GCC 4.0.1 (Apple Inc. build 5488)
I have only 1 failure:
FAIL: test_umath.TestComplexFunctions.test_against_cmath
Ryan,
FYI, I've been coding over the last couple of weeks an extension of
loadtxt for a better support of masked data, with the option to read
column names in a header. Please find an example below (I also have
unittest). Most of the work is actually inspired from matplotlib's
All,
Sorry to bump my own post, and I was kinda threadjacking anyway:
Some functions of numpy.ma (eg, ma.max, ma.min...) accept explicit
outputs that may not be MaskedArrays.
When such an explicit output is not a MaskedArray, a value that should
have been masked is transformed into np.nan.
On Nov 25, 2008, at 12:30 PM, Christopher Barker wrote:
missing : string, optional
A string representing a missing value, irrespective of the
column where it appears (e.g., ``'missing'`` or ``'unused'``.
It might be nice if missing could be a sequence of strings, if there
is