On Dec 8, 2009, at 5:32 PM, John [H2O] wrote:
Pierre GM-2 wrote:
masked_where is a function that requires 2 arguments.
If you try to mask a whole record, you can try something like
x = ma.array([('a',1),('b',2)],dtype=[('','|S1'),('',float)])
x[x['f0']=='a'] = ma.masked
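Pierre's two-liner can be fleshed out into a runnable sketch (the field names f0/f1 are given explicitly here; the original relied on auto-generated names):

```python
import numpy as np
import numpy.ma as ma

# Structured masked array: one string field, one float field
x = ma.array([(b'a', 1.0), (b'b', 2.0)],
             dtype=[('f0', '|S1'), ('f1', float)])
# Mask every record whose first field equals b'a'
x[x['f0'] == b'a'] = ma.masked
```

Assigning `ma.masked` through a boolean index masks the whole record, i.e. both fields of the matching rows at once.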
On Dec 8, 2009, at 7:27 PM, John [H2O] wrote:
Maybe I should add, I'm looking at this thread:
http://old.nabble.com/masked-record-arrays-td26237612.html
And, I guess I'm in the same situation as the OP there. It's not clear to
me, but as best I can tell I am working with structured arrays
On Dec 8, 2009, at 7:11 PM, John [H2O] wrote:
My apologies for adding confusion. In answer to your first question: yes, at
one point I tried playing with scikits.timeseries... there were some issues
at the time that prevented me from working with it, maybe I should revisit.
What kind of issues?
On Dec 7, 2009, at 11:15 PM, David Cournapeau wrote:
Charles R Harris wrote:
No, it is a consequence of errors being set to ignored in numpy.ma:
Oopsie...
A bit of background first;
In the first implementations of numpy.core.ma, the approach was to get rid of
the data
On Dec 2, 2009, at 6:55 PM, Howard Chong wrote:
My question is: how can I make the latter version run faster? I think the
answer is that I have to do the iteration in C.
If that's the case, can anyone point me to where np.array.argmax() is
implemented so I can write np.array.argmaxN()
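An `argmaxN` never landed under that name, but `np.argpartition` (added in NumPy 1.8, well after this thread) does the selection in C. A hedged sketch of the helper being asked for (`argmax_n` is a hypothetical name, not a NumPy function):

```python
import numpy as np

def argmax_n(a, n):
    """Indices of the n largest values of 1-D array a, largest first."""
    # argpartition places the n largest values (unordered) in the last n slots
    idx = np.argpartition(a, -n)[-n:]
    # order those n indices by decreasing value
    return idx[np.argsort(a[idx])[::-1]]

a = np.array([3, 1, 4, 1, 5, 9, 2, 6])
top = argmax_n(a, 3)  # indices of the values 9, 6, 5
```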
On Nov 25, 2009, at 4:13 PM, mdekauwe wrote:
I tried redoing the internal logic for example by using the where function
but I can't seem to work out how to match up the logic. For example (note
slightly different from above). If I change the main loop to
lst = np.where((data > -900.0) & (lst
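A minimal sketch of the loop-free logic apparently being attempted (the sentinel value and threshold are assumptions):

```python
import numpy as np

# Hypothetical data with -999.0 as the missing-value sentinel
data = np.array([-999.0, 12.5, -999.0, 7.1])
# Indices of the valid (non-missing) entries
valid = np.where(data > -900.0)[0]
```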
On Nov 23, 2009, at 4:36 AM, Pauli Virtanen wrote:
Mon, 23 Nov 2009 01:40:00 -0500, Pierre GM wrote:
[clip]
XXX: 3K: numpy.ma is disabled for now -- some issues
What are the issues ?
Something resolving which would have taken more than 5 minutes :)
Possibly because something that ma
On Nov 23, 2009, at 6:42 PM, Travis Oliphant wrote:
I've made a few changes to datetime today and spent some time looking over
what is there and what remains to be implemented.
As always, many thanks for your work !!
Basically, the biggest thing left to do is to implement the low-level
On Nov 23, 2009, at 12:35 AM, Pauli Virtanen wrote:
http://github.com/pv/numpy-work/tree/py3k
$ mkdir -p $PWD/dist/lib/python3.1/site-packages
$ python3 setup.py install --prefix=$PWD/dist
$ cd $PWD/dist/lib/python3.1/site-packages
$ python3
Python 3.1.1+ (r311:74480, Oct 11 2009, 20:22:16)
On Nov 19, 2009, at 4:34 PM, Scot Denhalter wrote:
I am a beginning programmer who is reading Natural Language Processing with
Python. The book provides tutorials for working with the NLTK, which needs
numpy to run certain calculations. I have downloaded and installed Python
2.6. I have
On Nov 19, 2009, at 5:40 PM, Fernando Perez wrote:
we're starting to use these tools more and more, and with the 1.4
release coming out, we're a bit lost here...
Welcome to the club...
Fernando, Ariel, I'm in the same spot as you are, I haven't been able to use
it. I don't think it's that
On Nov 19, 2009, at 5:41 PM, Scot Denhalter wrote:
Pierre,
Yes, I am using the Snow Leopard OSX. Should I be coding through the Xcode
interface and not Python's IDLE shell?
Oh no, that's not what I meant when I asked you if you had installed Xcode.
It's just that you need Xcode for
On Nov 19, 2009, at 7:00 PM, Scot Denhalter wrote:
I am not building anything at the moment. I am simply trying to learn
Python as it pertains to Natural Language Processing.
And soon you'll get addicted and start installing stuffs ;)
Yes, my MacBook Pro came with Python 2.5. Pierre
All,
An issue was recently raised about summing a MaskedArray with a np.object
dtype. Turns out that the problem is numpy based:
Let's sum using integers
type(np.sum([1,2,3], dtype=np.int32))
<type 'numpy.int32'>
Now, with a np.object dtype:
type(np.sum([1,2,3],dtype=object))
<type 'int'>
And
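The behaviour Pierre describes is easy to check: with a fixed integer dtype the reduction returns a NumPy scalar, while with `dtype=object` the sum is carried out with Python's own `+` and comes back as a plain `int`:

```python
import numpy as np

# Fixed dtype: the result is a NumPy scalar
print(type(np.sum([1, 2, 3], dtype=np.int32)))  # <class 'numpy.int32'>
# Object dtype: the reduction uses Python addition, so a plain int
print(type(np.sum([1, 2, 3], dtype=object)))    # <class 'int'>
```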
On Nov 16, 2009, at 12:16 AM, Robert Kern wrote:
On Sun, Nov 15, 2009 at 23:11, Pierre GM pgmdevl...@gmail.com wrote:
All,
An issue was recently raised about summing a MaskedArray with a np.object
dtype. Turns out that the problem is numpy based:
Let's sum using integers
type(np.sum
On Nov 13, 2009, at 9:54 PM, Charles R Harris wrote:
Hi All,
The documentation guidelines say to document constants like functions. So
if a module defines a constant is it documented like so:
myconstant = 1
Blah and blah
That doesn't seem quite right, but what is the proper
On Nov 12, 2009, at 4:47 AM, Sebastian Haase wrote:
On Thu, Nov 12, 2009 at 10:38 AM, Sebastian Haase seb.ha...@gmail.com wrote:
Hi,
I hope my subject line is not entirely incomprehensible:
I remember there was a discussion (some time ago) that every ndarray
instance should get an extra
On Nov 10, 2009, at 1:09 PM, Darryl Wallace wrote:
Hello again,
The best way so far that's come to my attention is to use:
numpy.ma.masked_object
Will only work for masking one specific string, as you've noticed.
Can anyone help me so that all strings are found in the array without
On Nov 7, 2009, at 9:14 PM, Ryan May wrote:
On Sat, Nov 7, 2009 at 5:38 PM, Pierre GM pgmdevl...@gmail.com
wrote:
Linear interpolation with the delaunay package doesn't work great for
my data. I played with the radial basis functions, but I'm afraid
they're leading me down the dark, dark
On Nov 7, 2009, at 2:26 PM, Thomas Robitaille wrote:
Thanks for the advice! I'm somewhat confused by the difference between
structured and record arrays. My understanding is that record arrays
allow
you to access fields by attribute (e.g. r.field_name), but I imagine
that
there are
On Nov 6, 2009, at 5:45 PM, Pauli Virtanen wrote:
pe, 2009-11-06 kello 17:20 -0500, Pierre GM kirjoitti:
All,
I have a vector of observations (latitude,longitude,value) that I
need
to interpolate over a given area.
You could try to use linear interpolation from the delaynay package
On Nov 4, 2009, at 11:35 AM, Thomas Robitaille wrote:
Pierre GM-2 wrote:
As a workaround, perhaps you could use np.object instead of np.str
while defining your array. You can then get the maximum string length
by looping, as David suggested, and then use .astype to transform
your
On Nov 3, 2009, at 11:43 AM, David Warde-Farley wrote:
On 2-Nov-09, at 11:35 PM, Thomas Robitaille wrote:
But if I want to specify the data types:
np.rec.fromrecords([(1,'hello'),(2,'world')],dtype=[('a',np.int8),
('b',np.str)])
the string field is set to a length of zero:
On Oct 29, 2009, at 8:30 AM, TheLonelyStar wrote:
After trying the same thing in matlab, I realized that my tsv file
is not
matrix-style. By this I mean, not all lines have the same length
(not the
same number of values).
What would be the best way to load this?
The SVN version of
On Oct 29, 2009, at 4:22 PM, Alok Singhal wrote:
Hi,
On 29/10/09: 12:18, Ariel Rokem wrote:
I want to start trying out the new dtype for representation of arrays
of times, datetime64, which is implemented in the current svn. Is
there any documentation anywhere? I know of this proposal:
On Oct 27, 2009, at 7:56 AM, Gökhan Sever wrote:
Unfortunately, matplotlib.mlab's prctile cannot handle this division:
Actually, the division's OK, it's mlab.prctile which is borked. It
uses the length of the input array instead of its count to compute the
number of valid data points. The easiest
On Oct 19, 2009, at 12:01 PM, George Nurser wrote:
I had the same 4 errors in genfromtxt yesterday when I upgraded
numpy to r7539.
mac os x python 2.5.2.
I'm on it, should be fixed in a few hours.
Please, don't hesitate to open a ticket next time (so that I remember
to test on 2.5 as
On Oct 16, 2009, at 8:29 AM, Skipper Seabold wrote:
Great work! I am especially glad to see the better documentation on
missing values, as I didn't fully understand how to do this. A few
small comments and a small attached diff with a few nitpicking
grammatical changes and some of what's
On Oct 16, 2009, at 12:14 AM, David Warde-Farley wrote:
Does anyone know what happened to the Google Groups archive of this
list? When I try to access it, I see:
Cannot find numpy-discussion
The group named numpy-discussion has been removed because it violated
Google's Terms Of Service.
On Oct 7, 2009, at 2:57 AM, Gökhan Sever wrote:
One more example. (I still think the behaviour of fill_value is
inconsistent)
Well, ma.masked_values use `value` to define fill_value,
ma.masked_equal does not. So yes, there's an inconsistency here. Once
again, please file an enhancement
All,
I need to test the numpy SVN on a 10.6.1 mac, but using Python 2.5.4
(32b) instead of the 2.6.1 (64b).
The sources get compiled OK (apparently, find the build here:
http://pastebin.com/m147a2909
) but numpy fails to import:
File
On Oct 7, 2009, at 3:54 PM, Bruce Southey wrote:
Anyhow, I do like what genfromtxt is doing so merging multiple
delimiters of the same type is not really needed.
Thinking about it, merging multiple delimiters of the same type can be
tricky: how do you distinguish between, say,
AAA\t\tCCC
On Oct 6, 2009, at 2:42 PM, Bruce Southey wrote:
Hi,
Excellent as the changes appear to address incorrect number of
delimiters.
They should also give some extra info if there's a problem w/ the
converters.
I think that the default invalid_raise should be True.
Mmh, OK, that's a +1 ;)
On Oct 6, 2009, at 6:57 PM, Gökhan Sever wrote:
Seeing a different filling value is causing confusion. Both for
myself, and when I try to demonstrate the usage of masked array to
other people.
Fair enough. I must admit that `fill_value` is a vestige from the
previous implementation
On Oct 6, 2009, at 9:54 PM, Gökhan Sever wrote:
Also say, if I want to replace that one element back to its original
state will it use fill_value as 1e+20 or 99.?
What do you mean by 'replace back to its original state' ? Using
`filled`, you mean ?
Yes, in more properly stated
On Oct 6, 2009, at 10:08 PM, Bruce Southey wrote:
No, just seeing what sort of problems I can create. This case is
partly based on if someone is using tab-delimited then they need to
set the delimiter='\t' otherwise it gives an error. Also I often parse
text files so, yes, you have to be
On Oct 6, 2009, at 10:58 PM, Gökhan Sever wrote:
I see your points. I don't want to give you extra work, don't
worry :) It just seems a bit bizarre:
I[27]: c.data['Air_Temp'].fill_value
O[27]: 99.005
I[28]: c.data['Air_Temp'][4].fill_value
O[28]: 1e+20
As you see, it just
On Oct 6, 2009, at 11:01 PM, Skipper Seabold wrote:
In keeping with the making some work for you theme, I filed an
enhancement ticket for one change that we discussed and another IMO
useful addition. http://projects.scipy.org/numpy/ticket/1238
I think it would be nice if we could do
data
On Oct 7, 2009, at 12:10 AM, Gökhan Sever wrote:
Created the ticket http://projects.scipy.org/numpy/ticket/1253
Want even more confusion ?
x = ma.array([1,2,3],mask=[0,1,0], dtype=int)
x[0].dtype
dtype('int64')
x[1].dtype
dtype('float64')
x[2].dtype
dtype('int64')
Yet another
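The dtype flip-flop above has a simple explanation: indexing a masked element returns the shared `ma.masked` singleton, which is float64 regardless of the parent array's dtype. A minimal reproduction:

```python
import numpy as np
import numpy.ma as ma

x = ma.array([1, 2, 3], mask=[0, 1, 0], dtype=int)
# Unmasked elements keep the array's dtype...
first = x[0]
# ...but a masked element is the shared ma.masked constant (float64)
second = x[1]
```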
On Oct 7, 2009, at 1:12 AM, Gökhan Sever wrote:
One more from me:
I[1]: a = np.arange(5)
I[2]: mask = 999
I[6]: a[3] = 999
I[7]: am = ma.masked_equal(a, mask)
I[8]: am
O[8]:
masked_array(data = [0 1 2 -- 4],
mask = [False False False True False],
fill_value =
All,
What Python version are we supporting in 1.4.0dev ? 2.4 still ? For
which version of numpy will we be moving to a more recent one ?
Thx in advance
P.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
On Sep 29, 2009, at 12:37 PM, Christopher Barker wrote:
Pierre GM wrote:
Another idea: only store the indexes of the rows that have the wrong
number of columns -- if that's a large number, then then user has
bigger
problems than memory usage!
That was my first idea, but then it adds
On Sep 29, 2009, at 1:57 PM, Bruce Southey wrote:
On 09/29/2009 11:37 AM, Christopher Barker wrote:
Pierre GM wrote:
Probably more than memory is the execution time involved in printing
these problem rows.
The rows with problems will be printed outside the loop (with at least
On Sep 29, 2009, at 3:28 PM, Christopher Barker wrote:
well, how does one test compare to:
read the line from the file
split the line into tokens
parse each token
I can't imagine it's significant, but I guess you only know with
profiling.
That's on the parsing part. I'd like to keep
On Sep 28, 2009, at 12:51 PM, Skipper Seabold wrote:
This was probably due to the way that I timed it, honestly. I only
did it once. The only differences I made for that part were in the
first post of the thread. Two incremented scalars for line numbers
and column numbers and a try/except
Sorry all, I haven't been as responsive as I wished lately...
* About the patch: I don't like the idea of adding yet some other
tests in the main loop. I was more into letting things like they are,
but calling some error function if some 'setting an array element with
a sequence' exception
On Sep 25, 2009, at 3:12 PM, Christopher Barker wrote:
Pierre GM wrote:
That way, we don't slow
things down when everything works, but just add some delay when they
don't.
good goal, but if you don't keep track of where you are, wouldn't you
need to re-parse the whole file to figure
On Sep 25, 2009, at 3:42 PM, Skipper Seabold wrote:
As far as this goes, I added some examples to the docs wiki, but I
think that genfromtxt and related would be best served by having their
own wiki page that could maybe go here
http://docs.scipy.org/numpy/docs/numpy-docs/user/
Thoughts?
On Sep 25, 2009, at 3:51 PM, Skipper Seabold wrote:
While you're at it, can you ask for adding the possibility to process
a dtype like (int,int,float) ? That was what I was working on
before I
started installing Snow Leopard...
Sure. Should it be another keyword though, `type` maybe?
On Sep 25, 2009, at 3:56 PM, Ralf Gommers wrote:
The examples you put in the docstring are good I think. One more
example demonstrating missing values would be useful. And +1 to a
page in the user guide for anything else.
Check also what's done in tests/test_io.py, that gives an idea of
On Sep 21, 2009, at 12:17 PM, Ryan May wrote:
2009/9/21 Ernest Adrogué eadro...@gmx.net
Hello there,
Given a masked array such as this one:
In [19]: x = np.ma.masked_equal([-1, -1, 0, -1, 2], -1)
In [20]: x
Out[20]:
masked_array(data = [-- -- 0 -- 2],
mask = [ True True
On Sep 21, 2009, at 4:23 PM, Ernest Adrogué wrote:
This explains why x[x == 3] = 4 works as expected, whereas
x[x == 0] = 4 ruins everything. Basically, any condition that matches
0 will match every masked item as well.
There's room for improvement here indeed. I need to check first
On Sep 16, 2009, at 4:25 AM, josef.p...@gmail.com wrote:
I have two structured arrays of different types. How can I
horizontally concatenate the two arrays? Is there a direct way, or do
I need to start from scratch?
Check numpy.lib.recfunctions, that should get you started.
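For reference, the relevant helper in that module is `numpy.lib.recfunctions.merge_arrays`; a minimal sketch of gluing two structured arrays field-wise (made-up data):

```python
import numpy as np
from numpy.lib import recfunctions as rfn

a = np.array([(1,), (2,)], dtype=[('x', int)])
b = np.array([(1.5,), (2.5,)], dtype=[('y', float)])
# Horizontal concatenation: one structured array carrying both fields
merged = rfn.merge_arrays((a, b), flatten=True)
```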
On Sep 15, 2009, at 10:44 AM, Skipper Seabold wrote:
How do you specify different dtypes in genfromtxt?
I could not see the information in the docstring and the dtype
argument
does not appear to allow multiple dtypes.
Just give a regular dtype, or something that could be interpreted as
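In other words, per-column dtypes go in through a single structured dtype. A hedged sketch (the file contents are made up):

```python
import numpy as np
from io import BytesIO

data = b"1 abc 2.5\n2 def 3.5\n"
# One structured dtype carries all three column types
arr = np.genfromtxt(BytesIO(data),
                    dtype=[('id', int), ('name', 'S3'), ('val', float)])
```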
On Sep 15, 2009, at 4:19 PM, Tim Michelsen wrote:
Check the archives of the mailing list, there's an example using
dateutil.parser that may be just what you need.
How is this dateutil.parser used in timeseries?
It's left in a corner. The use of dateutil.parser comes from
matplotlib.
On Sep 15, 2009, at 4:22 PM, Ralf Gommers wrote:
There's a ticket for this functionality on the pydocweb tracker
already. Hopefully it gets implemented at some point.
My bad, sorry. I already always forget to check tickets on the trac
site for numpy/scipy, adding yet another site to check
On Sep 15, 2009, at 4:30 PM, Tim Michelsen wrote:
We shall all thank for having genfromtxt and derived!
You should really thank John Hunter, the original writer of
mlab.csv2rec (I think). I just polished the code and added a few extra
functionalities.
On Sep 13, 2009, at 3:51 PM, Skipper Seabold wrote:
On Sun, Sep 13, 2009 at 1:29 PM, Skipper Seabold
jsseab...@gmail.com wrote:
Is there a reason that the missing argument in genfromtxt only
takes a string?
Because we check strings. Note that you can specify several characters
at
On Sep 14, 2009, at 10:31 PM, Skipper Seabold wrote:
I actually figured out a workaround with converters, since my missing
values are ' ', '  ', i.e., an irregular number of spaces, and the
values aren't stripped of white spaces. I just define {# : lambda s:
float(s.strip() or 0)}, and I have a
On Sep 14, 2009, at 10:55 PM, Skipper Seabold wrote:
While we're on the subject, the other thing on my wishlist (unless I
just don't know how to do this) is being able to define a column map
for datasets that have no delimiters. At first each observation of my
data was just one long string
On Sep 15, 2009, at 12:50 AM, Skipper Seabold wrote:
Fixed-width fields should already be supported. Instead of delimiter=
[1-6, 7-10, 11-15, 16]..., use delimiter=[6, 4, 4, 1] (that is, just
give the widths of the fields).
Note that I wouldn't be surprised at all if it failed for some
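A hedged sketch of the fixed-width mode described above (made-up data; each field is five characters wide):

```python
import numpy as np
from io import BytesIO

data = b"  1.0  2.0  3.0\n  4.0  5.0  6.0\n"
# A list of ints makes genfromtxt treat the fields as fixed-width
arr = np.genfromtxt(BytesIO(data), delimiter=[5, 5, 5])
```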
On Sep 1, 2009, at 6:08 AM, Tim Michelsen wrote:
Hello,
I tried to load an ASCII table into a string array. Unfortunately,
this table has
some empty cells.
Here it is:
http://www.ncdc.noaa.gov/oa/climate/rcsg/cdrom/ismcs/alphanum.html
After having converted this into a text file I
On Aug 31, 2009, at 2:33 PM, Ernest Adrogué wrote:
30/08/09 @ 13:19 (-0400), thus spake Pierre GM:
I can't reproduce that with a recent SVN version (r7348). What
version
of numpy are you using ?
Version 1.2.1
That must be it. Can you try w/ 1.3
Oops, overlooked this one ...
On Aug 22, 2009, at 7:58 AM, Ernest Adrogué wrote:
Hi there,
Here is a structured array with 3 fields each of which has 3 fields
in turn:
However if try the same with a masked array, it fails:
In [14]: x = np.ma.masked_all(2, dtype=desc)
In [15]:
On Aug 25, 2009, at 1:59 PM, Skipper Seabold wrote:
On Tue, Aug 25, 2009 at 1:51 PM, Charles R
Harrischarlesr.har...@gmail.com wrote:
Hi Travis,
The new parse_datetime.c file contains a lot of c++ style comments
that
should be fixed. Also, the new test for mirr is failing on all the
On Aug 7, 2009, at 11:23 PM, Charles R Harris wrote:
I ask again,
Datetime is getting really stale and hasn't been touched recently.
Do the datetime folks want it merged or not, because it's getting to
be a bit of work.
Chuck,
Please check directly w/ Travis O. (and Robert ?), the only
On Aug 6, 2009, at 12:22 PM, Keith Goodman wrote:
On Thu, Aug 6, 2009 at 9:18 AM, Robert Kernrobert.k...@gmail.com
wrote:
On Thu, Aug 6, 2009 at 11:15, Keith Goodmankwgood...@gmail.com
wrote:
thanksOn Thu, Aug 6, 2009 at 9:07 AM, Robert Kernrobert.k...@gmail.com
wrote:
On Thu, Aug
<cough> And, er... masked arrays anyone ? </cough>
On Aug 5, 2009, at 11:20 AM, Bruce Southey wrote:
On 08/05/2009 09:18 AM, Keith Goodman wrote:
On Wed, Aug 5, 2009 at 1:40 AM, Bruce Southeybsout...@gmail.com
wrote:
On Tue, Aug 4, 2009 at 4:05 PM, Keith Goodmankwgood...@gmail.com
wrote:
On Aug 5, 2009, at 3:14 PM, Robert Kern wrote:
On Wed, Aug 5, 2009 at 14:11, Pierre GMpgmdevl...@gmail.com wrote:
<cough> And, er... masked arrays anyone ? </cough>
That was what I suggested. The very first response, even.
I know, Robert, and I thank you for that. My comment was intended to
On Jul 23, 2009, at 6:07 AM, Scott Sinclair wrote:
2009/7/22 Pierre GM pgmdevl...@gmail.com:
You could try scipy.stats.scoreatpercentile,
scipy.stats.mstats.plottingposition or scipy.stats.mstats.mquantiles,
which will all approximate quantiles of your distribution.
It seems
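NumPy has since grown `np.percentile`, which covers the plain quantile case without SciPy; a quick sketch:

```python
import numpy as np

data = np.array([1., 2., 3., 4., 5.])
# Quartiles via linear interpolation (the default)
q = np.percentile(data, [25, 50, 75])
```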
On Jul 23, 2009, at 1:13 PM, Jacopo wrote:
In short, I want to create a class Dummy, inherited from np.ndarray,
which
returns a plain array whenever an instance is sliced or viewed. I
cannot figure
out how.
Easy enough for slices:
class Dummy(np.ndarray):
def __new__(cls,
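A fuller sketch of what the truncated snippet starts, covering slicing via `__getitem__` (an illustrative completion, not Pierre's actual reply):

```python
import numpy as np

class Dummy(np.ndarray):
    """ndarray subclass whose slices come back as plain ndarrays."""
    def __new__(cls, input_array):
        return np.asarray(input_array).view(cls)

    def __getitem__(self, item):
        result = super().__getitem__(item)
        # Strip the subclass from any array result
        if isinstance(result, np.ndarray):
            return result.view(np.ndarray)
        return result

d = Dummy([1., 2., 3., 4.])
```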
On Jul 22, 2009, at 12:36 PM, Johannes Bauer wrote:
Hello list,
is there some possibility to get a p-dynamic of an array, i.e. if p=1
then the result would be (arr.min(), arr.max()), but if 0 < p < 1,
then
the result is so that the pth percentile of the picture is withing the
range given?
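One way to read that request: find the interval containing the central fraction p of the values. A hedged sketch using `np.percentile` (the helper name is made up):

```python
import numpy as np

def p_dynamic(arr, p):
    """Interval holding the central fraction p of the data.
    p=1 gives (arr.min(), arr.max())."""
    tail = 100.0 * (1.0 - p) / 2.0
    lo, hi = np.percentile(arr, [tail, 100.0 - tail])
    return lo, hi

a = np.arange(101.)
```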
On Jul 22, 2009, at 11:34 AM, Peter Alexander wrote:
Hi all,
I see life in a feature I've been dreaming about for years now. :-)
I'm wondering how stable this branch is and if it's ready for
playing with.
I may speak out of turn here, but I don't think so. Besides Travis O.
who does
On Jul 21, 2009, at 2:42 AM, Nils Wagner wrote:
Fixed-length fields are quite common e.g. in the area of
Finite Element pre/postprocessing.
Therefore it would be nice to have a function like
line2array in numpy.
Comments ?
Er, there's already something like that:
On Jul 20, 2009, at 3:44 PM, Christopher Barker wrote:
...
Is there a cleaner way to do this?
-Chris
Yes. np.lib._iotools.LineSplitter and/or np.genfromtxt
On Jul 21, 2009, at 3:16 AM, Nils Wagner wrote:
Er, there's already something like that:
np.lib._iotools.LineSplitter
Great. I didn't know about that.
Your examples are very useful.
IMHO the examples should be added to
http://www.scipy.org/Cookbook/InputOutput
to attract interest.
On Jul 21, 2009, at 10:58 AM, Darren Dale wrote:
http://docs.scipy.org/doc/numpy/user/basics.subclassing.html#simple-example-adding-an-extra-attribute-to-ndarray
includes an example where an instance attribute is set in __new__ .
However, there is a warning at http://www.scipy.org/Subclasses:
On Jul 21, 2009, at 2:23 PM, Darren Dale wrote:
Without access to your code, I'm likely to speak beyond myself here
(could you send me a link to the latest version ?), but if you need
to
rescale the units, make sure it's done in __array_finalize__ as well,
with something like
units =
On Jul 20, 2009, at 7:54 AM, John [H2O] wrote:
I have a file containing mixed data types: strings, floats, datetime
output(i.e. strings), and ints. Something like:
#ID, name, date, value 1,sample,2008-07-10 12:34:20,344.56
Presuming I get them nicely into a recarray (see my other post)
On Jul 15, 2009, at 4:23 AM, Pauli Virtanen wrote:
Tue, 14 Jul 2009 14:45:11 -0400, Pierre GM kirjoitti:
Consider the following code:
a = np.array(zip(np.arange(3)), dtype=[('a',float)])
np.isfinite(a)
NotImplemented
Seems like a bug. As I understand, NotImplemented is intended
All,
Consider the following code:
a = np.array(zip(np.arange(3)),dtype=[('a',float)])
np.isfinite(a)
NotImplemented
That is, when the input is a structured array, np.isfinite returns an
object of type NotImplementedType. I would have expected it to raise a
NotImplementedError exception.
On Jul 8, 2009, at 3:18 AM, Scott Sinclair wrote:
2009/7/8 Robert Kern robert.k...@gmail.com:
2009/7/4 Stéfan van der Walt ste...@sun.ac.za:
Thanks, Scott. This should now be fixed in SVN.
You should probably change that to asanyarray() before the masked
array crowd gets upset. :-)
I
On Jul 8, 2009, at 7:03 AM, John [H2O] wrote:
Hello,
I have several issues which require me to iterate through a fairly
large
array (30+ records).
The first case is calculating an hourly average from non-regularly
sampled
data.
Would you like to give the scikits.timeseries
On Jul 6, 2009, at 1:12 PM, Elaine Angelino wrote:
Hi -- We are subclassing from np.rec.recarray and are confused about
how some methods of np.rec.recarray relate to (differ from)
analogous methods of its parent, np.ndarray. Below are specific
questions about the __eq__, __getitem__
On Jul 2, 2009, at 6:42 PM, Peter Kelley wrote:
Hey Everyone,
I am reading in a file of columns with mixed data types, and the
number of columns can vary and their format is inputted by the user.
So I came up with this:
dt=dtype({'names': [x for x in event_fields], 'formats': [b for
On Jun 26, 2009, at 2:51 PM, Dan Yamins wrote:
We've been using the numpy.rec classes to make record array objects.
We've noticed that in more recent versions of numpy, record-array
like objects can be made directly with the numpy.ndarray class, by
passing a complex data type.
Hasn't
On Jun 26, 2009, at 3:59 PM, Dan Yamins wrote:
Short answer:
a np.recarray is a subclass of ndarray with structured dtype, where
fields can be accessed as attributes (as in 'yourarray.yourfield')
instead of as items (as in yourarray['yourfield']).
Is this the only substantial thing added
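The distinction is easy to demonstrate; attribute access is the main user-visible addition:

```python
import numpy as np

a = np.array([(1, 2.0), (3, 4.0)], dtype=[('x', int), ('y', float)])
r = a.view(np.recarray)
# Item access works on both; attribute access needs the recarray view
xs_item = a['x']   # fine on the plain structured array
xs_attr = r.x      # recarray only: a.x would raise AttributeError
```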
On Jun 21, 2009, at 5:01 AM, David Cournapeau wrote:
(Continuing the discussion initiated in the neighborhood iterator
thread)
Hi,
I would like to gather people's opinion on what to target for numpy
1.4.0.
Is this a wish list ?
* As Darren mentioned, some __array_prepare__ method
On Jun 11, 2009, at 3:07 PM, Travis Oliphant wrote:
BTW, what is the metadata that is going to be added to the types?
What purpose does it serve?
In the date-time case, it holds what frequency the integer in the
data-
type represents.There will only be 2 new static data-types.
On Jun 11, 2009, at 3:47 PM, Robert Kern wrote:
On Thu, Jun 11, 2009 at 14:37, Pierre GMpgmdevl...@gmail.com wrote:
On Jun 11, 2009, at 3:07 PM, Travis Oliphant wrote:
BTW, what is the metadata that is going to be added to the types?
What purpose does it serve?
In the date-time case, it
On Jun 4, 2009, at 3:12 PM, Fernando Perez wrote:
Howdy,
I was thinking about this yesterday, because I'm dealing with exactly
this same problem in a local project. How hard would it be to allow
structured arrays to support ufuncs/arithmetic for the case where
their dtype is actually a
On Jun 3, 2009, at 11:06 AM, D2Hitman wrote:
Hi,
I would like to have an object/class that acts like array of floats
such as:
a_array = numpy.array([[0.,1.,2.,3.,4.],[1.,2.,3.,4.,5.]])
but i would like to be able to slice this array by some header
dictionary:
header_dict =
On Jun 3, 2009, at 5:03 PM, Robert Kern wrote:
On Wed, Jun 3, 2009 at 15:26, josef.p...@gmail.com wrote:
2009/6/3 Stéfan van der Walt ste...@sun.ac.za:
Hi Jon
2009/6/3 D2Hitman j.m.gir...@warwick.ac.uk:
I understand record arrays such as:
a_array =
On Jun 3, 2009, at 7:00 PM, Robert Kern wrote:
On Wed, Jun 3, 2009 at 17:58, josef.p...@gmail.com wrote:
Do you have an opinion about whether .view(ndarray_subclass) or
__array_wrap__ is the more appropriate return wrapper for function
such as the ones in stats?
__array_wrap__ would be
On Jun 3, 2009, at 7:23 PM, Robert Kern wrote:
On Wed, Jun 3, 2009 at 18:20, Pierre GM pgmdevl...@gmail.com wrote:
Or, as all fields have the same dtype:
a_array.view(dtype=('f',len(a_array.dtype)))
array([[ 0., 1., 2., 3., 4.],
[ 1., 2., 3., 4., 5.]], dtype=float32
On May 27, 2009, at 5:53 PM, Fernando Perez wrote:
Howdy,
I'm wondering if the code below illustrates a bug in loadtxt, or just
a 'live with it' limitation.
Have you tried np.lib.io.genfromtxt ?
dt = dtype(dict(names=['name','x','y','block'],
On May 27, 2009, at 6:15 PM, Fernando Perez wrote:
Hi Pierre,
On Wed, May 27, 2009 at 3:01 PM, Pierre GM pgmdevl...@gmail.com
wrote:
Have you tried np.lib.io.genfromtxt ?
I didn't know about it, but it has the same problem as loadtxt:
Oh yes indeed. Yet another case of I-opened-my
On May 27, 2009, at 7:10 PM, Fernando Perez wrote:
Hi Pierre,
On Wed, May 27, 2009 at 4:03 PM, Pierre GM pgmdevl...@gmail.com
wrote:
Oh yes indeed. Yet another case of I-opened-my-mouth-too-soon'...
OK, so there's a trick. Kinda:
* Define a specific converter:
Thanks, that's
Sorry to jump in a conversation I haven't followed too deep in
details, but I'm sure you're all aware of the scikits.timeseries
package by now. This should at least help you manage the dates
operations in a straightforward manner. I think that could be a nice
extension to the package:
On May 25, 2009, at 7:02 PM, josef.p...@gmail.com wrote:
On Mon, May 25, 2009 at 6:36 PM, Pierre GM pgmdevl...@gmail.com
wrote:
Sorry to jump in a conversation I haven't followed too deep in
details, but I'm sure you're all aware of the scikits.timeseries
package by now. This should