Hello All,
As per the createTable() documentation
(http://www.pytables.org/docs/manual/ch04.html#TableClassDescr),
I tried to make a new table using a numpy dtype as the description.
However, doing so fails with the following traceback:
Traceback (most recent call last):
File "table_dtype.py",
Hello Francesc,
Yes, the do/undo machinery is already capturing this (among other
events). If you are only interested in triggering an action on node
creation,
Actually, this was fairly easy to expose. Attached is a patch that allows
the user to submit a 'hook' function that then gets
Hey Tom,
I am not sure what you are asking is possible. I don't think sets are a
regular enough* container type. Perhaps if you had sets of consistent (or
maximal) length and a primitive data type, this kind of thing would be
possible. (E.g. every set in your column only contained integers and has
Hello All,
I am very pleased to announce inSCIght, a new scientific computing podcast
(press release below). I apologize to those of you in the intersection of
these lists who may receive this message multiple times.
As I mention in the press release, we are very open to your contributions!
Hi Wayne,
I am fairly certain that the maximum number of fields is defined as a
variable at the C level:
http://www.hdfgroup.org/training/HDFtraining/UsersGuide/Fundmtls.fm3.html#2412
If you were willing to change the HDF5 source and recompile, you could
adjust this.
However, I have often found
Hi Dhananjaya,
This very much depends on the implementation of your code. (My guess is
that you are not comparing apples to apples.)
Could you please send code samples of the Python version and the C version
so that we can review them?
Be Well
Anthony
On Tue, Mar 15, 2011 at 7:09 PM, Dhananjaya
Hello Dhananjaya,
I see what is going on. Yes, you are correct that PyTables is not optimized
for making new datasets, but neither is the C version of HDF5. Really, the
point of these libraries is to store large amounts of well-structured data,
not large numbers of small datasets. (It looks
Hi Lionel,
Consistent, atomic file I/O is fundamentally a serial task.
Trying to do this in parallel is almost guaranteed to fail in one
way or another.
What you need is a caching / blocking mechanism on top of the
HDF5 file. All of your processes would write to this queue which
would
(second
file)...
The loop in principle allows the calculations to be called one after another...
in a way it should be a blocking manner of filling our matrix... no?
Best
Lionel
2011/5/25 Anthony Scopatz scop...@gmail.com
Hi Lionel,
Consistent, atomic file I/O is fundamentally a serial task
Hey Francesc,
I am personally saddened to see you go...
I use pytables all the time and have a vested
interest in keeping the project going. It would
be a major blow to lose pytables entirely.
I would be happy to do basic maintenance
(bug fixes, responding to people on the list)
but I wouldn't
Hey Curious,
I think that is probably because strs do not have a defined size on their own.
Both numpy and pytables need to know the number of bytes a
variable is expected to take. By default, dtypes do a good job of guessing
the size, but you are overriding the default behavior and not
!
Be Well
Anthony
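One way to give numpy an explicit size up front is a fixed-width string dtype. A minimal sketch (the field name is illustrative):

```python
import numpy as np

# A structured dtype with an explicit 16-byte string field; without the
# size, numpy would have to guess the itemsize from the data.
dt = np.dtype([("name", "S16")])
print(dt["name"].itemsize)  # 16
```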
Thanks!
Francesc
On Wednesday 01 June 2011 20:25:12, Anthony Scopatz wrote:
Hey Francesc,
I am personally saddened to see you go...
I use pytables all the time and have a vested
interest in keeping the project going. It would
be a major blow to lose pytables
I tend to just use google for searching through mailing lists ;)
On Wed, Jun 22, 2011 at 4:38 PM, Jason Moore moorepa...@gmail.com wrote:
I found the search functionality, but it is rather painful to find
anything. I'll go ahead and ask my question in a new thread.
On Wed, Jun 22, 2011 at
Hi Jason,
You can try nesting variable length arrays inside of your table (
http://www.pytables.org/docs/manual/ch04.html#VLArrayClassDescr). You might
need to refer to nested tables to get an idea of how to do this (
http://www.pytables.org/docs/manual/ch03.html#id332794). On the HDF5 level,
:
http://stackoverflow.com/questions/5366099/in-pytables-how-to-create-nested-array-of-variable-length
It is well written but has no answers.
Any specific ideas on how to make this example work?
Jason
On Wed, Jun 22, 2011 at 2:57 PM, Anthony Scopatz scop...@gmail.com wrote:
Also note
Hey Mike,
Hmmm, I don't see LinkAtom or LinkCol anywhere in my source.
Can you try to determine which file they are coming from? What
does type(LinkAtom) say?
Be Well
Anthony
On Wed, Jun 22, 2011 at 6:41 PM, Tallhamer, Mike
mike.tallha...@usoncology.com wrote:
Here is some additional
(unlike strings).
However, truth be told I don't have a lot of experience with them myself.
Though I would certainly be happy to take a look.
Be Well
Anthony
Thanks,
Mike
Message: 1
Date: Wed, 22 Jun 2011 20:20:02 -0500
From: Anthony Scopatz scop...@gmail.com
Subject: Re: [Pytables-users
Hi Ben,
The column names are stored as attributes of the table which follow the
pattern FIELD_(\d+)_NAME, where the group is the
zero-indexed column number. (In ViTables, access the properties for a table
and then look at the System Attributes tab.)
You might be able to just change
database. I'll stick with that for now, but in version two of my
software, I'll certainly look into the ability to add a VLArrayCol. It would
be a nice feature.
Jason
On Wed, Jun 22, 2011 at 6:10 PM, Anthony Scopatz scop...@gmail.com wrote:
Hi Jason,
Upon further inspection, I think
Hello Jose,
You could try the following:
1. Get a fresh Python 2.7 install
2. Install PyTables again
3. And see if you get the same error.
It seems like the compiler used to build Python and the one used to
build PyTables might be different. (How are you getting Python?)
You
Unfortunately, numpy is actually the smaller of the dependency issues for
PyTables on PyPy.
PyPy does not support extension modules at all, and the PyPy folks seem
unlikely to add it in the near future. The tiny numpy for PyPy is
a re-implementation of the numpy API.
It would be nice to move to
if this ever happens again to me.
I tried EPD Free without success.
Cheers,
José M.
2011/7/26 Anthony Scopatz scop...@gmail.com
Hello Jose,
You could try the following:
1. Get a fresh Python 2.7 install
2. Install PyTables again
3. And see if you get the same error.
It seems
Hello All,
On Fri, Aug 5, 2011 at 2:51 PM, Josh Moore josh.mo...@gmx.de wrote:
Chiming in late
Also chiming in late.
On Aug 4, 2011, at 10:17 AM, Antonio Valentino wrote:
Hi list, hi developers,
it seems to me that the PyTables development is a little stalled at the
moment.
On Sat, Aug 6, 2011 at 2:42 AM, Antonio Valentino
a_valent...@users.sourceforge.net wrote:
Hi Anthony, hi Josh,
Il 06/08/2011 03:01, Anthony Scopatz ha scritto:
Hello All,
On Fri, Aug 5, 2011 at 2:51 PM, Josh Moore josh.mo...@gmx.de wrote:
Chiming in late
Also chiming
Hello Francesc,
On Mon, Aug 8, 2011 at 4:22 AM, Francesc Alted fal...@pytables.org wrote:
Hi Anthony,
2011/8/8, Anthony Scopatz scop...@gmail.com:
I have given some thought to proposing the use of Sphinx for the user guide
in the future.
I'm a little bit dubious; it is a big task.
You
On Mon, Aug 22, 2011 at 1:00 PM, Antonio Valentino
antonio.valent...@tiscali.it wrote:
Hi Stuart,
Il 22/08/2011 05:59, Stuart Mentzer ha scritto:
Hi Antonio,
Thanks for the quick response.
Hi Stuart,
Il giorno 21/ago/2011, alle ore 02.38, Stuart Mentzer ha scritto:
[CUT]
Hi Chris,
I know at Enthought, we considered the benefits of the former PyTables Pro
(now v2.3) to be worth the current beta status. Putting my PyTables hat on, it
still is beta so if you include it in your distribution, be prepared to
update ;).
[thread hijack]
Everyone else,
Do we have a timeline
or IRC meeting is a great idea. We should probably try to do these
once every 3 months or so.
Be Well
Anthony
Cheers,
~Josh
On Aug 23, 2011, at 4:34 AM, Anthony Scopatz wrote:
[thread hijack]
Everyone else,
Do we have a timeline for a release? Are we trying to release
On Sat, Aug 27, 2011 at 3:20 AM, Francesc Alted fal...@pytables.org wrote:
2011/8/26 Antonio Valentino antonio.valent...@tiscali.it
I agree, governance issues should be hammered out before v2.3
+1
IMHO the Governance milestone is not really blocking for 2.3.
Anyway it is OK for me
Hi Kris,
There have been some other build path issues recently as well. It would be
good to consider this. I know it is only a one-line change, but would you
mind submitting this as a pull request on github?
Thanks, Anthony
On Fri, Sep 2, 2011 at 9:47 AM, Chris Kees cek...@gmail.com wrote:
Hi Kris,
No worries. Github has a very detailed page on pull requests (
http://help.github.com/send-pull-requests/).
First you'll need to fork PyTables into your own remote repo. Then
you'll want to create a branch locally and point it at your
fork. Make your changes
Hi Damien,
I generally find it best, when storing dates and/or times, to use
Time64Col(), which basically just stores the timestamp (i.e. the float
number of seconds since the epoch).
Be Well
Anthony
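A minimal sketch of this approach with Time64Col (using the modern snake_case API; file and field names are illustrative):

```python
import os
import tempfile
import time

import tables

# Table description with a 64-bit timestamp column holding float
# seconds since the epoch.
class Event(tables.IsDescription):
    when = tables.Time64Col()
    value = tables.Float64Col()

path = os.path.join(tempfile.mkdtemp(), "events.h5")
with tables.open_file(path, mode="w") as f:
    t = f.create_table("/", "events", Event)
    row = t.row
    row["when"] = time.time()   # store the raw timestamp
    row["value"] = 1.5
    row.append()
```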
On Tue, Sep 13, 2011 at 6:42 AM, Damien Klassen damien.klas...@gmail.com wrote:
Hopefully an easy
On Tue, Sep 13, 2011 at 7:56 AM, Damien Klassen damien.klas...@gmail.com wrote:
Hopefully an easy one for someone out there!
Am trying to transfer some of my existing calculations from a relational DB
into pytables to speed up a range of calculations. One of the problems is
that the data has
On Tue, Sep 13, 2011 at 12:53 AM, PyTables Org pytab...@googlemail.com wrote:
Forwarding to list. Your email address doesn't appear to be registered.
See http://sourceforge.net/mail/?group_id=63486 for more information.
Begin forwarded message:
*From:
2011 02:20
Subject: Pytables-users Digest, Vol 64, Issue 4
To: pytables-users@lists.sourceforge.net
Date: Tue, 13 Sep 2011 11:20:09 -0500
From: Anthony Scopatz scop...@gmail.com
Subject: Re: [Pytables-users] Best Practice data design
To: Discussion list for PyTables
pytables-users
on vitables and editable entries/row in tables?
At this point, I am not yet convinced I should use h5/pytables...
just exploring
On Thu, Sep 15, 2011 at 11:58 AM, Anthony Scopatz
ascop...@enthought.com wrote:
Hello Emmanuel,
Could you please be more specific? You can always take data
Hello Francesc,
These are great. Thanks for sharing!
Be Well
Anthony
On Fri, Sep 16, 2011 at 1:54 AM, Francesc Alted fal...@pytables.org wrote:
Hi,
Just a quick note to say that I've just successfully introduced the
new PyTables 2.3 to the students participating in the Advanced
Whoohoo! Congrats everyone.
On Wed, Sep 21, 2011 at 2:52 PM, Antonio Valentino
antonio.valent...@tiscali.it wrote:
===
Announcing PyTables 2.3
===
We are happy to announce PyTables 2.3.
This release comes after about 10 months of
that installed on my machine. Also I can't find where the HDF5 1.8.5 headers
might be. The only HDF5 libraries I can see on my machine are 1.8.7.
Thanks,
-Ranjit
On Oct 12, 2011, at 10/12/11 9:15 PM, Anthony Scopatz wrote:
Hello Ranjit,
Does NumPy work? To the best of my knowledge, numpy
pytables using both
easy_install, and by downloading the source and compiling. I get the same
errors either way.
On Oct 13, 2011, at 10/13/11 7:55 AM, Anthony Scopatz wrote:
Hmm... How did you install pytables? What platform are you on?
On Thu, Oct 13, 2011 at 10:33 AM, Ranjit Chacko rjcha
:28 AM, Anthony Scopatz wrote:
Hmmm, is this also how you are getting numpy?
It may be the case that if your numpy is linked against MKL, then
PyTables also needs to be linked against MKL.
On Thu, Oct 13, 2011 at 12:00 PM, Ranjit Chacko rjcha...@gmail.comwrote:
I'm running Snow
Hello Fernando,
I personally have always found the 64-bit time stamps to be much more
useful (and much less ambiguous) than Python datetime objects.
However, if against my better judgement, you decide to store datetimes, you
effectively have three options (in preferred order):
1. Create a
On Thu, Nov 17, 2011 at 6:20 PM, Andre' Walker-Loud walksl...@gmail.com wrote:
Hi All,
I just stumbled upon pytables, and have been playing around with
converting my data files into hdf5 using pytables. I am wondering about
strategies to create data files.
I have created a file with the
Hello Matthew,
I think the code can definitely be faster. I am going to ask a series of
possibly silly questions, so bear with me. This will help pin down what
the problem points are.
1) Have you profiled the code? (I use
line_profiler: http://packages.python.org/line_profiler/.)
Which lines are
Hello Edward,
I don't think there is any document such as you proposed for translating
errors from HDF5 to PyTables. This problem does seem to be about the issue
of using Blosc. If you want a quick fix you could try turning compression
off or using a different filter instead. If you want more
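Concretely, both workarounds go through the Filters class (a hedged sketch; the compression levels chosen are illustrative):

```python
import tables

# Turn compression off entirely:
no_compression = tables.Filters(complevel=0)

# ...or keep compression but swap Blosc for another filter, e.g. zlib:
zlib_filters = tables.Filters(complevel=5, complib="zlib")

# Either object would then be passed as the `filters` argument when
# creating a file, group, or leaf.
```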
Hello Edward,
I'd like to respond point by point:
On Tue, Dec 6, 2011 at 2:54 PM, PyTables Org pytab...@googlemail.com wrote:
1. There seems to be an unpythonic design choice with the start, stop, step
convention for PyTables. Anything that is unnatural to a Python
programmer should be
On Wed, Dec 7, 2011 at 5:51 AM, Josh Moore josh.mo...@gmx.de wrote:
On Dec 6, 2011, at 11:06 PM, Anthony Scopatz wrote:
...snip...
5. The reference manual for numpy contains _many_ small examples. They
partially compensate for any lack of precision or excessive precision
On Mon, Jan 16, 2012 at 12:43 PM, Ümit Seren uemit.se...@gmail.com wrote:
I created an hdf5 file with pytables which contains around 29 000
tables with around 31k rows each.
I am trying to create a caching table in the same hdf5 file which
contains a subset of those 29 000 tables.
I wrote a
On Tue, Jan 17, 2012 at 4:35 AM, Ümit Seren uemit.se...@gmail.com wrote:
@Anthony:
Thanks for the quick reply.
I fixed my problem (I will get to it later) but first to my previous
problem:
I actually made a mistake in my previous mail.
My setup is the following: I have around 29
Hello Ümit,
Yes, this is some seriously messed up behavior. I would suggest profiling
(kernprof, line_profiler) to try to figure out what is going on here. This
is counterproductive at the very least.
My suspicion is that when indexes are used, BOTH the index and the
table values
Hello Luc,
This is because you do not have the correct version of the Windows C
runtime library installed. Python is distributed with the Visual C++ 2008
runtime (and not anything newer!). You need to install MSVCR90.DLL correctly
to get this to work. This is a well-known issue.
Google for ImportError
.
Thanks!
Brad
On Thu, Jan 19, 2012 at 2:10 PM, Anthony Scopatz scop...@gmail.com wrote:
On Thu, Jan 19, 2012 at 1:02 PM, Antonio Valentino
antonio.valent...@tiscali.it wrote:
Hi Brad,
Il 19/01/2012 04:13, Brad Buran ha scritto:
Hi Antonio:
The user block I am referring to is a region
: function_to_test at line 11
Total time: 10.7759 s
Well, I have the hdf5 file which reproduces the problem. I will try to
write a script which creates an hdf5 file that shows the same
behavior.
On Sun, Jan 22, 2012 at 11:20 PM, Anthony Scopatz scop...@gmail.com
wrote:
Hello Ümit,
Yes
Hello Davide,
Yes, it is possible to store fixed-size arrays in a Table. The column
classes (IntCol, Float64Col, etc.) all take a 'shape' argument in
their constructor, which takes a tuple of the rank and dimension information:
Float64Col(shape=(2, 4))
for a 2x4 array.
Be Well
Anthony
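A sketch of such a description as a dictionary (the key, not a constructor argument, supplies the column name; names here are illustrative):

```python
import tables

# One column holding a 2x4 float64 array per row.
desc = {"vec": tables.Float64Col(shape=(2, 4))}
print(desc["vec"].shape)  # (2, 4)
```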
On Thu, Jan 26, 2012 at 5:38 PM, Davide Cittaro daweonl...@gmail.com wrote:
On Jan 27, 2012, at 12:32 AM, Anthony Scopatz wrote:
Hello Davide,
Yes it is possible to store fixed sized arrays in a Table. The column
classes (IntCol, Float64Col, etc) all take a 'shape' argument
Hi Davide,
From the createTable() doc string
(http://pytables.github.com/usersguide/libref.html?#tables.File.createTable)
you'll see that you are not confined to the IsDescription class
for creating table descriptions. In fact, you can use dictionaries of Col
instances or numpy dtypes as
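A hedged sketch of passing a numpy dtype straight in as the description (using the modern snake_case name create_table; file and node names are illustrative):

```python
import os
import tempfile

import numpy as np
import tables

# The dtype itself serves as the table description.
dt = np.dtype([("x", np.int32), ("y", np.float64)])

path = os.path.join(tempfile.mkdtemp(), "desc.h5")
with tables.open_file(path, mode="w") as f:
    t = f.create_table("/", "points", description=dt)
    t.append([(1, 2.0), (3, 4.0)])
    print(t.colnames)  # ['x', 'y']
```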
Hi Jason,
I did some successful experimenting around with this idea about a year ago
or so. You'll probably be interested in the following issue:
https://github.com/PyTables/PyTables/issues/31 As implemented here, it
uses the undo/redo mechanism for hooking into events. You can take the
code
Hello All,
One of my collaborators (cc'd) has been trying to install PyTables on his
MacBook. Everything compiles and installs just fine, but there is
an immediate issue on import with dynamic linking. However, I know very
little about OS X development and it isn't readily apparent to me how to
\pyscript.py).
To me it seems that the problem is in the interaction (wscript, win32 and
pytables)
I'll try to get more info on how the exact process works, but maybe this
already rings a bell.
Message: 4
Date: Wed, 25 Jan 2012 11:17:58 -0600
From: Anthony Scopatz scop...@gmail.com
Subject
Hello Ask,
What you are looking for in general are called Links in PyTables:
http://pytables.github.com/usersguide/libref.html#the-softlink-class.
Storing them as nodes on their own is easy. Storing them as
attributes on another node may be more tricky. I am not even
certain that HDF5 supports
Hello German,
The easiest and probably the fastest way is to use a numpy array.
Simply pass the table into the array constructor:
import numpy as np
a = np.array(f.root.path.to.table)
If your table contains more than one type and you want to keep that
setup via a structured array, also pass in
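A hedged sketch of the round trip (modern snake_case API; names illustrative). Slicing the table (t[:]) also pulls the whole thing into memory, and a multi-column table comes back as a structured array:

```python
import os
import tempfile

import numpy as np
import tables

path = os.path.join(tempfile.mkdtemp(), "conv.h5")
dt = np.dtype([("a", np.int64), ("b", np.float64)])
with tables.open_file(path, mode="w") as f:
    t = f.create_table("/", "tbl", description=dt)
    t.append([(1, 2.0), (3, 4.0)])
    # Whole-table read into a numpy structured array.
    arr = t[:]
    print(arr.dtype.names)  # ('a', 'b')
```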
Hi James,
It seems that many of your questions, if you are approaching
this from a SQL perspective, may be answered on our Hints
for SQL Users page (http://www.pytables.org/moin/HintsForSQLUsers).
(Yes, we need to migrate this to sphinx.)
But in general I would agree with your assessment. (1)
On Sat, Mar 3, 2012 at 4:48 PM, Francesc Alted fal...@pytables.org wrote:
On Mar 3, 2012, at 12:44 PM, Anthony Scopatz wrote:
Hello Kale,
This is actually pretty easy. You should look at the where() method on
tables (
http://pytables.github.com/usersguide/libref.html
Hi Daπid,
So in general there are a couple of different ways of tackling this issue.
The one that you choose will depend on your desired scalability and existing
architecture. I'll outline some options now:
1) What you described. Every process writes out its own library and then
a post-process
Hi Chris,
PyTables should work with HDF5 v1.8.7. (Apparently v1.8.8 is out, but I
haven't tested PyTables with it.) What version are you using at TACC?
Also, what is the topmost build error that you are getting?
Be Well
Anthony
On Wed, Mar 7, 2012 at 3:35 PM, Chris Kees cek...@gmail.com wrote:
/python2.7/site-packages/numpy/core/include/numpy/__multiarray_api.h:1666:
warning: '_import_array' defined but not used
gcc: blosc/blosc.c
blosc/blosc.c:79: error: expected '=', ',', ';', 'asm' or '__attribute__'
before 'barr_init'
On Wed, Mar 7, 2012 at 3:38 PM, Anthony Scopatz scop
...
Today's Topics:
1. Performance advice/Blosc (Francesc Alted)
2. Re: Performance advice/Blosc (Francesc Alted)
3. Re: Pytables-users Digest, Vol 69, Issue 2 (Luc Kesters)
4. Re: Pytables-users Digest, Vol 69, Issue 2 (Anthony Scopatz
Hello Sreeaurovindh,
The problem is not with PyTables, but rather that you have a bug in the
code.
You assign table to be the adSuggester table and then try to access
queryToken
attributes / columns. This line:
queryToken=table.row
should really be:
queryToken=queryTable.row
Be Well
Anthony
Links are covered here.
soft: http://pytables.github.com/usersguide/libref.html?#the-softlink-class
external:
http://pytables.github.com/usersguide/libref.html?#the-externallink-class
On Thu, Mar 15, 2012 at 7:56 AM, Alvaro Tejero Cantero alv...@minin.es wrote:
Does PyTables support object region
that the PyTables objects return.
Be Well
Anthony
Sorry for the newbie questions, feel free to point me to the FM (but
see next email),
-á.
On Thu, Mar 15, 2012 at 17:57, Anthony Scopatz scop...@gmail.com wrote:
Hello Alvaro,
Thanks for your excitement!
On Thu, Mar 15, 2012 at 7:52 AM, Alvaro
Hello Sree,
Sorry for the slow response.
On Thu, Mar 15, 2012 at 10:56 PM, sreeaurovindh viswanathan
sreeaurovi...@gmail.com wrote:
Hi,
I have created five tables in an HDF5 file. I have created indexes during the
creation of the file. I have about 140 million records in my postgresql
Try changing
adSuggester['queryToks']=qA
to
adSuggester['queryToks'][:]=qA
Be Well
Anthony
On Sat, Mar 17, 2012 at 9:13 AM, sreeaurovindh viswanathan
sreeaurovi...@gmail.com wrote:
On Sat, Mar 17, 2012 at 7:34 PM, sreeaurovindh viswanathan
sreeaurovi...@gmail.com wrote:
Hi,
I have
Sorry for the long list of questions. Kindly help me with the same.
Thank you
Sree aurovindh V
On Sat, Mar 17, 2012 at 1:49 AM, Anthony Scopatz scop...@gmail.com wrote:
Hello Sree,
Sorry for the slow response.
On Thu, Mar 15, 2012 at 10:56 PM, sreeaurovindh viswanathan
sreeaurovi
Is there any way that you can query and write in much larger chunks than 6?
I don't know much about postgresql specifically, but in general HDF5 does
much better if you can take larger chunks. Perhaps you could at least do
the postgresql queries in parallel.
Be Well
Anthony
On Mon, Mar 19, 2012 at
Viswanathan
On Mon, Mar 19, 2012 at 10:01 PM, Anthony Scopatz scop...@gmail.com
wrote:
Is there any way that you can query and write in much larger chunks than
6? I don't know much about postgresql specifically, but in general HDF5
does much better if you can take larger chunks. Perhaps you
array!
Be Well
Anthony
Thanks
Sree aurovindh V
On Mon, Mar 19, 2012 at 10:16 PM, Anthony Scopatz scop...@gmail.com wrote:
What Francesc said ;)
On Mon, Mar 19, 2012 at 11:43 AM, Francesc Alted fal...@gmail.com wrote:
My advice regarding parallelization is: do not worry about this *at all
On Tue, Mar 20, 2012 at 11:46 AM, Tom Diethe tdie...@gmail.com wrote:
I'm writing a wrapper for sparse matrices (CSR format) and therefore
need to store three vectors and 3 scalars:
- data(float64 vector)
- indices(int32 vector)
- indptr (int32 vector)
- nrows (int32
On the other hand Tom,
If you know that you will be doing N insertions in the future,
you can always pre-allocate a Table / Array that is of size N
and pre-loaded with null values. You can then 'insert' by
over-writing the nth row. Furthermore, you can always append
size-N chunks whenever needed.
For
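The pre-allocate-and-overwrite idea might be sketched with an extendable array like so (a hedged sketch; names and sizes are illustrative, and the same overwrite trick applies to a Table):

```python
import os
import tempfile

import numpy as np
import tables

N = 100
path = os.path.join(tempfile.mkdtemp(), "prealloc.h5")
with tables.open_file(path, mode="w") as f:
    arr = f.create_earray("/", "data", atom=tables.Float64Atom(), shape=(0,))
    arr.append(np.zeros(N))   # pre-allocate N rows of null values
    arr[7] = 3.14             # "insert" by overwriting the nth row
    arr.append(np.zeros(N))   # grow by another chunk of N when needed
```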
On Fri, Apr 13, 2012 at 12:30 PM, Alvaro Tejero Cantero alv...@minin.es wrote:
Hi Anthony,
How does hierarchy help here? Do you create a 'singer_name'/song
table, or a 'genre_name'/song one? Most of the time the physical layout
in the form of a hierarchy is just an annoyance.
I have
Hello Alvaro,
What are the timings using the normal where() method?
http://pytables.github.com/usersguide/libref.html?highlight=where#tables.Table.where
Be Well
Anthony
On Wed, Apr 18, 2012 at 12:33 PM, Alvaro Tejero Cantero alv...@minin.es wrote:
A single array with 312 000 000 int16 values.
On Thu, Apr 19, 2012 at 15:33, Anthony Scopatz scop...@gmail.com wrote:
I was interested in how long it takes to iterate, since this is arguably
where the
majority of the time is spent.
On Thu, Apr 19, 2012 at 8:43 AM, Alvaro Tejero Cantero alv...@minin.es
wrote:
Some complementary
On Mon, Apr 23, 2012 at 9:14 PM, Francesc Alted fal...@pytables.org wrote:
On 4/19/12 8:43 AM, Alvaro Tejero Cantero wrote:
Some complementary info (I copy the details of the tables below)
timeit vals = numpy.fromiter((x['val'] for x in
Hello M,
On Tue, May 1, 2012 at 9:25 PM, lef...@cnr.colostate.edu
lef...@gmail.comwrote:
but this doesn't work. Is there a syntax that will work for this ?
The short answer is No. Basically, right now you have to write your own
wrapper class which knows how to dispatch various operations
Hello Christian,
I would probably use the modifyCoordinates()
(http://pytables.github.com/usersguide/libref.html#tables.Table.modifyCoordinates),
getWhereList()
(http://pytables.github.com/usersguide/libref.html#tables.Table.getWhereList),
and
On Mon, May 14, 2012 at 3:05 PM, Francesc Alted fal...@pytables.org wrote:
[snip]
However, do not expect to use all your cores at full speed in these cases,
as the reductions in numexpr can only make use of one thread (this is
because it has not been implemented yet, not due to an intrinsic
Hi Alex,
In general, HDF5 files are very portable to many platforms and many
languages. Indeed, that is sort of the purpose behind the HDF Group.
While there are some incompatible edge cases, you sort of have to look for
them. Josh did a very good job of outlining the support for HDF5 across
On Thu, Jun 14, 2012 at 4:30 PM, Andre' Walker-Loud walksl...@gmail.com wrote:
Hi Anthony,
On Jun 14, 2012, at 11:30 AM, Anthony Scopatz wrote:
On Wed, Jun 13, 2012 at 8:23 PM, Andre' Walker-Loud walksl...@gmail.com
wrote:
Hi All,
Still trying to sort out a recursive walk through
in any
programming, I have just hacked as needed for work/research, so things like
this are not yet common for me to realize.
No worries, that is what we are here for ;)
Cheers,
Andre
On Jun 14, 2012, at 3:28 PM, Anthony Scopatz wrote:
On Thu, Jun 14, 2012 at 4:30 PM, Andre' Walker
:57 AM, Anthony Scopatz scop...@gmail.com
wrote:
Hi David,
How did you build / install HDF5?
Be Well
Anthony
On Fri, Jun 15, 2012 at 7:14 PM, David Donovan donov...@gmail.com
wrote:
Hi Everyone,
I am having problems running the tests for PyTables on Mac OS X Lion.
I have
Hey Aquil,
Yes, the string method certainly works. The other thing you could do that
isn't mentioned in that post is have a table or 2D array whose first
column is the float timestamp [1] and whose second column is a string repr
of just the timezone name (or an int or float of the offset in sec
Hello Sreeroop,
Yes, the merged version with all of the former pro features is now in
v2.3.1. Just use this version from now on - and remember to thank Francesc
;)
Be Well
Anthony
On Wed, Jun 20, 2012 at 3:38 PM, Sreeroop invincido...@yahoo.co.in wrote:
Hi there,
I have been using pytables
Hi Daniele,
This is probably because of the way PyTables caches its file objects. As a
temporary workaround, why don't you try clearing the cache, or at least
removing this file from it? The cache is just a dictionary and it is located
at tables.file._open_files. I.e. try:
Hello Jacob,
This is not surprising. The HDF5 parallel library requires MPI and comes
with some special restrictions (no compression on write). As such, the
pain of implementing a parallel write version of PyTables has not been
worth it. We certainly welcome pull requests and further
Hi Jacob,
This is not solely a PyTables issue. As described, the methods you mention
all involve attribute (or metadata) access, which is notoriously slow in
HDF5. Or rather, much slower than reads/writes from the datasets (Tables,
Arrays) themselves. Generally, having a single table with 3E8
new everyday ;)
I'm still going to go with larger tables though, since I have to read the
data eventually.
Sounds good! Feel free to ask further questions here!
Be Well
Anthony
Thanks Again For Your Time,
Jacob
On Thu, Jun 28, 2012 at 10:16 AM, Anthony Scopatz scop...@gmail.com wrote
Hi Alvaro,
I think if you save the table as a record array, it should return you a
record array. Or does it return a structured array? Have you tried this?
Be Well
Anthony
On Thu, Jun 28, 2012 at 11:22 AM, Alvaro Tejero Cantero alv...@minin.es wrote:
Hi,
I've noticed that tables are loaded
...@minin.es wrote:
I just tested: passing an object of type numpy.core.records.recarray
to the constructor of createTable and then reading it back into memory
via slicing (h5f.root.myobj[:]) returns to me a numpy.ndarray.
Best,
-á.
On Thu, Jun 28, 2012 at 5:30 PM, Anthony Scopatz scop
list later tonight,
unless someone else wants to...
Francesc
On 6/28/12 8:25 PM, Anthony Scopatz wrote:
Hmmm Ok. Maybe there needs to be a recarray flavor.
I kind of like just returning a normal ndarray, though I see your
argument for returning a recarray. Maybe some of the other devs
Hello Again Jacob,
Hmm, are they of Python type long? Also, what exactly is the number that is
failing?
Be Well
Anthony
On Thu, Jun 28, 2012 at 4:18 PM, Jacob Bennett jacob.bennet...@gmail.com wrote:
Hello PyTables Users,
I have a concern with a very strange error that references that my
table.row, then it works
fine.
I will look at this issue more later tonight, and will report my findings.
Thanks,
Jacob
On Thu, Jun 28, 2012 at 5:37 PM, Anthony Scopatz scop...@gmail.com wrote:
Hello Again Jacob,
Hmm are they of Python type long? Also, what exactly is the number