Francesc Alted said the following on 11/9/2010 12:42 PM:
> After having a look at your script, yes, I think this is the expected
> behaviour. In order to explain this you need to know how HDF5 stores
> its data internally. For chunked datasets (the Table object is an
> example of this), the I/O is done one whole chunk at a time. [...]
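A minimal sketch of what that chunk-wise I/O means in practice, using the
PyTables 2.x API names (openFile, createTable, modifyRows). The file name
"example.h5", the node name "data" and the two-column Record schema are
made up stand-ins for the real 9-column table:

    import tables

    # Hypothetical two-column stand-in for the real 9-column table.
    class Record(tables.IsDescription):
        key   = tables.Int64Col(pos=0)
        value = tables.Float64Col(pos=1)

    fileh = tables.openFile("example.h5", mode="w")
    table = fileh.createTable("/", "data", Record, expectedrows=6000)
    table.append([(i, 0.0) for i in range(6000)])
    table.flush()

    # HDF5 performs I/O on whole chunks, so even a one-row update
    # reads and rewrites the chunk that holds that row.
    print "chunkshape:", table.chunkshape   # rows per chunk, chosen by PyTables

    # Rewrite a single row in place.
    table.modifyRows(start=42, stop=43, rows=[(42, 3.14)])
    table.flush()
    fileh.close()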
On Tuesday 09 November 2010 18:45:52, David E. Sallis wrote:
> Francesc, sorry this took so long, but I'm back. I have upgraded to
> PyTables 2.2, HDF5 1.8.5-patch1, Numpy 1.5.0, Numexpr 1.4.1, and
> Cython 0.13. I'm still running Python 2.6.5 (actually Stackless
> Python) under Linux RedHat 5.
>
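For reference, one way to confirm which versions a given interpreter is
actually picking up (assuming the package names above; as far as I know
tables.print_versions() also reports the HDF5 library PyTables was built
against):

    import sys
    import tables, numpy, numexpr

    print sys.version
    print "PyTables", tables.__version__
    print "NumPy   ", numpy.__version__
    print "Numexpr ", numexpr.__version__

    # Fuller report, including the underlying HDF5 library version.
    tables.print_versions()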
On Wednesday 22 September 2010 22:04:53, David E. Sallis wrote:
> I have a table in an HDF5 file consisting of 9 columns and just over
> 6000 rows, and an application which performs updates on these table
> rows. The application runs hourly and performs updates to the table
> during each run. No [...]
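What such an hourly update pass might look like with the PyTables 2.x API,
assuming the hypothetical file and columns from the sketch above (a real
run would use the actual 9-column schema and its own selection condition):

    import tables

    fileh = tables.openFile("example.h5", mode="r+")
    table = fileh.root.data

    # Update, in place, every row matching a made-up condition.
    for row in table.where("key < 100"):
        row["value"] = row["value"] + 1.0
        row.update()          # writes the modified row back to the table

    table.flush()
    fileh.close()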