On 9/19/12 3:37 PM, Luke Lee wrote:
> Hi all,
>
> I'm attempting to optimize my HDF5/pytables application for reading 
> entire columns at a time.  I was wondering what the best way to go 
> about this is.
>
> My HDF5 has the following properties:
>
> - 400,000+ rows
> - 25 columns
> - 147 MB in total size
> - 1 string column of size 12
> - 1 column of type 'Float'
> - 23 columns of type 'Float64'
>
> My access pattern for this data is generally to read an entire column 
> out at a time.  So, I want to minimize the number of disk accesses 
> this takes and store data contiguously by column.

To start with, you must be aware that the Table object stores data in 
row order, not column order.  In practice, that means that whenever you 
want to access a single column, you will need to traverse the *entire* 
table.
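For example, a column read like the one below (an untested sketch with 
made-up file and column names; I am using the camelCase spellings of 
PyTables 2.x, newer releases also accept open_file and friends) has to 
pull in every chunk of the table even though it only returns one field:

import tables

# made-up file, node and column names, just for illustration
h5 = tables.openFile("data.h5", mode="r")
table = h5.root.mytable

# Both calls return the whole column as a NumPy array, but HDF5 still
# has to read *every* chunk of the table, because rows (not columns)
# are what is stored contiguously.
colA = table.col("pressure")
colB = table.read(field="pressure")

h5.close()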

I have always wanted to implement a column-order table in PyTables, but 
that never happened in the end.

>
> I think the proper way to do this via HDF5 is to use 'chunking.'  I'm 
> creating my HDF5 files via Pytables so I guess using the 'chunkshape' 
> parameter during creation is the correct way to do this?

Yes, it is.
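For instance, something along these lines (again an untested sketch 
with a made-up row description and file name):

import tables

class Record(tables.IsDescription):
    name  = tables.StringCol(12)   # the 12-byte string column
    value = tables.Float64Col()    # one of the float columns

h5 = tables.openFile("data.h5", mode="w")
table = h5.createTable("/", "mytable", Record,
                       chunkshape=(1000,))   # 1000 rows per HDF5 chunk
h5.close()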

>
> All of the HDF5 documentation I read discusses 'chunksize' in terms of 
> rows and columns.  However, the Pytables 'chunkshape' parameter only 
> takes a single number.  I looked through the source and see that I can 
> in fact pass a tuple, which I assume is (row, column) as the HDF5 
> documentation would suggest.

Not quite.  The Table object is actually a one-dimensional beast, but 
with a 'compound' datatype (which can in some way be regarded as 
another dimension, although it is not a 'true' one).  So the 
`chunkshape` of a Table is a 1-tuple, counted in rows; there is no 
separate column dimension to chunk over.
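You can check this yourself; the chunkshape of a Table always has a 
single entry, counted in rows (rough sketch, made-up names again):

import tables

h5 = tables.openFile("data.h5", mode="r")
table = h5.root.mytable

print(table.chunkshape)    # e.g. (1000,) -- one dimension only
print(table.description)   # the compound datatype holding your 25 "columns"
print(table.rowsize)       # bytes per row; chunk size in bytes ~ rowsize * chunkshape[0]

h5.close()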

>
> Is it best to use the 'expectedrows' parameter instead of the 
> 'chunkshape' or use both?

You can try both.  The `expectedrows` parameter was introduced to ease 
the life of users: it lets PyTables compute a reasonable `chunkshape` 
automatically, but it is tuned for 'normal' usage.  For specific 
requirements, setting the `chunkshape` directly normally gives better 
results.
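As a quick comparison (untested sketch, made-up description and file 
name):

import tables

class Record(tables.IsDescription):
    name  = tables.StringCol(12)
    value = tables.Float64Col()

h5 = tables.openFile("tuning.h5", mode="w")

# Let PyTables choose a chunkshape tuned for the expected table size:
t1 = h5.createTable("/", "auto", Record, expectedrows=400000)
print(t1.chunkshape)        # whatever PyTables computed

# ...or set it yourself when you know your access pattern:
t2 = h5.createTable("/", "manual", Record, chunkshape=(1000,))
print(t2.chunkshape)        # (1000,)

h5.close()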

>
> I have done some debugging/profiling and discovered that my default 
> chunkshape is 321 for this dataset.  I have increased this to 1000 and 
> see quite a bit better speeds.  I'm sure I could keep changing these 
> numbers and find what is best for this particular dataset.  However, 
> I'm seeking a bit more knowledge on how Pytables uses each of these 
> parameters, how they relate to the HDF5 'chunking' concept and 
> best-practices.  This will help me to understand how to optimize in 
> the future instead of just for this particular dataset.  Is there any 
> documentation on best practices for using the 'expectedrows' and 
> 'chunkshape' parameters?

Well, there is:

http://pytables.github.com/usersguide/optimization.html

but I'm sure you already know this.

Frankly, if you want to enhance the speed of column retrieval, you are 
going to need an object that is stored in column order.  In this sense, 
you may want to experiment with the ctable object in the carray package 
(https://github.com/FrancescAlted/carray).  It supports basically the 
same capabilities as the Table object, but column order is implemented 
properly, so a ctable will probably buy you a nice speed-up.
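Something like this (again untested, with made-up column names, and 
assuming the ctable constructor accepts a tuple of columns plus their 
names):

import numpy as np
import carray as ca    # https://github.com/FrancescAlted/carray

# Build a column-order table from NumPy arrays.
n = 400000
names = np.zeros(n, dtype='S12')
press = np.random.rand(n)
temp  = np.random.rand(n)

ct = ca.ctable((names, press, temp),
               names=['name', 'pressure', 'temperature'])

# Every column is stored (and compressed) separately, so reading one
# column does not touch the others:
p = ct['pressure'][:]    # back to a NumPy array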

>
> Thank you for your time,

Hope this helps,

-- 
Francesc Alted

