Re: [Pytables-users] Optimizing pytables for reading entire columns at a time

2012-09-21 Thread Anthony Scopatz
On Fri, Sep 21, 2012 at 4:55 PM, Francesc Alted wrote:

> On 9/21/12 10:07 PM, Anthony Scopatz wrote:
> > On Fri, Sep 21, 2012 at 10:49 AM, Luke Lee wrote:
> >
> > Hi again,
> >
> > I haven't been getting the updates via email so I'm attempting to
> > post again to respond.
> >
> > Thanks everyone for the suggestions.  I have a few questions:
> >
> > 1.  What is the benefit of using the stand-alone carray project
> > (https://github.com/FrancescAlted/carray) vs Pytables.carray?
> >
> >
> > Hello Luke,
> >
> > carrays are in-memory, not on disk.
>
> Well, that was true until version 0.5, when disk persistence was
> introduced.  Now, carray supports both in-memory and on-disk objects,
> and they work in exactly the same way.
>

Sorry for not being exactly up to date ;)


>
> --
> Francesc Alted
>
___
Pytables-users mailing list
Pytables-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/pytables-users


Re: [Pytables-users] Optimizing pytables for reading entire columns at a time

2012-09-21 Thread Francesc Alted
On 9/21/12 10:07 PM, Anthony Scopatz wrote:
On Fri, Sep 21, 2012 at 10:49 AM, Luke Lee wrote:
>
> Hi again,
>
> I haven't been getting the updates via email so I'm attempting to
> post again to respond.
>
> Thanks everyone for the suggestions.  I have a few questions:
>
> 1.  What is the benefit of using the stand-alone carray project
> (https://github.com/FrancescAlted/carray) vs Pytables.carray?
>
>
> Hello Luke,
>
> carrays are in-memory, not on disk.

Well, that was true until version 0.5, when disk persistence was
introduced.  Now, carray supports both in-memory and on-disk objects,
and they work in exactly the same way.

-- 
Francesc Alted




Re: [Pytables-users] Optimizing pytables for reading entire columns at a time

2012-09-21 Thread Anthony Scopatz
On Fri, Sep 21, 2012 at 10:49 AM, Luke Lee wrote:

> Hi again,
>
> I haven't been getting the updates via email so I'm attempting to post
> again to respond.
>
> Thanks everyone for the suggestions.  I have a few questions:
>
> 1.  What is the benefit of using the stand-alone carray project (
> https://github.com/FrancescAlted/carray) vs Pytables.carray?
>

Hello Luke,

carrays are in-memory, not on disk.


> 2.  I realized my code base never uses the query functionality of a Table.
>  So, I changed all my columns to be just Pytables.carray objects instead.
>  They are all sitting at the top of the hierarchy, just below root.  Is
> this a good idea?
>
> I see a big speed increase from this obviously because now everything is
> stored contiguously.  However, are there any downsides to doing this?  I
> suppose I could also use EArray, but we are never actually changing the
> data once it is stored in HDF5.
>

If it works for you, then great!


> 3.  Is compression automatically happening with the Carray?  I know the
> documentation says that compression is supported, but what do I need to do
> to enable it?  Maybe it's already happening and this is contributing to my
> big speed improvement.
>

For compression to be enabled, you need to define the appropriate filter
[1] on either the node or the file.
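For instance, here is a minimal sketch combining your per-column CArray layout with an explicit filter.  (It uses today's snake_case PyTables names; in the 2.x releases current at the time of this thread the calls were spelled openFile/createCArray.  The file and column names are made up.)

```python
import numpy as np
import tables

# Define a filter: zlib compression at level 5.  Any leaf created with
# these filters will be compressed chunk by chunk on disk.
filters = tables.Filters(complevel=5, complib='zlib')

data = np.random.rand(400_000)

# One CArray per column, directly under root, so each column is stored
# (and read back) contiguously, independent of the other columns.
with tables.open_file('columns.h5', mode='w') as h5:
    h5.create_carray(h5.root, 'pressure', obj=data, filters=filters)

# Reading an entire column back is a single slice of one array.
with tables.open_file('columns.h5', mode='r') as h5:
    pressure = h5.root.pressure[:]
```

Filters can also be passed to `open_file` itself to act as the default for every node created in that file.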

> 4.  I would certainly love to take a look at contributing something like
> this in my free time.  I don't have a whole lot at this time so the changes
> could take a while.  I'm sure I need to learn a lot more about the codebase
> before really giving it a try.  I'm going to take a look at this though,
> thanks for the suggestion!
>

No problem ;)


> 5.  How do I subscribe to the dev mailing list?  I only see announcements
> and users.
>

Here is the dev list site:
https://groups.google.com/forum/?fromgroups#!forum/pytables-dev


> 6.  Any idea why I'm not getting the emails from the list?  I signed up 2
> days ago and didn't get any of your replies via email.
>

We have been having problems with this list.  I think it might be time to
transition...

Be Well
Anthony

1.
http://pytables.github.com/usersguide/libref/helper_classes.html?highlight=filter#tables.Filters
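As Francesc notes further down in the thread, `chunkshape` for a Table is a 1-tuple of rows (the compound dtype is not a true dimension), and it can be set explicitly at creation time alongside `expectedrows`.  A minimal sketch (snake_case API; the record layout and names are made up):

```python
import numpy as np
import tables

# A record type mirroring the dataset in this thread: one 12-char
# string column plus float columns.
class Record(tables.IsDescription):
    name = tables.StringCol(12)
    value = tables.Float64Col()

with tables.open_file('tuned.h5', mode='w') as h5:
    # expectedrows lets PyTables guess a sensible chunkshape;
    # chunkshape, if given, overrides that guess.  A Table is
    # one-dimensional, so the chunkshape is a 1-tuple of rows.
    table = h5.create_table(h5.root, 'measurements', Record,
                            expectedrows=400_000,
                            chunkshape=(1000,))
    row = table.row
    for i in range(10):
        row['name'] = ('sample_%d' % i).encode()
        row['value'] = float(i)
        row.append()
    table.flush()
    shape = table.chunkshape
    nrows = table.nrows
```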




Re: [Pytables-users] Optimizing pytables for reading entire columns at a time

2012-09-21 Thread Luke Lee
Hi again,

I haven't been getting the updates via email so I'm attempting to post
again to respond.

Thanks everyone for the suggestions.  I have a few questions:

1.  What is the benefit of using the stand-alone carray project (
https://github.com/FrancescAlted/carray) vs the built-in PyTables CArray?
2.  I realized my code base never uses the query functionality of a Table.
So, I changed all my columns to be plain PyTables CArray objects instead.
They are all sitting at the top of the hierarchy, just below root.  Is
this a good idea?

I see a big speed increase from this obviously because now everything is
stored contiguously.  However, are there any downsides to doing this?  I
suppose I could also use EArray, but we are never actually changing the
data once it is stored in HDF5.

3.  Is compression automatically happening with the Carray?  I know the
documentation says that compression is supported, but what do I need to do
to enable it?  Maybe it's already happening and this is contributing to my
big speed improvement.

4.  I would certainly love to take a look at contributing something like
this in my free time.  I don't have a whole lot at this time so the changes
could take a while.  I'm sure I need to learn a lot more about the codebase
before really giving it a try.  I'm going to take a look at this though,
thanks for the suggestion!

5.  How do I subscribe to the dev mailing list?  I only see announcements
and users.
6.  Any idea why I'm not getting the emails from the list?  I signed up 2
days ago and didn't get any of your replies via email.

Thanks!


Re: [Pytables-users] Optimizing pytables for reading entire columns at a time

2012-09-21 Thread Alvaro Tejero Cantero
Hi!

You may want to have a look at, reuse, or combine your approach with the
one implemented in pandas (pandas.io.pytables.HDFStore):

https://github.com/pydata/pandas/blob/master/pandas/io/pytables.py

(see _write_array method)

Pandas is somewhat liberal with dtypes (partly because of the
missing-data problem), which often leads to VLArrays being created, and
those may not be the most performant option.  But if the types of the
columns in the data frames are inferred correctly, then, as far as I
understand, CArrays nested in groups will be used (as suggested above).
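A minimal HDFStore round trip looks like this (it assumes pandas with the PyTables backend installed; the store key and column names are just illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'depth': np.arange(5, dtype='float64'),
    'porosity': np.linspace(0.1, 0.3, 5),
})

# HDFStore persists each frame into an HDF5 group via pytables
# under the hood (see the _write_array method linked above).
with pd.HDFStore('frames.h5', mode='w') as store:
    store.put('survey', df)       # write the frame
    roundtrip = store['survey']   # read the whole frame back
```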

Best,

 -á.



On 21 September 2012 01:14, Anthony Scopatz wrote:
> Luke,
>
> I'd also like to mention, that if you don't want to wait for us to implement
> this we will gladly take contributions ;).  If you need help getting started
> or throughout the process we are also happy to provide that too.  Please
> sign up for PyTables Dev (pytables-...@googlegroups.com) so we move
> implementation discussions away from users.  Clearly, people would benefit
> from you taking this upon yourself, should you choose to accept this
> mission!
>
> Be Well
> Anthony
>
> On Thu, Sep 20, 2012 at 3:26 PM, Josh Ayers wrote:
>>
>> Depending on your use case, you may be able to get around this by storing
>> each column in its own table.  That will effectively store the data in
>> column-first order.  Instead of creating a table, you would create a group,
>> which then contains a separate table for each column.
>>
>> If you want, you can wrap all the functionality you need in a single
>> object that hides the complexity and makes it act just like a single table.
>> I did something similar to this recently and it's worked well.  However, I
>> wasn't too concerned with exactly matching the Table API or implementing all
>> of its features.
>>
>> Creating a more general version that does duplicate the Table class
>> interface and can be included in PyTables is definitely possible and is
>> something I'd like to do, but I've never had the necessary time to dedicate
>> to it.
>>
>> Hope that helps,
>> Josh
>>
>>
>>
>> On Wed, Sep 19, 2012 at 10:56 AM, Francesc Alted wrote:
>>>
>>> On 9/19/12 3:37 PM, Luke Lee wrote:
>>> > Hi all,
>>> >
>>> > I'm attempting to optimize my HDF5/pytables application for reading
>>> > entire columns at a time.  I was wondering what the best way to go
>>> > about this is.
>>> >
>>> > My HDF5 has the following properties:
>>> >
>>> > - 400,000+ rows
>>> > - 25 columns
>>> > - 147 MB in total size
>>> > - 1 string column of size 12
>>> > - 1 column of type 'Float'
>>> > - 23 columns of type 'Float64'
>>> >
>>> > My access pattern for this data is generally to read an entire column
>>> > out at a time.  So, I want to minimize the number of disk accesses
>>> > this takes and store data contiguously by column.
>>>
>>> To start with, you must be aware that the Table object stores data in
>>> row-order, not column order.  In practice, that means that whenever you
>>> want to access a single column, you will need to traverse the *entire*
>>> table.
>>>
>>> I always wished to implement a column-order table in PyTables, but that
>>> did not happen in the end.
>>>
>>> >
>>> > I think the proper way to do this via HDF5 is to use 'chunking.'  I'm
>>> > creating my HDF5 files via Pytables so I guess using the 'chunkshape'
>>> > parameter during creation is the correct way to do this?
>>>
>>> Yes, it is.
>>>
>>> >
>>> > All of the HDF5 documentation I read discusses 'chunksize' in terms of
>>> > rows and columns.  However, the Pytables 'chunkshape' parameter only
>>> > takes a single number.  I looked through the source and see that I can
>>> > in fact pass a tuple, which I assume is (row, column) as the HDF5
>>> > documentation would suggest.
>>>
>>> Not quite.  The Table object is actually an uni-dimensional beast, but
>>> with a 'compound' datatype (that in some way can be regarded as another
>>> dimension, but it is not a 'true' dimension).
>>>
>>> >
>>> > Is it best to use the 'expectedrows' parameter instead of the
>>> > 'chunkshape' or use both?
>>>
>>> You can try both.  The `expectedrows` parameter was introduced to ease
>>> the life of users, and it 'optimizes' the `chunkshape` but for 'normal'
>>> usage.  For specific requirements, playing directly with the
>>> `chunkshape` normally gives better results.
>>>
>>> >
>>> > I have done some debugging/profiling and discovered that my default
>>> > chunkshape is 321 for this dataset.  I have increased this to 1000 and
>>> > see quite a bit better speeds.  I'm sure I could keep changing these
>>> > numbers and find what is best for this particular dataset.  However,
>>> > I'm seeking a bit more knowledge on how Pytables uses each of these
>>> > parameters, how they relate to the HDF5 'chunking' concept and
>>> > best-practices.  This will help me to understand how to optimize in
>>> > the future instead of just for this particular dataset.  Is there any
>>> > documentation on best practices for using the 'expe