Hi Matt,

> My motivation for doing this is to make hbase a viable candidate for a
> large, auto-partitioned, sorted, *in-memory* database.  Not the usual
> analytics use case, but i think hbase would be great for this.


Really interested in hearing your thoughts on why HBase currently is, whether or 
not "viable", at least a suboptimal candidate for that purpose. It has been moving 
in the direction of being a better fit ever since 0.89. Where we can further 
improve would be a good discussion to have; the HBase constituency is not only 
analytics use cases, as you point out.

Best regards,

    - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein (via 
Tom White)


>________________________________
>From: Matt Corgan <mcor...@hotpads.com>
>To: dev@hbase.apache.org
>Sent: Friday, September 16, 2011 7:29 PM
>Subject: Re: prefix compression implementation
>
>Ryan - thanks for the feedback.  The situation I'm thinking of where it's
>useful to parse DirectBB without copying to heap is when you are serving
>small random values out of the block cache.  At HotPads, we'd like to store
>hundreds of GB of real estate listing data in memory so it can be quickly
>served up at random.  We want to access many small values that are already
>in memory, so basically skipping step 1 of 3 because values are already in
>memory.  That being said, the DirectBB are not essential for us since we
>haven't run into gb problems, i just figured it would be nice to support
>them since they seem to be important to other people.
>
>My motivation for doing this is to make hbase a viable candidate for a
>large, auto-partitioned, sorted, *in-memory* database.  Not the usual
>analytics use case, but i think hbase would be great for this.
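
A minimal sketch of the kind of no-copy read described above: pulling a small
value straight out of an off-heap direct ByteBuffer with absolute gets, rather
than copying the block onto the heap first. The class name, method names, and
block layout are hypothetical for illustration only, not HBase's actual block
format or API.

    import java.nio.ByteBuffer;

    // Hypothetical layout: a 4-byte big-endian length followed by the value bytes.
    public class DirectReadSketch {

        // Absolute get: no heap copy, and the buffer's position is untouched,
        // so the same cached block can be shared by concurrent readers.
        static int valueLength(ByteBuffer block, int offset) {
            return block.getInt(offset);
        }

        // Compare the value in place against a heap byte[] without materializing it.
        static boolean valueEquals(ByteBuffer block, int offset, byte[] expected) {
            int len = valueLength(block, offset);
            if (len != expected.length) {
                return false;
            }
            for (int i = 0; i < len; i++) {
                if (block.get(offset + 4 + i) != expected[i]) {
                    return false;
                }
            }
            return true;
        }

        public static void main(String[] args) {
            ByteBuffer block = ByteBuffer.allocateDirect(64);  // stand-in for an off-heap cached block
            byte[] value = "listing-123".getBytes();
            block.putInt(0, value.length);
            for (int i = 0; i < value.length; i++) {
                block.put(4 + i, value[i]);
            }
            System.out.println(valueEquals(block, 0, value));  // prints true
        }
    }

In this style a reader can walk entries inside a cached block on demand instead
of deserializing the whole block up front.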
