Yes, you need to add the Snappy libraries to the HBase path (I think the variable to
set is called HBASE_LIBRARY_PATH)
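For reference, a rough sketch of the two steps (the native-library path below is just an example; point it at wherever your cluster's Hadoop native libraries with libsnappy actually live):

```shell
# In conf/hbase-env.sh -- example path, adjust to your install:
export HBASE_LIBRARY_PATH=/usr/lib/hadoop/lib/native/Linux-amd64-64

# Then verify HBase itself can load the codec before creating tables with it:
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/testfile snappy
```

If the CompressionTest run completes without a native-library error, a `create` with `COMPRESSION => 'SNAPPY'` should stop failing with offline regions.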

------------------------------
On Wed 25 Jul, 2012 3:46 AM IST Mohit Anchlia wrote:

>On Tue, Jul 24, 2012 at 2:04 PM, Dhaval Shah 
><[email protected]>wrote:
>
>> I bet that your compression libraries are not available to HBase. Run the
>> compression test utility and see if it can find LZO
>>
>That seems to be the case for SNAPPY. However, I do have snappy installed
>and it works with hadoop just fine and HBase is running on the same
>cluster. Is there something special I need to do for HBase?
>
>>
>> Regards,
>> Dhaval
>>
>>
>> ----- Original Message -----
>> From: Mohit Anchlia <[email protected]>
>> To: [email protected]
>> Cc:
>> Sent: Tuesday, 24 July 2012 4:39 PM
>> Subject: Re: Enabling compression
>>
>> Thanks! I was trying it out and I see this message when I use COMPRESSION,
>> but it works when I don't use it. Am I doing something wrong?
>>
>>
>> hbase(main):012:0> create 't2', {NAME => 'f1', VERSIONS => 1, COMPRESSION
>> => 'LZO'}
>>
>> ERROR: org.apache.hadoop.hbase.client.RegionOfflineException: Only 0 of 1
>> regions are online; retries exhausted.
>>
>> hbase(main):014:0> create 't3', {NAME => 'f1', VERSIONS => 1}
>>
>> 0 row(s) in 1.1260 seconds
>>
>>
>> On Tue, Jul 24, 2012 at 1:37 PM, Jean-Daniel Cryans <[email protected]
>> >wrote:
>>
>> > On Tue, Jul 24, 2012 at 1:34 PM, Jean-Marc Spaggiari
>> > <[email protected]> wrote:
>> > > Also, if I understand it correctly, this will enable compression
>> > > for new puts but will not compress the cells already stored,
>> > > right? For that, we need to run a major compaction of the table, which
>> > > will rewrite all the cells and so compact them?
>> >
>> > Yeah, although you may not want to recompact everything all at once in
>> > a live system. You can just let it happen naturally through cycles of
>> > flushes and compactions; it's all fine.
>> >
>> > J-D
>> >
>>
>>