Thanks Ryan, that is helpful.

A couple of questions:

1. Will this work on the 0.20 alpha, or do I need to be on trunk?
2. When we say read operations, are we also talking about scanner
performance, not just straight Gets?
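
Also, just to confirm I have the sequence right, this is what I'm planning
to run in the shell once LZO is set up ('mytable' and the column family
'cf' are placeholders for my real names):

disable 'mytable'
alter 'mytable', {NAME => 'cf', COMPRESSION => 'LZO'}
enable 'mytable'
major_compact 'mytable'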




Ryan Rawson wrote:
> 
> Use the shell:
> 
> major_compact 'table'
> 
> good luck!
> 
> On Tue, Jul 28, 2009 at 3:03 PM, llpind<[email protected]> wrote:
>>
>> Can I trigger a major_compact from the web UI by clicking on the HBase
>> table and then clicking Compact?
>>
>> Thanks
>>
>> Ryan Rawson wrote:
>>>
>>> Hi,
>>>
>>> You should enable LZO compression.  Both read and write performance
>>> go up.
>>>
>>> Follow the instructions here to get the basic setup working:
>>> http://wiki.apache.org/hadoop/UsingLzoCompression
>>>
>>> Once you have your cluster restarted with the new jars and native
>>> libs, disable the tables.  Then alter them to include the
>>> COMPRESSION => 'LZO' flag and re-enable them.  Kick off a
>>> major_compact on each table and the new files will be written in LZO.
>>>
>>> -ryan
>>>
>>> On Tue, Jul 28, 2009 at 2:23 PM, llpind<[email protected]> wrote:
>>>>
>>>> Hey,
>>>>
>>>> I have a couple tall tables (~ 120M rows each with small columns).  I
>>>> was
>>>> wondering what type of read performance I can expect using LZO
>>>> compression?
>>>>
>>>> Also, is there a way to enable compression on an existing HBase table,
>>>> or do I have to drop, recreate, and reload all of the data?
>>>>
>>>> Thanks
>>>>
>>>>
>>>
>>>
>>
>>
>>
> 
> 
