Hi,

You should enable LZO compression.  Performance improves across the
board, for both reads and writes.

Follow these instructions to get the basics set up:
http://wiki.apache.org/hadoop/UsingLzoCompression

Once you have restarted your cluster with the new jars and native
libs, disable the tables.  Then alter them to set the
COMPRESSION => 'LZO' attribute (it is set per column family).
Re-enable them, and kick off a major_compact on each table; the newly
written files will be in LZO.
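For example, from the HBase shell the steps would look roughly like
this (here 'mytable' and the column family 'cf' are placeholder names
for your own table and family):

```
disable 'mytable'
alter 'mytable', {NAME => 'cf', COMPRESSION => 'LZO'}
enable 'mytable'
major_compact 'mytable'
```

Note that existing store files stay in their old format until the
major compaction rewrites them.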

-ryan

On Tue, Jul 28, 2009 at 2:23 PM, llpind<[email protected]> wrote:
>
> Hey,
>
> I have a couple tall tables (~ 120M rows each with small columns).  I was
> wondering what type of read performance I can expect using LZO compression?
>
> Also, is there a way to enable compression on an existing HBase table, or do
> I have to drop, recreate, and reload the entire data?
>
> Thanks
> --
> View this message in context: 
> http://www.nabble.com/LZO-compression-in-HBase-tp24708137p24708137.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
>
