I tried again, but there is nothing in the regionserver logs.
The size of the table is tiny (MBs).

In general, what would be a good way to verify that LZO compression is working?

i.

major compaction is not instant... I've seen it take an hour on a
300-400gb table.  You want to check the regionserver logs looking for
any major compaction lines.

in your case, the table will take a lot less time, but it's still not
instant. Give it another shot and see what happens.
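
Something like this is a quick way to check (the log path and file names are
just a guess, adjust for your install):

  # look for compaction activity in each regionserver log
  grep -i compaction $HBASE_HOME/logs/hbase-*-regionserver-*.log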

-ryan

On Wed, Sep 2, 2009 at 12:57 PM, Igor Ranitovic<[email protected]> wrote:
After setting up a test table with about 250K rows, I wanted to enable
lzo compression on it.

In the shell, I disabled and altered the table and then ran major_compact.
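
Roughly this sequence (re-typed from memory, so the exact lines are approximate):

  hbase(main):010:0> disable 'urls'
  hbase(main):011:0> alter 'urls', {NAME => 'data', COMPRESSION => 'LZO'}
  hbase(main):012:0> major_compact 'urls'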

This is the result:

hbase(main):013:0> major_compact 'urls'
0 row(s) in 0.0460 seconds

Does '0 row(s)' mean that this did not work?
(the size of the table on the disk did not change/reduce)
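
I was checking the on-disk size with something like this ('/hbase' being
whatever hbase.rootdir points at):

  # total size of the 'urls' table directory in HDFS
  hadoop fs -dus /hbase/urls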

Table info:
hbase(main):019:0> describe 'urls'
DESCRIPTION                                                      ENABLED
 {NAME => 'urls', FAMILIES => [{NAME => 'data', VERSIONS =>      false
 '1', COMPRESSION => 'LZO', TTL => '2147483647', BLOCKSIZE =>
 '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}
1 row(s) in 0.0420 seconds


Table Count:
hbase(main):021:0> count 'urls'
....
Current count: 251000, row: http://com,zz/


From the master log:
hbase-crawler-master-xxx.log:2009-09-02 13:54:51,880 INFO com.hadoop.compression.lzo.GPLNativeCodeLoader: Loaded native gpl library
hbase-crawler-master-xxx.log:2009-09-02 13:54:51,891 INFO com.hadoop.compression.lzo.LzoCodec: Successfully loaded & initialized native-lzo library

I'm using 0.20. Thanks.

i.


