That's a little silly.  That the message is INFO level is probably
small potatoes when doing a MapReduce job, but in our case, with lots
of file openings, it turns into a little log storm.

I suppose you'll need to disable it.  Set the log level to WARN on
org.apache.hadoop.io.compress?

This might help you make the change:
http://wiki.apache.org/hadoop/Hbase/FAQ#A5
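
For example, something like the following in your log4j.properties
(assuming the stock HBase log4j setup; the exact file location depends
on your install):

```properties
# Silence the "Got brand-new compressor" INFO spam from CodecPool
# by raising the threshold for the compress package to WARN.
log4j.logger.org.apache.hadoop.io.compress=WARN
```

Restart the regionserver (or use the log-level servlet if your version
has one) for it to take effect.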

St.Ack

On Mon, Jan 10, 2011 at 9:46 AM, Matt Corgan <[email protected]> wrote:
> I'm trying to use GZIP compression but running into a logging problem.  It
> appears that every time a block is compressed it logs the following:
>
> 2011-01-10 12:40:48,407 INFO org.apache.hadoop.io.compress.CodecPool: Got
> brand-new compressor
> 2011-01-10 12:40:48,414 INFO org.apache.hadoop.io.compress.CodecPool: Got
> brand-new compressor
> 2011-01-10 12:40:48,420 INFO org.apache.hadoop.io.compress.CodecPool: Got
> brand-new compressor
> 2011-01-10 12:40:48,426 INFO org.apache.hadoop.io.compress.CodecPool: Got
> brand-new compressor
> 2011-01-10 12:40:48,431 INFO org.apache.hadoop.io.compress.CodecPool: Got
> brand-new compressor
> 2011-01-10 12:40:48,447 INFO org.apache.hadoop.io.compress.CodecPool: Got
> brand-new compressor
> 2011-01-10 12:40:48,453 INFO org.apache.hadoop.io.compress.CodecPool: Got
> brand-new compressor
>
> Same for decompression.  It's logging that 150 times per second during a
> major compaction, which pretty much renders the logs useless.  I assume other
> people are not having this problem, so did we accidentally enable that
> logging somehow?
>
> Thanks,
> Matt
>
