Sounds like all upside to me... it was a little tricky to notice since the
data still compresses without them.

Matt


On Tue, Jan 11, 2011 at 10:14 PM, Stack <[email protected]> wrote:

> Oh.  Yeah.  Makes sense.  We used to bundle the native libs but we
> seem to have dropped them.  We should add them back?
> St.Ack
>
> On Tue, Jan 11, 2011 at 3:24 PM, Matt Corgan <[email protected]> wrote:
> > Turns out this is what happens if you don't have the native libraries set
> > up correctly.  The data still gets compressed using the pure java codec,
> > but it doesn't cache the codec and gives you a warning each time it
> > creates it for each block.
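
A quick way to check for this situation is to look for the native libraries on
disk; a minimal sketch, assuming a typical Hadoop layout (the default
HADOOP_HOME path and the platform subdirectory name are assumptions, not from
this thread):

```shell
# Look for libhadoop in the usual native-lib directory (path is an assumption;
# adjust HADOOP_HOME and the platform subdirectory for your install).
if ls "${HADOOP_HOME:-/usr/lib/hadoop}"/lib/native/*/libhadoop.* >/dev/null 2>&1; then
  echo "native libs found"
else
  echo "no native libs found (pure-java codecs will be used)"
fi
```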
> >
> >
> > On Mon, Jan 10, 2011 at 2:41 PM, Stack <[email protected]> wrote:
> >
> >> That's a little silly.  That the message is INFO level is probably
> >> small potatoes when doing a mapreduce job, but in our case with lots of
> >> file openings it turns into a little log storm.
> >>
> >> I suppose you'll need to disable it.  Set log level to WARN on
> >> org.apache.hadoop.io.compress?
> >>
> >> This might help you make the change:
> >> http://wiki.apache.org/hadoop/Hbase/FAQ#A5
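
For anyone finding this thread later, the log4j change suggested above would
look something like the line below (the logger name is taken from the log
output quoted in this thread; the exact properties-file location is
install-specific):

```properties
# Raise the compression package to WARN so the per-block
# "Got brand-new compressor" INFO lines are suppressed.
log4j.logger.org.apache.hadoop.io.compress=WARN
```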
> >>
> >> St.Ack
> >>
> >> On Mon, Jan 10, 2011 at 9:46 AM, Matt Corgan <[email protected]> wrote:
> >> > I'm trying to use GZIP compression but running into a logging problem.
> >> > It appears that every time a block is compressed it logs the following:
> >> >
> >> > 2011-01-10 12:40:48,407 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
> >> > 2011-01-10 12:40:48,414 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
> >> > 2011-01-10 12:40:48,420 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
> >> > 2011-01-10 12:40:48,426 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
> >> > 2011-01-10 12:40:48,431 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
> >> > 2011-01-10 12:40:48,447 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
> >> > 2011-01-10 12:40:48,453 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor
> >> >
> >> > Same for decompression.  It's logging that 150 times per second during
> >> > a major compaction, which pretty much renders the logs useless.  I
> >> > assume other people are not having this problem, so did we accidentally
> >> > enable that logging somehow?
> >> >
> >> > Thanks,
> >> > Matt
> >> >
> >>
> >
>
