I am deleting the .oldlog files manually now, and I am seeing a ton of the
errors below. Are these errors caused by my manually deleting the .oldlog
files, or are they the error from the bug that explains why the files are
not deleted on their own?

2011-02-03 12:07:23,618 ERROR org.apache.hadoop.hbase.master.LogCleaner: Caught exception
java.lang.NullPointerException
        at org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner.isLogDeletable(ReplicationLogCleaner.java:59)
        at org.apache.hadoop.hbase.master.LogCleaner.chore(LogCleaner.java:140)
        at org.apache.hadoop.hbase.Chore.run(Chore.java:66)
        at org.apache.hadoop.hbase.master.LogCleaner.run(LogCleaner.java:167)
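
For what it's worth, the trace dies in ReplicationLogCleaner.isLogDeletable,
so it looks like the cleaner is dereferencing replication state that was
never initialized. A rough sketch of the kind of null guard I would have
expected there (the names below are my guesses, not the actual HBase
source):

import org.apache.hadoop.fs.Path;

// Hypothetical sketch (my names, not the actual HBase source): a cleaner
// delegate that treats missing replication state as "keep the log"
// instead of throwing a NullPointerException.
public class GuardedReplicationLogCleaner {

  // Stand-in for whatever ZooKeeper-backed state the real cleaner
  // consults to decide whether a log is still queued for replication.
  interface ReplicationState {
    boolean isLogQueued(String logName);
  }

  private ReplicationState state; // may be null if init failed

  public boolean isLogDeletable(Path filePath) {
    if (state == null) {
      // Without replication state we cannot prove a log is safe to
      // delete, so err on the side of keeping it.
      return false;
    }
    return !state.isLogQueued(filePath.getName());
  }
}

If the real code is missing a guard like that, these NPEs would be the bug
itself rather than fallout from my manual deletes.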



On Sat, Jan 29, 2011 at 8:43 PM, Jean-Daniel Cryans <[email protected]> wrote:

> There's some sort of rate limiting for file deletion; I think it's 20
> files every time it runs (which is every minute). Could it be that your
> region servers are creating them faster than that?
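>
> If that limit is right, the master can delete at most 20 x 60 x 24 =
> 28,800 old logs per day, so if your region servers roll logs faster than
> that the backlog will only grow.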
>
> In any case, it's safe to delete them but not the folder itself. Also
> please open a jira and assign it to me.
>
> J-D
> On Jan 29, 2011 5:22 PM, "Wayne" <[email protected]> wrote:
> > The current log folder in hdfs (.logs) seems to always keep a max of
> > 32 log files per region server, or the last hour. It is the .oldlogs
> > folder that is growing crazy large. I added the setting for
> > hbase.master.logcleaner.ttl and switched it from 7 days to 2 days and
> > restarted yesterday, and no oldlogs have been removed yet. I assume the
> > TTL is based on the file's date/time? This seems to be new in .90, so I
> > am worried that the replication changes have introduced this. Per the
> > replication page
> > (http://hbase.apache.org/docs/r0.89.20100726/replication.html) I think
> > some of this logic is blocking the cleanup.
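> >
> > For reference, this is the hbase-site.xml entry I added (assuming the
> > ttl is in milliseconds, so 2 days = 172800000):
> >
> > <property>
> >   <name>hbase.master.logcleaner.ttl</name>
> >   <value>172800000</value>
> > </property>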
> >
> > Can I delete these manually on my own without causing a problem? Our
> > cluster will fill up in 3-4 days at the rate we are going.
> >
> > Thanks.
>
