[ https://issues.apache.org/jira/browse/HBASE-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14709350#comment-14709350 ]

Dave Latham commented on HBASE-14247:
-------------------------------------

{quote}
I don't think this will be a problem. Currently the HFileCleaner has to run 
through the cleaner checks for every store's subdirectory of every region of 
every table, and we didn't see an efficiency problem there. So we shouldn't 
need to worry about the OldLogCleaner in either the new layout or the old one.
{quote}

The HFileCleaner does not need to check anything that is O(regions) or 
O(servers). It only cleans the archive directory, which I believe only has a 
subdirectory for each column family. But this issue would not affect the 
HFileCleaner - only the OldLogCleaner, which currently has to process just a 
single batch. It has a different set of cleaner checks, including the 
ReplicationLogCleaner.
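
For a concrete picture (the paths here are illustrative, following the usual 
HBase directory naming): the archive tree the HFileCleaner walks bottoms out 
at family directories, while oldWALs is one flat directory holding every 
archived WAL from every server.

{code}
/hbase/archive/data/<namespace>/<table>/<region>/<family>/<hfile>   (HFileCleaner)
/hbase/oldWALs/<wal-file>                                           (OldLogCleaner, flat)
{code}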

{quote}
What's more, we can tune the cleaner period via the config 
hbase.master.cleaner.interval.
{quote}
We've had a problem before where the cleaner could not clean old files as fast 
as new ones were created (see HBASE-9208 for one example). When that happens, 
it doesn't matter what hbase.master.cleaner.interval is set to.
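
For reference, a minimal sketch of how that period is set in hbase-site.xml 
(the value is only an example, not a recommendation - a shorter interval 
cannot help once a single run can no longer keep up):

{code:xml}
<property>
  <!-- How often the master's cleaner chores run, in milliseconds -->
  <name>hbase.master.cleaner.interval</name>
  <value>60000</value>
</property>
{code}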

> Separate the old WALs into different regionserver directories
> -------------------------------------------------------------
>
>                 Key: HBASE-14247
>                 URL: https://issues.apache.org/jira/browse/HBASE-14247
>             Project: HBase
>          Issue Type: Improvement
>          Components: wal
>            Reporter: Liu Shaohui
>            Assignee: Liu Shaohui
>            Priority: Minor
>             Fix For: 2.0.0
>
>         Attachments: HBASE-14247-v001.diff, HBASE-14247-v002.diff, 
> HBASE-14247-v003.diff
>
>
> Currently all old WALs of the regionservers are archived into a single 
> directory, oldWALs. In big clusters, because of a long WAL TTL or disabled 
> replication, the number of files under oldWALs may reach the 
> max-directory-items limit of HDFS, which will crash the HBase cluster.
> {quote}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
>  The directory item limit of /hbase/lgprc-xiaomi/.oldlogs is exceeded: 
> limit=1048576 items=1048576
> {quote}
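> The limit in that stack trace is HDFS's 
> dfs.namenode.fs-limits.max-directory-items (1048576 by default). It can be 
> raised in hdfs-site.xml, but that only postpones the problem:
> {code:xml}
> <property>
>   <!-- Maximum number of entries a single HDFS directory may hold -->
>   <name>dfs.namenode.fs-limits.max-directory-items</name>
>   <value>1048576</value>
> </property>
> {code}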
> A simple solution is to separate the old WALs into different directories 
> according to the server name of the WAL (see the sketch below).
> Suggestions are welcome. Thanks!
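> For illustration only (this is not the attached patch), a minimal Java 
> sketch of deriving a per-server subdirectory from an archived WAL's file 
> name, assuming the name is prefixed with the encoded server name:
> {code:java}
> import org.apache.hadoop.fs.Path;
>
> public class OldWalLayoutSketch {
>   // Hypothetical helper: place an archived WAL under a per-server
>   // subdirectory of oldWALs instead of one flat directory.
>   static Path perServerOldWalPath(Path oldWalDir, String walFileName) {
>     // Assumption for this sketch: the file name starts with the encoded
>     // server name, ended by the first '.', e.g.
>     // "host%2C16020%2C1449561125846.1449563062776".
>     int dot = walFileName.indexOf('.');
>     String serverName = dot > 0 ? walFileName.substring(0, dot) : walFileName;
>     return new Path(new Path(oldWalDir, serverName), walFileName);
>   }
>
>   public static void main(String[] args) {
>     System.out.println(perServerOldWalPath(
>         new Path("/hbase/oldWALs"),
>         "host%2C16020%2C1449561125846.1449563062776"));
>     // prints /hbase/oldWALs/host%2C16020%2C1449561125846/host%2C16020%2C1449561125846.1449563062776
>   }
> }
> {code}
> With this grouping, each directory under oldWALs holds only one 
> regionserver's WALs, keeping every directory far below the HDFS item limit.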



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
