[ https://issues.apache.org/jira/browse/HBASE-18084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16018488#comment-16018488 ]
Ted Yu commented on HBASE-18084:
--------------------------------
{code}
171 return -1;
172 } else if (f1ConsumedSpace < f2ConsumedSpace) {
{code}
The 'else' can be omitted since the previous if block ends with a return.
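A minimal sketch of that shape (the enclosing comparison condition is not shown in the patch excerpt and is assumed here):
{code}
// Sketch only: f1ConsumedSpace / f2ConsumedSpace are the locals from the
// snippet above; the first condition is assumed from the visible return.
if (f1ConsumedSpace > f2ConsumedSpace) {
  return -1;
}
// No 'else' needed: the branch above already returned.
if (f1ConsumedSpace < f2ConsumedSpace) {
  return 1;
}
return 0;
{code}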
{code}
164 HashMap<FileStatus, Long> directorySpaces = new HashMap<FileStatus, Long>();
{code}
The map is declared inside the comparator that the dirs List is sorted with. How
many directories would actually find their cached lengths?
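For reference, one way the cache pays off is if it outlives individual compare() calls for the duration of one sort. A sketch under that assumption (fs is an assumed FileSystem handle in scope; getContentSummary() and getSpaceConsumed() are standard Hadoop APIs, but the wiring here is not the patch's actual code):
{code}
import java.io.IOException;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.fs.FileStatus;

// Inside a method where the assumed 'fs' and the 'dirs' List are in scope.
// The cache lives across compare() calls of one sort, so each directory's
// consumed space is computed at most once per sort invocation.
final Map<FileStatus, Long> directorySpaces = new HashMap<FileStatus, Long>();
Collections.sort(dirs, new Comparator<FileStatus>() {
  @Override
  public int compare(FileStatus f1, FileStatus f2) {
    // Descending: the directory consuming the most space sorts first.
    return Long.compare(consumedSpace(f2), consumedSpace(f1));
  }

  private long consumedSpace(FileStatus dir) {
    Long cached = directorySpaces.get(dir);
    if (cached != null) {
      return cached.longValue();
    }
    long space;
    try {
      space = fs.getContentSummary(dir.getPath()).getSpaceConsumed();
    } catch (IOException e) {
      space = -1L; // treat unreadable directories as smallest
    }
    directorySpaces.put(dir, Long.valueOf(space));
    return space;
  }
});
{code}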
{code}
224 LOG.debug("Prepared to delete files in directory: " + dirs);
{code}
Would the list of directories be logged? nit: directory -> directories
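For example (a suggested rewording only, keeping the same dirs variable):
{code}
LOG.debug("Prepared to delete files in directories: " + dirs);
{code}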
> Improve CleanerChore to clean from directory which consumes more disk space
> ---------------------------------------------------------------------------
>
> Key: HBASE-18084
> URL: https://issues.apache.org/jira/browse/HBASE-18084
> Project: HBase
> Issue Type: Bug
> Reporter: Yu Li
> Assignee: Yu Li
> Attachments: HBASE-18084.patch
>
>
> Currently CleanerChore cleans directories in dictionary order rather than
> starting from the directory with the largest space usage. When data
> abnormally accumulates to some huge volume in the archive directory, the
> cleaning speed might not be enough.
> This proposal is another improvement, working together with HBASE-18083, to
> resolve our online issue (the archive dir consumed more than 1.8PB of SSD
> space).