[
https://issues.apache.org/jira/browse/HDFS-13277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16399239#comment-16399239
]
Bharat Viswanadham commented on HDFS-13277:
-------------------------------------------
In this Jira I will also add a testcase that uses MiniDFSCluster; after this
Jira is reviewed and committed, I will see whether we can add a testcase
for ReplicaFileDeleteTask without MiniDFSCluster.
This Jira is dependent on HDFS-13163.
> Improve move to account for usage (number of files) to limit trash dir size
> ---------------------------------------------------------------------------
>
> Key: HDFS-13277
> URL: https://issues.apache.org/jira/browse/HDFS-13277
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Bharat Viswanadham
> Assignee: Bharat Viswanadham
> Priority: Major
> Attachments: HDFS-13277-HDFS-12996.00.patch
>
>
> This adds a configurable maximum number of entries per trash subdirectory,
> which puts an upper limit on the size of subdirectories in replica-trash.
> The default value is set to blockInvalidateLimit.
>
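The rollover behavior described above can be sketched roughly as follows. This is a minimal illustration, not the actual patch: the class and method names (`ReplicaTrashPlacer`, `place`) are hypothetical, and the real code would track entries on disk and take the cap from the blockInvalidateLimit configuration.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: place block files into trash subdirectories,
// starting a new subdirectory once the current one reaches maxEntries
// (which would default to blockInvalidateLimit in the real patch).
public class ReplicaTrashPlacer {
    private final int maxEntries;                 // cap per subdirectory
    private final List<List<String>> subdirs = new ArrayList<>();

    public ReplicaTrashPlacer(int maxEntries) {
        this.maxEntries = maxEntries;
        subdirs.add(new ArrayList<>());           // start with one empty subdir
    }

    // Returns the index of the subdirectory the block file was placed in.
    public int place(String blockFile) {
        List<String> current = subdirs.get(subdirs.size() - 1);
        if (current.size() >= maxEntries) {       // subdir full: roll over
            current = new ArrayList<>();
            subdirs.add(current);
        }
        current.add(blockFile);
        return subdirs.size() - 1;
    }

    public int subdirCount() {
        return subdirs.size();
    }
}
```

With a cap of 2, the third file placed would land in a second subdirectory, so no single replica-trash subdirectory grows without bound.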
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]