[
https://issues.apache.org/jira/browse/HDFS-13277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16411761#comment-16411761
]
Hanisha Koneru commented on HDFS-13277:
---------------------------------------
Thanks for the update, Bharat.
I am sorry I missed this earlier. The default value for {{max.blocks}} is currently
being set to the default value of {{block.invalidate.limit}}; it should be set to the
configured value of that limit instead. Also, in {{hdfs-default.xml}}, we need
to mention that if the new parameter is not set, it takes the value of
{{dfs.block.invalidate.limit}}.
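Something along these lines is what I mean (a minimal sketch only; the key name
{{dfs.datanode.replica-trash.max.blocks}} is a placeholder, the real key comes from the patch):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class ReplicaTrashLimitExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Resolve the limit as the operator configured it, falling back to
    // the compile-time default only when the key is absent.
    int blockInvalidateLimit = conf.getInt(
        DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_KEY,
        DFSConfigKeys.DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT);

    // The new parameter should default to that configured value, not to
    // DFS_BLOCK_INVALIDATE_LIMIT_DEFAULT. Placeholder key name below.
    int maxBlocksPerTrashDir = conf.getInt(
        "dfs.datanode.replica-trash.max.blocks",
        blockInvalidateLimit);

    System.out.println("max blocks per trash subdir = " + maxBlocksPerTrashDir);
  }
}
{code}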
NITs:
# {{FsDatasetAsyncDiskService}} L104 has "information" twice in the comment.
# It might be good to avoid abbreviations in {{hdfs-default.xml}}, as they get
reflected in the docs (referring to "no" being used for "number"); see the sketch below.
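For example, the {{hdfs-default.xml}} entry could spell things out along these lines
(a rough sketch; the property name is a placeholder and the exact wording is up to the patch):
{code:xml}
<property>
  <name>dfs.datanode.replica-trash.max.blocks</name>
  <value></value>
  <description>
    Maximum number of block files allowed in a replica-trash
    subdirectory. If this property is not set, it defaults to the
    configured value of dfs.block.invalidate.limit.
  </description>
</property>
{code}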
> Improve move to account for usage (number of files) to limit trash dir size
> ---------------------------------------------------------------------------
>
> Key: HDFS-13277
> URL: https://issues.apache.org/jira/browse/HDFS-13277
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Bharat Viswanadham
> Assignee: Bharat Viswanadham
> Priority: Major
> Attachments: HDFS-13277-HDFS-12996.00.patch,
> HDFS-13277-HDFS-12996.01.patch, HDFS-13277-HDFS-12996.02.patch,
> HDFS-13277-HDFS-12996.03.patch, HDFS-13277-HDFS-12996.04.patch
>
>
> Add a maximum-entries limit for trash subdirectories. This puts an upper limit on the
> size of subdirectories in replica-trash. Its default value is set to
> blockInvalidateLimit.
>