[
https://issues.apache.org/jira/browse/HDFS-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15328714#comment-15328714
]
Xiao Chen commented on HDFS-8046:
---------------------------------
Thanks [~kihwal] and all for the contribution. Great work here and on HDFS-4995
fixing the NN locking issues!
Just dropping a quick note here: increasing the default of
{{DFS_CONTENT_SUMMARY_LIMIT_DEFAULT}} from 0 to 5000, combined with a bug in
HDFS-4995 (fixed by HDFS-8581), created somewhat incompatible behavior on
{{/}}: {{hdfs dfs -du -s /}} or {{hdfs dfs -count /}} may end up counting only
some of the directories once the limit is reached.
I think we should backport HDFS-8581 to 2.6.x and 2.7.x to fix this.
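For clusters that can't pick up the backport right away, one possible stopgap
(assuming I'm reading the config correctly) is to set
{{dfs.content-summary.limit}} back to 0 in the NameNode's hdfs-site.xml. That
disables the per-iteration limit, so the full count is computed again, at the
cost of the long uninterrupted read lock this JIRA set out to avoid:
{code:xml}
<!-- hdfs-site.xml on the NameNode. 0 (or a negative value) means no limit,
     i.e. the summary runs under a single uninterrupted read lock. -->
<property>
  <name>dfs.content-summary.limit</name>
  <value>0</value>
</property>
{code}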
> Allow better control of getContentSummary
> -----------------------------------------
>
> Key: HDFS-8046
> URL: https://issues.apache.org/jira/browse/HDFS-8046
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Kihwal Lee
> Assignee: Kihwal Lee
> Labels: 2.6.1-candidate, 2.7.2-candidate
> Fix For: 2.6.1, 2.7.2
>
> Attachments: HDFS-8046-branch-2.6.1.txt, HDFS-8046.v1.patch
>
>
> On busy clusters, users performing quota checks against a big directory
> structure can affect namenode performance. Things have become a lot better
> since HDFS-4995, but as clusters get bigger and busier, it is apparent that
> we need finer-grained control to avoid a long read lock causing a throughput
> drop. Even with the unfair namesystem lock setting, a long read lock (tens
> of milliseconds) can starve many readers and especially writers. So the
> locking duration should be reduced, which can be done by imposing a lower
> count-per-iteration limit in the existing implementation. But HDFS-4995 came
> with a fixed amount of sleep between locks. This needs to be made
> configurable so that {{getContentSummary()}} doesn't get exceedingly slow.
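For readers not familiar with the mechanism, here is a minimal sketch of the
lock-yielding pattern the description refers to. This is illustrative only,
not the actual namenode code: the class, fields, and {{List<Long>}} input are
made up for the example; only the limit/sleep mechanics mirror what HDFS-4995
and this JIRA describe.
{code:java}
import java.util.List;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class YieldingSummary {
  // Unfair lock, matching the default namesystem lock setting noted above.
  private final ReadWriteLock lock = new ReentrantReadWriteLock(false);
  private final int limitPerIteration; // analogous to dfs.content-summary.limit
  private final long sleepMillis;      // the fixed sleep HDFS-8046 makes tunable

  public YieldingSummary(int limitPerIteration, long sleepMillis) {
    this.limitPerIteration = limitPerIteration;
    this.sleepMillis = sleepMillis;
  }

  /** Sums sizes, yielding the read lock every limitPerIteration items. */
  public long summarize(List<Long> sizes) throws InterruptedException {
    long total = 0;
    int processedSinceYield = 0;
    lock.readLock().lock();
    try {
      for (long size : sizes) {
        total += size;
        if (limitPerIteration > 0
            && ++processedSinceYield >= limitPerIteration) {
          // Yield: release the read lock so queued writers can make progress,
          // sleep for the configured interval, then reacquire and continue.
          lock.readLock().unlock();
          try {
            Thread.sleep(sleepMillis);
          } finally {
            // Always reacquire so the outer finally's unlock stays valid.
            lock.readLock().lock();
          }
          processedSinceYield = 0;
        }
      }
    } finally {
      lock.readLock().unlock();
    }
    return total;
  }
}
{code}
The subtlety, of course, is that the real directory tree can change while the
lock is released; mishandling the resume after a yield on {{/}} is exactly the
HDFS-4995 bug that HDFS-8581 fixed.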