[
https://issues.apache.org/jira/browse/HDFS-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17340786#comment-17340786
]
Xiaoqiao He commented on HDFS-15994:
------------------------------------
Thanks [~zhuqi] for the report and contribution. I am totally +1 for this
proposal. I have seen the NameNode hang for a long time many times while
processing delete requests for very large directories.
IIRC, some others have tried to improve this before but did not push it through
completely. IMO it is the proper time to try again.
IIUC, there are two segments that hold the global lock for a long time and need
improvement:
a. traversing the whole sub-directory tree to collect the blocks pending deletion.
b. executing the block deletions.
One option is a `release lock - sleep - acquire lock` pattern to avoid the
NameNode hanging for a long time. I am not sure it is the best solution; deeper
discussion is welcome.
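The `release lock - sleep - acquire lock` idea could be sketched roughly as below. This is a hypothetical illustration, not the actual FSNamesystem code: the class, lock, and batch-size constants are all assumptions, standing in for the namesystem write lock and the real block-removal logic.

```java
import java.util.Queue;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: delete queued blocks in fixed-size batches,
// releasing the global lock and sleeping between batches so that other
// lock waiters can make progress instead of the NameNode hanging.
public class BatchedDeleter {
    // Static parameters (no extra configuration entry), as suggested above.
    static final int BLOCK_DELETION_INCREMENT = 1000;
    static final long SLEEP_MS_BETWEEN_BATCHES = 10;

    private final ReentrantLock namesystemLock = new ReentrantLock();

    /** Drains the pending-deletion queue; returns how many times the lock was taken. */
    public int deleteBlocks(Queue<Long> pendingBlocks) {
        int lockAcquisitions = 0;
        while (!pendingBlocks.isEmpty()) {
            namesystemLock.lock();   // acquire the global lock for one batch only
            lockAcquisitions++;
            try {
                // Remove at most one increment of blocks per lock hold.
                for (int i = 0; i < BLOCK_DELETION_INCREMENT
                        && !pendingBlocks.isEmpty(); i++) {
                    pendingBlocks.poll(); // stand-in for the real block removal
                }
            } finally {
                namesystemLock.unlock(); // release so other waiters can run
            }
            if (!pendingBlocks.isEmpty()) {
                try {
                    Thread.sleep(SLEEP_MS_BETWEEN_BATCHES); // yield between batches
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return lockAcquisitions;
                }
            }
        }
        return lockAcquisitions;
    }
}
```

With 2500 pending blocks and an increment of 1000, the lock would be taken 3 times with two sleeps in between, instead of being held once for the whole traversal.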
For patch v001, I am not a fan of adding an extra configuration entry for every
improvement, because there are already so many configuration keys and they are
increasingly confusing to end users. I think we could use a static parameter in
this case.
Thanks [~zhuqi] again.
> Deletion should sleep some time, when there are too many pending deletion
> blocks.
> ---------------------------------------------------------------------------------
>
> Key: HDFS-15994
> URL: https://issues.apache.org/jira/browse/HDFS-15994
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Qi Zhu
> Assignee: Qi Zhu
> Priority: Major
> Attachments: HDFS-15994.001.patch
>
>
> HDFS-13831 realized that we can control how often other waiters get a chance
> at the lock.
> But in our big cluster with heavy deletion, the problem still happens: the
> pending deletion blocks sometimes exceed ten million, and regularly exceed
> one million in huge clusters.
> So I think we should sleep for some time when too many deletion blocks are
> pending.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]