[
https://issues.apache.org/jira/browse/HDFS-9434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025449#comment-15025449
]
Hudson commented on HDFS-9434:
------------------------------
FAILURE: Integrated in Hadoop-Mapreduce-trunk #2658 (See
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2658/])
Move HDFS-9434 to 2.6.3 in CHANGES.txt. (szetszwo: rev
56493cda04e30ab737fc6cecc8c43a87d5b006b7)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
> Recommission a datanode with 500k blocks may pause NN for 30 seconds
> --------------------------------------------------------------------
>
> Key: HDFS-9434
> URL: https://issues.apache.org/jira/browse/HDFS-9434
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: Tsz Wo Nicholas Sze
> Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.6.3
>
> Attachments: h9434_20151116.patch
>
>
> In BlockManager, processOverReplicatedBlocksOnReCommission is called while
> holding the namespace lock. A (not very useful) log message is printed in
> processOverReplicatedBlock for each block. When a storage holds a large
> number of blocks, printing that message per block can keep the NN from
> processing any other operations; we have seen it pause the NN for 30
> seconds for a storage with 500k blocks.
> As a quick fix, I suggest changing the log message to trace level; a sketch
> of the idea follows below.
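For illustration only, below is a minimal sketch of the kind of change the issue proposes: demoting a per-block log statement to TRACE and guarding it with isTraceEnabled(). The class name, logger, method signature, and arguments are assumptions made for this sketch, not the actual BlockManager code or the attached h9434_20151116.patch.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OverReplicationLogSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(OverReplicationLogSketch.class);

  /**
   * Called once per block while the namespace lock is held.
   * Illustrative only; the real method lives in BlockManager.
   */
  void processOverReplicatedBlock(String blockId, int liveReplicas,
      int expectedReplicas) {
    // Before the fix: an INFO-level message was emitted for every block, so
    // recommissioning a storage with 500k blocks produced 500k log lines
    // (string formatting plus log I/O) under the namespace lock.
    // After the fix: the message is TRACE level and guarded, so the common
    // case pays only a cheap boolean check per block.
    if (LOG.isTraceEnabled()) {
      LOG.trace("Block {} is over-replicated: live={}, expected={}",
          blockId, liveReplicas, expectedReplicas);
    }
    // ... actual over-replication handling would go here ...
  }
}
{code}

With trace logging disabled (the normal production setting), no message is formatted or written while the lock is held, which removes the 30-second pause described above.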
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)