[ https://issues.apache.org/jira/browse/HDFS-14186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16740073#comment-16740073 ]

He Xiaoqiao commented on HDFS-14186:
------------------------------------

I think the safe mode extension can cover this issue in small or medium scale 
clusters (around 300M blocks or fewer) with the default extension time of 30s, 
but when the number of blocks is very large, the default extension time cannot 
absorb all the block reports from the datanodes. As a result, in our production 
cluster we have to monitor the load on service port 8040 after the namenode 
leaves safe mode, and then manually execute a master/slave switch until the 
load recovers, which takes about 30 minutes after the namenode leaves safe mode 
under normal conditions.
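For reference, the knob involved is dfs.namenode.safemode.extension; a minimal 
sketch of raising it programmatically is below (the 10-minute value is only an 
illustration, in practice we would set it in hdfs-site.xml):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class SafeModeExtensionExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Default is 30000 ms (30s); a large cluster may need far more time to
    // absorb the initial block reports. 600000 ms (10 min) is only an
    // illustrative value, not a recommendation.
    conf.setInt(DFSConfigKeys.DFS_NAMENODE_SAFEMODE_EXTENSION_KEY, 600000);
    System.out.println("safemode extension = "
        + conf.getInt(DFSConfigKeys.DFS_NAMENODE_SAFEMODE_EXTENSION_KEY, 30000)
        + " ms");
  }
}
{code}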
I tried to add a check of total replication at namenode startup, together with 
a configuration item to turn the feature on or off, in the demonstration patch 
[^HDFS-14186.001.patch]. FYI. If the idea is acceptable, I will add some unit 
tests in the next couple of days. A rough sketch of the intended check is below.
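To make the intent concrete, here is a simplified sketch of the kind of startup 
check I mean; the config key and helper names here are hypothetical and do not 
necessarily match HDFS-14186.001.patch:
{code:java}
// Illustrative sketch only; the config key and this helper are hypothetical
// and not necessarily what HDFS-14186.001.patch implements.
import org.apache.hadoop.conf.Configuration;

public class StartupReplicationCheckSketch {
  // Hypothetical switch to enable the extra check at namenode startup.
  static final String CHECK_ENABLED_KEY =
      "dfs.namenode.startup.replication.check.enable";

  /**
   * Decide whether enough block replicas have been reported via block
   * reports for the namenode to safely start marking stale datanodes dead.
   */
  static boolean replicasSufficientlyReported(Configuration conf,
      long reportedReplicas, long expectedReplicas, float threshold) {
    if (!conf.getBoolean(CHECK_ENABLED_KEY, false)) {
      return true; // feature disabled: keep the current behaviour
    }
    return expectedReplicas == 0
        || (float) reportedReplicas / expectedReplicas >= threshold;
  }
}
{code}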
[~kihwal] Thanks again, and I look forward to your comments.

> blockreport storm slow down namenode restart seriously in large cluster
> -----------------------------------------------------------------------
>
>                 Key: HDFS-14186
>                 URL: https://issues.apache.org/jira/browse/HDFS-14186
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: He Xiaoqiao
>            Assignee: He Xiaoqiao
>            Priority: Major
>         Attachments: HDFS-14186.001.patch
>
>
> In the current implementation, a datanode sends a block report immediately 
> after it successfully registers with the namenode on restart, and the 
> resulting block report storm puts the namenode under high load while it 
> processes them. One consequence is that some received RPCs have to be 
> dropped because their queue time has exceeded the timeout. If a datanode's 
> heartbeat RPCs keep being dropped for long enough (the default 
> heartbeatExpireInterval is 630s), the datanode is marked DEAD and has to 
> re-register and send its block report again, which aggravates the block 
> report storm and traps the cluster in a vicious circle, seriously slowing 
> down namenode startup (by more than an hour, or even longer), especially in 
> a large (several thousand datanodes) and busy cluster. Although there has 
> been a lot of work to optimize namenode startup, the issue still exists.
> I propose to postpone the dead datanode check until the namenode has 
> finished startup.
> Any comments and suggestions are welcome.
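
For reference, the 630s mentioned above follows from the stock heartbeat 
defaults; a quick back-of-the-envelope check, assuming the default 
dfs.heartbeat.interval=3s and dfs.namenode.heartbeat.recheck-interval=300000ms:
{code:java}
public class HeartbeatExpireIntervalExample {
  public static void main(String[] args) {
    // Stock defaults: dfs.namenode.heartbeat.recheck-interval = 300000 ms
    // (5 min), dfs.heartbeat.interval = 3 s.
    long recheckIntervalMs = 300_000L;
    long heartbeatIntervalSec = 3L;
    // The expiry window is computed as
    // 2 * recheck-interval + 10 * heartbeat-interval.
    long expireMs = 2 * recheckIntervalMs + 10 * 1000 * heartbeatIntervalSec;
    System.out.println(expireMs + " ms = " + (expireMs / 1000) + " s"); // 630 s
  }
}
{code}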


