[ https://issues.apache.org/jira/browse/HDFS-599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12756248#action_12756248 ]

Raghu Angadi commented on HDFS-599:
-----------------------------------

For this particular problem, the root cause is that the NN cannot distinguish 
its own slowdown from a DN's. Priorities help with the situation, but what 
if the NN slept for 12 min instead of 8 min?

One simpler solution could be to consider the average heartbeat time across all 
the datanodes before marking one 'dead':
{code}
    long delay = now - dn.lastHeartBeatTime;

    // Instead of:
    if (delay > someLimit) {
      markDNDead(dn);
    }

    // we could do something like:
    if (delay > someLimit
        && (numDNs < 5 || delay > 20 * avgHeartBeatTime || delay > reallyLargeLimit)) {
      markDNDead(dn);
    }
{code}

{{avgHeartBeatTime}} is updated on each heartbeat.
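
Something like an exponentially weighted moving average would keep that update O(1). 
Below is a minimal sketch with assumed names; this is not existing NN code:

{code}
// Illustrative sketch with assumed names; not existing NameNode code.
// Keeps a running average of observed heartbeat intervals using an
// exponentially weighted moving average, updated on every heartbeat.
class HeartbeatStats {
  private static final double ALPHA = 0.1;   // weight given to the newest sample
  private double avgHeartBeatTime = 3000;    // start at the expected 3s interval

  // Called from the heartbeat handler with the interval since this DN's
  // previous heartbeat, in milliseconds.
  synchronized void update(long intervalMillis) {
    avgHeartBeatTime = ALPHA * intervalMillis + (1 - ALPHA) * avgHeartBeatTime;
  }

  synchronized double get() {
    return avgHeartBeatTime;
  }
}
{code}

The useful property is that when the NN itself stalls, every DN's interval inflates, 
the average rises along with the delays, and the {{delay > 20*avgHeartBeatTime}} 
check stops firing spuriously.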

If the NN actively contacted DNs, it would not be affected by its own slowness. But 
that is a much bigger change.
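
For illustration only, active probing could look roughly like the sketch below; the 
interfaces are hypothetical and nothing like this exists in the NN today:

{code}
import java.io.IOException;
import java.util.List;

// Hypothetical sketch of NN-side active probing. A dedicated thread pings
// each DN directly, so a DN is declared dead only when the NN itself fails
// to reach it; an NN-side stall (GC pause, slow edits device) cannot by
// itself mark DNs dead.
class DatanodeProber implements Runnable {
  private final List<String> datanodeAddrs;   // DN addresses to probe

  DatanodeProber(List<String> datanodeAddrs) {
    this.datanodeAddrs = datanodeAddrs;
  }

  public void run() {
    for (String addr : datanodeAddrs) {
      try {
        ping(addr);         // hypothetical direct probe of the DN
      } catch (IOException e) {
        markDead(addr);     // only a failed probe counts, not elapsed time
      }
    }
  }

  private void ping(String addr) throws IOException {
    // e.g. connect to the DN's IPC port and issue a no-op call
  }

  private void markDead(String addr) {
    // hand off to the existing dead-node handling
  }
}
{code}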

bq. To take this one step further - why does the failure detection code need to 
be implemented as part of the DN and NN daemons? 

Unfortunately, the heartbeat is a lot more than a liveness check. In Hadoop, servers 
like the NN and the JobTracker depend on the responses to heartbeat (and other) RPCs 
from their clients in order to communicate with them. Ideally these servers should be 
able to actively contact their slaves.


> Improve Namenode robustness by prioritizing datanode heartbeats over client 
> requests
> ------------------------------------------------------------------------------------
>
>                 Key: HDFS-599
>                 URL: https://issues.apache.org/jira/browse/HDFS-599
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>
> The namenode processes RPC requests from clients that are reading/writing to 
> files as well as heartbeats/block reports from datanodes.
> Sometimes, because of various reasons (Java GC runs, inconsistent performance 
> of the NFS filer that stores the HDFS transaction logs, etc.), the namenode 
> encounters transient slowness. For example, if the device that stores the 
> HDFS transaction logs becomes sluggish, the Namenode's ability to process 
> RPCs slows down to a certain extent. During this time, the RPCs from clients 
> as well as the RPCs from datanodes suffer in similar fashion. If the 
> underlying problem becomes worse, the NN's ability to process a heartbeat 
> from a DN is severely impacted, thus causing the NN to declare that the DN is 
> dead. Then the NN starts replicating blocks that used to reside on the 
> now-declared-dead datanode. This adds extra load to the NN. Then the 
> now-declared-dead datanode finally re-establishes contact with the NN, and sends a 
> block report. The block report processing on the NN is another heavyweight 
> activity, thus causing more load to the already overloaded namenode. 
> My proposal is that the NN should try its best to continue processing RPCs 
> from datanodes and give lesser priority to serving client requests. The 
> datanode RPCs are integral to the consistency and performance of the Hadoop 
> file system, and it is better to protect them at all costs. This will ensure 
> that the NN recovers from the hiccup much faster than it does now.
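
For concreteness, the prioritization proposed in the description could be approximated 
by splitting the single RPC call queue in two; a minimal sketch under that assumption, 
not the actual Hadoop RPC server:

{code}
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch of two-level call prioritization; not the actual Hadoop
// RPC server. Incoming calls are split into two queues, and handler
// threads always drain datanode calls (heartbeats, block reports) before
// touching client calls.
class PrioritizedCallQueue {
  private final Queue<Runnable> datanodeCalls = new ArrayDeque<Runnable>();
  private final Queue<Runnable> clientCalls = new ArrayDeque<Runnable>();

  synchronized void add(Runnable call, boolean fromDatanode) {
    (fromDatanode ? datanodeCalls : clientCalls).add(call);
    notifyAll();
  }

  // Handler threads block here; a datanode call always wins if one is queued.
  synchronized Runnable take() throws InterruptedException {
    while (datanodeCalls.isEmpty() && clientCalls.isEmpty()) {
      wait();
    }
    Queue<Runnable> q = datanodeCalls.isEmpty() ? clientCalls : datanodeCalls;
    return q.poll();
  }
}
{code}

Under this scheme the heartbeat queue stays short even under heavy client load, so a 
transient NN hiccup delays client RPCs instead of causing DNs to be declared dead.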

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
