[ 
https://issues.apache.org/jira/browse/HDFS-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14983864#comment-14983864
 ] 

Brahma Reddy Battula commented on HDFS-4937:
--------------------------------------------

I don't know if I can give a -1, but shall we revert this? A lot of tests are
broken because of it.

{code}
662 int refreshCounter = numOfAvailableNodes;
...
671 while(numOfReplicas > 0 && numOfAvailableNodes > 0) {
672   DatanodeDescriptor chosenNode = chooseDataNode(scope);
673   if (excludedNodes.add(chosenNode)) { //was not in the excluded list
674     if (LOG.isDebugEnabled()) {
675       builder.append("\nNode ").append(NodeBase.getPath(chosenNode)).append(" [");
676     }
677     numOfAvailableNodes--;
678     DatanodeStorageInfo storage = null;
679     if (isGoodDatanode(chosenNode, maxNodesPerRack, considerLoad,
...
711   }
712   // Refresh the node count. If the live node count became smaller,
713   // but it is not reflected in this loop, it may loop forever in case
714   // the replicas/rack cannot be satisfied.
715   if (--refreshCounter == 0) {
716     refreshCounter = clusterMap.countNumOfAvailableNodes(scope,
717     excludedNodes);
718     // It has already gone through enough number of nodes.
719     if (refreshCounter <= excludedNodes.size()) {
720       break;
721     }
722   }
723 }
{code}

Line 672 {{chooseDataNode(scope)}} picks a node at random. If {{chosenNode}} 
happens to be an already excluded one, execution does not reach line 674, but 
{{refreshCounter}} is still decreased.
If we are out of luck and {{chooseDataNode(scope)}} returns an already excluded 
node too many times, we enter the refresh branch at line 716 and break at line 720.
We then end up choosing fewer than {{numOfReplicas}} replicas, even though 
enough nodes were actually available.
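The effect of burning the counter on duplicate picks can be illustrated with a small standalone sketch (hypothetical class and method names; {{chooseDataNode(scope)}} is stood in for by a uniform random pick, not the actual HDFS code):

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class RefreshCounterSketch {
    // Counts how many DISTINCT nodes have been seen by the time a counter
    // initialized to clusterSize reaches zero, when every pick -- including
    // picks of already-excluded nodes -- decrements it. This mirrors the
    // pattern in the chooseRandom() loop above, where refreshCounter drops
    // even when chosenNode was already in excludedNodes.
    static int distinctNodesSeenWhenCounterHitsZero(int clusterSize, long seed) {
        Random rand = new Random(seed);
        Set<Integer> excludedNodes = new HashSet<>();
        int refreshCounter = clusterSize;
        while (refreshCounter > 0) {
            int chosenNode = rand.nextInt(clusterSize); // stand-in for chooseDataNode(scope)
            excludedNodes.add(chosenNode);              // duplicates don't grow the set...
            refreshCounter--;                           // ...but still burn the counter
        }
        return excludedNodes.size();
    }

    public static void main(String[] args) {
        int clusterSize = 1000;
        int seen = distinctNodesSeenWhenCounterHitsZero(clusterSize, 42L);
        System.out.println(seen + " distinct of " + clusterSize);
    }
}
```

With uniform random picks, only about 1 - 1/e (roughly 63%) of the nodes are distinct after clusterSize picks, so the counter is exhausted well before the whole cluster has actually been examined, which is why the refresh/break path can fire prematurely.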

> ReplicationMonitor can infinite-loop in 
> BlockPlacementPolicyDefault#chooseRandom()
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-4937
>                 URL: https://issues.apache.org/jira/browse/HDFS-4937
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.0.4-alpha, 0.23.8
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>              Labels: BB2015-05-TBR
>             Fix For: 3.0.0, 2.7.2
>
>         Attachments: HDFS-4937.patch, HDFS-4937.v1.patch, HDFS-4937.v2.patch
>
>
> When a large number of nodes are removed by refreshing node lists, the 
> network topology is updated. If the refresh happens at the right moment, the 
> replication monitor thread may get stuck in the while loop of {{chooseRandom()}}. 
> This is because the cached cluster size is used in the terminal condition 
> check of the loop. This usually happens when a block with a high replication 
> factor is being processed. Since replicas/rack is also calculated beforehand, 
> no node choice may satisfy the goodness criteria if refreshing removed racks. 
> All nodes will end up in the excluded list, but the size will still be less 
> than the cached cluster size, so it will loop infinitely. This was observed 
> in a production environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
