[ https://issues.apache.org/jira/browse/HDFS-12820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16890847#comment-16890847 ]

Chen Zhang commented on HDFS-12820:
-----------------------------------

Hi [~jojochuang], I've checked the code on the trunk branch, and I think this 
issue still exists in the latest version.

If we decommission a datanode and then stop it, the nodesInService field of 
DatanodeStats is not decremented; see the following code:

{code:java}
synchronized void subtract(final DatanodeDescriptor node) {
  xceiverCount -= node.getXceiverCount();
  if (node.isInService()) { // a DECOMMISSIONED node is not counted as isInService
    capacityUsed -= node.getDfsUsed();
    capacityUsedNonDfs -= node.getNonDfsUsed();
    blockPoolUsed -= node.getBlockPoolUsed();
    nodesInService--;
    nodesInServiceXceiverCount -= node.getXceiverCount();
    capacityTotal -= node.getCapacity();
    capacityRemaining -= node.getRemaining();
    cacheCapacity -= node.getCacheCapacity();
    cacheUsed -= node.getCacheUsed();
  } else if (node.isDecommissionInProgress() ||
    node.isEnteringMaintenance()) {
    cacheCapacity -= node.getCacheCapacity();
    cacheUsed -= node.getCacheUsed();
  }
  ...
}{code}
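For illustration, here is a minimal, self-contained model of that accounting 
drift. SimpleNode and SimpleStats are hypothetical stand-ins for 
DatanodeDescriptor and DatanodeStats, not the real classes, and the sketch 
assumes (as described above) that the decommission transition itself never 
adjusts the counter:
{code:java}
// Hypothetical stand-ins for DatanodeDescriptor / DatanodeStats,
// reduced to the fields that matter for this bug.
class SimpleNode {
  boolean inService; // becomes false once the node is DECOMMISSIONED
  int xceiverCount;
  SimpleNode(boolean inService, int xceiverCount) {
    this.inService = inService;
    this.xceiverCount = xceiverCount;
  }
}

class SimpleStats {
  int nodesInService;
  int nodesInServiceXceiverCount;

  void add(SimpleNode n) {
    if (n.inService) {
      nodesInService++;
      nodesInServiceXceiverCount += n.xceiverCount;
    }
  }

  // Mirrors the guard above: a node that is no longer in service
  // skips the nodesInService bookkeeping when it is removed.
  void subtract(SimpleNode n) {
    if (n.inService) {
      nodesInService--;
      nodesInServiceXceiverCount -= n.xceiverCount;
    }
  }

  public static void main(String[] args) {
    SimpleStats stats = new SimpleStats();
    SimpleNode n = new SimpleNode(true, 10);
    stats.add(n);          // nodesInService == 1
    n.inService = false;   // node is decommissioned...
    stats.subtract(n);     // ...then stopped: guard skips the decrement
    System.out.println(stats.nodesInService); // prints 1, not 0
  }
}
{code}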
So if we have a cluster of 100 nodes and we decommission and stop 50 of them, 
the nodesInService variable will still be 100. This makes 
stats.getInServiceXceiverAverage() return only half of the real average 
xceiver count, which causes most nodes to be treated as overloaded in the 
following code:
{code:java}
boolean excludeNodeByLoad(DatanodeDescriptor node) {
  // getInServiceXceiverAverage() = total xceiver count / nodesInService,
  // so a stale nodesInService deflates the threshold
  final double maxLoad = considerLoadFactor *
      stats.getInServiceXceiverAverage();
  final int nodeLoad = node.getXceiverCount();
  if ((nodeLoad > maxLoad) && (maxLoad > 0)) {
    logNodeIsNotChosen(node, NodeNotChosenReason.NODE_TOO_BUSY,
      "(load: " + nodeLoad + " > " + maxLoad + ")");
    return true;
  }
  return false;
}
{code}
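To put rough numbers on the effect, here is a small sketch for the 100-node 
scenario above. The figures are made up, and the considerLoadFactor of 2.0 is 
the usual default, assumed here:
{code:java}
// Made-up numbers for the 100-node scenario above; not measured data.
public class LoadExample {
  public static void main(String[] args) {
    int liveNodes = 50;               // nodes actually serving traffic
    int staleNodesInService = 100;    // counter was never decremented
    int totalXceivers = liveNodes * 40; // 40 xceivers per live node

    double reportedAvg = (double) totalXceivers / staleNodesInService; // 20.0
    double realAvg = (double) totalXceivers / liveNodes;               // 40.0

    // Assuming the default considerLoadFactor of 2.0, the threshold
    // ends up equal to the *real* average, so any node even slightly
    // busier than average is rejected as NODE_TOO_BUSY.
    double maxLoad = 2.0 * reportedAvg; // 40.0 instead of 80.0
    System.out.println(realAvg + " real avg vs maxLoad " + maxLoad);
    System.out.println("node at 41 xceivers excluded? " + (41 > maxLoad));
  }
}
{code}
With correct accounting, maxLoad would be 80.0 and a node at 41 xceivers would 
stay eligible; with the stale counter, roughly half the live nodes sit above 
the threshold at any moment.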
 

> Decommissioned datanode is counted in service, causing datanode allocation failure
> ------------------------------------------------------------------------------
>
>                 Key: HDFS-12820
>                 URL: https://issues.apache.org/jira/browse/HDFS-12820
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: block placement
>    Affects Versions: 2.4.0
>            Reporter: Gang Xie
>            Priority: Major
>
> When allocating a datanode for a dfsclient write while considering the load, 
> the NameNode checks whether the datanode is overloaded by calculating the 
> average xceiver count of all in-service datanodes. But if a datanode is 
> decommissioned and becomes dead, it is still treated as in service, which 
> makes the average load much higher than the real one, especially when the 
> number of decommissioned datanodes is large. In our cluster of 180 datanodes, 
> 100 of them were decommissioned, and the average load was 17. This failed all 
> datanode allocations.
> private void subtract(final DatanodeDescriptor node) {
>   capacityUsed -= node.getDfsUsed();
>   blockPoolUsed -= node.getBlockPoolUsed();
>   xceiverCount -= node.getXceiverCount();
>   {color:red}if (!(node.isDecommissionInProgress() || node.isDecommissioned())) {{color}
>     nodesInService--;
>     nodesInServiceXceiverCount -= node.getXceiverCount();
>     capacityTotal -= node.getCapacity();
>     capacityRemaining -= node.getRemaining();
>   } else {
>     capacityTotal -= node.getDfsUsed();
>   }
>   cacheCapacity -= node.getCacheCapacity();
>   cacheUsed -= node.getCacheUsed();
> }


