[ https://issues.apache.org/jira/browse/HDFS-12820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
He Xiaoqiao resolved HDFS-12820.
--------------------------------
    Resolution: Not A Problem

> Decommissioned datanode is counted in service, causing datanode allocation failure
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-12820
>                 URL: https://issues.apache.org/jira/browse/HDFS-12820
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: block placement
>    Affects Versions: 2.4.0
>            Reporter: Gang Xie
>            Priority: Major
>
> When allocating a datanode for a DFSClient write with load consideration
> enabled, the block placement policy checks whether a candidate is
> overloaded by computing the average xceiver count over all in-service
> datanodes. But if a datanode is decommissioned and then becomes dead, it
> is still treated as in service, which makes the computed average load much
> higher than the real one, especially when the number of decommissioned
> datanodes is large. In our cluster of 180 datanodes, 100 of which are
> decommissioned, the average load is 17. This fails every datanode
> allocation.
>
> private void subtract(final DatanodeDescriptor node) {
>   capacityUsed -= node.getDfsUsed();
>   blockPoolUsed -= node.getBlockPoolUsed();
>   xceiverCount -= node.getXceiverCount();
>   // The condition in question (highlighted in the original report):
>   if (!(node.isDecommissionInProgress() || node.isDecommissioned())) {
>     nodesInService--;
>     nodesInServiceXceiverCount -= node.getXceiverCount();
>     capacityTotal -= node.getCapacity();
>     capacityRemaining -= node.getRemaining();
>   } else {
>     capacityTotal -= node.getDfsUsed();
>   }
>   cacheCapacity -= node.getCacheCapacity();
>   cacheUsed -= node.getCacheUsed();
> }
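For reference, the load check the reporter describes computes an average
xceiver count over the nodes counted as in service and rejects any candidate
whose own xceiver count exceeds twice that average. Below is a minimal,
self-contained sketch of that logic under the reporter's numbers; the names
(ClusterStats, inServiceXceiverAverage, isOverloaded) are illustrative, not
the exact HDFS 2.4.0 API.

    class LoadCheckSketch {

        /** Running tallies maintained by heartbeat bookkeeping. In the
         *  reported scenario, decommissioned-then-dead nodes were believed
         *  to still inflate these counters. (Field names are illustrative.) */
        static class ClusterStats {
            int nodesInService;
            int nodesInServiceXceiverCount;

            double inServiceXceiverAverage() {
                return nodesInService == 0
                        ? 0.0
                        : (double) nodesInServiceXceiverCount / nodesInService;
            }
        }

        /** Mirrors the considerLoad-style test: a candidate is rejected when
         *  its xceiver count exceeds 2x the in-service average. With the
         *  reported average of 17, any node with more than 34 active
         *  xceivers is refused as a write target. */
        static boolean isOverloaded(int nodeXceiverCount, ClusterStats stats) {
            double maxLoad = 2.0 * stats.inServiceXceiverAverage();
            return maxLoad > 0 && nodeXceiverCount > maxLoad;
        }

        public static void main(String[] args) {
            ClusterStats stats = new ClusterStats();
            stats.nodesInService = 180;               // reported cluster size
            stats.nodesInServiceXceiverCount = 3060;  // 3060 / 180 = the reported average of 17
            System.out.println(isOverloaded(40, stats)); // true: 40 > 2 * 17
        }
    }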