[ https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14302015#comment-14302015 ]

Tsz Wo Nicholas Sze commented on HDFS-7411:
-------------------------------------------

{code}
+        // Assume 100k blocks per node.
+        blocksPerInterval = 100 * 1000 * numNodes;
{code}
How did you come up with this assumption?  It seems invalid for some clusters.  
Also, nodes in a cluster may have different numbers of blocks; simply assuming 
that all datanodes have the same number of blocks does not seem correct.
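
For example, the budget could be sized from the block count each node 
actually reports, falling back to a default only when a node has not 
reported yet. A minimal sketch of the idea (the BlockCountSource interface 
and the names below are illustrative assumptions, not existing HDFS APIs):
{code}
import java.util.List;

// Illustrative sketch only: size the per-interval block budget from each
// node's reported block count instead of assuming a flat 100k per node.
// BlockCountSource is a hypothetical stand-in, not an HDFS API.
public class BlockBudgetSketch {
  interface BlockCountSource {
    long getNumBlocks();  // blocks currently stored on this datanode
  }

  // Fallback used only for nodes that have not reported a count yet.
  static final long DEFAULT_BLOCKS_PER_NODE = 100_000L;

  static long blocksPerInterval(List<? extends BlockCountSource> nodes) {
    long total = 0;
    for (BlockCountSource node : nodes) {
      long n = node.getNumBlocks();
      // Use the real count when available; otherwise fall back to the
      // flat per-node assumption.
      total += (n > 0) ? n : DEFAULT_BLOCKS_PER_NODE;
    }
    return total;
  }
}
{code}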

Why not keep the existing code?  It is a simple, easy way to support backward 
compatibility.

> Refactor and improve decommissioning logic into DecommissionManager
> -------------------------------------------------------------------
>
>                 Key: HDFS-7411
>                 URL: https://issues.apache.org/jira/browse/HDFS-7411
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.5.1
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>         Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
> hdfs-7411.003.patch, hdfs-7411.004.patch, hdfs-7411.005.patch, 
> hdfs-7411.006.patch, hdfs-7411.007.patch, hdfs-7411.008.patch, 
> hdfs-7411.009.patch, hdfs-7411.010.patch
>
>
> It would be nice to split the decommissioning logic out of DatanodeManager 
> into DecommissionManager.



