[ https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14316793#comment-14316793 ]

Chris Douglas commented on HDFS-7411:
-------------------------------------

bq. The -1 is not for the refactoring. It is for keeping the existing behavior.

Andrew, even though you prefer estimates or averages that approximate the 
existing behavior, halting when either of the limits is hit would move this 
forward.

Nicholas, would you be OK with changing the default so that this uses the new 
algorithm in clusters where the node limit is not explicitly configured (the 
default value for nodes is {{Integer.MAX_VALUE}})? Are you also OK with 
enforcing the existing semantics in the new code?
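For illustration only, here is a minimal sketch (not the patch itself; the class, field, and method names are hypothetical) of what "halt when either limit is hit" could look like: an explicitly configured node limit keeps the existing per-interval semantics, while an unconfigured limit of {{Integer.MAX_VALUE}} leaves only the new block-based limit in effect.

{code:java}
// Hypothetical sketch of a decommission scan that stops when either the
// block limit or the node limit for one check interval is reached.
// All names here are illustrative, not the actual HDFS-7411 code.
import java.util.List;

public class DecommissionScanSketch {
  private final int blocksPerCheck;  // new block-based limit
  private final int nodesPerCheck;   // existing node-based limit;
                                     // Integer.MAX_VALUE when not configured

  public DecommissionScanSketch(int blocksPerCheck, int nodesPerCheck) {
    this.blocksPerCheck = blocksPerCheck;
    this.nodesPerCheck = nodesPerCheck;
  }

  /** Placeholder for a decommissioning datanode and its block count. */
  public static final class Node {
    final String name;
    final int blockCount;
    Node(String name, int blockCount) {
      this.name = name;
      this.blockCount = blockCount;
    }
  }

  /**
   * Scan pending nodes until either limit is hit.
   * @return the number of nodes processed in this interval
   */
  public int scan(List<Node> pendingNodes) {
    int nodesChecked = 0;
    int blocksChecked = 0;
    for (Node n : pendingNodes) {
      // Halt when either of the limits is hit. With nodesPerCheck left at
      // Integer.MAX_VALUE, only the block limit governs the scan.
      if (nodesChecked >= nodesPerCheck || blocksChecked >= blocksPerCheck) {
        break;
      }
      blocksChecked += n.blockCount;  // stand-in for scanning the node's blocks
      nodesChecked++;
    }
    return nodesChecked;
  }
}
{code}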

> Refactor and improve decommissioning logic into DecommissionManager
> -------------------------------------------------------------------
>
>                 Key: HDFS-7411
>                 URL: https://issues.apache.org/jira/browse/HDFS-7411
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.5.1
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>         Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
> hdfs-7411.003.patch, hdfs-7411.004.patch, hdfs-7411.005.patch, 
> hdfs-7411.006.patch, hdfs-7411.007.patch, hdfs-7411.008.patch, 
> hdfs-7411.009.patch, hdfs-7411.010.patch
>
>
> Would be nice to split out decommission logic from DatanodeManager to 
> DecommissionManager.


