[ https://issues.apache.org/jira/browse/HDFS-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852241#comment-13852241 ]

Brandon Li commented on HDFS-5662:
----------------------------------

I've committed the patch. Thank you, Arpit, for reviewing it.

> Can't decommission a DataNode due to file's replication factor larger than 
> the rest of the cluster size
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5662
>                 URL: https://issues.apache.org/jira/browse/HDFS-5662
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Brandon Li
>            Assignee: Brandon Li
>         Attachments: HDFS-5662.001.patch, HDFS-5662.002.patch
>
>
> A datanode can't be decommissioned if it has a replica that belongs to a file 
> with a replication factor larger than the rest of the cluster size.
> One way to fix this is to introduce some kind of minimum replication factor 
> setting, so that any datanode can be decommissioned regardless of the largest 
> replication factor of the files it holds replicas for. 
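The idea in the description can be sketched roughly as follows. This is a hypothetical illustration only, not the committed patch; the class, method, and parameter names are invented. The sketch caps the expected replica count at the number of nodes that would remain after decommission, so a file whose replication factor exceeds the cluster size no longer blocks decommission indefinitely:

```java
// Hypothetical sketch of the decommission check described above.
// Not actual HDFS code: names and structure are invented for illustration.
public class DecommissionCheck {

    // A block is considered sufficiently replicated for decommission if it
    // has at least min(replicationFactor, remainingLiveNodes) live replicas.
    static boolean isSufficientlyReplicated(int replicationFactor,
                                            int liveReplicas,
                                            int remainingLiveNodes) {
        // Cap the expected replica count at the cluster size that will remain
        // after this datanode leaves; without the cap, a file with replication
        // factor larger than the cluster size can never satisfy the check.
        int expected = Math.min(replicationFactor, remainingLiveNodes);
        return liveReplicas >= expected;
    }

    public static void main(String[] args) {
        // 5-node cluster, one node being decommissioned, file with
        // replication factor 10: 4 live replicas on the 4 remaining
        // nodes is the best achievable, so the check passes.
        System.out.println(isSufficientlyReplicated(10, 4, 4));
    }
}
```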



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)