[ https://issues.apache.org/jira/browse/HDFS-1547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12981004#action_12981004 ]
Suresh Srinivas commented on HDFS-1547:
---------------------------------------

> But will the decommissioning itself actually be able to proceed?

From my previous comment: "It reduces the cluster's available free storage for writes. Writes could simply fail because of no free storage. The decommissioning may not complete, because of lack of free storage."

I am not sure what you mean by the deadlock situation. Removing the nodes from the exclude file stops decommissioning, and the cluster should get back to a normal state.

> will the NN be able to pick new locations for the blocks previously stored on
> the decommissioning nodes

I assume you mean decommissioned nodes (the patch introduces no change to decommissioning nodes and how they are handled). Decommissioned replicas are chosen as the last location for reads. If all the replicas of a block are decommissioned, then a decommissioned node will be used for reading it (see the illustrative sketch at the end of this message).

> Improve decommission mechanism
> ------------------------------
>
>                 Key: HDFS-1547
>                 URL: https://issues.apache.org/jira/browse/HDFS-1547
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.23.0
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>             Fix For: 0.23.0
>
>         Attachments: HDFS-1547.1.patch, HDFS-1547.patch
>
>
> The current decommission mechanism, driven by the exclude file, has several issues. This bug proposes some changes to the mechanism for better manageability. See the proposal in the next comment for more details.
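
A minimal sketch of the read ordering described in the comment above: a block's replica locations are sorted so that decommissioned nodes come last, making them a location of last resort for reads. The Replica class and the sortForRead helper are hypothetical names introduced for illustration; this is not the actual HDFS NameNode implementation.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/**
 * Illustrative sketch only: order a block's replica locations so that
 * decommissioned nodes sort last and are used for reads only when no
 * live replica remains. Hypothetical names, not the HDFS API.
 */
public class ReadOrderSketch {

  /** Minimal stand-in for a datanode holding a replica. */
  static final class Replica {
    final String host;
    final boolean decommissioned;

    Replica(String host, boolean decommissioned) {
      this.host = host;
      this.decommissioned = decommissioned;
    }

    @Override
    public String toString() {
      return host + (decommissioned ? "(decommissioned)" : "");
    }
  }

  /** Pushes decommissioned replicas to the end; live replicas keep their order. */
  static void sortForRead(List<Replica> replicas) {
    // List.sort is stable, so any prior ordering (e.g. by network
    // distance) is preserved within the live and decommissioned groups;
    // sorting on the boolean key only moves decommissioned nodes back.
    replicas.sort(Comparator.comparing(r -> r.decommissioned));
  }

  public static void main(String[] args) {
    List<Replica> replicas = new ArrayList<>();
    replicas.add(new Replica("dn1", true));   // node being retired
    replicas.add(new Replica("dn2", false));
    replicas.add(new Replica("dn3", false));
    sortForRead(replicas);
    System.out.println(replicas); // [dn2, dn3, dn1(decommissioned)]
  }
}

Note that if every replica of a block is decommissioned, the sorted list contains only decommissioned nodes, so the client still reads from one of them, which matches the behavior described in the comment.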