[jira] [Updated] (HDFS-9685) StopDecommission for datanode should remove the underReplicatedBlocks

2016-01-22 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-9685:

Attachment: HDFS-9685.001.patch

> StopDecommission for datanode should remove the underReplicatedBlocks
> -
>
> Key: HDFS-9685
> URL: https://issues.apache.org/jira/browse/HDFS-9685
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-9685.001.patch
>
>
> When a node is removed from the exclude file, its state changes from 
> decommission-in-progress back to in-service, but the cluster's 
> underReplicatedBlocksNum is not decreased. Most of these under-replicated 
> blocks are no longer needed, and the namenode spends much time processing 
> them only to find that enough replicas already exist. So in the 
> {{stopDecommission}} operation, we should remove the neededReplicatedBlocks 
> of decomNodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Updated] (HDFS-9685) StopDecommission for datanode should remove the underReplicatedBlocks

2016-01-22 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-9685:

Status: Patch Available  (was: Open)

Attached an initial patch. It adds a new method to remove these blocks in 
stopDecommission:
{code}
void stopDecommission(DatanodeDescriptor node) {
  if (node.isDecommissionInProgress() || node.isDecommissioned()) {
    LOG.info("Stopping decommissioning of node {}", node);
    // Update DN stats maintained by HeartbeatManager
    hbManager.stopDecommission(node);
    // Over-replicated blocks will be detected and processed when
    // the dead node comes back and sends in its full block report.
    // The blocks queued for this decomNode will be removed from
    // neededReplications.
    if (node.isAlive) {
      blockManager.processOverReplicatedBlocksOnReCommission(node);
      removeNeededReplicatedBlocksInDecomNodes(node);
    }
    // Remove from tracking in DecommissionManager
    pendingNodes.remove(node);
    decomNodeBlocks.remove(node);
  } else {
    LOG.trace("stopDecommission: Node {} is not decommission in progress " +
        "or decommissioned, nothing to do.", node);
  }
}
{code}
Kindly review, thanks!
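The new helper {{removeNeededReplicatedBlocksInDecomNodes}} is not shown in this excerpt. A minimal, self-contained sketch of the idea (simplified stand-ins, not the real HDFS classes; all names here are hypothetical) is: keep a neededReplications-style set of under-replicated blocks plus a per-datanode block map, and on recommission drop the queued entries for the blocks that live on the recommissioned node.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified sketch of the proposed cleanup, not the actual HDFS
// BlockManager/DecommissionManager code: a neededReplications-style
// queue keyed by block ID, plus per-datanode block lists.
class NeededReplicationsSketch {
    // Blocks currently queued as under-replicated (stand-in for
    // the namenode's neededReplications queue).
    private final Set<String> neededReplications = new HashSet<>();
    // Which blocks each datanode holds (stand-in for block reports).
    private final Map<String, Set<String>> blocksOnNode = new HashMap<>();

    void addBlock(String nodeId, String blockId) {
        blocksOnNode.computeIfAbsent(nodeId, k -> new HashSet<>()).add(blockId);
    }

    void markUnderReplicated(String blockId) {
        neededReplications.add(blockId);
    }

    // On stopDecommission of a live node, drop the queued entries for
    // blocks that were only considered under-replicated because this
    // node was decommissioning.
    void removeNeededReplicatedBlocksInDecomNode(String nodeId) {
        Set<String> blocks =
            blocksOnNode.getOrDefault(nodeId, Collections.emptySet());
        neededReplications.removeAll(blocks);
    }

    int underReplicatedBlockCount() {
        return neededReplications.size();
    }
}
```

With this shape, recommissioning a node immediately shrinks the under-replicated count instead of leaving the namenode to re-scan entries that already have enough replicas.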



[jira] [Updated] (HDFS-9685) StopDecommission for datanode should remove the underReplicatedBlocks

2016-01-22 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HDFS-9685:

Attachment: (was: HDFS-9685.001.patch)
