[
https://issues.apache.org/jira/browse/HDFS-8056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395245#comment-14395245
]
Hadoop QA commented on HDFS-8056:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12709302/HDFS-8056-2.patch
against trunk revision db80e42.
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new
or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 2.0.3) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/10175//testReport/
Console output:
https://builds.apache.org/job/PreCommit-HDFS-Build/10175//console
This message is automatically generated.
> Decommissioned dead nodes should continue to be counted as dead after NN
> restart
> --------------------------------------------------------------------------------
>
> Key: HDFS-8056
> URL: https://issues.apache.org/jira/browse/HDFS-8056
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Ming Ma
> Assignee: Ming Ma
> Attachments: HDFS-8056-2.patch, HDFS-8056.patch
>
>
> We had some offline discussion with [~andrew.wang] and [~cmccabe] about this.
> Bringing it up here for broader input and to get the patch in place.
> Dead nodes are tracked in {{DatanodeManager}}'s {{datanodeMap}}. However,
> after the NN restarts, nodes that were dead before the restart won't be in
> {{datanodeMap}}. {{DatanodeManager}}'s {{getDatanodeListForReport}} will add
> those dead nodes back to the report, but not if they appear in the exclude file.
> {noformat}
>     if (listDeadNodes) {
>       for (InetSocketAddress addr : includedNodes) {
>         if (foundNodes.matchedBy(addr) || excludedNodes.match(addr)) {
>           continue;
>         }
>         // The remaining nodes are ones that are referenced by the hosts
>         // files but that we do not know about, ie that we have never
>         // heard from. E.g. an entry that is no longer part of the cluster
>         // or a bogus entry was given in the hosts files
>         //
>         // If the host file entry specified the xferPort, we use that.
>         // Otherwise, we guess that it is the default xfer port.
>         // We can't ask the DataNode what it had configured, because it's
>         // dead.
>         DatanodeDescriptor dn = new DatanodeDescriptor(new DatanodeID(addr
>             .getAddress().getHostAddress(), addr.getHostName(), "",
>             addr.getPort() == 0 ? defaultXferPort : addr.getPort(),
>             defaultInfoPort, defaultInfoSecurePort, defaultIpcPort));
>         setDatanodeDead(dn);
>         nodes.add(dn);
>       }
>     }
> {noformat}
> The issue is that the decommissioned dead node count reported via JMX will
> differ after an NN restart. It would be better to keep it consistent across
> NN restarts.
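The skip logic quoted above can be sketched outside Hadoop as a minimal simulation. This is not Hadoop code: the class name, method, and host strings below are hypothetical stand-ins, and host lists are modeled as plain string sets rather than {{HostFileManager}} entries. It illustrates why a node that is both dead and excluded (i.e. decommissioned) drops out of the dead-node report once the NN restarts with an empty {{datanodeMap}}.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DeadNodeReportSketch {
    // Hypothetical stand-in for the include/exclude handling in
    // getDatanodeListForReport: report as dead any include-file entry
    // we have never heard from, unless it is also in the exclude file.
    static List<String> listDeadNodes(Set<String> includedNodes,
                                      Set<String> excludedNodes,
                                      Set<String> foundNodes) {
        List<String> dead = new ArrayList<>();
        for (String addr : includedNodes) {
            // foundNodes models nodes already tracked in datanodeMap;
            // excluded entries are skipped, which is the reported issue.
            if (foundNodes.contains(addr) || excludedNodes.contains(addr)) {
                continue;
            }
            dead.add(addr);
        }
        return dead;
    }

    public static void main(String[] args) {
        Set<String> include = new HashSet<>(Arrays.asList("dn1:9866", "dn2:9866"));
        Set<String> exclude = new HashSet<>(Collections.singletonList("dn2:9866"));
        // Fresh NN restart: datanodeMap is empty, so nothing is "found".
        Set<String> found = new HashSet<>();

        // dn2 was decommissioned (excluded) and dead, but after the restart
        // it vanishes from the dead-node report; only dn1 is listed.
        System.out.println(listDeadNodes(include, exclude, found)); // prints [dn1:9866]
    }
}
```

The sketch shows the inconsistency the patch targets: before a restart, dn2 would still be tracked in {{datanodeMap}} as a decommissioned dead node; after the restart, it matches the exclude file and is silently dropped from the report.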
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)