[ https://issues.apache.org/jira/browse/YARN-3266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14494156#comment-14494156 ]

Hadoop QA commented on YARN-3266:
---------------------------------

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12725212/YARN-3266.03.patch
  against trunk revision b5a0b24.

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/7329//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/7329//console

This message is automatically generated.

> RMContext inactiveNodes should have NodeId as map key
> -----------------------------------------------------
>
>                 Key: YARN-3266
>                 URL: https://issues.apache.org/jira/browse/YARN-3266
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: resourcemanager
>    Affects Versions: 2.6.0
>            Reporter: Chengbing Liu
>            Assignee: Chengbing Liu
>         Attachments: YARN-3266.01.patch, YARN-3266.02.patch, 
> YARN-3266.03.patch
>
>
> Under the default NM port configuration, which is 0, we have observed in the 
> current version that the "lost nodes" count can be greater than the size of 
> the lost node list. This happens when we consecutively restart the same NM twice:
> * NM started at port 10001
> * NM restarted at port 10002
> * NM restarted at port 10003
> * NM:10001 times out: {{ClusterMetrics#incrNumLostNMs()}} is called, so # lost nodes = 1; 
> {{rmNode.context.getInactiveRMNodes().put(rmNode.nodeId.getHost(), rmNode)}} leaves 
> {{inactiveNodes}} with 1 element
> * NM:10002 times out: {{ClusterMetrics#incrNumLostNMs()}} is called, so # lost nodes = 2; 
> {{rmNode.context.getInactiveRMNodes().put(rmNode.nodeId.getHost(), rmNode)}} overwrites the 
> existing entry for the same host, so {{inactiveNodes}} still has only 1 element
> Since we allow multiple NodeManagers on one host (as discussed in YARN-1888), 
> {{inactiveNodes}} should be of type {{ConcurrentMap<NodeId, RMNode>}}. If that 
> would break the current API, then the key string should include the NM's 
> port as well.
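> To make the contrast concrete, here is a minimal, self-contained sketch of the two 
> keying schemes (this is illustrative, not the actual patch: the {{NodeId}} class and 
> the string values below are stand-ins for the real 
> {{org.apache.hadoop.yarn.api.records.NodeId}} and {{RMNode}} objects):
> {code:java}
> import java.util.Objects;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ConcurrentMap;
> 
> public class InactiveNodesKeyDemo {
> 
>   // Stand-in for org.apache.hadoop.yarn.api.records.NodeId (host + port),
>   // defined here only to keep the sketch self-contained.
>   static final class NodeId {
>     final String host;
>     final int port;
>     NodeId(String host, int port) { this.host = host; this.port = port; }
>     @Override public boolean equals(Object o) {
>       if (!(o instanceof NodeId)) return false;
>       NodeId other = (NodeId) o;
>       return port == other.port && host.equals(other.host);
>     }
>     @Override public int hashCode() { return Objects.hash(host, port); }
>   }
> 
>   public static void main(String[] args) {
>     NodeId nm1 = new NodeId("host1", 10001);
>     NodeId nm2 = new NodeId("host1", 10002);
> 
>     // Current behavior: keyed by host, the second lost NM overwrites the first,
>     // so the map size no longer matches the lost-node metric.
>     ConcurrentMap<String, String> byHost = new ConcurrentHashMap<>();
>     byHost.put(nm1.host, "RMNode@" + nm1.port);
>     byHost.put(nm2.host, "RMNode@" + nm2.port);
>     System.out.println("host-keyed size   = " + byHost.size());   // prints 1
> 
>     // Proposed behavior: keyed by NodeId, both lost NMs are retained.
>     ConcurrentMap<NodeId, String> byNodeId = new ConcurrentHashMap<>();
>     byNodeId.put(nm1, "RMNode@" + nm1.port);
>     byNodeId.put(nm2, "RMNode@" + nm2.port);
>     System.out.println("NodeId-keyed size = " + byNodeId.size()); // prints 2
>   }
> }
> {code}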
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
