[ https://issues.apache.org/jira/browse/YARN-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14981133#comment-14981133 ]

Tomas F. Pena commented on YARN-4319:
-------------------------------------

That is how it is supposed to work. But when I put the names of the nodes in 
the include list and refresh with yarn rmadmin -refreshNodes, all the 
NodeManagers in the list are immediately decommissioned.

$ cat yarn.include
ip-10-0-0-104
ip-10-0-0-105
ip-10-0-0-106
ip-10-0-0-107
$ yarn rmadmin -refreshNodes

and the four running NodeManagers are killed.
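
For context, refreshNodes re-reads the files configured in yarn-site.xml. A 
minimal sketch of that configuration (the paths below mirror the ones in the 
log output and are an assumption about this cluster's layout):

{code:xml}
<!-- yarn-site.xml: point the ResourceManager at the host lists.
     Paths assumed from the HostsFileReader log lines below. -->
<property>
  <name>yarn.resourcemanager.nodes.include-path</name>
  <value>/opt/yarn/hadoop/etc/hadoop/yarn.include</value>
</property>
<property>
  <name>yarn.resourcemanager.nodes.exclude-path</name>
  <value>/opt/yarn/hadoop/etc/hadoop/yarn.exclude</value>
</property>
{code}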


> Nodemanagers are decommissioned when the list of included hosts is not 
> empty
> --------------------------------------------------------------------------------
>
>                 Key: YARN-4319
>                 URL: https://issues.apache.org/jira/browse/YARN-4319
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 2.7.1
>         Environment: java version "1.7.0_85"
> OpenJDK Runtime Environment (amzn-2.6.1.3.61.amzn1-x86_64 u85-b01)
> OpenJDK 64-Bit Server VM (build 24.85-b03, mixed mode)
>            Reporter: Tomas F. Pena
>              Labels: newbie
>
> When the file indicated in yarn.resourcemanager.nodes.include-path is not 
> empty, the ResourceManager decommissions all the NodeManagers:
> {{2015-10-29 18:51:51,900 INFO org.apache.hadoop.util.HostsFileReader: 
> Setting the includes file to /opt/yarn/hadoop/etc/hadoop/yarn.include}}
> {{2015-10-29 18:51:51,900 INFO org.apache.hadoop.util.HostsFileReader: 
> Setting the excludes file to /opt/yarn/hadoop/etc/hadoop/yarn.exclude}}
> {{2015-10-29 18:51:51,900 INFO org.apache.hadoop.util.HostsFileReader: 
> Refreshing hosts (include/exclude) list}}
> {{2015-10-29 18:51:51,900 INFO org.apache.hadoop.util.HostsFileReader: Adding 
> a to the list of included hosts from 
> /opt/yarn/hadoop/etc/hadoop/yarn.include}}
> {{2015-10-29 18:51:51,901 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdmaster 
> IP=10.0.0.10 OPERATION=refreshNodes TARGET=AdminService RESULT=SUCCESS}}
> {{2015-10-29 18:51:51,901 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Deactivating 
> Node ip-10-0-0-106.ec2.internal:33747 as it is now DECOMMISSIONED}}
> {{2015-10-29 18:51:51,901 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: 
> ip-10-0-0-106.ec2.internal:33747 Node Transitioned from RUNNING to 
> DECOMMISSIONED}}
> {{2015-10-29 18:51:51,901 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Deactivating 
> Node ip-10-0-0-105.ec2.internal:57257 as it is now DECOMMISSIONED}}
> {{2015-10-29 18:51:51,901 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: 
> ip-10-0-0-105.ec2.internal:57257 Node Transitioned from RUNNING to 
> DECOMMISSIONED}}
> {{2015-10-29 18:51:51,901 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Deactivating 
> Node ip-10-0-0-104.ec2.internal:48645 as it is now DECOMMISSIONED}}
> {{2015-10-29 18:51:51,901 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: 
> ip-10-0-0-104.ec2.internal:48645 Node Transitioned from RUNNING to 
> DECOMMISSIONED}}
> {{2015-10-29 18:51:51,901 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Deactivating 
> Node ip-10-0-0-107.ec2.internal:33115 as it is now DECOMMISSIONED}}
> {{2015-10-29 18:51:51,901 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: 
> ip-10-0-0-107.ec2.internal:33115 Node Transitioned from RUNNING to 
> DECOMMISSIONED}}
> {{2015-10-29 18:51:51,902 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
>  Removed node ip-10-0-0-106.ec2.internal:33747 clusterResource: <memory:9216, 
> vCores:3>}}
> {{2015-10-29 18:51:51,902 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
>  Removed node ip-10-0-0-105.ec2.internal:57257 clusterResource: <memory:6144, 
> vCores:2>}}
> {{2015-10-29 18:51:51,902 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
>  Removed node ip-10-0-0-104.ec2.internal:48645 clusterResource: <memory:3072, 
> vCores:1>}}
> {{2015-10-29 18:51:51,902 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler:
>  Removed node ip-10-0-0-107.ec2.internal:33115 clusterResource: <memory:0, 
> vCores:0>}}
> {{2015-10-29 18:51:52,006 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
> Disallowed NodeManager nodeId: ip-10-0-0-105.ec2.internal:57257 hostname: 
> ip-10-0-0-105.ec2.internal}}
> {{2015-10-29 18:51:52,683 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
> Disallowed NodeManager nodeId: ip-10-0-0-106.ec2.internal:33747 hostname: 
> ip-10-0-0-106.ec2.internal}}
> {{2015-10-29 18:51:52,734 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
> Disallowed NodeManager nodeId: ip-10-0-0-107.ec2.internal:33115 hostname: 
> ip-10-0-0-107.ec2.internal}}
> {{2015-10-29 18:51:52,891 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
> Disallowed NodeManager nodeId: ip-10-0-0-104.ec2.internal:48645 hostname: 
> ip-10-0-0-104.ec2.internal}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
