[ https://issues.apache.org/jira/browse/YARN-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16935731#comment-16935731 ]

Peter Bacsko edited comment on YARN-9011 at 9/23/19 10:38 AM:
--------------------------------------------------------------

[~adam.antal] so the problem is that {{ResourceTrackerService}} uses 
{{NodesListManager}} to determine which nodes are enabled. But sometimes it sees 
an inconsistent state: {{NodesListManager}} reports that a certain node is in 
the excluded list, but its state is not yet {{DECOMMISSIONING}}. So we have to 
wait for this state change.

First, adding {{synchronized}} blocks to {{NodesListManager}} is necessary: 
when you call {{isValidNode()}}, you have to wait until the XML (which contains 
the list of to-be-decommissioned nodes) has been completely processed.
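
A minimal sketch of the kind of synchronization meant here (the field and the 
refresh method are simplified and illustrative, not the actual 
{{NodesListManager}} code):

{code:java}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only -- fields and method bodies are simplified,
// not the real NodesListManager implementation.
public class NodesListManagerSketch {
  private final Object hostsLock = new Object();
  private Set<String> excludedHosts = Collections.emptySet();

  // Rebuilds the exclude list, e.g. after an admin runs refreshNodes.
  public void refreshNodes(Set<String> newExcludes) {
    synchronized (hostsLock) {
      // Readers must not observe a half-updated list.
      excludedHosts = new HashSet<>(newExcludes);
    }
  }

  // Blocks until any in-flight refresh has finished, so callers never
  // see a partially processed exclude list.
  public boolean isValidNode(String hostName) {
    synchronized (hostsLock) {
      return !excludedHosts.contains(hostName);
    }
  }
}
{code}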

However, if {{isValidNode()}} returns false, you don't know whether graceful 
decommissioning is in progress. If it is, the state of {{RMNodeImpl}} is 
{{NodeState.DECOMMISSIONING}}. The catch is that this state transition happens 
on a separate dispatcher thread, so you have to wait for it. Most of the time 
it's quick enough, but you can miss it. When that happens, 
{{ResourceTrackerService}} simply considers the node to be "disallowed" and 
orders a shutdown immediately.
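
A self-contained toy illustration of that race on the 
{{ResourceTrackerService}} side (the method and return values are mine; only 
the state names and messages loosely mirror YARN):

{code:java}
// Toy illustration of the race, not the actual ResourceTrackerService code.
public class HeartbeatRaceSketch {

  enum NodeState { RUNNING, DECOMMISSIONING, DECOMMISSIONED }

  // validNode is what isValidNode() returned; currentState is whatever
  // state the RMNode happens to be in at this instant.
  static String handleHeartbeat(String nodeId, boolean validNode,
      NodeState currentState) {
    if (!validNode) {
      // The RUNNING -> DECOMMISSIONING transition runs on a separate
      // dispatcher thread. If it has not been applied yet, we still see
      // RUNNING here and the node is ordered to shut down immediately.
      if (currentState != NodeState.DECOMMISSIONING) {
        return "Disallowed NodeManager nodeId: " + nodeId;
      }
      return "DECOMMISSIONING " + nodeId + " is ready to be decommissioned";
    }
    return "OK";
  }
}
{code}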

So that's why I introduced a new class called {{DecommissioningNodesSyncer}}. 
If a node is selected for graceful decommissioning, it is added to a deque. 
When {{isValidNode()}} returns false, we check whether the node is in this 
deque. If it is, we wait for the state transition with a {{Condition}} object; 
signaling comes from {{RMNode}} itself.
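
A rough sketch of how such a syncer could look (the method names and the 
bounded wait are my assumptions, not necessarily what the patch does):

{code:java}
import java.util.Deque;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Simplified sketch of the DecommissioningNodesSyncer idea; method names
// and the timeout handling are illustrative, not the actual patch.
public class DecommissioningNodesSyncerSketch {
  private final Deque<String> decommissioningNodes =
      new ConcurrentLinkedDeque<>();
  private final ReentrantLock lock = new ReentrantLock();
  private final Condition stateChanged = lock.newCondition();

  // Called when a node is selected for graceful decommissioning.
  public void markForDecommissioning(String nodeId) {
    decommissioningNodes.add(nodeId);
  }

  // Called from RMNode once the RUNNING -> DECOMMISSIONING transition has
  // actually been processed on the dispatcher thread.
  public void stateTransitionCompleted(String nodeId) {
    lock.lock();
    try {
      decommissioningNodes.remove(nodeId);
      stateChanged.signalAll();
    } finally {
      lock.unlock();
    }
  }

  // ResourceTrackerService side: was this node selected for graceful
  // decommissioning at all?
  public boolean isMarkedForDecommissioning(String nodeId) {
    return decommissioningNodes.contains(nodeId);
  }

  // Wait (bounded) until RMNode signals that the state transition has been
  // processed. Returns false if we gave up after timeoutMs.
  public boolean awaitStateTransition(String nodeId, long timeoutMs)
      throws InterruptedException {
    lock.lock();
    try {
      long nanos = TimeUnit.MILLISECONDS.toNanos(timeoutMs);
      while (decommissioningNodes.contains(nodeId)) {
        if (nanos <= 0L) {
          return false;
        }
        nanos = stateChanged.awaitNanos(nanos);
      }
      return true;
    } finally {
      lock.unlock();
    }
  }
}
{code}

With something like this, the heartbeat path would roughly become: 
{{isValidNode()}} returns false, then check the deque, then wait for the 
transition before deciding between graceful decommissioning and an immediate 
shutdown.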

The change is a bit bigger than it should be because I modified constructors, 
so the tests also had to be updated to keep everything compiling. The 
alternative would be a singleton {{DecommissioningNodesSyncer}}, but I just 
don't like that; I prefer dependency injection to singletons.



> Race condition during decommissioning
> -------------------------------------
>
>                 Key: YARN-9011
>                 URL: https://issues.apache.org/jira/browse/YARN-9011
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 3.1.1
>            Reporter: Peter Bacsko
>            Assignee: Peter Bacsko
>            Priority: Major
>         Attachments: YARN-9011-001.patch, YARN-9011-002.patch, 
> YARN-9011-003.patch, YARN-9011-004.patch
>
>
> During internal testing, we found a nasty race condition which occurs during 
> decommissioning.
> Node manager, incorrect behaviour:
> {noformat}
> 2018-06-18 21:00:17,634 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Received 
> SHUTDOWN signal from Resourcemanager as part of heartbeat, hence shutting 
> down.
> 2018-06-18 21:00:17,634 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Message from 
> ResourceManager: Disallowed NodeManager nodeId: node-6.hostname.com:8041 
> hostname:node-6.hostname.com
> {noformat}
> Node manager, expected behaviour:
> {noformat}
> 2018-06-18 21:07:37,377 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Received 
> SHUTDOWN signal from Resourcemanager as part of heartbeat, hence shutting 
> down.
> 2018-06-18 21:07:37,377 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Message from 
> ResourceManager: DECOMMISSIONING  node-6.hostname.com:8041 is ready to be 
> decommissioned
> {noformat}
> Note the two different messages from the RM ("Disallowed NodeManager" vs 
> "DECOMMISSIONING"). The problem is that {{ResourceTrackerService}} can see an 
> inconsistent state of nodes while they're being updated:
> {noformat}
> 2018-06-18 21:00:17,575 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.NodesListManager: hostsReader 
> include:{172.26.12.198,node-7.hostname.com,node-2.hostname.com,node-5.hostname.com,172.26.8.205,node-8.hostname.com,172.26.23.76,172.26.22.223,node-6.hostname.com,172.26.9.218,node-4.hostname.com,node-3.hostname.com,172.26.13.167,node-9.hostname.com,172.26.21.221,172.26.10.219}
>  exclude:{node-6.hostname.com}
> 2018-06-18 21:00:17,575 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.NodesListManager: Gracefully 
> decommission node node-6.hostname.com:8041 with state RUNNING
> 2018-06-18 21:00:17,575 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
> Disallowed NodeManager nodeId: node-6.hostname.com:8041 node: 
> node-6.hostname.com
> 2018-06-18 21:00:17,576 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Put Node 
> node-6.hostname.com:8041 in DECOMMISSIONING.
> 2018-06-18 21:00:17,575 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=yarn     
> IP=172.26.22.115        OPERATION=refreshNodes  TARGET=AdminService     
> RESULT=SUCCESS
> 2018-06-18 21:00:17,577 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Preserve 
> original total capability: <memory:8192, vCores:8>
> 2018-06-18 21:00:17,577 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: 
> node-6.hostname.com:8041 Node Transitioned from RUNNING to DECOMMISSIONING
> {noformat}
> When the decommissioning succeeds, there is no output logged from 
> {{ResourceTrackerService}}.


