[ https://issues.apache.org/jira/browse/YARN-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16961879#comment-16961879 ]

Peter Bacsko edited comment on YARN-9011 at 10/29/19 11:10 AM:
---------------------------------------------------------------

_"1. Why do we need a lazy update?"_

Please see the details in my comment above from 25 Sep: 
https://issues.apache.org/jira/browse/YARN-9011?focusedCommentId=16937696&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16937696

It is important that when you do a "lazy" refresh, the new changes are not 
immediately made visible to {{ResourceTrackerService}}. The problematic part 
of the code is this:
{noformat}
    // 1. Check if it's a valid (i.e. not excluded) node, if not, see if it is
    // in decommissioning.
    if (!this.nodesListManager.isValidNode(nodeId.getHost())
        && !isNodeInDecommissioning(nodeId)) {
    ...
{noformat}
If you perform a graceful decommission, it is important that 
{{isNodeInDecommissioning()}} returns true. However, it takes time for 
{{RMNodeImpl}} to transition into the {{DECOMMISSIONING}} state, which is why 
this code is not fully reliable. Therefore, {{isValidNode()}} should only 
start returning false once we have already constructed the set of nodes that 
we want to decommission, as modeled in the sketch below.
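To make the ordering concrete, here is a minimal, self-contained toy model 
(not the actual patch; class and method names are made up for illustration) 
of why the decommissionable set must be populated *before* the refreshed 
exclude list becomes visible:
{noformat}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

// Toy model of the ordering constraint, NOT the real NodesListManager.
class NodesListModel {
    // What isValidNode() consults; swapped atomically on refresh.
    private final AtomicReference<Set<String>> excludedHosts =
        new AtomicReference<>(Set.of());
    // What isNodeInDecommissioning() falls back to during the race window.
    private final Set<String> decommissionable = ConcurrentHashMap.newKeySet();

    boolean isValidNode(String host) {
        return !excludedHosts.get().contains(host);
    }

    boolean isGracefullyDecommissionable(String host) {
        return decommissionable.contains(host);
    }

    void refreshNodesGracefully(Set<String> newExcludes) {
        // 1. Mark the nodes as decommissionable first...
        decommissionable.addAll(newExcludes);
        // 2. ...and only then publish the exclusion, so a heartbeat can
        //    never observe "excluded" without "decommissionable".
        excludedHosts.set(Set.copyOf(newExcludes));
    }
}
{noformat}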

_2. Could we check the "Decommissioning" status before 
"isGracefullyDecommissionableNode" in method "isNodeInDecommissioning"?_

-No, we can't (well, we can, but it would be pointless). The 
{{DECOMMISSIONING}} state only appears when you refresh (reload) the 
exclusion/inclusion files, that is, when we call 
{{NodesListManager.refreshNodes()}}. And that is the problem: during a 
refresh, excludable nodes become visible almost immediately, but the fact 
that they're decommissionable does not.-

I misunderstood this question. It's doable, see the sketch below and my comment further down.
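For illustration, a rough sketch of what the reordered check could look like 
inside {{ResourceTrackerService}} (just my sketch, not the final patch; it 
assumes the {{isGracefullyDecommissionableNode()}} helper mentioned above):
{noformat}
private boolean isNodeInDecommissioning(NodeId nodeId) {
    RMNode rmNode = this.rmContext.getRMNodes().get(nodeId);
    if (rmNode == null) {
        return false;
    }
    // Fast path: the state machine has already transitioned.
    if (rmNode.getState() == NodeState.DECOMMISSIONING) {
        return true;
    }
    // Race window: right after refreshNodes(), the node may still be
    // RUNNING, so fall back to the pre-built set.
    return this.nodesListManager.isGracefullyDecommissionableNode(rmNode);
}
{noformat}
This way, the set lookup only happens in the short window before the state 
machine catches up.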

_3. So it will always be scanned when heartbeat which seems not necessary._
 Scanning is necessary to avoid the race condition, but it isn't really a 
problem, for three reasons:
 1. It happens only for nodes that are excluded (i.e. {{isValid()}} returns false)
 2. We look up the node in a {{ConcurrentHashMap}}, which should be really fast
 3. Once {{RMNode}} reaches the {{DECOMMISSIONING}} state (which should happen 
pretty quickly from {{RUNNING}}), we no longer need the set.

*Edit*: even though it's not a huge problem, I agree that it can be enhanced; 
again, see below.

I can imagine a small enhancement here: once the node has reached the 
{{DECOMMISSIONING}} state, we remove it from the set, making it smaller and 
smaller. A sketch of this is below.
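Continuing the toy model from above (the hook name is made up), this would be 
invoked from the {{RUNNING}} -> {{DECOMMISSIONING}} transition:
{noformat}
// Once the state machine confirms DECOMMISSIONING, the entry is no longer
// needed, so the set only shrinks instead of growing with every refresh.
void onDecommissioningStarted(String host) {
    decommissionable.remove(host);
}
{noformat}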



> Race condition during decommissioning
> -------------------------------------
>
>                 Key: YARN-9011
>                 URL: https://issues.apache.org/jira/browse/YARN-9011
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 3.1.1
>            Reporter: Peter Bacsko
>            Assignee: Peter Bacsko
>            Priority: Major
>         Attachments: YARN-9011-001.patch, YARN-9011-002.patch, 
> YARN-9011-003.patch, YARN-9011-004.patch, YARN-9011-005.patch, 
> YARN-9011-006.patch, YARN-9011-007.patch
>
>
> During internal testing, we found a nasty race condition which occurs during 
> decommissioning.
> Node manager, incorrect behaviour:
> {noformat}
> 2018-06-18 21:00:17,634 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Received 
> SHUTDOWN signal from Resourcemanager as part of heartbeat, hence shutting 
> down.
> 2018-06-18 21:00:17,634 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Message from 
> ResourceManager: Disallowed NodeManager nodeId: node-6.hostname.com:8041 
> hostname:node-6.hostname.com
> {noformat}
> Node manager, expected behaviour:
> {noformat}
> 2018-06-18 21:07:37,377 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Received 
> SHUTDOWN signal from Resourcemanager as part of heartbeat, hence shutting 
> down.
> 2018-06-18 21:07:37,377 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Message from 
> ResourceManager: DECOMMISSIONING  node-6.hostname.com:8041 is ready to be 
> decommissioned
> {noformat}
> Note the two different messages from the RM ("Disallowed NodeManager" vs 
> "DECOMMISSIONING"). The problem is that {{ResourceTrackerService}} can see an 
> inconsistent state of nodes while they're being updated:
> {noformat}
> 2018-06-18 21:00:17,575 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.NodesListManager: hostsReader 
> include:{172.26.12.198,node-7.hostname.com,node-2.hostname.com,node-5.hostname.com,172.26.8.205,node-8.hostname.com,172.26.23.76,172.26.22.223,node-6.hostname.com,172.26.9.218,node-4.hostname.com,node-3.hostname.com,172.26.13.167,node-9.hostname.com,172.26.21.221,172.26.10.219}
>  exclude:{node-6.hostname.com}
> 2018-06-18 21:00:17,575 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.NodesListManager: Gracefully 
> decommission node node-6.hostname.com:8041 with state RUNNING
> 2018-06-18 21:00:17,575 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
> Disallowed NodeManager nodeId: node-6.hostname.com:8041 node: 
> node-6.hostname.com
> 2018-06-18 21:00:17,576 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Put Node 
> node-6.hostname.com:8041 in DECOMMISSIONING.
> 2018-06-18 21:00:17,575 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=yarn     
> IP=172.26.22.115        OPERATION=refreshNodes  TARGET=AdminService     
> RESULT=SUCCESS
> 2018-06-18 21:00:17,577 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Preserve 
> original total capability: <memory:8192, vCores:8>
> 2018-06-18 21:00:17,577 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: 
> node-6.hostname.com:8041 Node Transitioned from RUNNING to DECOMMISSIONING
> {noformat}
> When the decommissioning succeeds, there is no output logged from 
> {{ResourceTrackerService}}.


