[ https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16257426#comment-16257426 ]
ASF GitHub Bot commented on YARN-6483:
--------------------------------------
Github user xslogic commented on a diff in the pull request:
https://github.com/apache/hadoop/pull/289#discussion_r151767367
--- Diff: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java ---
@@ -1160,6 +1160,11 @@ public void transition(RMNodeImpl rmNode, RMNodeEvent event) {
       // Update NM metrics during graceful decommissioning.
       rmNode.updateMetricsForGracefulDecommission(initState, finalState);
       rmNode.decommissioningTimeout = timeout;
+      // Notify NodesListManager to notify all RMApp so that each
+      // Application Master could take any required actions.
+      rmNode.context.getDispatcher().getEventHandler().handle(
+          new NodesListManagerEvent(
+              NodesListManagerEventType.NODE_USABLE, rmNode));
--- End diff ---
Thanks - looking at the patch. Could you also attach a consolidated patch to
the JIRA, so as to kick Jenkins?
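For context, the added lines reuse the existing NODE_USABLE event path. Below
is a simplified sketch of how NodesListManager fans such an event out to
running applications, based on the existing handling in the ResourceManager;
it is illustrative only, and the NodeUsableNotifier wrapper class is
hypothetical, not part of this patch.

import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppNodeUpdateEvent;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppNodeUpdateEvent.RMAppNodeUpdateType;
import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;

// Hypothetical helper sketching the existing NODE_USABLE fan-out in
// NodesListManager; not part of this patch.
public class NodeUsableNotifier {
  private final RMContext rmContext;

  public NodeUsableNotifier(RMContext rmContext) {
    this.rmContext = rmContext;
  }

  // Forward a node update to every running application; each RMApp records
  // the updated node and surfaces it in its Application Master's next
  // allocate (heartbeat) response.
  public void notifyApps(RMNode eventNode) {
    for (RMApp app : rmContext.getRMApps().values()) {
      rmContext.getDispatcher().getEventHandler().handle(
          new RMAppNodeUpdateEvent(app.getApplicationId(), eventNode,
              RMAppNodeUpdateType.NODE_USABLE));
    }
  }
}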
> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes
> returned by the Resource Manager as a response to the Application Master
> heartbeat
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: resourcemanager
> Affects Versions: 2.8.0
> Reporter: Juan Rodríguez Hortalá
> Attachments: YARN-6483-v1.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful
> decommissioning mechanism to give tasks time to complete on a node that is
> scheduled for decommission, and to give reducer tasks time to read the
> shuffle blocks on that node. YARN also effectively blacklists nodes in the
> DECOMMISSIONING state by assigning them a capacity of 0, which prevents
> additional containers from being launched on those nodes, so no more
> shuffle blocks are written to them. This blacklisting is not effective for
> applications like Spark, because a Spark executor running in a YARN
> container will keep receiving more tasks after the corresponding node has
> been blacklisted at the YARN level. We would like to propose a modification
> of the YARN heartbeat mechanism so that nodes transitioning to
> DECOMMISSIONING are added to the list of updated nodes returned by the
> Resource Manager in its response to the Application Master heartbeat. This
> way a Spark application master would be able to blacklist a DECOMMISSIONING
> node at the Spark level.
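To make the proposal concrete, here is a hedged sketch of what an Application
Master could do with such node updates, using the public AMRMClient API. The
DecommissioningNodeTracker class is hypothetical, and this assumes the RM
starts reporting DECOMMISSIONING nodes in the updated-nodes list as proposed
above.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.AMRMClient;

// Hypothetical AM-side helper; mirrors the shape of the
// AMRMClientAsync.AbstractCallbackHandler#onNodesUpdated callback.
public class DecommissioningNodeTracker {
  private final AMRMClient<AMRMClient.ContainerRequest> amRmClient;

  public DecommissioningNodeTracker(
      AMRMClient<AMRMClient.ContainerRequest> amRmClient) {
    this.amRmClient = amRmClient;
  }

  // Invoked with the updated-nodes list from an allocate response.
  public void onNodesUpdated(List<NodeReport> updatedNodes) {
    List<String> additions = new ArrayList<>();
    for (NodeReport report : updatedNodes) {
      if (report.getNodeState() == NodeState.DECOMMISSIONING) {
        // Stop requesting containers on this host; a framework like Spark
        // would additionally stop scheduling tasks on its executors there.
        additions.add(report.getNodeId().getHost());
      }
    }
    if (!additions.isEmpty()) {
      amRmClient.updateBlacklist(additions, Collections.emptyList());
    }
  }
}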