Github user juanrh commented on the issue:
https://github.com/apache/spark/pull/19267
@tgravescs I was finally able to contribute
https://github.com/apache/hadoop/pull/289, which resolves
[YARN-6483](https://issues.apache.org/jira/browse/YARN-6483). With that patch
and the code in this pull request, `YarnAllocator.allocateResources` will
receive a `NodeReport` entry in `allocateResponse.getUpdatedNodes` for each
node moved to the `DECOMMISSIONING` state by [Hadoop's graceful
decommission](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/GracefulDecommission.html),
which will trigger blacklisting of those nodes.
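
To illustrate the mechanism, here is a minimal standalone sketch (not the actual wiring in this PR) of how those updated-node reports could be turned into blacklist updates. The `DecommissioningBlacklistSketch` object and the direct call to `AMRMClient.updateBlacklist` are assumptions for the example; `YarnAllocator` tracks and reports its blacklist differently.

```scala
import scala.collection.JavaConverters._

import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse
import org.apache.hadoop.yarn.api.records.NodeState
import org.apache.hadoop.yarn.client.api.AMRMClient
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest

object DecommissioningBlacklistSketch {
  // Hosts already blacklisted, so each one is only reported to YARN once.
  private val blacklistedHosts = scala.collection.mutable.Set.empty[String]

  /** Blacklist every host that an allocate response reports as DECOMMISSIONING. */
  def handleUpdatedNodes(
      amClient: AMRMClient[ContainerRequest],
      response: AllocateResponse): Unit = {
    val newlyDecommissioning = response.getUpdatedNodes.asScala
      .filter(_.getNodeState == NodeState.DECOMMISSIONING)
      .map(_.getNodeId.getHost)
      .filterNot(blacklistedHosts.contains)

    if (newlyDecommissioning.nonEmpty) {
      blacklistedHosts ++= newlyDecommissioning
      // Ask YARN not to allocate new containers on these hosts; containers
      // already running there are left to drain during the grace period.
      amClient.updateBlacklist(
        newlyDecommissioning.toList.asJava,
        java.util.Collections.emptyList[String]())
    }
  }
}
```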
For now, though, YARN-6483 has only been accepted for Hadoop 3.1.0, so I'll
work on [SPARK-21737](https://issues.apache.org/jira/browse/SPARK-21737) to
provide an alternative solution that doesn't rely on cluster manager support.