Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/19267
Just reading through your description here: the YARN pieces aren't in
place yet, so you have an admin-type command to signal Spark that a node is
being decommissioned. But that means someone has to run that command for every
single Spark application running on that cluster, correct? That doesn't seem
very feasible on any relatively large cluster. One big question I have is: are
there enough use cases where the command alone is useful? Otherwise we are
temporarily adding a command that we will have to keep supporting forever (or
at least until the next major release). Or does it make sense to wait for YARN
(or another resource manager) to have full support for this?
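
To make the scale concern concrete, here is a rough sketch of what an operator would be on the hook for if the decommission signal has to be delivered per application rather than once through the resource manager. This is not part of the PR; the application list source, the `spark-admin decommission-node` command name, and its flags are all hypothetical placeholders.

```scala
// Hypothetical sketch only: without resource-manager integration, an operator
// must deliver the decommission signal to every running Spark application
// individually. The names below (listRunningApplications, "spark-admin
// decommission-node") are illustrative, not actual Spark or YARN APIs.
object DecommissionAllApps {
  // Stand-in for querying the cluster for currently running Spark applications,
  // e.g. output of `yarn application -list` parsed elsewhere.
  def listRunningApplications(): Seq[String] =
    Seq("application_1505000000000_0001", "application_1505000000000_0002")

  def main(args: Array[String]): Unit = {
    val host = args.headOption.getOrElse("node-to-decommission.example.com")
    // One invocation per application: the operational cost grows with the
    // number of concurrently running apps, which is the feasibility concern.
    listRunningApplications().foreach { appId =>
      println(s"would run: spark-admin decommission-node --app $appId --host $host")
    }
  }
}
```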
---