GitHub user erikerlandson commented on the issue:
https://github.com/apache/spark/pull/19041
I have been thinking about a different but related [use
case](https://github.com/apache-spark-on-k8s/spark/issues/261): supporting the
ability to operate in dynamic allocation mode without requiring a separate
shuffle service. The motivation is to reduce friction for a Spark driver
using D.A. on a container platform such as Kubernetes, where standing up the
shuffle service adds an extra step that may require cluster-admin
intervention.
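For concreteness, here is a minimal sketch of what that frictionless setup could look like from the driver side, using the shuffle-tracking configuration Spark later shipped for this purpose (`spark.dynamicAllocation.shuffleTracking.enabled`). It only illustrates the use case, not anything this PR implements, and the master URL and app name are placeholders:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object DynAllocNoShuffleServiceSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      // Placeholder k8s master URL (in-cluster API server address).
      .master("k8s://https://kubernetes.default.svc:443")
      .appName("da-without-shuffle-service")
      .config("spark.dynamicAllocation.enabled", "true")
      // No external shuffle service to stand up on the cluster.
      .config("spark.shuffle.service.enabled", "false")
      // Instead, track which executors hold shuffle data so the driver
      // avoids releasing them while that data is still needed.
      .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
      .getOrCreate()

    // A trivial shuffle-producing job to exercise scale-up and scale-down.
    spark.range(0L, 1000000L).groupBy(col("id") % 10).count().show()
    spark.stop()
  }
}
```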
Some of this logic seems to overlap: putting executors into a "draining"
state, and trying to reduce data loss from executor scale-down so that the
application doesn't start to thrash.