GitHub user skonto opened a pull request:

    https://github.com/apache/spark/pull/23136

    [SPARK-25515][K8s] Adds a config option to keep executor pods for debugging

    ## What changes were proposed in this pull request?
    Keeps K8s executor resources present in case of failure or normal termination.
    Introduces a new boolean config option, `spark.kubernetes.deleteExecutors`, with a default value of `true`.
    The idea is to update Spark K8s backend structures but leave the resources 
around.
    The assumption is that, since entries are not removed from the `removedExecutorsCache`, we are immune to updates that refer to executor resources previously removed.
    The only delete operation left untouched is the one in the `doKillExecutors` method. The reason is that right now we don't support [blacklisting](https://issues.apache.org/jira/browse/SPARK-23485) or dynamic allocation with Spark on K8s. We might want to handle both scenarios in the future, although that is more complicated.
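
    For example, to keep executor pods around, the option could be set at submit time. A minimal sketch, where the master URL, container image, main class, and application jar are placeholders; only `spark.kubernetes.deleteExecutors` comes from this patch:

        $ spark-submit \
            --master k8s://https://<k8s-apiserver>:<port> \
            --deploy-mode cluster \
            --conf spark.kubernetes.container.image=<spark-image> \
            --conf spark.kubernetes.deleteExecutors=false \
            --class <main-class> \
            <application-jar>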
    ## How was this patch tested?
    Manually, by running a Spark job and verifying that the executor pods are not deleted.
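
    For instance, once the job finishes, the retained executor pods can be listed with kubectl (assuming the standard `spark-role=executor` label that Spark puts on executor pods):

        $ kubectl get pods -l spark-role=executor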


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/skonto/spark keep_pods

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/23136.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #23136
    
----
commit 5cdca3167e2f8acb84a23f9c64e5c3bc524f04ac
Author: Stavros Kontopoulos <stavros.kontopoulos@...>
Date:   2018-11-25T20:18:42Z

    add config option to keep executor pods

----


---
