[ https://issues.apache.org/jira/browse/SPARK-29771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16968933#comment-16968933 ]

Jackey Lee commented on SPARK-29771:
------------------------------------

This patch mainly targets the scenario where the executor fails to start. 
Executor runtime failures, which are caused by task errors, are controlled 
by spark.executor.maxFailures.

Another example: adding `--conf spark.executor.extraJavaOptions=-Xmse` to 
spark-submit (an invalid JVM option) also triggers this endless executor 
retry.
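
As a concrete sketch (the master URL, container image, and jar path below 
are placeholders, not values from this issue), the crash loop can be 
reproduced with something like:

    # Hypothetical reproduction: -Xmse is not a valid JVM option, so every
    # executor JVM exits at startup and K8S keeps requesting replacement pods.
    bin/spark-submit \
      --master k8s://https://<apiserver-host>:6443 \
      --deploy-mode cluster \
      --class org.apache.spark.examples.SparkPi \
      --conf spark.executor.instances=2 \
      --conf spark.kubernetes.container.image=<spark-image> \
      --conf spark.executor.extraJavaOptions=-Xmse \
      local:///<path-to-spark-examples-jar>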

> Limit executor max failures before failing the application
> ----------------------------------------------------------
>
>                 Key: SPARK-29771
>                 URL: https://issues.apache.org/jira/browse/SPARK-29771
>             Project: Spark
>          Issue Type: Improvement
>          Components: Kubernetes
>    Affects Versions: 3.0.0
>            Reporter: Jackey Lee
>            Priority: Major
>
> At present, K8S scheduling does not limit the number of executor 
> failures, which may cause executors to be retried continuously without 
> the application ever failing.
> A simple example: we add a resource quota on the default namespace. After 
> the driver is started, if the quota is full, executor creation will be 
> retried continuously, resulting in a large amount of accumulated pod 
> information. When many applications run into this situation, they affect 
> the whole K8S cluster.
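
For context, the quota scenario described in the quoted report can be set 
up with a command along these lines (the quota name and limit are 
illustrative):

    # Illustrative only: a pod quota of 1 on the default namespace. The
    # driver pod consumes the quota, so every executor pod creation is
    # rejected by K8S and retried indefinitely.
    kubectl create quota spark-demo-quota --hard=pods=1 --namespace=default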


