dongjoon-hyun commented on code in PR #484:
URL: https://github.com/apache/spark-kubernetes-operator/pull/484#discussion_r2762118961


##########
spark-submission-worker/src/main/java/org/apache/spark/k8s/operator/SparkAppSubmissionWorker.java:
##########
@@ -167,6 +168,11 @@ protected SparkAppDriverConf buildDriverConf(
     effectiveSparkConf.setIfMissing("spark.app.id", appId);
     effectiveSparkConf.setIfMissing("spark.authenticate", "true");
     effectiveSparkConf.setIfMissing("spark.io.encryption.enabled", "true");
+    // Use K8s Garbage Collection instead of explicit API invocations
+    if (applicationSpec.getApplicationTolerations().getResourceRetainPolicy() !=
+        ResourceRetainPolicy.Always) {
+      effectiveSparkConf.setIfMissing("spark.kubernetes.executor.deleteOnTermination", "false");
+    }

Review Comment:
   This effectively switches the default behavior.
   - In the Apache Spark distribution, the `Driver` pod always remains, so the executor pods must be cleaned up explicitly. That is why the default value of `spark.kubernetes.executor.deleteOnTermination` is `true`.
   - In the Apache Spark K8s Operator, the `Driver` pod itself is cleaned up because the `SparkApp` CRD remains as the record. In this case, Kubernetes can garbage-collect the executor pods whose owner (the `Driver` pod) is gone, so this PR aims to eliminate the Spark driver's explicit API invocations (see the sketch below).
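   For context, the Kubernetes garbage collection mentioned above is driven by `ownerReferences` in pod metadata: when the owner object is deleted, the API server cascades the deletion to its dependents. Spark's K8s scheduler backend attaches an owner reference pointing at the driver pod to each executor pod it launches. The following is a minimal sketch of that mechanism using the Fabric8 Kubernetes client; the pod names, namespace, and image are hypothetical, and this illustrates the owner-reference idea rather than the operator's actual code.

   ```java
   import io.fabric8.kubernetes.api.model.OwnerReference;
   import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder;
   import io.fabric8.kubernetes.api.model.Pod;
   import io.fabric8.kubernetes.api.model.PodBuilder;
   import io.fabric8.kubernetes.client.KubernetesClient;
   import io.fabric8.kubernetes.client.KubernetesClientBuilder;

   public class OwnerReferenceSketch {
     public static void main(String[] args) {
       try (KubernetesClient client = new KubernetesClientBuilder().build()) {
         // Hypothetical driver pod; in practice Spark creates it at submission time.
         Pod driver = client.pods().inNamespace("default").withName("my-app-driver").get();

         // An ownerReference pointing at the driver pod. Spark attaches an
         // equivalent reference to each executor pod, which is what lets the
         // K8s garbage collector remove executors once the driver pod is gone,
         // without an explicit delete call from the driver.
         OwnerReference ownedByDriver = new OwnerReferenceBuilder()
             .withApiVersion("v1")
             .withKind("Pod")
             .withName(driver.getMetadata().getName())
             .withUid(driver.getMetadata().getUid())
             .build();

         Pod executor = new PodBuilder()
             .withNewMetadata()
               .withName("my-app-exec-1")
               .addToOwnerReferences(ownedByDriver)
             .endMetadata()
             .withNewSpec()
               .addNewContainer()
                 .withName("executor")
                 .withImage("apache/spark")
               .endContainer()
             .endSpec()
             .build();

         client.pods().inNamespace("default").resource(executor).create();
         // Deleting the driver pod now cascades to this executor via K8s GC,
         // which is why deleteOnTermination can default to "false" here.
       }
     }
   }
   ```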



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
