dongjoon-hyun commented on code in PR #52867:
URL: https://github.com/apache/spark/pull/52867#discussion_r2638184499


##########
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala:
##########
@@ -462,14 +462,25 @@ private[spark] object Config extends Logging {
 
   val KUBERNETES_ALLOCATION_PODS_ALLOCATOR =
     ConfigBuilder("spark.kubernetes.allocation.pods.allocator")
-      .doc("Allocator to use for pods. Possible values are direct (the 
default) and statefulset " +
-        ", or a full class name of a class implementing AbstractPodsAllocator. 
" +
+      .doc("Allocator to use for pods. Possible values are direct (the 
default), statefulset," +
+        " deployment, or a full class name of a class implementing 
AbstractPodsAllocator. " +
         "Future version may add Job or replicaset. This is a developer API and 
may change " +
       "or be removed at anytime.")
       .version("3.3.0")
       .stringConf
       .createWithDefault("direct")
 
+  val KUBERNETES_EXECUTOR_POD_DELETION_COST =
+    ConfigBuilder("spark.kubernetes.executor.podDeletionCost")

Review Comment:
   Do we have a future plan to manage this dynamically from the Apache Spark side, @ForVic?
   
   If this is static, can we reuse `spark.kubernetes.executor.annotation.controller.kubernetes.io/pod-deletion-cost=XXX` instead of `spark.kubernetes.executor.podDeletionCost=XXX`? Or a pod template?
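
   To illustrate the static alternative above: the existing `spark.kubernetes.executor.annotation.[AnnotationName]` mechanism can already stamp an arbitrary annotation such as `controller.kubernetes.io/pod-deletion-cost` onto every executor pod. A minimal sketch, assuming a fixed cost chosen at submit time (the value 100 is an arbitrary example):
   
   ```scala
   import org.apache.spark.SparkConf
   
   // Statically attach the Kubernetes pod-deletion-cost annotation to all executor
   // pods via the existing per-executor annotation config; no new config key needed.
   val conf = new SparkConf()
     .set("spark.kubernetes.executor.annotation.controller.kubernetes.io/pod-deletion-cost", "100")
   ```
   
   A pod template would achieve the same by setting the annotation under `metadata.annotations` in the executor pod template YAML.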


