This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 58fbd7f6b1b0 [SPARK-54173][K8S][FOLLOWUP] Fix `spark.kubernetes.executor.podDeletionCost` config doc
58fbd7f6b1b0 is described below

commit 58fbd7f6b1b0ab6640207114c8dba27a84e04892
Author: Dongjoon Hyun <[email protected]>
AuthorDate: Wed Feb 11 10:43:23 2026 -0800

    [SPARK-54173][K8S][FOLLOWUP] Fix `spark.kubernetes.executor.podDeletionCost` config doc
    
    ### What changes were proposed in this pull request?
    
    This is a follow-up to fix the `spark.kubernetes.executor.podDeletionCost` config doc.
    - #52867
    
    ### Why are the changes needed?
    
    **Apache Spark 4.2.0-preview2**
    ```
    Value to set for the controller.kubernetes.io/pod-deletion-cost annotation when Spark asks a deployment-based allocator to remove executor pods. This helps Kubernetes pick the same pods Spark selected when the deployment scales down.
    This should only be enabled when both ConfigEntry(key=spark.kubernetes.allocation.pods.allocator, defaultValue=direct, doc=Allocator to use for pods. Possible values are direct (the default), statefulset, deployment, or a full class name of a class implementing AbstractPodsAllocator. Future version may add Job or replicaset. This is a developer API and may change or be removed at anytime., public=true, version=3.3.0) is set to deployment, and ConfigEntry(key=spark.dynamicAllocation.en [...]
    ```
    
    **THIS PR**
    ```
    Value to set for the controller.kubernetes.io/pod-deletion-cost annotation when Spark asks a deployment-based allocator to remove executor pods. This helps Kubernetes pick the same pods Spark selected when the deployment scales down.
    This should only be enabled when both spark.kubernetes.allocation.pods.allocator is set to deployment, and spark.dynamicAllocation.enabled is enabled.
    ```
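
    The garbled text came from Scala string interpolation embedding the whole `ConfigEntry` object (its `toString`) instead of its `key`. The sketch below is a minimal, self-contained illustration of that pitfall; `FakeConfigEntry` and `DocInterpolationDemo` are hypothetical names for this example only, not Spark's actual API.
    ```
    // Hypothetical stand-in for a config entry, used only to show the interpolation pitfall.
    final case class FakeConfigEntry(key: String, doc: String) {
      // Like the real entry class, toString dumps the key, doc, and other metadata.
      override def toString: String = s"ConfigEntry(key=$key, doc=$doc, ...)"
    }

    object DocInterpolationDemo {
      def main(args: Array[String]): Unit = {
        val allocator = FakeConfigEntry(
          "spark.kubernetes.allocation.pods.allocator",
          "Allocator to use for pods.")

        // Before the fix: the entry's entire toString leaks into the doc text.
        println(s"... when both $allocator is set to deployment ...")

        // After the fix: only the config key appears, which is what users need to read.
        println(s"... when both ${allocator.key} is set to deployment ...")
      }
    }
    ```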
    
    ### Does this PR introduce _any_ user-facing change?
    
    No, this is a new feature of Spark 4.2.0, which has not been officially released yet.
    
    ### How was this patch tested?
    
    Pass the CIs.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    Generated-by: `Gemini 3 Pro (High)` on `Antigravity`
    
    Closes #54271 from dongjoon-hyun/SPARK-54173.
    
    Authored-by: Dongjoon Hyun <[email protected]>
    Signed-off-by: Dongjoon Hyun <[email protected]>
---
 .../core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala      | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
index 011b05f6542a..5c26dea417ac 100644
--- a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
+++ b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
@@ -499,8 +499,8 @@ private[spark] object Config extends Logging {
       .doc("Value to set for the controller.kubernetes.io/pod-deletion-cost" +
         " annotation when Spark asks a deployment-based allocator to remove 
executor pods. This " +
         "helps Kubernetes pick the same pods Spark selected when the 
deployment scales down." +
-        s" This should only be enabled when both 
$KUBERNETES_ALLOCATION_PODS_ALLOCATOR is set to " +
-        s"deployment, and $DYN_ALLOCATION_ENABLED is enabled.")
+        s" This should only be enabled when both 
${KUBERNETES_ALLOCATION_PODS_ALLOCATOR.key} is " +
+        s"set to deployment, and ${DYN_ALLOCATION_ENABLED.key} is enabled.")
       .version("4.2.0")
       .intConf
       .createOptional


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
