attilapiros commented on a change in pull request #33492:
URL: https://github.com/apache/spark/pull/33492#discussion_r675961263



##########
File path: 
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
##########
@@ -568,6 +568,16 @@ private[spark] object Config extends Logging {
       .checkValue(delay => delay > 0, "delay must be a positive time value")
       .createWithDefaultString("30s")
 
+  val KUBERNETES_MAX_PENDING_PODS =
+    ConfigBuilder("spark.kubernetes.allocation.maxPendingPods")
+      .doc("Maximum number of pending PODs allowed during executor allocation 
for this " +
+        "application. Those newly requested executors which are unknown by 
Kubernetes yet are " +
+        "also counted into this limit as they will change into pending PODs by 
time.")

Review comment:
       I did not intend to change the old behavior. This new limit is global across all the resource profiles within one Spark app because, from the Kubernetes point of view, one global limit is better than limiting each resource profile separately. In the latter case, for example, a limit previously set to N would effectively be multiplied simply by changing the app to use an extra resource profile. Moreover, if some resource profiles are active only in a few stages, the user might have to choose a lower per-profile limit, but later, when those profiles are no longer active, that low limit would not be sufficient and the active profiles could not use the free slots.
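
To make the trade-off concrete, here is a minimal Scala sketch of the global accounting described above. It is not the PR's actual allocator code; all names (`PendingLimitSketch`, `podsToRequest`, `Snapshot`) are hypothetical stand-ins:

```scala
// Minimal sketch: pending pods plus newly requested executors not yet known to
// Kubernetes are summed across ALL resource profiles, then compared against the
// single global limit. Not the actual Spark allocator implementation.
object PendingLimitSketch {
  sealed trait PodState
  case object Pending extends PodState
  case object Running extends PodState

  // Pods already known to Kubernetes, keyed by resource profile id.
  type Snapshot = Map[Int, Seq[PodState]]

  def podsToRequest(
      snapshot: Snapshot,
      newlyRequestedUnknown: Map[Int, Int], // requested, not yet seen by Kubernetes
      maxPendingPods: Int,
      wanted: Map[Int, Int]): Map[Int, Int] = {
    // One global count across every profile, not a per-profile count.
    val pendingNow =
      snapshot.values.map(_.count(_ == Pending)).sum + newlyRequestedUnknown.values.sum
    var freeSlots = math.max(0, maxPendingPods - pendingNow)
    // Whichever profiles currently want executors share the remaining slots,
    // so slots freed by an inactive profile stay usable by the active ones.
    wanted.map { case (profileId, n) =>
      val granted = math.min(n, freeSlots)
      freeSlots -= granted
      profileId -> granted
    }
  }
}
```

With a per-profile limit of N instead, adding one more resource profile would raise the app's total possible pending pods to 2N, which is exactly the multiplication effect mentioned above.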




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


