dongjoon-hyun commented on a change in pull request #31790:
URL: https://github.com/apache/spark/pull/31790#discussion_r659263233
##########
File path: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
##########
@@ -292,6 +292,14 @@ private[spark] object Config extends Logging {
      .checkValue(value => value > 0, "Allocation batch size should be a positive integer")
.createWithDefault(5)
+  val KUBERNETES_MAX_PENDING_PODS =
+    ConfigBuilder("spark.kubernetes.allocation.max.pendingPods")
+      .doc("Maximum number of pending pods allowed during executor allocation " +
+        "for this application.")
+      .version("3.2.0")
+      .intConf
+      .checkValue(value => value > 0, "Maximum number of pending pods should be a positive integer")
+      .createWithDefault(150)
Review comment:
I'd like to propose disabling this feature by default in Apache Spark 3.2.0 to remove the side effect completely. For example, we could use `Int.MaxValue` as the default to disable it.
WDYT, @attilapiros and @holdenk ?
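
A minimal sketch of what that proposed default could look like, reusing the `ConfigBuilder` definition from the diff above. Only the `createWithDefault` value changes; this reflects the proposal in this comment, not merged code:

```scala
val KUBERNETES_MAX_PENDING_PODS =
  ConfigBuilder("spark.kubernetes.allocation.max.pendingPods")
    .doc("Maximum number of pending pods allowed during executor allocation " +
      "for this application.")
    .version("3.2.0")
    .intConf
    .checkValue(value => value > 0,
      "Maximum number of pending pods should be a positive integer")
    // Int.MaxValue still passes the > 0 check but effectively disables the
    // pending-pod cap, making the feature opt-in rather than on by default.
    .createWithDefault(Int.MaxValue)
```

Users who do want a cap could then opt in explicitly, e.g. `--conf spark.kubernetes.allocation.max.pendingPods=150` on `spark-submit`.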