holdenk commented on a change in pull request #33492:
URL: https://github.com/apache/spark/pull/33492#discussion_r675906423
##########
File path: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocator.scala
##########
@@ -293,32 +302,49 @@ private[spark] class ExecutorPodsAllocator(
            .withLabelIn(SPARK_EXECUTOR_ID_LABEL, toDelete.sorted.map(_.toString): _*)
            .delete()
          newlyCreatedExecutors --= newlyCreatedToDelete
-         knownPendingCount -= knownPendingToDelete.size
+         pendingCountForRpId -= pendingToDelete.size
+         notRunningPodCountForRpId -= toDelete.size
        }
      }
    }
-
-    if (newlyCreatedExecutorsForRpId.isEmpty
-      && knownPodCount < targetNum) {
-      requestNewExecutors(targetNum, knownPodCount, applicationId, rpId, k8sKnownPVCNames)
-    }
-    totalPendingCount += knownPendingCount
+    totalPendingCount += pendingCountForRpId
+    totalNotRunningPodCount += notRunningPodCountForRpId
    // The code below just prints debug messages, which are only useful when there's a change
    // in the snapshot state. Since the messages are a little spammy, avoid them when we know
    // there are no useful updates.
    if (log.isDebugEnabled && snapshots.nonEmpty) {
-     val outstanding = knownPendingCount + newlyCreatedExecutorsForRpId.size
+     val outstanding = pendingCountForRpId + newlyCreatedExecutorsForRpId.size
      if (currentRunningCount >= targetNum && !dynamicAllocationEnabled) {
        logDebug(s"Current number of running executors for ResourceProfile Id $rpId is " +
          "equal to the number of requested executors. Not scaling up further.")
      } else {
-       if (outstanding > 0) {
-         logDebug(s"Still waiting for $outstanding executors for ResourceProfile " +
-           s"Id $rpId before requesting more.")
+       if (newlyCreatedExecutorsForRpId.nonEmpty) {
+         logDebug(s"Still waiting for ${newlyCreatedExecutorsForRpId.size} executors for " +
+           s"ResourceProfile Id $rpId before requesting more.")
        }
      }
    }
+   if (newlyCreatedExecutorsForRpId.isEmpty && podCountForRpId < targetNum) {
+     Some(rpId, podCountForRpId, targetNum)
+   } else {
+     // for this resource profile we do not request more PODs
+     None
+   }
+ }
Review comment:
I get that this is the old behaviour, but I would expect us to allocate up to the max pending limit, so requiring newlyCreated to be empty seems strange to me.
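For illustration, a rough sketch of the gating I have in mind; this is not a concrete change request, and the parameter names (including `maxPendingPods` standing in for the new config value) are just stand-ins for the vals in this patch:

```scala
// Rough sketch only, not the actual patch: allow further requests as long as the
// outstanding (pending + newly created) pod count stays under the configured cap,
// instead of requiring newlyCreated to be empty for the resource profile.
def shouldRequestMore(
    pendingCountForRpId: Int,
    newlyCreatedForRpId: Int,
    podCountForRpId: Int,
    targetNum: Int,
    maxPendingPods: Int): Boolean = {
  val outstanding = pendingCountForRpId + newlyCreatedForRpId
  outstanding < maxPendingPods && podCountForRpId < targetNum
}

// e.g. shouldRequestMore(pendingCountForRpId = 3, newlyCreatedForRpId = 2,
//      podCountForRpId = 5, targetNum = 10, maxPendingPods = 8) == true
```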
##########
File path: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
##########
@@ -568,6 +568,16 @@ private[spark] object Config extends Logging {
.checkValue(delay => delay > 0, "delay must be a positive time value")
.createWithDefaultString("30s")
+ val KUBERNETES_MAX_PENDING_PODS =
+ ConfigBuilder("spark.kubernetes.allocation.maxPendingPods")
+ .doc("Maximum number of pending PODs allowed during executor allocation
for this " +
+ "application. Those newly requested executors which are unknown by
Kubernetes yet are " +
+ "also counted into this limit as they will change into pending PODs by
time.")
Review comment:
Maybe include a note that if there are pending pods for a given resource profile we won't allocate more pods for that resource profile, since on my first reading of this I assumed this flag changed that behaviour.
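For reference, roughly how a user would set this cap from application code; the value 50 is purely illustrative and the `SparkConf` usage is shown only as an example, not as part of the requested doc change:

```scala
import org.apache.spark.SparkConf

// Illustrative only: 50 is an arbitrary example value, not a recommended default.
val conf = new SparkConf()
  .set("spark.kubernetes.allocation.maxPendingPods", "50")
```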
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]