pan3793 commented on code in PR #53840:
URL: https://github.com/apache/spark/pull/53840#discussion_r2785891881


##########
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocator.scala:
##########
@@ -394,6 +401,9 @@ class ExecutorPodsAllocator(
     // update method when not needed. PODs known by the scheduler backend are not counted here as
     // they are considered running PODs and they should not block upscaling.
     numOutstandingPods.set(totalPendingCount + newlyCreatedExecutors.size)
+
+    // Check if we've exceeded the failure threshold after this allocation cycle
+    checkFailureThreshold(totalFailedPodCreations.get())

Review Comment:
   As I said previously, I prefer not to add another failure-threshold check mechanism outside the current `failureTracker`.
   
   The simplest way to solve your current problem is just to propagate the pod creation failure to `lifecycleManager` via `registerPodCreationFailure()`, as you did.


