tgravescs commented on a change in pull request #33941:
URL: https://github.com/apache/spark/pull/33941#discussion_r719490485



##########
File path: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala
##########
@@ -518,11 +551,32 @@ private[spark] class ExecutorAllocationManager(
     numExecutorsTarget += numExecutorsToAddPerResourceProfileId(rpId)
     // Ensure that our target doesn't exceed what we need at the present 
moment:
     numExecutorsTarget = math.min(numExecutorsTarget, maxNumExecutorsNeeded)
-    // Ensure that our target fits within configured bounds:
-    numExecutorsTarget = math.max(math.min(numExecutorsTarget, 
maxNumExecutors), minNumExecutors)
+    numExecutorsTarget = if (!reuseExecutors) {

Review comment:
        I need to look at this in more detail, perhaps, to figure out how to do it; I don't really
like having to add conditionals in so many places
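
One way to avoid scattering conditionals, sketched below with hypothetical names (this is not the PR's actual code), is to fold the target-bounding rules into a single helper so any mode-specific branching is decided in one place rather than at each call site:

```scala
// Hypothetical refactoring sketch: centralize the executor-target clamping
// logic behind one helper. Names (TargetBounds, clampTarget) are illustrative
// and do not appear in ExecutorAllocationManager.
object TargetBounds {
  /** Clamp a desired executor target to current need and configured bounds. */
  def clampTarget(
      target: Int,
      maxNeeded: Int,
      minExecutors: Int,
      maxExecutors: Int): Int = {
    // Never exceed the number of executors needed at the present moment...
    val capped = math.min(target, maxNeeded)
    // ...and keep the result within the configured [min, max] bounds.
    math.max(math.min(capped, maxExecutors), minExecutors)
  }
}
```

Call sites would then invoke `clampTarget` once, and a flag such as `reuseExecutors` could be handled inside the helper instead of via conditionals at every update path.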




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


