attilapiros commented on a change in pull request #33492:
URL: https://github.com/apache/spark/pull/33492#discussion_r679020395



##########
File path: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocator.scala
##########
@@ -347,14 +373,10 @@ private[spark] class ExecutorPodsAllocator(
   }
 
   private def requestNewExecutors(
-      expected: Int,
-      running: Int,
+      numExecutorsToAllocate: Int,
       applicationId: String,
       resourceProfileId: Int,
       pvcsInUse: Seq[String]): Unit = {
-    val numExecutorsToAllocate = math.min(expected - running, podAllocationSize)
-    logInfo(s"Going to request $numExecutorsToAllocate executors from Kubernetes for " +
-      s"ResourceProfile Id: $resourceProfileId, target: $expected running: $running.")

Review comment:
   My reason for changing it was to avoid passing more variables to that method.
   
   So, from this:
   
   
https://github.com/apache/spark/blob/adc512d4e1837713713fefc6f64af3b0c6c8cdc8/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsAllocator.scala#L343-L346
   
   we would need to pass:
   
   - `targetNum`
   - `podCountForRpId`
   - `sharedSlotFromPendingPods`
   
   and they are only needed for the log line and to calculate `numExecutorsToAllocate`.
   
   With the current solution, `numExecutorsToAllocate` is enough, and when we extend the logic to take further allocation limits into account, `numExecutorsToAllocate` will still be enough.
   
   WDYT?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


