abellina commented on a change in pull request #26078: [SPARK-29151][CORE]
Support fractional resources for task resource scheduling
URL: https://github.com/apache/spark/pull/26078#discussion_r341697161
##########
File path: core/src/main/scala/org/apache/spark/SparkContext.scala
##########
@@ -2790,7 +2790,10 @@ object SparkContext extends Logging {
s" = ${taskReq.amount}")
}
// Compare and update the max slots each executor can provide.
- val resourceNumSlots = execAmount / taskReq.amount
+ // If the configured amount per task was < 1.0, a task is subdividing
+ // executor resources. If the amount per task was > 1.0, the task wants
+ // multiple executor resources.
+ val resourceNumSlots =
+   Math.floor(execAmount * taskReq.numParts / taskReq.amount).toInt
Review comment:
I think that you could go either way. If you feel strongly that this is
confusing, I can try to change the case class. I like the formula here because
it shows you explicitly how slots are computed.
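
For illustration, here is a standalone sketch of the slot formula from the diff. The case class and the fraction-to-parts encoding are assumptions for this sketch (mirroring the names in the diff, not the actual Spark case class): a fractional per-task request r < 1.0 is assumed to be stored as amount = 1 with numParts = 1/r, and an integral request as-is with numParts = 1.

```scala
// Hypothetical mirror of the names used in the diff above; the real
// case class in the PR may differ.
case class TaskReq(amount: Long, numParts: Int)

// Slots = floor(execAmount * numParts / amount), as in the diff.
def numSlots(execAmount: Long, taskReq: TaskReq): Int =
  Math.floor(execAmount.toDouble * taskReq.numParts / taskReq.amount).toInt

// A task requesting 0.5 of a resource (encoded as amount = 1, numParts = 2)
// subdivides each executor resource: 2 addresses yield 4 slots.
val fractional = numSlots(2, TaskReq(1, 2)) // 4

// A task requesting 2 whole resources (amount = 2, numParts = 1)
// consumes multiple executor resources: 4 addresses yield 2 slots.
val whole = numSlots(4, TaskReq(2, 1)) // 2
```

Under this encoding the single formula covers both directions: numParts scales slots up for fractional requests, while amount scales them down for multi-resource requests.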
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]