tgravescs commented on a change in pull request #27138: [SPARK-30448][Core] accelerator aware scheduling enforce cores as limiting resource
URL: https://github.com/apache/spark/pull/27138#discussion_r364445688
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/SparkContext.scala
 ##########
 @@ -2818,12 +2825,28 @@ object SparkContext extends Logging {
         // multiple executor resources.
        val resourceNumSlots = Math.floor(execAmount * taskReq.numParts / taskReq.amount).toInt
         if (resourceNumSlots < numSlots) {
+          if (shouldCheckExecCores) {
+            throw new IllegalArgumentException("The number of slots on an executor has to be " +
 
 Review comment:
   I think it's safer to always require it, just in case there are other places in the code that use cores and task cpus to determine slots. I know that while doing the stage-level scheduling work I found a bunch of places that did this, but I would have to go back through to check whether they applied only during dynamic allocation.
   Actually, one example of this is #27126.
   
