jiangxb1987 commented on a change in pull request #24374: [SPARK-27366][CORE] Support GPU Resources in Spark job scheduling
URL: https://github.com/apache/spark/pull/24374#discussion_r288819590
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
 ##########
 @@ -263,7 +272,7 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val rpcEnv: Rp
         val workOffers = activeExecutors.map {
           case (id, executorData) =>
             new WorkerOffer(id, executorData.executorHost, executorData.freeCores,
-              Some(executorData.executorAddress.hostPort))
+              Some(executorData.executorAddress.hostPort), executorData.availableResources.toMap)
 
 Review comment:
   Actually this works because `executorData.availableResources.toMap` just converts the `Map[String, ExecutorResourceInfo]` into an immutable map, so we are still operating on the original `ExecutorResourceInfo` objects. I updated a test case in `CoarseGrainedSchedulerBackendSuite` to demonstrate this.
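   A minimal sketch of the aliasing point, assuming a hypothetical `ResourceInfo` stand-in for `ExecutorResourceInfo` (names here are illustrative, not the actual class): `toMap` copies only the map entries, so the immutable view and the original mutable map still reference the same value objects.
   
   ```scala
   import scala.collection.mutable
   
   object ToMapAliasingDemo {
     // Hypothetical stand-in for ExecutorResourceInfo: a mutable holder of free addresses.
     class ResourceInfo(val name: String) {
       val availableAddrs: mutable.Buffer[String] = mutable.Buffer("0", "1")
     }
   
     def main(args: Array[String]): Unit = {
       // Mutable map owned by the scheduler backend, keyed by resource name.
       val executorResources = mutable.HashMap("gpu" -> new ResourceInfo("gpu"))
   
       // toMap copies the entries into an immutable Map, but the values are the
       // very same ResourceInfo instances; no deep copy is made.
       val offerView: Map[String, ResourceInfo] = executorResources.toMap
   
       // A mutation made through the immutable view is observable through the
       // original map, because both views point to the same object.
       offerView("gpu").availableAddrs -= "0"
       assert(executorResources("gpu").availableAddrs == mutable.Buffer("1"))
     }
   }
   ```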
   
   Another thing: inside `CoarseGrainedSchedulerBackend.launchTasks()` we may skip launching a task when the size of the serialized task exceeds the maximum allowed RPC message size. In that case we should release the resource addresses linked to the task directly, otherwise those addresses would never be returned to the executor. Here I prefer to release the `reservedAddresses` rather than the `allocatedAddresses`, because the addresses were never actually allocated. I added another test case in `CoarseGrainedSchedulerBackendSuite` to cover this.
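   A rough sketch of that pattern, assuming hypothetical `reserve`/`releaseReserved` helpers on the resource-info class (the real PR's method names and bookkeeping may differ): when the serialized task is larger than the RPC limit, the launch is skipped and the reserved addresses go straight back to the free pool without ever being marked as allocated.
   
   ```scala
   import scala.collection.mutable
   
   object OversizedTaskSketch {
     // Illustrative resource-info class; not the actual ExecutorResourceInfo API.
     class ExecutorResourceInfo(val name: String, initial: Seq[String]) {
       private val available = mutable.Buffer(initial: _*)
       private val reserved = mutable.Buffer[String]()
   
       // Move up to n free addresses into the reserved set and return them.
       def reserve(n: Int): Seq[String] = {
         val addrs = available.take(n).toList
         available --= addrs
         reserved ++= addrs
         addrs
       }
   
       // Return reserved addresses to the free pool; they never become "allocated".
       def releaseReserved(addrs: Seq[String]): Unit = {
         reserved --= addrs
         available ++= addrs
       }
   
       def availableAddrs: Seq[String] = available.toList
     }
   
     def main(args: Array[String]): Unit = {
       val gpu = new ExecutorResourceInfo("gpu", Seq("0", "1"))
       val maxRpcMessageSize = 128 * 1024 * 1024
   
       val taskAddrs = gpu.reserve(1)              // addresses reserved for the task
       val serializedTaskSize = 256 * 1024 * 1024  // pretend the serialized task is too big
   
       if (serializedTaskSize > maxRpcMessageSize) {
         // The launch is skipped, so give the reserved addresses back immediately;
         // otherwise they would never be returned to the executor.
         gpu.releaseReserved(taskAddrs)
       }
       assert(gpu.availableAddrs.sorted == Seq("0", "1"))
     }
   }
   ```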

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]