tgravescs commented on a change in pull request #33941:
URL: https://github.com/apache/spark/pull/33941#discussion_r719482778



##########
File path: core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
##########
@@ -384,9 +386,18 @@ private[spark] class TaskSchedulerImpl(
       val execId = shuffledOffers(i).executorId
       val host = shuffledOffers(i).host
       val taskSetRpID = taskSet.taskSet.resourceProfileId
-      // make the resource profile id a hard requirement for now - ie only put tasksets
-      // on executors where resource profile exactly matches.
-      if (taskSetRpID == shuffledOffers(i).resourceProfileId) {
+
+      val assignTasks = if (reuseExecutors) {
+        val compatibleProfiles = sc.resourceProfileManager.getOtherCompatibleProfileIds(taskSetRpID)

Review comment:
       This seems fairly expensive to be calculating on every loop through the scheduler. We should find a way to compute and cache the result, and only recompute it when a new resource profile is added; see the sketch below.
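
       A minimal sketch of the caching I have in mind (the class and method names here are hypothetical; in practice this would live inside `ResourceProfileManager` and be invalidated from `addResourceProfile`):

```scala
import scala.collection.mutable

// Hypothetical cache of compatible profile ids per resource profile id.
// The expensive compatibility computation runs at most once per profile;
// the whole cache is dropped whenever a new profile is registered.
class CompatibleProfileIdCache {
  private val cache = mutable.Map.empty[Int, Seq[Int]]

  // Call this from addResourceProfile(): a new profile can change the
  // compatibility set of every existing profile, so clear everything.
  def invalidate(): Unit = synchronized { cache.clear() }

  // `compute` is the existing expensive lookup (e.g.
  // getOtherCompatibleProfileIds); it only runs on a cache miss.
  def getOrCompute(rpId: Int)(compute: Int => Seq[Int]): Seq[Int] =
    synchronized { cache.getOrElseUpdate(rpId, compute(rpId)) }
}
```

       With something like that in place, the scheduler loop would call `getOrCompute(taskSetRpID)(...)` instead of recomputing the compatible set on every resource offer.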

##########
File path: core/src/main/scala/org/apache/spark/internal/config/package.scala
##########
@@ -572,6 +572,12 @@ package object config {
       .booleanConf
       .createWithDefault(false)
 
+  private[spark] val DYN_ALLOCATION_REUSE_EXECUTORS =
+    ConfigBuilder("spark.dynamicAllocation.reuseExecutors")

Review comment:
       I think this is too generic and could confuse users; I would prefer something in the name that specifically references resource profiles. Really, I would prefer it to be a scheduler config, like:
   spark.scheduler.reuseCompatibleExecutors
       The version below also needs updating. A sketch of the renamed config follows.
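
       For illustration, a sketch of the renamed config as it might appear in config/package.scala (the doc text and version string are placeholders I made up, not the PR's actual values):

```scala
  private[spark] val SCHEDULER_REUSE_COMPATIBLE_EXECUTORS =
    ConfigBuilder("spark.scheduler.reuseCompatibleExecutors")
      .doc("Whether a task set may be scheduled on executors whose resource " +
        "profile is compatible with, but not identical to, its own profile.")
      .version("3.3.0") // placeholder: update to the release this actually lands in
      .booleanConf
      .createWithDefault(false)
```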




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


