ivoson commented on code in PR #37268:
URL: https://github.com/apache/spark/pull/37268#discussion_r944453072


##########
core/src/main/scala/org/apache/spark/resource/ResourceProfile.scala:
##########
@@ -76,6 +76,11 @@ class ResourceProfile(
     executorResources.asJava
   }
 
+  /**
+   * The resource profile id of the target executor, used for scheduling.
+   */
+  def targetExecutorRpId: Int = id

Review Comment:
   Thanks @Ngone51 
   I think that's pretty much the idea of sharing/re-using executors under that 
policy: share any executor that can fulfill the task's resource requests. The 
open question is whether we want to let users specify the re-use policy.
   
   How about we narrow down the scenario in this PR: schedule tasks with 
`TaskResourceProfile` onto executors with `DEFAULT_RESOURCE_PROFILE` directly 
when dynamic allocation is off (without checking the rpId).
   Even though the idea is much like reusing compatible executors, it's much 
simpler in this case. And we can still leave 
[SPARK-36699](https://issues.apache.org/jira/browse/SPARK-36699) to handle the 
API change, so we don't introduce any API change in this PR.
   
   What do you think? @tgravescs @Ngone51 
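   The narrowed rule proposed above could be sketched roughly as follows. This 
is a simplified illustration, not the actual Spark scheduler code: the object, 
class, and method names (`SchedulingSketch`, `canSchedule`, `DefaultProfile`) 
are hypothetical, and only `DEFAULT_RESOURCE_PROFILE`'s id of 0 mirrors Spark's 
convention.

   ```scala
   // Hypothetical sketch of the narrowed scheduling rule discussed in this PR:
   // when dynamic allocation is off, a task carrying a TaskResourceProfile may
   // be scheduled onto an executor holding the default profile, without
   // requiring the rpIds to match.
   object SchedulingSketch {
     val DEFAULT_RESOURCE_PROFILE_ID = 0

     sealed trait Profile { def id: Int }
     case class DefaultProfile(id: Int = DEFAULT_RESOURCE_PROFILE_ID) extends Profile
     case class TaskResourceProfile(id: Int) extends Profile

     def canSchedule(
         taskProfile: Profile,
         executorProfileId: Int,
         dynamicAllocationEnabled: Boolean): Boolean = {
       taskProfile match {
         // Narrowed case: with dynamic allocation off, tasks with a
         // TaskResourceProfile can run on default-profile executors.
         case TaskResourceProfile(_) if !dynamicAllocationEnabled =>
           executorProfileId == DEFAULT_RESOURCE_PROFILE_ID
         // Otherwise fall back to strict rpId matching.
         case p =>
           executorProfileId == p.id
       }
     }
   }
   ```

   With this shape, the general "reuse any compatible executor" policy can 
still be introduced later under SPARK-36699 without changing the API here.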



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

