Ngone51 commented on code in PR #37268:
URL: https://github.com/apache/spark/pull/37268#discussion_r940266667
##########
core/src/main/scala/org/apache/spark/resource/ResourceProfile.scala:
##########
@@ -76,6 +76,11 @@ class ResourceProfile(
executorResources.asJava
}
+ /**
+ * The target executor's resource profile id, used for scheduling.
+ */
+ def targetExecutorRpId: Int = id
Review Comment:
I don't think we distinguish an actual task RP id from an executor RP id for
now; they're all still just resource profile ids.
Would overriding `_id` as `_id=DEFAULT_RESOURCE_PROFILE_ID` in
`TaskResourceProfile` help get rid of this API?
That way, a task is still limited to one executor resource profile, but that
doesn't mean dynamic allocation has to be turned off, right?
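To illustrate the suggestion, here is a hedged, self-contained sketch: the names `ResourceProfile`, `TaskResourceProfile`, and `DEFAULT_RESOURCE_PROFILE_ID` mirror the identifiers in the PR discussion, but the class bodies are simplified stand-ins, not the actual Spark implementation.

```scala
// Simplified stand-in for Spark's ResourceProfile: the id doubles as the
// key the scheduler uses to match tasks to executors.
object ResourceProfile {
  val DEFAULT_RESOURCE_PROFILE_ID: Int = 0
}

class ResourceProfile(val id: Int)

// The reviewer's idea: if TaskResourceProfile pins its id to the default
// executor profile id, the scheduler can keep matching tasks to executors
// by resource profile id alone, and a separate targetExecutorRpId accessor
// becomes unnecessary.
class TaskResourceProfile
  extends ResourceProfile(ResourceProfile.DEFAULT_RESOURCE_PROFILE_ID)
```

Under this sketch, `new TaskResourceProfile().id` equals `DEFAULT_RESOURCE_PROFILE_ID`, so any code that scheduled by `id` before would place such tasks on default-profile executors.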
If we really want to achieve "match on any number of compatible executors"
(which I think is a good direction), I think we need to completely separate
the resource profile id into a task resource requirement id and an executor
resource requirement id.
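A hedged sketch of what "separating the ids completely" could look like; the type and field names here (`SplitResourceProfile`, `taskReqId`, `executorReqId`) are illustrative assumptions, not identifiers from the Spark codebase.

```scala
// Illustrative only: a profile that carries independent ids for the task's
// resource requirements and the executor's resource requirements.
case class SplitResourceProfile(taskReqId: Int, executorReqId: Int)

// Scheduling could then match on executorReqId alone, so one task-side
// requirement can run on any executor profile that satisfies it.
def compatible(task: SplitResourceProfile, executorRpId: Int): Boolean =
  task.executorReqId == executorRpId
```

This would decouple "what the task needs" from "which executors qualify", at the cost of tracking two ids instead of one.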
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]