viirya commented on pull request #32136:
URL: https://github.com/apache/spark/pull/32136#issuecomment-857298601
> Could you post the code snapshot?
E.g.,
```scala
class ResourceProfileManager {
  private[spark] def isSupported(rp: ResourceProfile): Boolean = {
    ...
    val YarnOrK8sNotDynAllocAndNotDefaultProfile =
      isNotDefaultProfile && (isYarn || isK8s) && !dynamicEnabled // <= Remove it?
    ...
  }
}
```
> Add a new type of task location, e.g., StateStoreTaskLocation(host, executorId, StateStoreProviderId), and let BaseStateStoreRDD.getPreferredLocations return it as a string. Then, the TaskSetManager could establish the “mapping” while building the pending task list:
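If I read the proposal right, it would look roughly like the sketch below. `StateStoreTaskLocation`, the string encoding, and the demo names are all hypothetical (nothing like this exists in Spark today), and the real wiring into `getPreferredLocations`/`TaskSetManager` is omitted:

```scala
import java.util.UUID

// Hypothetical sketch only: StateStoreTaskLocation is not an existing Spark class,
// and the string format is made up for illustration. The idea is that
// getPreferredLocations would return the encoded string, and the scheduler side
// would parse it back while building the pending task list.
case class StateStoreTaskLocation(host: String, executorId: String, providerId: UUID) {
  // Encoded form a getPreferredLocations implementation could return.
  override def toString: String = s"state_store_${host}_${executorId}_$providerId"
}

object StateStoreTaskLocation {
  private val Pattern = "state_store_(.+)_(.+)_(.+)".r

  // Parse the encoded string back into its parts (e.g. in the TaskSetManager),
  // recovering the executor id <-> state store provider association.
  def parse(str: String): Option[StateStoreTaskLocation] = str match {
    case Pattern(host, execId, provider) =>
      Some(StateStoreTaskLocation(host, execId, UUID.fromString(provider)))
    case _ => None
  }
}

object StateStoreTaskLocationDemo extends App {
  val loc = StateStoreTaskLocation("host-1", "3", UUID.randomUUID())
  println(loc)                                                          // encoded preferred location
  println(StateStoreTaskLocation.parse(loc.toString).map(_.executorId)) // Some(3)
}
```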
Isn't the mapping still executor id <-> statestore? The executor id could change due to executor loss. A more robust mapping, e.g. for our use case, might be PVC id <-> statestore.
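Roughly what I have in mind, keying on a stable PVC id instead of the executor id; all names here are made up just to illustrate the idea, not an actual implementation:

```scala
import scala.collection.mutable

// Hypothetical key for a state store provider (operator + partition).
final case class StateStoreProviderKey(operatorId: Long, partitionId: Int)

// Sketch of a PVC-keyed mapping: the pvcId -> provider association is stable,
// while the executorId -> pvcId binding can change as executors come and go.
class PvcStateStoreTracker {
  // pvcId -> state store providers whose data lives on that volume
  private val pvcToProviders = mutable.Map.empty[String, mutable.Set[StateStoreProviderKey]]
  // executorId -> pvcId currently mounted by that executor
  private val executorToPvc = mutable.Map.empty[String, String]

  def registerExecutor(executorId: String, pvcId: String): Unit =
    executorToPvc(executorId) = pvcId

  // Only the executor binding is dropped on executor loss; the PVC -> provider
  // mapping survives and is re-resolved when another executor mounts the same PVC.
  def removeExecutor(executorId: String): Unit =
    executorToPvc -= executorId

  def recordProvider(pvcId: String, key: StateStoreProviderKey): Unit =
    pvcToProviders.getOrElseUpdate(pvcId, mutable.Set.empty) += key

  // Preferred executor for a provider: whichever live executor currently mounts
  // the PVC that holds its state.
  def preferredExecutor(key: StateStoreProviderKey): Option[String] =
    for {
      (pvcId, _)  <- pvcToProviders.find(_._2.contains(key))
      (execId, _) <- executorToPvc.find(_._2 == pvcId)
    } yield execId
}

object PvcTrackerDemo extends App {
  val tracker = new PvcStateStoreTracker
  val key = StateStoreProviderKey(operatorId = 0L, partitionId = 7)
  tracker.recordProvider("pvc-abc", key)
  tracker.registerExecutor("1", "pvc-abc")
  println(tracker.preferredExecutor(key)) // Some(1)
  tracker.removeExecutor("1")
  tracker.registerExecutor("5", "pvc-abc") // a new executor mounts the same PVC
  println(tracker.preferredExecutor(key)) // Some(5)
}
```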
> I agree with this, it's a matter of coming up with the right design to solve the problem and possibly others (in the case of plugin). If we discuss alternatives that become too complex we should drop them. But we should have the discussion like we are.
Yes, agreed. I appreciate the discussion.