Ngone51 commented on pull request #32136:
URL: https://github.com/apache/spark/pull/32136#issuecomment-846713976


   > If I understand stage-level scheduling correctly, you still need to 
specify "all" resources needed for "all" tasks in StateRDD; that may block Spark 
from scheduling when some resources are missing (like a lost executor with a 
PVC), so I'm wondering how task-level scheduling would work as intended. After 
that, locality is the only thing left to deal with, and since it's not an 
enforcement we're back to the original problem.
   
   That's true. I think we can extend `ResourceProfile.defaultProfile` by adding 
the state store request to it.
   
   And we may only need to add the state store request at the task level (not the 
executor level), so the executor doesn't have to load the state store at launch 
time when dynamic allocation is used.
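   Just to illustrate the direction (not this PR's implementation): with the 
existing stage-level scheduling API, a task-only request is expressed through 
`TaskResourceRequests` with no matching `ExecutorResourceRequests`. The 
`"stateStore"` resource name below is a hypothetical placeholder; stock Spark 
doesn't recognize it today, which is exactly the gap the 
`ResourceProfile.defaultProfile` extension would fill:
   
   ```scala
   import org.apache.spark.resource.{ResourceProfileBuilder, TaskResourceRequests}
   
   // Hypothetical: "stateStore" is a placeholder resource name; stock Spark
   // does not recognize it today, which is the gap this proposal would fill.
   val taskReqs = new TaskResourceRequests().resource("stateStore", 1.0)
   
   // Task-level requirement only, with no executor-level counterpart, so an
   // executor started under dynamic allocation wouldn't have to load the state
   // store at launch time.
   val profile = new ResourceProfileBuilder().require(taskReqs).build
   
   // In spark-shell (assuming a cluster manager that supports stage-level
   // scheduling with dynamic allocation), the profile would be attached to the
   // stateful RDD so only its tasks carry the state store request:
   // stateRdd.withResources(profile)
   ```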

