cloud-fan commented on pull request #32136:
URL: https://github.com/apache/spark/pull/32136#issuecomment-818723560


   > to avoid Spark schedule streaming tasks which use state store (let me call 
them stateful tasks) to arbitrary executors.
   
   I don't think we can guarantee it. It's best effort: tasks should be 
able to run on any executor, though tasks can have preferred executors 
(locality). Otherwise, we'd need to revisit many design decisions, like how to 
avoid infinite waits, how to auto-scale, etc.
   
   > current locality seems a hacky approach as we can just blindly assign 
stateful tasks to executors evenly.
   
   Can you elaborate? If it's a problem with delay scheduling, let's fix that 
instead.
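
   To make the "best effort with a bounded wait" idea concrete, here is a toy 
sketch (hypothetical; not Spark's actual scheduler code) of delay scheduling: 
a task waits a few rounds for one of its preferred executors, then falls back 
to any free executor, so locality is honored when possible but no task waits 
forever.

   ```python
   # Hypothetical sketch of best-effort locality via delay scheduling.
   # `task_prefs` is the task's preferred executor set; `free_per_round`
   # lists which executors are free in each scheduling round.
   def schedule(task_prefs, free_per_round, max_delay_rounds=2):
       """Return (executor, round) chosen for a task with locality prefs."""
       for rnd, free in enumerate(free_per_round):
           local = [e for e in free if e in task_prefs]
           if local:
               return local[0], rnd   # locality preference satisfied
           if rnd >= max_delay_rounds and free:
               return free[0], rnd    # delay budget spent: run anywhere
       raise RuntimeError("no executor ever became free")

   # A task preferring "e1" waits while only "e2" is free, then falls
   # back to "e2" once the delay budget is exhausted.
   print(schedule({"e1"}, [["e2"], ["e2"], ["e2"], ["e2"]]))  # ('e2', 2)
   ```

   The point of the bounded `max_delay_rounds` is exactly the "avoid infinite 
wait" concern above: locality stays a preference, never a hard constraint.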


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


