tgravescs commented on pull request #28656:
URL: https://github.com/apache/spark/pull/28656#issuecomment-634790789


   Right — when there are no tasks yet, the level gets set to ANY, but that 
wasn't changed here; it was like that before (as were the locality changes). 
And when an executor was added, it never reset the level back to the most 
local one; it just called recomputeLocality again, which would have left it 
at ANY as well.
   I'm assuming the difference here is that previously it would reset as soon 
as any task got assigned at a higher locality level. Note that wasn't 
necessarily all the way back to index 0, though.
   
   The issue I see with resetting it on every recomputeLocality is when you 
have dynamic allocation and are adding a bunch of executors. You essentially 
revert to something like the old behavior, where you could be delaying tasks 
instead of using the executors you already have. Those tasks end up delayed 
longer than they should be, and the executor resources you have sit idle.
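   The two behaviors being compared can be sketched roughly as follows. This 
is a hypothetical Python model, not Spark's actual TaskSetManager code; the 
class and method names are made up for illustration, only the level names 
match Spark's TaskLocality:

```python
# Coarsest-to-finest order reversed: index 0 is the most local level.
LEVELS = ["PROCESS_LOCAL", "NODE_LOCAL", "RACK_LOCAL", "ANY"]

class LocalityTracker:
    """Toy model of delay-scheduling locality state (not Spark's API)."""

    def __init__(self):
        self.valid = ["ANY"]   # no tasks yet, so only ANY is valid
        self.index = 0         # currently pointing at ANY

    def current(self):
        return self.valid[self.index]

    def recompute(self, valid, reset=False):
        """Recompute valid levels, e.g. after an executor is added."""
        previous = self.current()
        self.valid = valid
        if reset:
            # Resetting on every recompute jumps all the way back to the
            # most local level, re-introducing the delay even though
            # executors already exist and could run tasks now.
            self.index = 0
        else:
            # Keeping the previous level: if we were already at ANY,
            # adding an executor leaves us at ANY.
            self.index = (self.valid.index(previous)
                          if previous in self.valid
                          else len(self.valid) - 1)

t = LocalityTracker()             # starts at ANY with no tasks
t.recompute(LEVELS, reset=False)  # executor added, no reset
print(t.current())                # -> ANY (stays at the coarse level)

t2 = LocalityTracker()
t2.recompute(LEVELS, reset=True)  # executor added, with reset
print(t2.current())               # -> PROCESS_LOCAL (delay re-applied)
```

   The trade-off in the model mirrors the comment: keeping the level avoids 
re-imposing the locality wait on every added executor, while resetting it can 
stall runnable tasks even though resources are available.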
   
   Let me look into this a bit more.

