viirya edited a comment on pull request #32136: URL: https://github.com/apache/spark/pull/32136#issuecomment-819661211
> I think the scheduler already distributes tasks evenly when there's no locality preference as we'll shuffle the executors before scheduling:
>
> Doesn't it work?

For the code snippet, doesn't it depend on whether all executors are available at the moment the offers are made? It seems unreliable due to something like a race condition. For example, when running an SS job with state stores, it is easy to see that the initial tasks are scheduled to only a part of all executors.
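To make the concern concrete, here is a minimal standalone sketch (not Spark's actual `TaskSchedulerImpl` code; the `Offer` case class, `assign` helper, and round-robin assignment are made up for illustration). Shuffling the offers only spreads tasks across the executors that have already registered when the offers are made, so if only a subset has come up, the initial tasks concentrate on that subset:

```scala
import scala.util.Random

object ShuffledOffersSketch {
  // Hypothetical stand-in for a resource offer from one executor.
  case class Offer(executorId: String, cores: Int)

  // Shuffle the offers, then assign tasks round-robin. The distribution is
  // even only across the executors present in `offers` at this moment.
  def assign(tasks: Seq[Int], offers: Seq[Offer]): Map[String, Seq[Int]] = {
    val shuffled = Random.shuffle(offers)
    tasks.zipWithIndex
      .map { case (task, i) => shuffled(i % shuffled.size).executorId -> task }
      .groupBy(_._1)
      .map { case (exec, pairs) => exec -> pairs.map(_._2) }
  }

  def main(args: Array[String]): Unit = {
    val tasks = 1 to 8
    // If only 2 of, say, 4 executors have registered when the first offers
    // arrive, all initial tasks land on those 2, regardless of the shuffle.
    val earlyOffers = Seq(Offer("exec-1", 4), Offer("exec-2", 4))
    println(assign(tasks, earlyOffers))
  }
}
```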
