GitHub user 10110346 opened a pull request: https://github.com/apache/spark/pull/20690
[SPARK-23532][Standalone] Improve data locality when launching new executors for dynamic allocation

## What changes were proposed in this pull request?

Currently, Spark on YARN supports better data locality by considering the preferred locations of pending tasks when dynamic allocation is enabled; refer to https://issues.apache.org/jira/browse/SPARK-4352. Mesos also supports data locality; refer to https://issues.apache.org/jira/browse/SPARK-16944. It would be better if Standalone mode also supported this feature.

## How was this patch tested?

Added a unit test, and manual testing on HDFS.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/10110346/spark executorlocality

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/20690.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #20690

----

commit f7efb22ddea3dc8eeccc833086d5a82cbce7e530
Author: liuxian <liu.xian3@...>
Date: 2018-02-28T07:33:44Z

    fix

----
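For context on the technique: the locality-aware allocation in the linked YARN and Mesos JIRAs works by passing, along with the executor request, a map of how many pending tasks prefer each host, so the cluster manager can favor those hosts when placing new executors. Below is a minimal, self-contained sketch of how such a host-to-task-count map could be derived from pending tasks' preferred locations. The names `PendingTask` and `hostToLocalTaskCount` are illustrative only and are not taken from this PR's actual code.

```scala
// Illustrative sketch: build locality hints for executor allocation.
// A PendingTask here is a hypothetical, simplified stand-in for a task
// whose preferred hosts come from, e.g., HDFS block locations.
object LocalityHints {
  case class PendingTask(preferredHosts: Seq[String])

  // For each host, count how many pending tasks list it as a preferred
  // location. A cluster manager can use this map to bias where new
  // executors are launched.
  def hostToLocalTaskCount(tasks: Seq[PendingTask]): Map[String, Int] =
    tasks
      .flatMap(_.preferredHosts.distinct) // count each task at most once per host
      .groupBy(identity)
      .map { case (host, hits) => host -> hits.size }

  def main(args: Array[String]): Unit = {
    val tasks = Seq(
      PendingTask(Seq("host1", "host2")),
      PendingTask(Seq("host1")),
      PendingTask(Seq.empty) // a task with no locality preference
    )
    // host1 is preferred by 2 tasks, host2 by 1.
    println(LocalityHints.hostToLocalTaskCount(tasks))
  }
}
```

In Spark itself, hints of this shape are forwarded through `SparkContext.requestTotalExecutors`, which accepts a `hostToLocalTaskCount` map; the point of this PR is to have the Standalone scheduler honor those hints the way the YARN and Mesos backends already do.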