GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/19832
[SPARK-22628][CORE] In some situations, the assignment of executors on
workers is not what we expect when `spark.deploy.spreadOut=true`.
## What changes were proposed in this pull request?
For example, a cluster has 3 workers (workA, workB, workC): workA has 1 core
left, workB has 1 core left, and workC has no cores left.
A user requests 3 executors (spark.cores.max = 3, spark.executor.cores = 1);
obviously, workA will be assigned one executor and workB will be assigned one
executor.
A moment later, if some apps release cores so that workB has 3 cores left and
workC has 2 cores left, the third executor should be assigned to workC, not
workB, since workB already hosts an executor for this app while workC hosts none.
This problem is especially serious with dynamic executor allocation in
standalone mode.
This PR sorts `usableWorkers` by a different key to solve this problem.
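The minimal, runnable sketch below (not the actual Master.scala code) illustrates why the sort order of `usableWorkers` matters for spread-out scheduling. The `WorkerInfo` case class, the `executorsForApp` field, and the `pickWorker` helper are hypothetical simplifications; the alternative sort key shown is an assumption for illustration, not necessarily the exact key this PR uses.

```scala
// Sketch of how the sort key for usableWorkers decides executor placement.
case class WorkerInfo(name: String, coresFree: Int, executorsForApp: Int)

object SpreadOutSketch {
  // Scenario from the description: workA and workB already run one executor each,
  // some cores were released, and one more 1-core executor is requested.
  val workers = Seq(
    WorkerInfo("workA", coresFree = 0, executorsForApp = 1),
    WorkerInfo("workB", coresFree = 3, executorsForApp = 1),
    WorkerInfo("workC", coresFree = 2, executorsForApp = 0))

  // Spread-out scheduling walks the workers in order and places the executor on
  // the first worker that still has a free core, so the ordering decides placement.
  def pickWorker(ordered: Seq[WorkerInfo]): Option[String] =
    ordered.find(_.coresFree >= 1).map(_.name)

  def main(args: Array[String]): Unit = {
    // Sorting only by free cores (descending): workB gets a second executor
    // even though workC has none.
    println(pickWorker(workers.sortBy(-_.coresFree)))  // Some(workB)

    // A possible alternative key (an assumption): prefer workers running fewer
    // executors of this app, breaking ties by free cores -> workC is chosen.
    println(pickWorker(workers.sortBy(w => (w.executorsForApp, -w.coresFree))))  // Some(workC)
  }
}
```

With the scenario above, the choice of key is exactly the difference between stacking a second executor on workB and spreading the application onto workC.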
## How was this patch tested?
Manual test
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/10110346/spark startExecutorsOnWorkers
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/19832.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #19832
----
commit 937277844765604a8698d9f214c0006ecb7e54f8
Author: liuxian <[email protected]>
Date: 2017-11-28T07:01:17Z
fix
----