GitHub user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/6394#discussion_r35146872
--- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationClient.scala ---
@@ -28,7 +28,10 @@ private[spark] trait ExecutorAllocationClient {
   * This can result in canceling pending requests or filing additional requests.
   * @return whether the request is acknowledged by the cluster manager.
   */
-  private[spark] def requestTotalExecutors(numExecutors: Int): Boolean
+  private[spark] def requestTotalExecutors(
+      numExecutors: Int,
--- End diff ---
To be as clear as possible, can you update the documentation for this method to the following? Also, change `localityAwarePendingTasks` to `localityAwareTasks`, because the tasks need not be pending.
---
Update the cluster manager on our scheduling needs. Three bits of information are included to help it make decisions.

@param numExecutors The total number of executors we'd like to have. The cluster manager shouldn't kill any running executor to reach this number, but, if all existing executors were to die, this is the number of executors we'd want to be allocated.

@param localityAwareTasks The number of tasks in all active stages that have a locality preference. This includes running, pending, and completed tasks.

@param hostToLocalTaskCount A map of hosts to the number of tasks from all active stages that would like to run on that host. This includes running, pending, and completed tasks.
---
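
Putting the suggested doc together with the new signature, the result might look like the sketch below. The diff hunk above is truncated, so the third parameter and its `Map[String, Int]` type are assumptions based on the `hostToLocalTaskCount` name discussed in this comment; the parameter names follow the renames suggested here.

```scala
/**
 * Update the cluster manager on our scheduling needs. Three bits of information are included
 * to help it make decisions.
 * @param numExecutors The total number of executors we'd like to have. The cluster manager
 *                     shouldn't kill any running executor to reach this number, but, if all
 *                     existing executors were to die, this is the number of executors we'd
 *                     want to be allocated.
 * @param localityAwareTasks The number of tasks in all active stages that have a locality
 *                           preference. This includes running, pending, and completed tasks.
 * @param hostToLocalTaskCount A map of hosts to the number of tasks from all active stages
 *                             that would like to run on that host. This includes running,
 *                             pending, and completed tasks.
 * @return whether the request is acknowledged by the cluster manager.
 */
// NOTE: the Map[String, Int] type for hostToLocalTaskCount is an assumption; the diff
// hunk quoted above does not show the full parameter list.
private[spark] def requestTotalExecutors(
    numExecutors: Int,
    localityAwareTasks: Int,
    hostToLocalTaskCount: Map[String, Int]): Boolean
```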