Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4168#discussion_r24357759
--- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -201,18 +201,35 @@ private[spark] class ExecutorAllocationManager(
}
/**
-   * If the add time has expired, request new executors and refresh the add time.
-   * If the remove time for an existing executor has expired, kill the executor.
+   * The number of executors we would have if the cluster manager were to fulfill all our requests.
+   */
+  private def targetNumExecutors(): Int =
+    numExecutorsPending + executorIds.size
+
+  /**
+   * The maximum number of executors we would need under the current load to satisfy all running
+   * and pending tasks.
+   */
+  private def maxNumExecutorsNeeded(): Int = {
+    // The maximum number of executors we need under the current load is the total number of
+    // running or pending tasks, divided by the full task capacity of each executor, rounded up.
--- End diff --
This comment is a little redundant given the javadoc. I would just add `rounded up` to the end of the javadoc itself.
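
For context, the `rounded up` behavior the inline comment describes is a standard integer ceiling division. A minimal standalone sketch (the parameter names `totalTasks` and `tasksPerExecutor` are hypothetical; in the PR the task count comes from the listener and the per-executor capacity from the cores configuration):

```scala
object CeilDivSketch {
  // Total running + pending tasks, divided by the full task capacity
  // of one executor, rounded up to the next whole executor
  // (integer ceiling division avoids floating point).
  def maxNumExecutorsNeeded(totalTasks: Int, tasksPerExecutor: Int): Int =
    (totalTasks + tasksPerExecutor - 1) / tasksPerExecutor

  def main(args: Array[String]): Unit = {
    // e.g. 10 running/pending tasks at 3 tasks per executor => 4 executors
    println(maxNumExecutorsNeeded(10, 3)) // prints 4
  }
}
```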