Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/3765#discussion_r22984790
--- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -153,498 +154,241 @@ private[yarn] class YarnAllocator(
  }
  /**
-  * Allocate missing containers based on the number of executors currently pending and running.
+  * Request resources such that, if YARN gives us all we ask for, we'll have a number of containers
+  * equal to maxExecutors.
+  *
+  * Deal with any containers YARN has granted to us by possibly launching executors in them.
   *
-  * This method prioritizes the allocated container responses from the RM based on node and
-  * rack locality. Additionally, it releases any extra containers allocated for this application
-  * but are not needed. This must be synchronized because variables read in this block are
-  * mutated by other methods.
+  * This must be synchronized because variables read in this method are mutated by other methods.
   */
  def allocateResources(): Unit = synchronized {
-   val missing = maxExecutors - numPendingAllocate.get() - numExecutorsRunning.get()
+   val numPendingAllocate = getNumPendingAllocate
+   val missing = maxExecutors - numPendingAllocate - numExecutorsRunning
--- End diff --
If maxExecutors is the total number of executors that we want to be running, then shouldn't it be missing = maxExecutors - numPendingAllocate?
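For context, a minimal sketch of the target-count arithmetic in the line under review (not the actual YarnAllocator code; the helper name and standalone form are illustrative only). Subtracting both pending requests and running executors matters: if only pending were subtracted, each allocation round would re-request containers for executors that are already running.

```scala
// Hypothetical standalone version of the computation from the diff:
// missing = maxExecutors - numPendingAllocate - numExecutorsRunning
def missingExecutors(maxExecutors: Int,
                     numPendingAllocate: Int,
                     numExecutorsRunning: Int): Int =
  maxExecutors - numPendingAllocate - numExecutorsRunning

// Example: target of 10 executors, 4 container requests still pending,
// 3 executors already running -> only 3 more containers should be requested.
val missing = missingExecutors(10, 4, 3)
println(missing)
```

If running executors were left out of the subtraction, the same example would request 6 containers and overshoot the target of 10 once the pending requests are granted.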