Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13552#discussion_r66285987
  
    --- Diff: 
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
    @@ -353,7 +353,7 @@ private[yarn] class YarnAllocator(
     
         } else if (missing < 0) {
           val numToCancel = math.min(numPendingAllocate, -missing)
    -      logInfo(s"Canceling requests for $numToCancel executor containers")
    +      logInfo(s"Canceled requests for $numToCancel executor container(s)")
    --- End diff --
    
    The first message is the Spark driver requesting a total number of 
executors from the Spark backend, based on dynamic allocation.  The cancel 
message comes later because that is when the backend allocator actually 
figures out what it needs to ask YARN for.  They are different components.
    
    In this case we already have the requested number running or pending to be 
allocated (from YARN), so we actually have to cancel some of those requests.  We 
haven't canceled them yet; we are about to, so the message shouldn't be changed 
to the past tense "Canceled".
    
    If there is a way to make the log messages clearer I'm ok with 
    that; otherwise this should be closed.
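
    To make the timing concrete, here is a minimal sketch (not the actual 
YarnAllocator code; the parameter names mirror the fields discussed above 
and are assumptions) of how the allocator decides how many pending YARN 
container requests to cancel when the driver's target drops:

```scala
object AllocatorSketch {
  // Returns how many still-pending container requests should be canceled.
  // target  = total executors the driver now wants (dynamic allocation)
  // running = executors already running
  // pending = container requests already sent to YARN but not yet allocated
  def numContainersToCancel(target: Int, running: Int, pending: Int): Int = {
    val missing = target - pending - running
    if (missing < 0) {
      // Over-allocated: cancel the surplus, but never more than are pending.
      // At this point the requests are about to be canceled, not yet canceled,
      // hence the present-tense "Canceling" in the log message.
      math.min(pending, -missing)
    } else {
      0 // at or below target: nothing to cancel
    }
  }
}
```

    For example, with a target of 5, 4 running, and 3 pending, `missing` is 
-2 and 2 of the 3 pending requests get canceled.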


