Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/9758#discussion_r45127540
  
    --- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
    @@ -258,7 +258,8 @@ private[spark] class Client(
         if (executorMem > maxMem) {
       throw new IllegalArgumentException(s"Required executor memory (${args.executorMemory}" +
             s"+$executorMemoryOverhead MB) is above the max threshold ($maxMem MB) of this cluster! " +
    -        "Please increase the value of 'yarn.scheduler.maximum-allocation-mb'.")
    +        "Please increase the value of 'yarn.scheduler.maximum-allocation-mb' and" +
    --- End diff ---
    
    hmm, it would definitely be and/or... but I kind of question why we need
this at all? The scheduler max is really what it's compared to. If your
NodeManagers aren't configured with enough memory to meet that, then I would
say you have misconfigured your cluster. I'm fine with leaving the error in
there if people find it useful, but I agree with @srowen: I think this should
be reworded to say something like "check the values of ..." rather than
"please increase the values".
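
    To make the suggested rewording concrete, here is a minimal, runnable
sketch of the check under discussion, using the values from the diff above.
The object name, the validateMemory helper, and the
'yarn.nodemanager.resource.memory-mb' property are illustrative assumptions
on my part (the diff is truncated right after "and"), not the PR's actual
code:

    // Sketch only: mirrors the executorMem > maxMem check from Client.scala,
    // with the message reworded to "check the values of ..." as suggested.
    object MemoryCheckSketch {
      def validateMemory(executorMemory: Int, executorMemoryOverhead: Int, maxMem: Int): Unit = {
        val executorMem = executorMemory + executorMemoryOverhead
        if (executorMem > maxMem) {
          // The NodeManager property named below is an illustrative guess,
          // since the diff above cuts off before naming the second config.
          throw new IllegalArgumentException(
            s"Required executor memory ($executorMemory+$executorMemoryOverhead MB) " +
            s"is above the max threshold ($maxMem MB) of this cluster! " +
            "Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or " +
            "'yarn.nodemanager.resource.memory-mb'.")
        }
      }

      def main(args: Array[String]): Unit = {
        // Example: an 8 GB executor with 1 GB overhead against a 4 GB
        // scheduler max triggers the error message.
        try {
          validateMemory(executorMemory = 8192, executorMemoryOverhead = 1024, maxMem = 4096)
        } catch {
          case e: IllegalArgumentException => println(e.getMessage)
        }
      }
    }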

