Github user dongjoon-hyun commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20735#discussion_r172661657
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
    @@ -736,7 +736,8 @@ private object YarnAllocator {
       def memLimitExceededLogMessage(diagnostics: String, pattern: Pattern): String = {
         val matcher = pattern.matcher(diagnostics)
         val diag = if (matcher.find()) " " + matcher.group() + "." else ""
    -    ("Container killed by YARN for exceeding memory limits." + diag
    -      + " Consider boosting spark.yarn.executor.memoryOverhead.")
    +    s"Container killed by YARN for exceeding memory limits. $diag " +
    +      "Consider boosting spark.yarn.executor.memoryOverhead or " +
    +      "disable yarn.nodemanager.vmem-check-enabled because of YARN-4714."
    --- End diff ---
    
    I also met this situation in my Docker cluster environment, and this was a workaround for it.
    But I'm not sure it's a recommendable suggestion to put in an Apache Spark warning message.
    What do you think about this, @jerryshao?
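    For reference, a minimal sketch of the two knobs the new message points at. The property keys are the real Spark/YARN ones from the diff; the value is only a placeholder:
    
        import org.apache.spark.SparkConf
    
        // Per-application knob: raise the off-heap cushion YARN accounts for.
        // 1024 MiB is a placeholder value; size it to the observed overage.
        val conf = new SparkConf()
          .set("spark.yarn.executor.memoryOverhead", "1024")
    
        // Cluster-side knob: yarn.nodemanager.vmem-check-enabled is a NodeManager
        // property, so it belongs in yarn-site.xml (set to "false"), not SparkConf;
        // YARN-4714 describes why the virtual-memory check can misfire.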


---
