Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/23030#discussion_r234390445
--- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -598,13 +597,20 @@ private[yarn] class YarnAllocator(
           (false, s"Container ${containerId}${onHostStr} was preempted.")
         // Should probably still count memory exceeded exit codes towards task failures
         case VMEM_EXCEEDED_EXIT_CODE =>
-          (true, memLimitExceededLogMessage(
-            completedContainer.getDiagnostics,
-            VMEM_EXCEEDED_PATTERN))
+          val vmemExceededPattern = raw"$MEM_REGEX of $MEM_REGEX virtual memory used".r
+          val diag = vmemExceededPattern.findFirstIn(completedContainer.getDiagnostics)
+            .map(_.concat(".")).getOrElse("")
+          val message = "Container killed by YARN for exceeding virtual memory limits. " +
+            s"$diag Consider boosting ${EXECUTOR_MEMORY_OVERHEAD.key} or disabling " +
+            s"${YarnConfiguration.NM_VMEM_CHECK_ENABLED} because of YARN-4714."
--- End diff --
Now you removed the other config option I asked you to add... :-/
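
For readers following the thread: the new code in the diff pulls the "X of Y virtual memory used" fragment out of YARN's container diagnostics with a regex and folds it into the error message. Below is a minimal, standalone sketch of that extraction. The MEM_REGEX value and the diagnostic text are assumptions for illustration (MEM_REGEX is shown here as the "[0-9.]+ [KMG]B" pattern used in YarnAllocator); this is not the PR's code verbatim.

object VmemDiagSketch {
  // Assumption: MEM_REGEX as commonly defined in YarnAllocator ("[0-9.]+ [KMG]B").
  val MEM_REGEX = "[0-9.]+ [KMG]B"

  def main(args: Array[String]): Unit = {
    // Illustrative NodeManager diagnostic text, not taken from the PR.
    val diagnostics =
      "Container [pid=1234,containerID=container_x_01] is running beyond virtual memory limits. " +
      "Current usage: 2.1 GB of 2 GB physical memory used; " +
      "5.8 GB of 4.2 GB virtual memory used. Killing container."

    // Same shape as the diff: find the "X of Y virtual memory used" fragment,
    // append a period, or fall back to an empty string if nothing matches.
    val vmemExceededPattern = raw"$MEM_REGEX of $MEM_REGEX virtual memory used".r
    val diag = vmemExceededPattern.findFirstIn(diagnostics)
      .map(_.concat(".")).getOrElse("")

    println(diag)  // prints: 5.8 GB of 4.2 GB virtual memory used.
  }
}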
---