Github user darabos commented on a diff in the pull request:
https://github.com/apache/spark/pull/9355#discussion_r43393025
--- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala ---
@@ -238,7 +238,7 @@ object YarnSparkHadoopUtil {
if (Utils.isWindows) {
escapeForShell("-XX:OnOutOfMemoryError=taskkill /F /PID %%%%p")
} else {
- "-XX:OnOutOfMemoryError='kill %p'"
+ "-XX:OnOutOfMemoryError='echo OnOutOfMemoryError; kill %p'"
--- End diff ---
Sorry, I should have mentioned that I didn't test this on an actual executor. I
tested the flag in the Scala interpreter, which is easier to OOM. It produced
this output:
```
#
# java.lang.OutOfMemoryError: GC overhead limit exceeded
# -XX:OnOutOfMemoryError="echo hi; kill %p"
# Executing /bin/sh -c "echo OnOutOfMemoryError"...
OnOutOfMemoryError
# Executing /bin/sh -c "kill 32523"...
```
I'm not sure where the lines starting with `#` come from. I did not see them in
the output of the executor I lost. (So maybe it was killed by something else
after all?)
I guess it would be best if I tested this with actual executors on actual
YARN...