[ https://issues.apache.org/jira/browse/SPARK-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011127#comment-15011127 ]
Apache Spark commented on SPARK-11799:
--------------------------------------
User 'vundela' has created a pull request for this issue:
https://github.com/apache/spark/pull/9809
> Make it explicit in executor logs that uncaught exceptions are thrown during executor shutdown
> ----------------------------------------------------------------------------------------------
>
> Key: SPARK-11799
> URL: https://issues.apache.org/jira/browse/SPARK-11799
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 1.5.1
> Reporter: Srinivasa Reddy Vundela
> Priority: Minor
>
> Here is some background for the issue.
> The customer got an OOM exception in one of the tasks, and the executor was
> killed with kill %p. A few shutdown hooks are registered with
> ShutdownHookManager to clean up the Hadoop temp directories. During this
> shutdown phase, other tasks throw uncaught exceptions, and the executor logs
> fill up with many of them.
> Since the driver logs / Spark UI do not make it clear why the container was
> lost, the customer goes through the executor logs and sees a lot of uncaught
> exceptions.
> It would be clearer to the customer if we prepended the uncaught exceptions
> with a message like [Container is in shutdown mode], so that those entries
> can be skipped.
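The behaviour proposed above can be sketched roughly as follows. This is a minimal standalone illustration, not Spark's actual implementation: the class name, the `formatMessage` helper, and the `inShutdown` flag are all hypothetical stand-ins (in Spark, the shutdown check would come from ShutdownHookManager).

```java
// Hypothetical sketch: an uncaught-exception handler that prepends a
// shutdown marker, so readers of the executor logs can skip exceptions
// thrown while the container is already shutting down.
public class ShutdownAwareHandler implements Thread.UncaughtExceptionHandler {

    // Stand-in for querying Spark's ShutdownHookManager shutdown state.
    static volatile boolean inShutdown = false;

    // Builds the log line, adding the marker only during shutdown.
    static String formatMessage(boolean shuttingDown, String threadName, Throwable e) {
        String prefix = shuttingDown ? "[Container is in shutdown mode] " : "";
        return prefix + "Uncaught exception in thread " + threadName + ": " + e;
    }

    @Override
    public void uncaughtException(Thread t, Throwable e) {
        System.err.println(formatMessage(inShutdown, t.getName(), e));
    }

    public static void main(String[] args) throws InterruptedException {
        Thread.setDefaultUncaughtExceptionHandler(new ShutdownAwareHandler());
        inShutdown = true; // simulate the shutdown phase for demonstration
        Thread task = new Thread(() -> {
            throw new RuntimeException("task failure during shutdown");
        }, "task-1");
        task.start();
        task.join();
    }
}
```

With this pattern, an exception raised while the flag is set is logged with the marker prefix, while exceptions during normal operation are logged unchanged.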
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]