[
https://issues.apache.org/jira/browse/SPARK-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14659864#comment-14659864
]
Alberto commented on SPARK-4783:
--------------------------------
Still having this issue. We've found that the exception thrown by
TaskSchedulerImpl is being caught by SparkUncaughtExceptionHandler, which calls
System.exit() again.
Would it make sense to just log the error instead of throwing the exception? See
https://github.com/apache/spark/pull/7993
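To illustrate the point above, here is a minimal sketch (hypothetical class and method names) of why the uncaught-exception handler matters: a worker thread dies with an uncaught exception, and a handler that merely records the error lets the embedding JVM survive, whereas a handler that calls System.exit() kills the whole process.

```java
import java.util.concurrent.atomic.AtomicReference;

public class HandlerSketch {
    // Runs a worker thread that dies with an uncaught exception and returns
    // the recorded error message, demonstrating that the JVM survived.
    public static String runOnce() {
        AtomicReference<Throwable> lastError = new AtomicReference<>();

        // A handler that records the error instead of exiting. The issue is
        // that Spark's handler calls System.exit() at this point, which an
        // embedding application cannot catch or recover from.
        Thread.UncaughtExceptionHandler recordOnly = (t, e) -> lastError.set(e);

        Thread worker = new Thread(() -> {
            throw new RuntimeException("simulated scheduler failure");
        });
        worker.setUncaughtExceptionHandler(recordOnly);
        worker.start();
        try {
            worker.join();
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
        return lastError.get().getMessage();
    }

    public static void main(String[] args) {
        // The worker died, but this JVM is still running and can respawn it.
        System.out.println("recovered: " + runOnce());
    }
}
```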
> System.exit() calls in SparkContext disrupt applications embedding Spark
> ------------------------------------------------------------------------
>
> Key: SPARK-4783
> URL: https://issues.apache.org/jira/browse/SPARK-4783
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Reporter: David Semeria
> Assignee: Sean Owen
> Priority: Minor
> Fix For: 1.4.0
>
>
> A common architectural choice for integrating Spark within a larger
> application is to employ a gateway to handle Spark jobs. The gateway is a
> server which contains one or more long-running SparkContexts.
> A typical server is created with the following pseudo code:
> var keepRunning = true
> while (keepRunning) {
>   try {
>     server.run()
>   } catch (e) {
>     keepRunning = logAndExamineError(e)
>   }
> }
> The problem is that SparkContext frequently calls System.exit() when it
> encounters a problem, which means the server can only be re-spawned at the
> process level. This is far messier than the simple restart loop above.
> Therefore, I believe it makes sense to replace all System.exit() calls in
> SparkContext with thrown fatal errors.
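The proposal in the quoted description can be sketched as follows. EmbeddedServer and FatalEngineError are hypothetical stand-ins, not Spark APIs: the engine throws a fatal exception instead of calling System.exit(), so the gateway's restart loop can catch it and respawn in-process.

```java
public class GatewaySketch {
    // Hypothetical stand-in for an error the engine would previously exit on.
    static class FatalEngineError extends RuntimeException {
        FatalEngineError(String msg) { super(msg); }
    }

    // Hypothetical stand-in for the long-running server; fails a fixed
    // number of times before completing cleanly.
    static class EmbeddedServer {
        private int failuresLeft;
        EmbeddedServer(int failures) { this.failuresLeft = failures; }
        void run() {
            if (failuresLeft-- > 0) {
                throw new FatalEngineError("simulated fatal engine error");
            }
            // Normal completion: the server shut down cleanly.
        }
    }

    // The restart loop from the issue description: catch, examine, respawn.
    // Only possible if the engine throws rather than calling System.exit().
    public static int runWithRestarts(EmbeddedServer server) {
        int restarts = 0;
        boolean keepRunning = true;
        while (keepRunning) {
            try {
                server.run();
                keepRunning = false;  // clean shutdown, stop the loop
            } catch (FatalEngineError e) {
                restarts++;           // i.e. logAndExamineError(e)
            }
        }
        return restarts;
    }

    public static void main(String[] args) {
        EmbeddedServer server = new EmbeddedServer(2);
        System.out.println("restarts: " + runWithRestarts(server));
    }
}
```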
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)