[ https://issues.apache.org/jira/browse/SPARK-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-6804.
------------------------------
    Resolution: Duplicate

> System.exit(1) on error
> -----------------------
>
>                 Key: SPARK-6804
>                 URL: https://issues.apache.org/jira/browse/SPARK-6804
>             Project: Spark
>          Issue Type: Improvement
>            Reporter: Alberto
>
> We are developing a web application that uses Spark under the hood. 
> While testing our app we found that when the Spark master is not up and 
> running and we try to connect to it, Spark kills our app. 
> Looking at the code, we noticed that the TaskSchedulerImpl class simply 
> kills the JVM, so our web application is obviously killed as well. Here is 
> the code snippet in question:
> {code}
> else {
>   // No task sets are active but we still got an error. Just exit since this
>   // must mean the error is during registration.
>   // It might be good to do something smarter here in the future.
>   logError("Exiting due to error from cluster scheduler: " + message)
>   System.exit(1)
> }
> {code}
> IMHO this code should not invoke System.exit(1). Instead, it should throw an 
> exception so that applications are able to handle the error.
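> A minimal sketch of the suggested change, assuming the surrounding block in 
> TaskSchedulerImpl stays otherwise the same (SparkException is used here only 
> for illustration; the actual exception type would be a design choice):
> {code}
> else {
>   // No task sets are active but we still got an error. This must mean the
>   // error happened during registration, so surface it to the caller.
>   logError("Error from cluster scheduler: " + message)
>   // Throwing instead of calling System.exit(1) lets an embedding application
>   // (e.g. a web app) catch the failure and decide how to react.
>   throw new SparkException("Error from cluster scheduler: " + message)
> }
> {code}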



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
