[ 
https://issues.apache.org/jira/browse/SPARK-8643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yin Huai updated SPARK-8643:
----------------------------
    Attachment: HiveSparkSubmitSuite (SPARK-8368).txt

> local-cluster may not shutdown SparkContext gracefully
> ------------------------------------------------------
>
>                 Key: SPARK-8643
>                 URL: https://issues.apache.org/jira/browse/SPARK-8643
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: Yin Huai
>         Attachments: HiveSparkSubmitSuite (SPARK-8368).txt
>
>
> While debugging SPARK-8567, I found that when using local-cluster, executors 
> were first killed and then launched again at the end of an application. From 
> the attached log, it seems the master/driver side does not know that it is in 
> the shutdown process, so it detected the executor loss and asked the worker 
> to launch new executors.
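> For reference, a minimal sketch of the path that hits this (assuming the 
> standard local-cluster master URL; the object name is hypothetical and just 
> for illustration):
> {code}
> import org.apache.spark.{SparkConf, SparkContext}
>
> // Hypothetical driver used only to exercise the local-cluster shutdown path.
> object LocalClusterShutdownRepro {
>   def main(args: Array[String]): Unit = {
>     // local-cluster[workers, coresPerWorker, memoryPerWorkerMB] starts an
>     // in-process standalone master plus workers.
>     val conf = new SparkConf()
>       .setMaster("local-cluster[2,1,512]")
>       .setAppName("SPARK-8643 repro")
>     val sc = new SparkContext(conf)
>     // Run a trivial job so executors actually register and do some work.
>     sc.parallelize(1 to 100, 4).count()
>     // On stop(), the attached log shows executors being killed and then
>     // relaunched, because the master does not know it is shutting down.
>     sc.stop()
>   }
> }
> {code}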



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
