[ 
https://issues.apache.org/jira/browse/SPARK-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14725637#comment-14725637
 ] 

Alberto Miorin commented on SPARK-9008:
---------------------------------------

I have the same problem, but in Mesos cluster mode. I tried
spark-submit --kill, but the driver is always restarted
by the dispatcher.
I think there should be a spark-submit --unsupervise subcommand.
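The two invocations discussed here can be sketched as shell commands. The master URL and driver ID below are placeholders, and --unsupervise is the flag proposed in this comment, not something spark-submit currently accepts:

```shell
# Attempt to kill a supervised driver through the Mesos dispatcher
# (placeholder master URL and submission ID). The problem reported here
# is that in supervised mode the dispatcher relaunches the driver anyway.
./bin/spark-submit \
  --master mesos://dispatcher-host:7077 \
  --kill driver-20150901123456-0001

# Proposed (hypothetical, does not exist today): detach the driver from
# supervision first, so that a subsequent --kill sticks.
./bin/spark-submit \
  --master mesos://dispatcher-host:7077 \
  --unsupervise driver-20150901123456-0001
```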

> Stop and remove driver from supervised mode in spark-master interface
> ---------------------------------------------------------------------
>
>                 Key: SPARK-9008
>                 URL: https://issues.apache.org/jira/browse/SPARK-9008
>             Project: Spark
>          Issue Type: New Feature
>          Components: Deploy
>            Reporter: Jesper Lundgren
>            Priority: Minor
>
> The cluster will automatically restart failing drivers when they are launched 
> in supervised cluster mode. However, there is no official way for an 
> operations team to stop a malfunctioning driver and prevent it from being 
> restarted.
> I know there is "bin/spark-class org.apache.spark.deploy.Client kill", but 
> this is undocumented and does not always work well.
> It would be great if there were a way to remove supervised mode from a 
> driver, so that kill -9 works on the driver program.
> The documentation surrounding this could also use some improvement. It would 
> be nice to have best-practice examples of how to work with supervised mode, 
> how to manage graceful shutdown, and how to catch TERM signals. (A TERM 
> signal ends with an exit code that triggers a restart in supervised mode 
> unless the application logic changes the exit code.)
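The exit-code point in the last quoted paragraph can be demonstrated with a plain shell sketch, assuming (as the report states) that supervised mode keys its restart decision on a non-zero exit status:

```shell
#!/bin/sh
# A process that receives TERM with no handler installed exits with
# status 128 + 15 = 143, which a supervisor reads as a failure and
# answers with a restart.
status=0
sh -c 'kill -TERM $$; sleep 5' || status=$?
echo "no TERM trap:   exit status $status"

# Trapping TERM and translating it into exit 0 makes the same shutdown
# look clean, so supervised mode has no reason to relaunch the driver.
sh -c 'trap "exit 0" TERM; kill -TERM $$; sleep 5'
echo "with TERM trap: exit status $?"
```

For a JVM driver the analogous move is to arrange a clean `System.exit(0)` on an intentional stop, rather than letting the default TERM disposition kill the process with a non-zero status.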



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
