[ https://issues.apache.org/jira/browse/SPARK-13143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen resolved SPARK-13143.
-------------------------------
    Resolution: Won't Fix

I think we're not managing EC2 issues in Spark anymore, as the support is no longer in Spark.

> EC2 cluster silently not destroyed for non-default regions
> ----------------------------------------------------------
>
>                 Key: SPARK-13143
>                 URL: https://issues.apache.org/jira/browse/SPARK-13143
>             Project: Spark
>          Issue Type: Bug
>          Components: EC2
>    Affects Versions: 1.5.0
>            Reporter: Theodore Vasiloudis
>            Priority: Minor
>              Labels: trivial
>
> If you start a cluster in a non-default region using the EC2 scripts and then
> try to destroy it, you get the message:
> {quote}
> Terminating master...
> Terminating slaves...
> {quote}
> after which the script exits with no further output.
> This leaves the instances still running, without ever informing the user.
> The reason this happens is that the destroy action in {{spark_ec2.py}} calls
> {{get_existing_cluster}} with the {{die_on_error}} argument set to {{False}}
> for some reason.
> I'll submit a PR for this.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
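A minimal sketch of the failure mode described above. The names `get_existing_cluster`, `die_on_error`, and the destroy flow come from the issue; everything else (the `conn` dict standing in for a boto EC2 connection, `destroy_cluster`, the instance IDs) is a simplified assumption for illustration, not the real `spark_ec2.py` code:

```python
import sys

def get_existing_cluster(conn, region, cluster_name, die_on_error=True):
    """Look up master/slave instances for cluster_name in region.

    Simplified stand-in: the real helper queries EC2 via boto; here
    `conn` is a dict mapping (region, cluster_name) to (masters, slaves).
    """
    masters, slaves = conn.get((region, cluster_name), ([], []))
    if not masters and die_on_error:
        print("ERROR: Could not find any existing cluster", file=sys.stderr)
        sys.exit(1)
    return masters, slaves

def destroy_cluster(conn, region, cluster_name, die_on_error):
    # With die_on_error=False, a lookup miss returns empty lists and the
    # "Terminating..." messages are printed even though nothing is killed.
    masters, slaves = get_existing_cluster(conn, region, cluster_name,
                                           die_on_error=die_on_error)
    print("Terminating master...")
    terminated = list(masters)
    print("Terminating slaves...")
    terminated += slaves
    return terminated  # empty if the lookup silently found nothing

# Cluster was launched in eu-west-1, but destroy runs against the default
# region, so the lookup finds nothing and the destroy is a silent no-op.
conn = {("eu-west-1", "my-cluster"): (["i-master"], ["i-slave1", "i-slave2"])}
destroy_cluster(conn, "us-east-1", "my-cluster", die_on_error=False)
```

Passing `die_on_error=True` instead would make the mismatched-region case fail loudly before printing any "Terminating..." messages, which is the behavior the proposed PR aims for.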