[ https://issues.apache.org/jira/browse/SPARK-6692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Andrew Or closed SPARK-6692.
----------------------------
Resolution: Won't Fix
> Add an option for client to kill AM when it is killed
> -----------------------------------------------------
>
> Key: SPARK-6692
> URL: https://issues.apache.org/jira/browse/SPARK-6692
> Project: Spark
> Issue Type: Improvement
> Components: YARN
> Affects Versions: 1.3.0
> Reporter: Cheolsoo Park
> Assignee: Cheolsoo Park
> Priority: Minor
> Labels: yarn
>
> I understand that the yarn-cluster mode is designed for a fire-and-forget
> model; therefore, terminating the YARN client doesn't kill the AM.
> However, it is very common for users to submit Spark jobs via a job scheduler
> (e.g. Apache Oozie) or a remote job server (e.g. Netflix Genie), where killing
> the YARN client is expected to terminate the AM.
> It is true that the yarn-client mode can be used in such cases. But then the
> YARN client itself sometimes needs a large heap for big jobs. In fact, the
> yarn-cluster mode is ideal for big jobs because the AM can be given arbitrary
> heap memory, unlike the YARN client. So it would be very useful to make it
> possible to kill the AM even in yarn-cluster mode.
> In addition, Spark jobs often become zombie jobs if users ctrl-c them as soon
> as they're accepted (but not yet running). Although such jobs are eventually
> shut down after the AM timeout, it would be nice if the AM could be killed
> immediately in these cases too.
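Since the issue was resolved as Won't Fix, a client-side wrapper is one way to approximate the requested behavior. The sketch below is hypothetical and only illustrates the idea: it assumes `spark-submit` and the `yarn` CLI are on PATH, and that the client log contains a line with the application ID (the log format shown is illustrative, not a guaranteed Spark interface).

```shell
#!/usr/bin/env bash
# Hypothetical wrapper: submit in yarn-cluster mode, scrape the YARN
# application ID from the client log, and kill the application (and thus
# the AM) if the wrapper itself receives INT/TERM.

# Extract an application ID (application_<cluster-ts>_<seq>) from log text.
parse_app_id() {
  sed -n 's/.*\(application_[0-9]\{1,\}_[0-9]\{1,\}\).*/\1/p' | head -n 1
}

submit_and_guard() {
  local log submit_pid app_id
  log=$(mktemp)
  # Submit in the background and tee the client log so we can scrape the ID.
  spark-submit --master yarn --deploy-mode cluster "$@" 2>&1 | tee "$log" &
  submit_pid=$!
  # Simplistic: wait a bit for the ID to appear; polling would be more robust.
  sleep 5
  app_id=$(parse_app_id < "$log")
  # On Ctrl-C or kill, terminate the YARN application before exiting.
  trap '[ -n "$app_id" ] && yarn application -kill "$app_id"; exit 130' INT TERM
  wait "$submit_pid"
}

# Usage (not executed here):
#   submit_and_guard --class com.example.App app.jar
```

This is only a partial workaround: a window remains between submission and the trap being installed, which is exactly the accepted-but-not-running zombie case the description calls out.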
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]