[
https://issues.apache.org/jira/browse/SPARK-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14233321#comment-14233321
]
Marcelo Vanzin commented on SPARK-4694:
---------------------------------------
To answer your question, you can call System.exit() if you want. It's just
recommended that it be done after you properly shut down the SparkContext;
otherwise YARN won't report your app status correctly. But it seems your patch
doesn't use System.exit(), so this is kinda moot.
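The ordering described above can be sketched as follows. This is an illustrative example, not code from the ticket: the object name and job body are hypothetical, and it assumes a plain driver-side main method running in yarn-client mode.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical driver sketch: stop the SparkContext before calling
// System.exit() so YARN records the application's final status.
object GracefulExitExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("graceful-exit"))
    try {
      // ... run Spark jobs here ...
    } finally {
      // Shut down the context first; this lets the YARN integration
      // unregister the application and report its final state.
      sc.stop()
    }
    // Safe to exit now that the SparkContext has been stopped.
    System.exit(0)
  }
}
```

Calling System.exit() before sc.stop() skips that unregistration, which is why YARN may show the wrong application status.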
> Long-run user thread(such as HiveThriftServer2) causes the 'process leak' in
> yarn-client mode
> ---------------------------------------------------------------------------------------------
>
> Key: SPARK-4694
> URL: https://issues.apache.org/jira/browse/SPARK-4694
> Project: Spark
> Issue Type: Bug
> Components: YARN
> Reporter: SaintBacchus
>
> Recently, when I used YARN HA mode to test HiveThriftServer2, I found a
> problem: the driver can't exit by itself.
> To reproduce it, do the following:
> 1. Use YARN HA mode and set am.maxAttempts = 1 for convenience.
> 2. Kill the active ResourceManager in the cluster.
> The expected result is that the application simply fails, because maxAttempts was 1.
> The actual result is that all executors were ended, but the driver was
> still there and never closed.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)