[ https://issues.apache.org/jira/browse/SPARK-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14231864#comment-14231864 ]
Marcelo Vanzin commented on SPARK-4694:
---------------------------------------
I'm not sure I understand the bug or the context, but there must be some code
that manages both the SparkContext and the HiveThriftServer2 thread. That code
is responsible for stopping the context and shutting down the HiveThriftServer2
thread; if it can't do so cleanly because of some deficiency in the API, it can
fall back on Thread.stop() or some other less kosher approach.
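To make that concrete, here is a minimal sketch of such supervising code,
assuming the app is launched via spark-submit in yarn-client mode and owns both
the context and the server thread (the object and thread names here are
hypothetical, not Spark API):
{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

object SupervisorSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SupervisorSketch"))

    // Stand-in for a long-running user thread such as HiveThriftServer2.
    val serverThread = new Thread(new Runnable {
      override def run(): Unit =
        try {
          while (true) Thread.sleep(1000) // "serve" until interrupted
        } catch {
          case _: InterruptedException => () // treat interrupt as shutdown
        }
    }, "server-thread")
    serverThread.start()

    // ... application logic; eventually we decide to shut down ...

    // Stop the context, then bring the user thread down.
    sc.stop()
    serverThread.interrupt()
    serverThread.join(10000)
    if (serverThread.isAlive) {
      // Last resort when the thread's API offers no clean shutdown;
      // Thread.stop() is deprecated and unsafe, hence "less kosher".
      serverThread.stop()
    }
  }
}
{code}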
Using {{System.exit()}} is not recommended because there's no way for the
backend to detect that without severe performance implications. Apps will
always be reported as "succeeded" when using that approach.
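For contrast, a sketch of the discouraged pattern next to the clean one; the
helper and its {{failed}} flag are hypothetical, and the "succeeded" behavior
is as described above:
{code:scala}
import org.apache.spark.SparkContext

object ExitSketch {
  def shutDown(sc: SparkContext, failed: Boolean): Unit = {
    // Discouraged: System.exit() tears the JVM down before the backend can
    // cheaply observe the outcome, so YARN reports the application as
    // "succeeded" even when failed is true:
    //   if (failed) System.exit(1)

    // Preferred: stop the context and let main() return on its own.
    sc.stop()
  }
}
{code}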
> Long-running user thread (such as HiveThriftServer2) causes a 'process leak'
> in yarn-client mode
> ---------------------------------------------------------------------------------------------
>
> Key: SPARK-4694
> URL: https://issues.apache.org/jira/browse/SPARK-4694
> Project: Spark
> Issue Type: Bug
> Components: YARN
> Reporter: SaintBacchus
>
> Recently, when I used YARN HA mode to test HiveThriftServer2, I found a
> problem: the driver can't exit by itself.
> To reproduce it, do as follows:
> 1. Use YARN HA mode and, for convenience, set am.maxAttempts = 1 (see the
> configuration sketch below).
> 2. Kill the active resource manager in the cluster.
> The expected result is that the application simply fails, because maxAttempts
> was 1.
> The actual result is that all executors ended, but the driver was still
> there and never closed.
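> For reference, a sketch of the setting step 1 likely refers to; the full
> property name here is an assumption, since the report only says
> "am.maxAttempts":
> {code:xml}
> <!-- yarn-site.xml: cap ApplicationMaster attempts at one -->
> <property>
>   <name>yarn.resourcemanager.am.max-attempts</name>
>   <value>1</value>
> </property>
> {code}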