[
https://issues.apache.org/jira/browse/HIVE-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rui Li updated HIVE-16459:
--------------------------
Description: In SparkTask, we try to get job info after the query
finishes. Suppose the job finishes because the remote side crashes, which
closes the RPC. There's a race condition: if we try to get the job info before
we notice the RPC is closed, SparkTask waits for {{hive.spark.client.future.timeout}}
(default 60s) before it returns, even though we already know the job has failed.
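
To illustrate the fix suggested by the summary, here is a minimal sketch of cancelling outstanding RPCs when the channel closes, driven off Netty's close future. The {{CloseAwareRpc}} class and {{pendingRpcs}} map below are hypothetical stand-ins for illustration, not the actual spark-client {{Rpc}}/{{RpcDispatcher}} code:
{code:java}
import io.netty.channel.Channel;
import io.netty.channel.ChannelFutureListener;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

class CloseAwareRpc {
  // Outstanding requests keyed by call id, each completed when a reply arrives.
  private final Map<Long, CompletableFuture<Object>> pendingRpcs = new ConcurrentHashMap<>();

  CloseAwareRpc(Channel channel) {
    // When the channel closes (e.g. the remote driver crashed), fail every
    // outstanding RPC instead of letting callers block on their own timeout.
    channel.closeFuture().addListener((ChannelFutureListener) f -> {
      IOException cause = new IOException("RPC channel closed");
      for (CompletableFuture<Object> call : pendingRpcs.values()) {
        call.completeExceptionally(cause);
      }
      pendingRpcs.clear();
    });
  }

  // Register an outstanding call before sending it over the channel.
  CompletableFuture<Object> register(long callId) {
    CompletableFuture<Object> call = new CompletableFuture<>();
    pendingRpcs.put(callId, call);
    return call;
  }
}
{code}
With something like this in place, a caller such as SparkTask that blocks on the job-info future sees the failure as soon as the channel closes, instead of waiting out {{hive.spark.client.future.timeout}}.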
> Cancel outstanding RPCs when channel closes
> -------------------------------------------
>
> Key: HIVE-16459
> URL: https://issues.apache.org/jira/browse/HIVE-16459
> Project: Hive
> Issue Type: Bug
> Reporter: Rui Li
> Assignee: Rui Li
>
> In SparkTask, we try to get job info after the query finishes. Suppose the
> job finishes because the remote side crashes, which closes the RPC. There's a
> race condition: if we try to get the job info before we notice the RPC is
> closed, SparkTask waits for {{hive.spark.client.future.timeout}} (default 60s)
> before it returns, even though we already know the job has failed.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)