[ https://issues.apache.org/jira/browse/HIVE-16459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rui Li updated HIVE-16459:
--------------------------
       Resolution: Fixed
    Fix Version/s: 2.4.0
                   3.0.0
                   2.3.0
           Status: Resolved  (was: Patch Available)

Pushed to master, branch-2, and branch-2.3. Thanks Xuefu for the review.

> Forward channelInactive to RpcDispatcher
> ----------------------------------------
>
>                 Key: HIVE-16459
>                 URL: https://issues.apache.org/jira/browse/HIVE-16459
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Rui Li
>            Assignee: Rui Li
>             Fix For: 2.3.0, 3.0.0, 2.4.0
>
>         Attachments: HIVE-16459.1.patch, HIVE-16459.2.patch, 
> HIVE-16459.2.patch
>
>
> In SparkTask, we try to get job info after the query finishes. Suppose the
> job finishes because the remote side crashed and therefore closed the RPC.
> There's a race condition: if we ask for job info before we notice the RPC is
> closed, SparkTask waits for {{hive.spark.client.future.timeout}} (default
> 60s) before returning, even though we already know the job has failed.
> Forwarding channelInactive to the RpcDispatcher lets pending calls fail fast
> instead; a sketch follows below.
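
For illustration only, here is a minimal sketch of the idea, not the actual
HIVE-16459 patch. It uses plain Netty API; the class and field names
(SketchDispatcher, SketchUpstreamHandler, pendingCalls) are hypothetical
stand-ins for the real spark-client classes. The point is that once
channelInactive reaches the dispatcher, it can fail every outstanding call
immediately instead of leaving callers to wait out their own timeout.

{code:java}
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical dispatcher tracking outstanding RPC calls. When the channel
 * goes inactive (e.g. the remote driver crashed), it fails all pending
 * futures right away so callers don't block until their timeout expires.
 */
class SketchDispatcher extends ChannelInboundHandlerAdapter {
  private final Map<Long, CompletableFuture<Object>> pendingCalls = new ConcurrentHashMap<>();

  CompletableFuture<Object> register(long callId) {
    CompletableFuture<Object> future = new CompletableFuture<>();
    pendingCalls.put(callId, future);
    return future;
  }

  @Override
  public void channelInactive(ChannelHandlerContext ctx) throws Exception {
    IOException cause = new IOException("Remote end closed the connection");
    for (CompletableFuture<Object> future : pendingCalls.values()) {
      future.completeExceptionally(cause);   // caller's get() fails immediately
    }
    pendingCalls.clear();
    super.channelInactive(ctx);              // keep propagating the event
  }
}

/**
 * Any handler sitting before the dispatcher in the pipeline must forward the
 * event; if it swallows channelInactive, the dispatcher never learns the
 * connection is gone and callers wait out hive.spark.client.future.timeout.
 */
class SketchUpstreamHandler extends ChannelInboundHandlerAdapter {
  @Override
  public void channelInactive(ChannelHandlerContext ctx) throws Exception {
    // ... local cleanup could go here ...
    ctx.fireChannelInactive();               // forward channelInactive downstream
  }
}
{code}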



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
