[
https://issues.apache.org/jira/browse/HIVE-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195943#comment-16195943
]
Sahil Takiar commented on HIVE-15860:
-------------------------------------
[~lirui], is there a real perf issue with running
{{Preconditions.checkState(sparkJobStatus.isRemoteActive())}} during each
iteration of the {{while}} loop? There shouldn't be much overhead: the check
just reads the value of a {{boolean}} and whether the connection is open or
closed (which is already done for every RPC call anyway).
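To make the idea concrete, here is a minimal, self-contained sketch of the per-iteration check. The {{JobStatus}} interface is just a stand-in for {{RemoteSparkJobStatus}}, and the names and messages are illustrative rather than the exact Hive source:
{code:java}
import com.google.common.base.Preconditions;

// Illustrative sketch only -- not the actual RemoteSparkJobMonitor source.
public class MonitorLoopSketch {

  /** Stand-in for RemoteSparkJobStatus; only what the sketch needs. */
  interface JobStatus {
    boolean isRemoteActive();    // cheap: reads a flag and the channel state
    String getRemoteJobState();  // e.g. "QUEUED", "SENT", "STARTED", ...
  }

  static void monitor(JobStatus status) throws InterruptedException {
    while (true) {
      // The proposed check: fail the monitor as soon as the connection
      // to the remote driver is gone, instead of spinning until a timeout.
      Preconditions.checkState(status.isRemoteActive(),
          "Connection to remote Spark driver was lost");

      String state = status.getRemoteJobState();
      if ("SUCCEEDED".equals(state) || "FAILED".equals(state)) {
        return;
      }
      Thread.sleep(1000); // poll interval, illustrative only
    }
  }
}
{code}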
I think the benefit is that it is safer, in case there are other edge cases we
aren't considering, and it lets us fail faster when the
{{RemoteSparkJobMonitor}} is in the {{QUEUED}} / {{SENT}} state. If it's stuck
in that state, it won't fail until it hits the monitor timeout (one minute by
default), even though we already know the connection has died. The error
message that gets thrown is also a little imprecise: it says there could be
queue contention, even though we know the real reason is that the connection
was lost.
What do you think? If you agree, I can make the change in a follow-up JIRA.
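To illustrate the fail-fast point, a rough sketch of the {{QUEUED}} / {{SENT}} handling (reusing the {{JobStatus}} stand-in from the sketch above; names and messages are made up, not the exact code). Today the branch only gives up once the monitor timeout elapses and blames possible queue contention; checking the connection first produces an accurate error as soon as the driver is known to be gone:
{code:java}
// Reuses the JobStatus stand-in from the sketch above; names are illustrative.
static void waitForSubmission(JobStatus status) throws InterruptedException {
  long startTime = System.currentTimeMillis();
  long monitorTimeoutMs = 60_000L; // default monitor timeout: 1 minute

  while (true) {
    // Proposed: detect a dead connection right away, with a precise message.
    if (!status.isRemoteActive()) {
      throw new IllegalStateException("Connection to remote Spark driver was lost");
    }

    String state = status.getRemoteJobState();
    if (!"QUEUED".equals(state) && !"SENT".equals(state)) {
      return; // job has started (or finished); handled elsewhere
    }

    // Current behavior: only give up once the timeout elapses, then blame
    // possible queue contention even if the real cause is a lost connection.
    if (System.currentTimeMillis() - startTime > monitorTimeoutMs) {
      throw new IllegalStateException(
          "Job hasn't been submitted after 60s; possible cluster queue contention");
    }
    Thread.sleep(1000);
  }
}
{code}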
> RemoteSparkJobMonitor may hang when RemoteDriver exits abnormally
> -----------------------------------------------------------------
>
> Key: HIVE-15860
> URL: https://issues.apache.org/jira/browse/HIVE-15860
> Project: Hive
> Issue Type: Bug
> Reporter: Rui Li
> Assignee: Rui Li
> Fix For: 2.3.0
>
> Attachments: HIVE-15860.1.patch, HIVE-15860.2.patch,
> HIVE-15860.2.patch
>
>
> It happens when RemoteDriver crashes between {{JobStarted}} and
> {{JobSubmitted}}, e.g. when it is killed by {{kill -9}}. RemoteSparkJobMonitor
> considers the job started, but it can't get the job info because it never
> received the JobId, so the monitor loops forever.