[
https://issues.apache.org/jira/browse/HIVE-20273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16615124#comment-16615124
]
Sahil Takiar commented on HIVE-20273:
-------------------------------------
This patch makes the following fixes:
* Fixes the "double-nesting" issue by removing the second clause of the if
statement mentioned above
* Adds proper and consistent handling of interrupts to {{getWebUIURL}} and
{{getAppID}} in {{RemoteSparkJobStatus}}
* Adds several unit tests that validate that {{killJob}} is invoked whenever an
RPC call is interrupted
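For illustration, a minimal sketch of the interrupt-handling pattern described above. The names here ({{InterruptHandlingSketch}}, a local {{HiveException}}, a plain {{Future}}) are stand-ins, not the actual Hive classes from the patch:
{code:java}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class InterruptHandlingSketch {
  // Stand-in for org.apache.hadoop.hive.ql.metadata.HiveException.
  static class HiveException extends Exception {
    HiveException(String msg, Throwable cause) { super(msg, cause); }
  }

  String getWebUIURL(Future<String> rpcFuture) throws HiveException {
    try {
      // Block on the remote RPC result with a bounded wait.
      return rpcFuture.get(60, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
      // Restore the interrupt flag so callers can observe it as well.
      Thread.currentThread().interrupt();
      // Wrap exactly once, so a one-level getCause() check still matches.
      throw new HiveException("Interrupted while fetching the Web UI URL", e);
    } catch (ExecutionException | TimeoutException e) {
      throw new HiveException("RPC for the Web UI URL failed", e);
    }
  }
}
{code}
The key point is that the {{InterruptedException}} is wrapped exactly once, so the monitor's one-level {{getCause()}} check can still recognize it and invoke {{killJob}}.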
> Spark jobs aren't cancelled if getSparkJobInfo or getSparkStagesInfo
> --------------------------------------------------------------------
>
> Key: HIVE-20273
> URL: https://issues.apache.org/jira/browse/HIVE-20273
> Project: Hive
> Issue Type: Sub-task
> Components: Spark
> Reporter: Sahil Takiar
> Assignee: Sahil Takiar
> Priority: Major
> Attachments: HIVE-20273.1.patch
>
>
> HIVE-19053 and HIVE-19733 added handling of {{InterruptedException}} to
> {{RemoteSparkJobStatus#getSparkJobInfo}} and
> {{RemoteSparkJobStatus#getSparkStagesInfo}}. Now, these methods catch
> {{InterruptedException}} and wrap the exception in a {{HiveException}} and
> then throw the new {{HiveException}}.
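> For illustration, the wrap-and-rethrow shape those methods use (a sketch
> under assumed names, not the exact Hive source):
> {code:java}
> // Sketch of the pattern inside getSparkJobInfo/getSparkStagesInfo:
> try {
>   return future.get(timeoutInSeconds, TimeUnit.SECONDS);  // remote RPC result
> } catch (InterruptedException e) {
>   // The InterruptedException becomes the cause of the new HiveException.
>   throw new HiveException("Spark job info RPC interrupted", e);
> }
> {code}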
> This new {{HiveException}} is then caught in
> {{RemoteSparkJobMonitor#startMonitor}} which then looks for exceptions that
> match the condition:
> {code:java}
> if (e instanceof InterruptedException ||
>     (e instanceof HiveException && e.getCause() instanceof InterruptedException))
> {code}
> If this condition is met (here it is), the exception is wrapped in another
> {{HiveException}} and re-thrown. So the final exception is a
> {{HiveException}} that wraps a {{HiveException}} that wraps an
> {{InterruptedException}}.
> The double nesting of {{HiveException}} breaks the logic in
> {{SparkTask#setSparkException}}, so {{killJob}} is never triggered.
> As a result, interrupted Hive queries do not kill their corresponding Spark
> jobs.
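> To see why the double wrap defeats a one-level cause check, a tiny
> standalone demo (hypothetical; the real {{SparkTask#setSparkException}}
> logic differs in detail):
> {code:java}
> public class DoubleNestDemo {
>   static class HiveException extends Exception {
>     HiveException(Throwable cause) { super(cause); }
>   }
>
>   public static void main(String[] args) {
>     Exception e =
>         new HiveException(new HiveException(new InterruptedException()));
>     // A one-level check sees a HiveException as the cause, not the
>     // InterruptedException, so the kill path is never taken.
>     boolean recognized = e instanceof InterruptedException
>         || (e instanceof HiveException
>             && e.getCause() instanceof InterruptedException);
>     System.out.println(recognized);  // prints false
>   }
> }
> {code}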