[ https://issues.apache.org/jira/browse/HIVE-19053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502449#comment-16502449 ]
Aihua Xu edited comment on HIVE-19053 at 6/5/18 8:50 PM:
---------------------------------------------------------
That makes sense. Can you check patch-2?

was (Author: aihuaxu): That makes sense. Let me upload a new patch.

> RemoteSparkJobStatus#getSparkJobInfo treats all exceptions as timeout errors
> ----------------------------------------------------------------------------
>
>                 Key: HIVE-19053
>                 URL: https://issues.apache.org/jira/browse/HIVE-19053
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Aihua Xu
>            Priority: Major
>       Attachments: HIVE-19053.1.patch, HIVE-19053.2.patch
>
> {code}
> Future<SparkJobInfo> getJobInfo = sparkClient.run(
>     new GetJobInfoJob(jobHandle.getClientJobId(), sparkJobId));
> try {
>   return getJobInfo.get(sparkClientTimeoutInSeconds, TimeUnit.SECONDS);
> } catch (Exception e) {
>   LOG.warn("Failed to get job info.", e);
>   throw new HiveException(e, ErrorMsg.SPARK_GET_JOB_INFO_TIMEOUT,
>       Long.toString(sparkClientTimeoutInSeconds));
> }
> {code}
> It should only throw {{ErrorMsg.SPARK_GET_JOB_INFO_TIMEOUT}} if a
> {{TimeoutException}} is thrown. Other exceptions should be handled
> independently.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
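To illustrate the requested change, here is a minimal, self-contained sketch of the distinction the issue asks for: `Future.get(timeout, unit)` only signals a timeout via `TimeoutException`, while a job that fails surfaces as `ExecutionException`, so catching bare `Exception` conflates the two. The class name `GetJobInfoSketch`, the `ErrorKind` enum, and the `classify` helper are hypothetical stand-ins (the real patch would use Hive's `HiveException`/`ErrorMsg` types); this is not the actual HIVE-19053 patch.

```java
import java.util.concurrent.*;

public class GetJobInfoSketch {
    // Hypothetical stand-in for Hive's error codes; the real code maps the
    // timeout case to ErrorMsg.SPARK_GET_JOB_INFO_TIMEOUT and should report
    // other failures independently.
    enum ErrorKind { TIMEOUT, EXECUTION_FAILURE, INTERRUPTED }

    // Only a TimeoutException maps to the timeout error; other exceptions
    // are classified separately instead of being lumped in as timeouts.
    static ErrorKind classify(Future<String> getJobInfo, long timeoutSeconds) {
        try {
            getJobInfo.get(timeoutSeconds, TimeUnit.SECONDS);
            return null; // job info retrieved successfully
        } catch (TimeoutException e) {
            return ErrorKind.TIMEOUT;           // genuinely timed out
        } catch (ExecutionException e) {
            return ErrorKind.EXECUTION_FAILURE; // job itself failed
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return ErrorKind.INTERRUPTED;       // caller was interrupted
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // A job that fails outright must not be reported as a timeout.
        Future<String> failing = pool.submit(() -> {
            throw new IllegalStateException("job failed");
        });
        System.out.println(classify(failing, 5)); // EXECUTION_FAILURE

        // A job that does not complete within the timeout is a real timeout.
        Future<String> hanging = pool.submit(() -> {
            Thread.sleep(60_000);
            return "done";
        });
        System.out.println(classify(hanging, 1)); // TIMEOUT

        pool.shutdownNow();
    }
}
```

The usage in `main` shows the two paths the issue wants separated: the failing future reaches the `ExecutionException` branch, and only the hanging one is classified as a timeout.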