[ https://issues.apache.org/jira/browse/HIVE-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14516272#comment-14516272 ]

Chao Sun commented on HIVE-10476:
---------------------------------

OK, how about this:

{code}
...
  sparkMemoryAndCores = sparkSession.getMemoryAndCores();
} catch (HiveException e) {
  throw new SemanticException("Failed to get a spark session", e);
} catch (Exception e) {
  LOG.warn("Failed to get spark memory/core info", e);
}
...
{code}

I think it should still continue if it fails to get the mem/core info.
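To make the intent of the two catch branches concrete, here is a minimal, self-contained sketch of that error-handling policy. {{HiveException}}, {{SemanticException}}, and the session class below are hypothetical stand-ins for the real Hive classes, kept only so the example compiles and runs on its own:

{code}
// Hypothetical stand-ins for org.apache.hadoop.hive.ql.metadata.HiveException
// and org.apache.hadoop.hive.ql.parse.SemanticException.
class HiveException extends Exception {
    HiveException(String msg) { super(msg); }
}

class SemanticException extends Exception {
    SemanticException(String msg) { super(msg); }
}

// Hypothetical stand-in for a Spark session handle.
class SparkSessionStub {
    private final boolean failInit;
    private final boolean failInfo;

    SparkSessionStub(boolean failInit, boolean failInfo) {
        this.failInit = failInit;
        this.failInfo = failInfo;
    }

    // Stands in for opening/initializing the session.
    void open() throws HiveException {
        if (failInit) throw new HiveException("cannot launch Yarn application");
    }

    // Stands in for sparkSession.getMemoryAndCores().
    int[] getMemoryAndCores() {
        if (failInfo) throw new RuntimeException("cluster info unavailable");
        return new int[] {4096, 8};
    }
}

public class ReducerParallelismSketch {

    // Fail the query on session-init errors; merely warn and continue
    // when only the memory/core lookup fails.
    static int[] fetchMemoryAndCores(SparkSessionStub session) throws SemanticException {
        try {
            session.open();
            return session.getMemoryAndCores();
        } catch (HiveException e) {
            // Session could not be initialized: abort compilation early
            // instead of letting the user wait for a second timeout.
            throw new SemanticException("Failed to get a spark session: " + e);
        } catch (Exception e) {
            // Mem/core info is only a parallelism hint; log and continue.
            System.err.println("WARN: Failed to get spark memory/core info: " + e);
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        // Healthy session: info is returned and compilation proceeds.
        System.out.println(fetchMemoryAndCores(new SparkSessionStub(false, false)) != null);

        // Info lookup fails: query continues without the hint.
        System.out.println(fetchMemoryAndCores(new SparkSessionStub(false, true)) == null);

        // Session init fails: query fails fast with SemanticException.
        try {
            fetchMemoryAndCores(new SparkSessionStub(true, false));
            System.out.println(false);
        } catch (SemanticException e) {
            System.out.println(true);
        }
    }
}
{code}

The design point is simply that the two failures are not symmetric: a broken session would fail the job later anyway, so failing fast saves a timeout, while missing mem/core info only degrades the parallelism estimate.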


> Hive query should fail when it fails to initialize a session in 
> SetSparkReducerParallelism [Spark Branch]
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-10476
>                 URL: https://issues.apache.org/jira/browse/HIVE-10476
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>    Affects Versions: spark-branch
>            Reporter: Chao Sun
>            Assignee: Chao Sun
>            Priority: Minor
>         Attachments: HIVE-10476.1-spark.patch
>
>
> Currently, for a Hive query, HoS needs to get a session twice: once in 
> SparkSetReducerParallelism, and again when submitting the actual job.
> The issue is that sometimes there's a problem when launching a Yarn application 
> (e.g., the user doesn't have permission), and then the user has to wait through 
> two timeouts, because both session initializations will fail. This has turned 
> out to happen frequently.
> This JIRA proposes to fail the query in SparkSetReducerParallelism, when it 
> cannot initialize the session.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
