[ https://issues.apache.org/jira/browse/HIVE-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14274528#comment-14274528 ]

Xuefu Zhang commented on HIVE-9258:
-----------------------------------

[~jxiang], thanks for looking into this. Looking at the code, I see that it 
uses the SparkSession instance, which is indeed shared with regular queries. 
Since this is confirmed, please close this as "not a problem". 

BTW, I noticed that we have a local cache for sparkMemoryAndCores, as in:
{code}
        if (sparkMemoryAndCores == null) {
{code}
This means we would never update the value for the entire user session. 
However, the value can change dynamically. Do you think we should avoid 
caching it?
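
To illustrate the concern, here is a minimal sketch (the class and method 
names are hypothetical stand-ins, not Hive's actual API) contrasting the 
current session-lifetime cache with an uncached lookup:
{code}
// Hypothetical sketch: the cached form pins sparkMemoryAndCores for the
// whole user session, even if the cluster is resized later.
public class ReducerParallelismSketch {

  // Stand-in for the cached (memoryPerExecutor, totalCores) values.
  private long[] sparkMemoryAndCores;

  // Current behavior: fetch once, then reuse for the entire session.
  public long[] getCached() {
    if (sparkMemoryAndCores == null) {
      sparkMemoryAndCores = fetchFromCluster();
    }
    return sparkMemoryAndCores;
  }

  // Alternative raised above: query the cluster every time, so dynamic
  // changes (e.g., executors added or removed) are observed.
  public long[] getFresh() {
    return fetchFromCluster();
  }

  // Stand-in for the real call that asks the Spark cluster for resources.
  private long[] fetchFromCluster() {
    return new long[] { 512L * 1024 * 1024, 8L };
  }
}
{code}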

> Explain query shouldn't launch a Spark application [Spark Branch]
> -----------------------------------------------------------------
>
>                 Key: HIVE-9258
>                 URL: https://issues.apache.org/jira/browse/HIVE-9258
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Xuefu Zhang
>            Assignee: Jimmy Xiang
>
> Currently for Hive on Spark, the query plan includes the number of reducers, 
> which is determined in part by the Spark cluster. Thus, an explain query 
> needs to launch a Spark application (Spark remote context), which is costly. 
> To make things worse, the application is discarded right away.
> Ideally, we shouldn't launch a Spark application even for an explain query.
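
A hedged sketch of the direction the description suggests (names are 
hypothetical, not Hive's actual API): defer any cluster lookup until the plan 
will actually run, and fall back to a configured default when merely 
explaining.
{code}
// Hypothetical sketch: avoid launching a Spark remote context for explain.
public class ExplainPlanSketch {

  // Assumed default parallelism used only while explaining.
  private static final int DEFAULT_PARALLELISM = 1;

  int resolveReducers(boolean isExplain) {
    if (isExplain) {
      // An explain only needs a plausible plan; skip starting a costly
      // Spark application just to read cluster resources.
      return DEFAULT_PARALLELISM;
    }
    return computeFromClusterResources();
  }

  // Stand-in for the costly path that launches/uses the Spark application.
  private int computeFromClusterResources() {
    return 8;
  }
}
{code}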


