[
https://issues.apache.org/jira/browse/HIVE-12538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15041360#comment-15041360
]
Nemon Lou commented on HIVE-12538:
----------------------------------
[~jxiang] Please feel free to take it over if you plan to support more than
one spark session. :)
And here are my considerations:
I think it is bad practice to share one single Hive connection across multiple
client-side threads when each thread sets different session-level parameters
and then submits its queries separately.
Users should be encouraged to set session-level parameters serially.
For asynchronous queries submitted from a single thread over a single Hive
connection, there is no mechanism to guarantee that their query parameters do
not influence each other (unless these parameters are set via confOverlay?).
I haven't seen this use case so far.
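To make the concern concrete, here is a minimal sketch (the HiveServer2 host, credentials, queue names and table are placeholders, not from this issue) of two client threads sharing one Hive JDBC connection. Because SET is session-scoped rather than statement-scoped, either query may end up running with whichever spark.yarn.queue value happened to be set last:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SharedSessionPitfall {
    public static void main(String[] args) throws Exception {
        // Usually optional with JDBC 4 auto-loading, kept here for clarity.
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // One HiveServer2 connection == one Hive session (placeholder endpoint).
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hs2-host:10000/default", "user", "");

        // Both threads mutate the *same* session-level setting before querying,
        // so each query may actually run in the queue the other thread set.
        Thread t1 = new Thread(() -> runWithQueue(conn, "QueueA"));
        Thread t2 = new Thread(() -> runWithQueue(conn, "QueueB"));
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        conn.close();
    }

    private static void runWithQueue(Connection conn, String queue) {
        try (Statement stmt = conn.createStatement()) {
            // SET changes the session configuration, not just this statement.
            stmt.execute("set spark.yarn.queue=" + queue);
            try (ResultSet rs = stmt.executeQuery("select count(*) from test")) {
                while (rs.next()) {
                    System.out.println(queue + " -> " + rs.getLong(1));
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

A per-statement confOverlay, as far as I understand, travels with the ExecuteStatement request itself instead of mutating the shared session state, which is why I mentioned it above as the only way such parameters might be kept isolated.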
> After set spark related config, SparkSession never get reused
> -------------------------------------------------------------
>
> Key: HIVE-12538
> URL: https://issues.apache.org/jira/browse/HIVE-12538
> Project: Hive
> Issue Type: Bug
> Components: Spark
> Affects Versions: 1.3.0
> Reporter: Nemon Lou
> Assignee: Nemon Lou
> Attachments: HIVE-12538.1.patch, HIVE-12538.2.patch,
> HIVE-12538.3.patch, HIVE-12538.4.patch, HIVE-12538.patch
>
>
> Hive on Spark, yarn-cluster mode.
> After running "set spark.yarn.queue=QueueA;",
> run the query "select count(*) from test" 3 times and you will find 3
> different YARN applications.
> Two of the YARN applications are in FINISHED & SUCCEEDED state, and one is in
> RUNNING & UNDEFINED state, waiting for the next piece of work.
> And if you submit one more "select count(*) from test", the third one will go
> into FINISHED & SUCCEEDED state and a new YARN application will start up.
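For completeness, a reproduction sketch under the same assumptions (placeholder host, credentials and table; the setting and query are the ones from the description), driving the steps over JDBC:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Hive12538Repro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://hs2-host:10000/default", "user", "");
             Statement stmt = conn.createStatement()) {

            // Session-level Spark setting that triggers the reported behaviour.
            stmt.execute("set spark.yarn.queue=QueueA");

            // These runs should reuse one SparkSession (one YARN application),
            // but with the bug a fresh application is started per query.
            for (int i = 0; i < 3; i++) {
                try (ResultSet rs = stmt.executeQuery("select count(*) from test")) {
                    while (rs.next()) {
                        System.out.println("run " + (i + 1) + ": " + rs.getLong(1));
                    }
                }
            }
            // Check the ResourceManager UI (or `yarn application -list`)
            // to count the applications started for this session.
        }
    }
}

After the third run, per the description above, the ResourceManager should show two FINISHED & SUCCEEDED applications and one RUNNING & UNDEFINED application instead of a single reused one.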