[
https://issues.apache.org/jira/browse/HIVE-12538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15030451#comment-15030451
]
Nemon Lou commented on HIVE-12538:
----------------------------------
Actually, there are two bugs:
1. The isSparkConfigUpdated property of HiveConf.java is re-evaluated on every
value set from the client side (Beeline):
set spark.yarn.queue=QueueA; ---> isSparkConfigUpdated = true
set hive.execution.engine=spark; ---> isSparkConfigUpdated = false
2. SparkTask uses an operation-level conf object rather than the session-level
conf. That makes "conf.setSparkConfigUpdated(false);" in SparkUtilities
meaningless from the session's point of view.
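To make the two bugs concrete, here is a minimal, self-contained sketch (the class and method names are hypothetical stand-ins, not the actual Hive source): the flag is assigned, not OR-ed, on each set, so a later non-spark key resets it; and resetting the flag on an operation-level copy never reaches the session-level conf, so the session is never marked reusable.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for HiveConf; names are illustrative only.
class ConfSketch {
    private boolean sparkConfigUpdated = false;
    private final Map<String, String> props = new HashMap<>();

    void set(String name, String value) {
        // Bug 1: the flag is *assigned* from the current key, so
        // "set hive.execution.engine=spark" silently resets it to false,
        // while any "spark.*" key (e.g. spark.yarn.queue) sets it to true.
        sparkConfigUpdated = name.startsWith("spark.");
        props.put(name, value);
    }

    boolean isSparkConfigUpdated() { return sparkConfigUpdated; }
    void setSparkConfigUpdated(boolean v) { sparkConfigUpdated = v; }

    // Bug 2: each operation works on a copy of the session conf, so
    // clearing the flag on the copy (as SparkUtilities does) leaves the
    // session-level flag untouched.
    ConfSketch operationCopy() {
        ConfSketch copy = new ConfSketch();
        copy.props.putAll(this.props);
        copy.sparkConfigUpdated = this.sparkConfigUpdated;
        return copy;
    }
}

public class Hive12538Sketch {
    public static void main(String[] args) {
        ConfSketch session = new ConfSketch();
        session.set("spark.yarn.queue", "QueueA");
        System.out.println(session.isSparkConfigUpdated());  // true

        ConfSketch op = session.operationCopy();
        op.setSparkConfigUpdated(false);  // what SparkUtilities does, on the copy
        System.out.println(session.isSparkConfigUpdated());  // still true
    }
}
```

With this shape, every subsequent SparkTask sees the session conf still flagged as updated and tears down the SparkSession, which matches the repeated yarn applications described in the issue below.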
> After set spark related config, SparkSession never get reused
> -------------------------------------------------------------
>
> Key: HIVE-12538
> URL: https://issues.apache.org/jira/browse/HIVE-12538
> Project: Hive
> Issue Type: Bug
> Components: Spark
> Affects Versions: 1.3.0
> Reporter: Nemon Lou
>
> Hive on Spark, yarn-cluster mode.
> After setting "set spark.yarn.queue=QueueA;",
> run the query "select count(*) from test" 3 times and you will find 3
> different yarn applications.
> Two of the yarn applications are in FINISHED & SUCCEEDED state, and one is in
> RUNNING & UNDEFINED state, waiting for the next job.
> And if you submit one more "select count(*) from test", the third one will
> move to FINISHED & SUCCEEDED state and a new yarn application will start up.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)