[
https://issues.apache.org/jira/browse/HIVE-10291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xuefu Zhang updated HIVE-10291:
-------------------------------
Fix Version/s: (was: spark-branch)
1.2.0
> Hive on Spark job configuration needs to be logged [Spark Branch]
> -----------------------------------------------------------------
>
> Key: HIVE-10291
> URL: https://issues.apache.org/jira/browse/HIVE-10291
> Project: Hive
> Issue Type: Sub-task
> Components: Spark
> Affects Versions: 1.1.0
> Reporter: Szehon Ho
> Assignee: Szehon Ho
> Fix For: 1.2.0
>
> Attachments: HIVE-10291-spark.patch, HIVE-10291.2-spark.patch,
> HIVE-10291.3-spark.patch
>
>
> In a Hive on MR job, all the job properties are put into the JobConf, which
> can then be viewed via the MR2 HistoryServer's Job UI.
> However, in Hive on Spark we submit a long-lived application. Hence, we
> put only the properties relevant to application submission (Spark and
> YARN properties) into the SparkConf, and only these properties are
> viewable through the Spark HistoryServer's Application UI.
> It is the Hive application code (RemoteDriver, aka RemoteSparkContext)
> that is responsible for serializing and deserializing the job.xml for
> each job (i.e., query) within the application. Thus, for supportability
> we also need to provide an equivalent mechanism to print the job.xml per
> job, as sketched below.
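> For illustration only, a minimal sketch of the kind of per-job
> configuration dump being requested. The JobConfLogger class, its method
> name, and the use of slf4j are assumptions made for this example; they
> are not taken from the attached patches:
> {code}
> import java.util.Map;
> import org.apache.hadoop.hive.conf.HiveConf;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> public class JobConfLogger {
>   private static final Logger LOG = LoggerFactory.getLogger(JobConfLogger.class);
>
>   // Log every property of the per-job configuration so it appears in the
>   // RemoteDriver log, analogous to the job.xml visible in the MR2
>   // HistoryServer Job UI.
>   public static void logJobConf(String jobId, HiveConf conf) {
>     StringBuilder sb = new StringBuilder("Configuration for job ").append(jobId).append(":\n");
>     for (Map.Entry<String, String> entry : conf) {
>       sb.append(entry.getKey()).append('=').append(entry.getValue()).append('\n');
>     }
>     LOG.info(sb.toString());
>   }
> }
> {code}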
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)