[
https://issues.apache.org/jira/browse/AMBARI-10859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14521353#comment-14521353
]
Hudson commented on AMBARI-10859:
---------------------------------
SUCCESS: Integrated in Ambari-trunk-Commit #2487 (See
[https://builds.apache.org/job/Ambari-trunk-Commit/2487/])
AMBARI-10859. hive-site.xml packaged under /etc/spark/conf is not correct (aonishuk)
(aonishuk: http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=24f2548ae0085d137b94bc1c59b4ae09c2806f07)
* ambari-server/src/main/resources/common-services/SPARK/1.2.0.2.2/package/scripts/params.py
* ambari-server/src/main/resources/common-services/SPARK/1.2.0.2.2/package/scripts/setup_spark.py
> hive-site.xml packaged under /etc/spark/conf is not correct
> -----------------------------------------------------------
>
> Key: AMBARI-10859
> URL: https://issues.apache.org/jira/browse/AMBARI-10859
> Project: Ambari
> Issue Type: Bug
> Reporter: Andrew Onischuk
> Assignee: Andrew Onischuk
> Fix For: 2.1.0
>
>
> Ambari 2.1.0 for Dal is putting many more properties into
> /etc/spark/conf/hive-site.xml than desired. This leads to unnecessary
> exceptions when trying to load a HiveContext in the Spark shell. Here is the
> error:
>
>
> 15/04/21 08:37:44 INFO ParseDriver: Parsing command: show tables
> 15/04/21 08:37:44 INFO ParseDriver: Parse Completed
> java.lang.RuntimeException: java.lang.NumberFormatException: For input string: "5s"
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346)
>     at org.apache.spark.sql.hive.HiveContext$$anonfun$4.apply(HiveContext.scala:237)
>     at org.apache.spark.sql.hive.HiveContext$$anonfun$4.apply(HiveContext.scala:233)
>     at scala.Option.orElse(Option.scala:257)
>     at org.apache.spark.sql.hive.HiveContext.x$3$lzycompute(HiveContext.scala:233)
>     at org.apache.spark.sql.hive.HiveContext.x$3(HiveContext.scala:231)
>     at org.apache.spark.sql.hive.HiveContext.hiveconf$lzycompute(HiveContext.scala:231)
>     at org.apache.spark.sql.hive.HiveContext.hiveconf(HiveContext.scala:231)
>     at org.apache.spark.sql.hive.HiveMetastoreCatalog.<init>(HiveMetastoreCatalog.scala:56)
>     at org.apache.spark.sql.hive.HiveContext$$anon$2.<init>(HiveContext.scala:255)
>     at org.apache.spark.sql.hive.HiveContext.catalog$lzycompute(HiveContext.scala:255)
>     at org.apache.spark.sql.hive.HiveContext.catalog(HiveContext.scala:255)
>     at org.apache.spark.sql.hive.HiveContext$$anon$4.<init>(HiveContext.scala:265)
>     ....
>
> In the previous Ambari release we added only a handful of properties (< 10);
> now we add 150+ (attached). We should revert to the old behavior.
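The old behavior described above can be sketched as a whitelist filter applied when generating Spark's copy of hive-site.xml. This is a minimal illustration only, not the actual change in setup_spark.py; the property names in the whitelist and the suffix-stripping rule are assumptions. The unit-suffix handling reflects the error in the trace: newer Hive configs allow time values like "5s", while the Hive client bundled with Spark parses such settings as plain integers and throws NumberFormatException.

```python
# Hypothetical sketch of the whitelist approach: copy only a handful of
# hive-site.xml properties into /etc/spark/conf/hive-site.xml instead of
# all 150+. The property names below are illustrative, not Ambari's list.
SPARK_HIVE_SITE_WHITELIST = frozenset([
    "hive.metastore.uris",
    "hive.metastore.warehouse.dir",
    "hive.metastore.client.connect.retry.delay",
    "hive.metastore.client.socket.timeout",
])

def filter_hive_site_for_spark(hive_site):
    """Return only whitelisted properties, with time-unit suffixes stripped.

    Values like "5s" make the older Hive client bundled with Spark throw
    java.lang.NumberFormatException, so emit only the numeric part for
    delay/timeout settings (assumed rule for this sketch).
    """
    filtered = {}
    for key, value in hive_site.items():
        if key not in SPARK_HIVE_SITE_WHITELIST:
            continue
        if key.endswith((".delay", ".timeout")) and value.endswith("s"):
            value = value[:-1]  # "5s" -> "5"
        filtered[key] = value
    return filtered
```

Under this sketch, a full hive-site dict containing `hive.server2.thrift.port` and a retry delay of `"5s"` would be reduced to just the metastore settings, with the delay rewritten to `"5"`.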
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)