[ https://issues.apache.org/jira/browse/SPARK-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16930078#comment-16930078 ]

JP Bordenave commented on SPARK-13446:
--------------------------------------

I am hitting the HIVE_STATS_JDBC_TIMEOUT issue with Hive 2.3.6, Hadoop 2.7.7, Spark 2.4.4, and MySQL 5.7.27.

Hive itself works fine with the Spark execution engine and Hadoop.

But when I open spark-shell and run

spark.sql("show databases").show
{noformat}
java.lang.NoSuchFieldError: HIVE_STATS_JDBC_TIMEOUT
  at org.apache.spark.sql.hive.HiveUtils$.formatTimeVarsForHiveClient(HiveUtils.scala:204)
  at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:285)
  at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
  at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:215)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:215)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:215)
  at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
  at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:214)
  at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
  at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
  at org.apache.spark.sql.hive.HiveSessionStateBuilder.org$apache$spark$sql$hive$HiveSessionStateBuilder$$externalCatalog(HiveSessionStateBuilder.scala:39)
  at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$1.apply(HiveSessionStateBuilder.scala:54)
  at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$1.apply(HiveSessionStateBuilder.scala:54)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog$lzycompute(SessionCatalog.scala:90)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog(SessionCatalog.scala:90)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.listDatabases(SessionCatalog.scala:247)
  at org.apache.spark.sql.execution.command.ShowDatabasesCommand$$anonfun$2.apply(databases.scala:44)
  at org.apache.spark.sql.execution.command.ShowDatabasesCommand$$anonfun$2.apply(databases.scala:44)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.execution.command.ShowDatabasesCommand.run(databases.scala:44)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
  at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
  at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
  at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
  at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
{noformat}
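
For what it's worth, this NoSuchFieldError usually points to a jar conflict rather than a metastore problem: the HIVE_STATS_JDBC_TIMEOUT field was removed from HiveConf in Hive 2.x, while Spark 2.4.4's HiveUtils is compiled against its bundled Hive 1.2.1 client, so the lookup fails as soon as Hive 2.x jars shadow the bundled ones on Spark's driver classpath. A quick check (a sketch; /opt/spark is an assumed install path):

{noformat}
# Any hive-*-2.x jar listed here (e.g. hive-exec-2.3.6.jar) shadows
# Spark's bundled Hive 1.2.1 client and triggers the NoSuchFieldError.
ls /opt/spark/jars | grep -i hive
{noformat}
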
Here is the end of my hive-site.xml:

{noformat}
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
  <description>Use Spark as the default execution engine</description>
</property>
<property>
  <name>spark.executor.memory</name>
  <value>8g</value>
  <description>Spark executor memory</description>
</property>
<property>
  <name>spark.master</name>
  <value>spark://192.168.0.30:7077</value>
</property>
<property>
  <name>spark.eventLog.enabled</name>
  <value>true</value>
</property>
<property>
  <name>spark.eventLog.dir</name>
  <value>/tmp</value>
</property>
<property>
  <name>spark.serializer</name>
  <value>org.apache.spark.serializer.KryoSerializer</value>
</property>
<property>
  <name>spark.yarn.jars</name>
  <value>hdfs://192.168.0.30:54310/spark-jars/*</value>
</property>
<property>
  <name>spark.sql.hive.metastore.jars</name>
  <value>/opt/spark/conf/hive-site.xml,/opt/spark/jars/*</value>
</property>
<property>
  <name>spark.sql.hive.metastore.version</name>
  <value>2.3.6</value>
</property>
<property>
  <name>hive.stats.dbclass</name>
  <value>fs</value>
</property>
<property>
  <name>hive.stats.fetch.column.stats</name>
  <value>false</value>
</property>
<property>
  <name>hive.stats.jdbc.timeout</name>
  <value>0</value>
</property>
{noformat}
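
One thing stands out in the config above: spark.sql.hive.metastore.jars points at /opt/spark/jars/*, which are Spark's own Hive 1.2.1-based jars, so the 2.3.6 metastore version setting cannot take effect. A hedged sketch of launching spark-shell against the actual Hive jars instead, assuming a hypothetical /opt/hive install (the classpath must also contain a Hadoop client matching your cluster), and passing the keys via --conf to leave no doubt that Spark sees them:

{noformat}
# Sketch only; adjust /opt/hive/lib to wherever the Hive 2.3.6 jars live.
spark-shell \
  --conf spark.sql.hive.metastore.version=2.3.6 \
  --conf spark.sql.hive.metastore.jars=/opt/hive/lib/*
{noformat}

After that, spark.sql("show databases").show should be served by an isolated 2.3.6 metastore client instead of the built-in 1.2.1 one (if this Spark build does not accept the exact string 2.3.6, the nearest 2.3.x value it supports may be needed).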

> Spark need to support reading data from Hive 2.0.0 metastore
> ------------------------------------------------------------
>
>                 Key: SPARK-13446
>                 URL: https://issues.apache.org/jira/browse/SPARK-13446
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.6.0
>            Reporter: Lifeng Wang
>            Assignee: Xiao Li
>            Priority: Major
>             Fix For: 2.2.0
>
>
> Spark provides the HiveContext class to read data from the Hive metastore 
> directly, but it only supports Hive 1.2.1 and older. Since Hive 2.0.0 has 
> been released, it's better to upgrade so that Hive 2.0.0 is supported.
> {noformat}
> 16/02/23 02:35:02 INFO metastore: Trying to connect to metastore with URI thrift://hsw-node13:9083
> 16/02/23 02:35:02 INFO metastore: Opened a connection to metastore, current connections: 1
> 16/02/23 02:35:02 INFO metastore: Connected to metastore.
> Exception in thread "main" java.lang.NoSuchFieldError: HIVE_STATS_JDBC_TIMEOUT
>         at org.apache.spark.sql.hive.HiveContext.configure(HiveContext.scala:473)
>         at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:192)
>         at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:185)
>         at org.apache.spark.sql.hive.HiveContext$$anon$1.<init>(HiveContext.scala:422)
>         at org.apache.spark.sql.hive.HiveContext.catalog$lzycompute(HiveContext.scala:422)
>         at org.apache.spark.sql.hive.HiveContext.catalog(HiveContext.scala:421)
>         at org.apache.spark.sql.hive.HiveContext.catalog(HiveContext.scala:72)
>         at org.apache.spark.sql.SQLContext.table(SQLContext.scala:739)
>         at org.apache.spark.sql.SQLContext.table(SQLContext.scala:735)
> {noformat}
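
For context, a minimal Spark 1.6-era sketch of the path in the quoted trace (a HiveContext backed by the metastore, then SQLContext.table); the app name and the table "src" are illustrative only:

{noformat}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("metastore-check"))
// Connects to the metastore configured in hive-site.xml on the classpath.
val hiveContext = new HiveContext(sc)
// Fails with NoSuchFieldError: HIVE_STATS_JDBC_TIMEOUT when the metastore
// jars are Hive 2.x and Spark predates the fix shipped in 2.2.0.
hiveContext.table("src").show()
{noformat}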


