Please ensure your hive-site.xml is pointing to a HiveServer2 endpoint rather than a HiveServer1 endpoint.
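For reference, a minimal sketch of the relevant hive-site.xml property; the host and port here are placeholders, so substitute the address of your own metastore/HiveServer2 service:

```xml
<!-- hive-site.xml: point the Hive client at the remote metastore service -->
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <!-- thrift://metastore-host:9083 is a placeholder; use your cluster's address -->
    <value>thrift://metastore-host:9083</value>
  </property>
</configuration>
```

This file needs to be on Spark's classpath (e.g. in the conf/ directory) for HiveContext to pick it up.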

From: JiajiaJing <jj.jing0...@gmail.com>
Sent: Thursday, July 17, 2014 8:48 PM
To: u...@spark.incubator.apache.org

Hello Spark Users,

I am new to Spark SQL and am trying to get the HiveFromSpark example working first.
However, I got the following error when running the HiveFromSpark.scala program.
May I get some help with this, please?

ERROR MESSAGE:

org.apache.thrift.TApplicationException: Invalid method name: 'get_table'
    at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table(ThriftHiveMetastore.java:936)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table(ThriftHiveMetastore.java:922)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:854)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
    at $Proxy9.getTable(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:950)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:905)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeCreateTable(SemanticAnalyzer.java:8999)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8313)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:441)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:342)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:977)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
    at org.apache.spark.sql.hive.HiveContext.runHive(HiveContext.scala:186)
    at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:160)
    at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:250)
    at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:247)
    at org.apache.spark.sql.hive.HiveContext.hiveql(HiveContext.scala:85)
    at org.apache.spark.sql.hive.HiveContext.hql(HiveContext.scala:90)
    at HiveFromSpark$.main(HiveFromSpark.scala:38)
    at HiveFromSpark.main(HiveFromSpark.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:292)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)



Thank you very much!

JJing



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Hive-From-Spark-tp10110.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.