Hi, I have the following issue trying to read an existing Phoenix table:
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.phoenix.spark._
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.HColumnDescriptor
import org.apache.hadoop.hbase.HTableDescriptor
import org.apache.hadoop.hbase.MasterNotRunningException
import org.apache.hadoop.hbase.client.HBaseAdmin
import org.apache.hadoop.hbase.client.HTable

val HiveContext = new org.apache.spark.sql.hive.HiveContext(sc)

// Read the Phoenix table that already exists on top of the HBase table
scala> val df = HiveContext.load(
     |   "org.apache.phoenix.spark",
     |   Map("table" -> "tsco", "zkUrl" -> "rhes564:2181")
     | )
warning: there was one deprecation warning; re-run with -deprecation for details
java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.HBaseAdmin.<init>(Lorg/apache/hadoop/hbase/client/HConnection;)V
  at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAdmin(ConnectionQueryServicesImpl.java:2902)
  at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2328)
  at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2296)
  at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
  at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2296)
  at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:231)
  at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:144)
  at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)
  at java.sql.DriverManager.getConnection(DriverManager.java:664)
  at java.sql.DriverManager.getConnection(DriverManager.java:208)
  at org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:98)
  at org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:57)
  at org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:45)
  at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:277)
  at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:106)
  at org.apache.phoenix.spark.PhoenixRelation.schema(PhoenixRelation.scala:59)
  at org.apache.spark.sql.execution.datasources.LogicalRelation.<init>(LogicalRelation.scala:40)
  at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:382)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:143)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:122)
  at org.apache.spark.sql.SQLContext.load(SQLContext.scala:958)
  ... 53 elided

Any ideas appreciated, as I have run out of ideas about why it cannot find the jar file.

Thanks

Dr Mich Talebzadeh

LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
http://talebzadehmich.wordpress.com
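
P.S. The deprecation warning comes from SQLContext.load, which is deprecated in Spark 2.x. For reference, a minimal sketch of the same read through the non-deprecated DataFrameReader API (assuming the same "tsco" table and zkUrl, and that `spark` is the SparkSession in the Spark 2 shell) would look roughly like this:

// Sketch only: same Phoenix read via spark.read instead of the deprecated SQLContext.load.
// Assumes the phoenix-spark jar is on the driver/executor classpath, as above.
val df = spark.read
  .format("org.apache.phoenix.spark")
  .options(Map("table" -> "tsco", "zkUrl" -> "rhes564:2181"))
  .load()

df.printSchema()

As the stack trace shows, DataFrameReader.load ends up in the same Phoenix/HBase code path, so this form presumably hits the same NoSuchMethodError; it only avoids the deprecated call.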