I am using this code example
http://www.vidyasource.com/blog/Programming/Scala/Java/Data/Hadoop/Analytics/2014/01/25/lighting-a-spark-with-hbase
to read an HBase table with Spark. The only change I made is setting
hbase.zookeeper.quorum through code, since it is not being picked up from
hbase-site.xml.
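For context, the relevant part of my code looks like this (quorum hosts and table name are placeholders for my environment, following the pattern from the blog post above):

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("HBaseRead"))

val conf = HBaseConfiguration.create()
// set in code because hbase-site.xml is not being picked up
conf.set("hbase.zookeeper.quorum", "zk-host1,zk-host2,zk-host3") // placeholder hosts
conf.set(TableInputFormat.INPUT_TABLE, "my_table")               // placeholder table name

// the failure occurs when Spark computes partitions for this RDD
val hbaseRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
  classOf[ImmutableBytesWritable], classOf[Result])
hbaseRDD.take(10)
```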
Spark 1.5.3
HBase 0.98.0
Facing this error -
16/01/20 12:56:59 WARN client.ConnectionManager$HConnectionImplementation: Encountered problems when prefetch hbase:meta table:
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=3, exceptions:
Wed Jan 20 12:56:58 GMT-07:00 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller@111585e, java.lang.IllegalAccessError: class com.google.protobuf.HBaseZeroCopyByteString cannot access its superclass com.google.protobuf.LiteralByteString
Wed Jan 20 12:56:58 GMT-07:00 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller@111585e, java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
Wed Jan 20 12:56:59 GMT-07:00 2016, org.apache.hadoop.hbase.client.RpcRetryingCaller@111585e, java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
        at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:751)
        at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:147)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.prefetchRegionCache(ConnectionManager.java:1215)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1280)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1128)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1111)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1070)
        at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:347)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:201)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:159)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormat.setConf(TableInputFormat.java:101)
        at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:111)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
        at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1281)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
        at org.apache.spark.rdd.RDD.take(RDD.scala:1276)
I tried adding the hbase-protocol jar to spark-defaults.conf and to the
driver classpath, as suggested here
http://apache-spark-user-list.1001560.n3.nabble.com/java-lang-IllegalAccessError-class-com-google-protobuf-HBaseZeroCopyByteString-cannot-access-its-supg-td24303.html
but with no success.
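Concretely, these are the kinds of entries I added to spark-defaults.conf (the jar path is a placeholder for my actual install location):

```
# spark-defaults.conf: put the HBase protocol jar first on both classpaths
# so its protobuf classes load from a single classloader
spark.driver.extraClassPath    /path/to/hbase-protocol-0.98.0.jar
spark.executor.extraClassPath  /path/to/hbase-protocol-0.98.0.jar
```

I also tried the equivalent spark-submit flag (--driver-class-path /path/to/hbase-protocol-0.98.0.jar), with the same result.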
Any suggestions?
--aj