Can you verify whether the hbase release you're using has the following fix?
HBASE-11118 non environment variable solution for "IllegalAccessError"

Cheers

On Tue, Apr 28, 2015 at 10:47 PM, Tridib Samanta <tridib.sama...@live.com>
wrote:

> I turned on TRACE and I see a lot of the following exception:
>
> java.lang.IllegalAccessError: com/google/protobuf/ZeroCopyLiteralByteString
>  at
> org.apache.hadoop.hbase.protobuf.RequestConverter.buildRegionSpecifier(RequestConverter.java:897)
>  at
> org.apache.hadoop.hbase.protobuf.RequestConverter.buildGetRowOrBeforeRequest(RequestConverter.java:131)
>  at
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1402)
>  at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:701)
>  at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:699)
>  at
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
>  at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:705)
>  at
> org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:144)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1102)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1162)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1054)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1011)
>  at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:326)
>  at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:192)
>
> Thanks
> Tridib
>
> ------------------------------
> From: d...@ocirs.com
> Date: Tue, 28 Apr 2015 22:24:39 -0700
> Subject: Re: HBase HTable constructor hangs
> To: tridib.sama...@live.com
>
> In that case, something else is failing; HBase only looks like it hangs
> because the hbase client timeout or retry count is too high.
>
> Try setting the following conf; hbase will then hang for a few minutes at
> most and return a helpful error message.
>
> hbaseConf.set(HConstants.HBASE_CLIENT_RETRIES_NUMBER, "2")
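A fuller sketch of the fail-fast configuration (assuming the HBase 0.98 Java client on the classpath; exact defaults vary by release, and `"my_table"` is a placeholder name):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.HTable;

// Start from hbase-site.xml found on the classpath, then fail fast instead
// of retrying for many minutes (the default retry count is in the dozens).
Configuration hbaseConf = HBaseConfiguration.create();
hbaseConf.set(HConstants.HBASE_CLIENT_RETRIES_NUMBER, "2");
hbaseConf.set(HConstants.HBASE_CLIENT_PAUSE, "1000"); // ms between retries

// The constructor should now surface the underlying error quickly
// instead of appearing to hang.
HTable table = new HTable(hbaseConf, "my_table"); // hypothetical table name
```

With only two retries and a one-second pause, a misconfigured client fails in seconds rather than silently retrying.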
>
>
>
>
> --
> Dean Chen
>
> On Tue, Apr 28, 2015 at 10:18 PM, Tridib Samanta <tridib.sama...@live.com>
> wrote:
>
> Nope, my hbase is unsecured.
>
> ------------------------------
> From: d...@ocirs.com
> Date: Tue, 28 Apr 2015 22:09:51 -0700
> Subject: Re: HBase HTable constructor hangs
> To: tridib.sama...@live.com
>
>
> Hi Tridib,
>
> Are you running this on a secure Hadoop/HBase cluster? I ran into a
> similar issue where the HBase client could successfully connect in local
> mode and in the yarn-client driver, but not on remote executors. The
> problem is that Spark doesn't distribute the hbase auth key; see the
> following Jira ticket and PR.
>
> https://issues.apache.org/jira/browse/SPARK-6918
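For reference, a common workaround on secure clusters before SPARK-6918 landed was to obtain the HBase delegation token on the driver while a Kerberos TGT is available. A hedged sketch only: the `TokenUtil` API moved between 0.98.x releases, so treat this as illustrative rather than a definitive recipe.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.security.token.TokenUtil;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

// On the driver, with a valid Kerberos login:
Configuration conf = HBaseConfiguration.create();
Token<?> token = TokenUtil.obtainToken(conf);          // HBase auth token
UserGroupInformation.getCurrentUser().addToken(token); // attach to the UGI
```

This only helps if the credentials are then shipped to the executors, which is exactly what the SPARK-6918 change automates.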
>
>
> --
> Dean Chen
>
> On Tue, Apr 28, 2015 at 9:34 PM, Tridib Samanta <tridib.sama...@live.com>
> wrote:
>
> I am not 100% sure how it's picking up the configuration. I copied
> hbase-site.xml to the hdfs/spark cluster (single machine). I also included
> hbase-site.xml in the spark-job jar file, which also contains
> yarn-site.xml, mapred-site.xml, and core-site.xml.
>
> One interesting thing is, when I run the spark-job jar standalone and
> execute the HBase client from a main method, it works fine. The same client
> is unable to connect (hangs) when the jar is distributed via spark.
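One way to narrow this down is to check what the executor JVMs actually see. A hypothetical diagnostic, run inside the partition function (e.g. in `foreachPartition`), logs where `hbase-site.xml` resolves from on each executor:

```java
// Report which hbase-site.xml (if any) is visible on this JVM's classpath.
java.net.URL url = Thread.currentThread().getContextClassLoader()
        .getResource("hbase-site.xml");
System.out.println("hbase-site.xml resolved to: "
        + (url == null ? "NOT FOUND on classpath" : url.toString()));
```

If the printed location is an entry inside the application jar rather than the cluster's conf directory, the executors may be reading a stale or wrong configuration, which would explain standalone working while the distributed run hangs.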
>
> Thanks
> Tridib
>
> ------------------------------
> Date: Tue, 28 Apr 2015 21:25:41 -0700
>
> Subject: Re: HBase HTable constructor hangs
> From: yuzhih...@gmail.com
> To: tridib.sama...@live.com
> CC: user@spark.apache.org
>
> How did you distribute hbase-site.xml to the nodes ?
>
> Looks like HConnectionManager couldn't find the hbase:meta server.
>
> Cheers
>
> On Tue, Apr 28, 2015 at 9:19 PM, Tridib Samanta <tridib.sama...@live.com>
> wrote:
>
> I am using Spark 1.2.0 and HBase 0.98.1-cdh5.1.0.
>
> Here is the jstack trace. Complete stack trace attached.
>
> "Executor task launch worker-1" #58 daemon prio=5 os_prio=0
> tid=0x00007fd3d0445000 nid=0x488 waiting on condition [0x00007fd4507d9000]
>    java.lang.Thread.State: TIMED_WAITING (sleeping)
>  at java.lang.Thread.sleep(Native Method)
>  at
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:152)
>  - locked <0x00000000f8cb7258> (a
> org.apache.hadoop.hbase.client.RpcRetryingCaller)
>  at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:705)
>  at
> org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:144)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1102)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1162)
>  - locked <0x00000000f84ac0b0> (a java.lang.Object)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1054)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1011)
>  at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:326)
>  at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:192)
>  at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:150)
>  at com.mypackage.storeTuples(CubeStoreService.java:59)
>  at
> com.mypackage.StorePartitionToHBaseStoreFunction.call(StorePartitionToHBaseStoreFunction.java:23)
>  at
> com.mypackage.StorePartitionToHBaseStoreFunction.call(StorePartitionToHBaseStoreFunction.java:13)
>  at
> org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:195)
>  at
> org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:195)
>  at
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:773)
>  at
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:773)
>  at
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
>  at
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
>  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
>  at org.apache.spark.scheduler.Task.run(Task.scala:56)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>  at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
> "Executor task launch worker-0" #57 daemon prio=5 os_prio=0
> tid=0x00007fd3d0443800 nid=0x487 waiting for monitor entry
> [0x00007fd4506d8000]
>    java.lang.Thread.State: BLOCKED (on object monitor)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1156)
>  - waiting to lock <0x00000000f84ac0b0> (a java.lang.Object)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1054)
>  at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1011)
>  at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:326)
>  at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:192)
>  at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:150)
>  at com.mypackage.storeTuples(CubeStoreService.java:59)
>  at
> com.mypackage.StorePartitionToHBaseStoreFunction.call(StorePartitionToHBaseStoreFunction.java:23)
>  at
> com.mypackage.StorePartitionToHBaseStoreFunction.call(StorePartitionToHBaseStoreFunction.java:13)
>  at
> org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:195)
>  at
> org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:195)
>  at
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:773)
>  at
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:773)
>  at
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
>  at
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
>  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
>  at org.apache.spark.scheduler.Task.run(Task.scala:56)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>  at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
>
> ------------------------------
> Date: Tue, 28 Apr 2015 19:35:26 -0700
> Subject: Re: HBase HTable constructor hangs
> From: yuzhih...@gmail.com
> To: tridib.sama...@live.com
> CC: user@spark.apache.org
>
> Can you give us more information,
> such as the hbase release and Spark release?
>
> If you can pastebin jstack of the hanging HTable process, that would help.
>
> BTW I used
> http://search-hadoop.com/?q=spark+HBase+HTable+constructor+hangs and saw
> a very old thread with this subject.
>
> Cheers
>
> On Tue, Apr 28, 2015 at 7:12 PM, tridib <tridib.sama...@live.com> wrote:
>
> I am having exactly the same issue. I am running hbase and spark in a
> docker container.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/HBase-HTable-constructor-hangs-tp4926p22696.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
