[jira] [Commented] (HBASE-23135) NoSuchMethodError: org.apache.hadoop.hbase.CellComparator.getInstance() while trying to bulk load in hbase using spark

2019-10-08 Thread Bikkumala Karthik (Jira)


[ https://issues.apache.org/jira/browse/HBASE-23135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946933#comment-16946933 ]

Bikkumala Karthik commented on HBASE-23135:
---

[~openinx] Thanks for the explanation. From the dependency tree I can see that the CellComparatorImpl and CellComparator classes come from org.apache.hbase:hbase-shaded-client:jar:2.2.1, which I do not add explicitly, so I do not understand where a conflict could come from. To rule it out anyway, I excluded the transitive hbase-common and added hbase-common:2.x explicitly, but I still hit the same error.
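
For reference, the exclusion I tried looks roughly like this (a sketch only; the connector coordinates and versions here are placeholders standing in for what my pom actually declares):

{code:xml}
<!-- Sketch: drop whatever hbase-common arrives transitively, then pin 2.x. -->
<dependency>
  <groupId>org.apache.hbase.connectors.spark</groupId>
  <artifactId>hbase-spark</artifactId>
  <version>1.0.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-common</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-common</artifactId>
  <version>2.2.1</version>
</dependency>
{code}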

Let me know if I am doing anything wrong.

> NoSuchMethodError: org.apache.hadoop.hbase.CellComparator.getInstance() while 
> trying to bulk load in hbase using spark
> --
>
> Key: HBASE-23135
> URL: https://issues.apache.org/jira/browse/HBASE-23135
> Project: HBase
>  Issue Type: Bug
>Reporter: Bikkumala Karthik
>Priority: Major
>
> I am trying to bulk load data from HDFS to HBase. I used the following example:
> [https://github.com/apache/hbase-connectors/blob/master/spark/hbase-spark/src/main/java/org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseBulkLoadExample.java]
>
> I built the module with the following command:
> mvn -Dspark.version=2.4.3 -Dscala.version=2.11.7 -Dscala.binary.version=2.11 clean install
> When I tried to run the example with spark-submit, I got the following error:
> {quote}Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.CellComparator.getInstance()Lorg/apache/hadoop/hbase/CellComparator;
> at org.apache.hadoop.hbase.regionserver.StoreFileWriter$Builder.<init>(StoreFileWriter.java:348)
> at org.apache.hadoop.hbase.spark.HBaseContext.org$apache$hadoop$hbase$spark$HBaseContext$$getNewHFileWriter(HBaseContext.scala:928)
> at org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$2.apply(HBaseContext.scala:1023)
> at org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$2.apply(HBaseContext.scala:972)
> at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:79)
> at org.apache.hadoop.hbase.spark.HBaseContext.org$apache$hadoop$hbase$spark$HBaseContext$$writeValueToHFile(HBaseContext.scala:972)
> at org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$bulkLoad$3$$anonfun$apply$7.apply(HBaseContext.scala:677)
> at org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$bulkLoad$3$$anonfun$apply$7.apply(HBaseContext.scala:675)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
> at org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$bulkLoad$3.apply(HBaseContext.scala:675)
> at org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$bulkLoad$3.apply(HBaseContext.scala:664)
> at org.apache.hadoop.hbase.spark.HBaseContext.org$apache$hadoop$hbase$spark$HBaseContext$$hbaseForeachPartition(HBaseContext.scala:490)
> at org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$foreachPartition$1.apply(HBaseContext.scala:106)
> at org.apache.hadoop.hbase.spark.HBaseContext$$anonfun$foreachPartition$1.apply(HBaseContext.scala:106)
> at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
> at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
> at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
> at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
> at org.apache.spark.scheduler.Task.run(Task.scala:121)
> at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
> at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {quote}
>  
> Please find the code here (pom.xml, mvn dependency tree, source file): 
> [https://gist.github.com/bikkumala/d2e349c7bfaffc673e8a641ff3ec9d33]
> I tried with the following versions:
> Spark : 2.4.x
> HBase : 2.0.x
> Hadoop : 2.7.x
>  





[jira] [Commented] (HBASE-23135) NoSuchMethodError: org.apache.hadoop.hbase.CellComparator.getInstance() while trying to bulk load in hbase using spark

2019-10-08 Thread Zheng Hu (Jira)


[ https://issues.apache.org/jira/browse/HBASE-23135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946847#comment-16946847 ]

Zheng Hu commented on HBASE-23135:
--

I've checked the code and found that CellComparator.java in branch-2.0 has the getInstance method while branch-1.x does not, so I guess there is an hbase-common jar conflict in your project, say one hbase-common at version 1.x and another at version 2.x. Please have a check.
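
One quick way to verify is to print which jar the class actually resolves from at runtime (a generic JVM sketch; the class name {{WhichJar}} is just illustrative), or to run {{mvn dependency:tree -Dincludes=org.apache.hbase}} and look for mixed 1.x/2.x versions:

{code:java}
import org.apache.hadoop.hbase.CellComparator;

public class WhichJar {
  public static void main(String[] args) {
    // Prints the jar CellComparator was loaded from; if this points at a
    // 1.x hbase-common, that would explain the missing getInstance().
    System.out.println(CellComparator.class
        .getProtectionDomain().getCodeSource().getLocation());
  }
}
{code}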

It's not a bug, so please don't file a JIRA when you have a question; send an email to the user mailing list instead :-). I'll close this issue as Not a Problem.



